In this module, we're going to talk about how you take your concept, what you want to know about, and convert it into questions that people understand. A couple of basic topics we'll cover: what do you consider about your concept before you start writing questions, and how do you turn a research need into a set of understandable research questions? What concepts do I need to measure? What are you interested in? Why are you doing a survey in the first place? We talked early in this lecture series about being very concrete about what you want to accomplish with your survey. Think about what it is that you want to know. In my home of academia, we would refer to that as our research question, but there are lots of different ways you can generate what you need to know. It could be a business need. For instance, you may want to know how your customers react to a change in your product, or you may have a new marketing campaign and want to know how effective it is at helping people identify your brand. There are lots of different business needs that will drive the concepts you're then going to operationalize as questions in your survey. A great way to generate concepts, and to think about how to start surveys, is through more qualitative types of data collection. For instance, user interviews or user observations can generate an idea about what's important to your users that you then want to test more broadly through surveys. Remember back to the beginning: surveys are great at testing the prevalence of different types of activities or beliefs within a population or large group of people. Knowing that a belief or behavior you're interested in exists can often be derived from other types of UX research you're doing. One really common way you know you're going to want to do more surveys is that you've already done surveys.
Most of us who have done survey research have an instance where we wish we had asked one good key question during our last survey, or where responses to a set of questions were surprising and we want to follow up, or where we're repeating questions over time to see how the beliefs or behaviors of our population change. All of these are great ways to derive your concepts; how you actually come up with what you're interested in is really going to be driven by your needs and by working with your organization. The problem is that it can be very tricky to move from a concept you're interested in to a question that everybody can understand and answer. So let's use an example from the book that we've been using: how does education affect well-being? Now, this seems like a really nice, straightforward research question, and you've probably answered a survey question like this. The problem is that once you start to dig into the details, you find a few different components of this research question, or research need, that you want to start unpacking. Here we have two separate concepts: education, in blue, and well-being, in red. The word "affect" means we're looking at the relationship between these two things, the correlation between education and well-being. Well, we could take each of these terms and start unpacking them in such a way that it very quickly becomes less clear what we actually mean. For instance, education: how are we going to measure education level? We've all filled out demographic questions that ask this: do you have a high school education, a Bachelor's degree, a graduate degree? But what if somebody interprets that as formal or informal education? Now you're including apprenticeships or internships or fellowships that you've done.
Does it change if you're doing an international survey, since most other countries don't have the same educational system that the United States does? If you're doing an international survey, how are you going to measure education level? Is it just going to be years in school, in which case you'll miss a lot of that informal education you might be interested in? When you ask this question of yourself, what did you really mean by education? That's the important thing: getting very concrete about how we think about education, about what it is we're really interested in here. On the other side of the equation, we're also really interested in well-being, but that has so many directions we could go in. Do we mean financial well-being, or mental well-being, or physical well-being? Even if we just dug into physical well-being, how do we measure that? Is it nutrition, how well you eat? Is it fitness, how many miles you can run or how much weight you can lift? Is it illness, whether you have any chronic illnesses we're concerned about, or allergies? There are dozens if not hundreds of different ways to measure physical well-being, and of course the same holds true for both financial and mental well-being. Even though, when I said "well-being" on the first slide, you probably had a pretty good sense of what you thought I meant, it quickly becomes clear that there's lots of room for uncertainty. If you're guessing what I think when I ask you about your well-being, then somebody next to you might be guessing another thing, and that creates error. If it's unclear what I meant by well-being and you're answering based on your presumption of what I meant, then we've opened the door for bad survey results. Most survey questions are trying to capture ephemeral human experiences. There's a whole set of survey questions that are very much about who you are and what types of things you do; that's a separate set of things.
The hard questions to write are those about things like joy or satisfaction or displeasure or likelihood to engage in future behaviors. All of these are really ephemeral, and the problem is that individuals in your population are going to understand each of these ephemeral experiences very differently from one another, and sometimes even differently from themselves. If we take a concept like happiness, everybody's definition of happiness is going to differ. How do we then come up with a measurement of happiness that all members of our sample will interpret in the same way? That's the real trick. As I say here, it's not just that members of our sample are different from one another; sometimes people are even different from themselves between time one and time two. There's a lot of research in survey methodology showing that if you ask the same person a question at time one, say today, and ask them the same question tomorrow, more likely than not their answer could vary, depending on the types of response options you make available to them. Time one and time two, even for a single individual, can create variance in your survey responses. That makes sense: maybe today you caught me right after lunch and I was happy, and tomorrow I had a bad morning and I'm crankier. It's very easy to imagine that a person's internal state, and how they interpret some of these vaguer concepts we're trying to measure, could change from time one to time two. That's why it's so important to write crisp, clear questions that really clearly define what we mean by some of these more amorphous terms. In general, because we can't fully control for what two people think these terms mean, we really depend on large samples of respondents to help distribute the error randomly across that large sample.
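To make that wash-out idea concrete, here is a minimal simulation sketch. Everything in it is illustrative: the "true" score, the size of the interpretation error, and the assumption that the error is normally distributed are all stand-ins, not figures from the lecture. Each simulated respondent answers with the true value plus random error from interpreting the concept their own way, and the sample mean settles toward the true value as the sample grows.

```python
import random

random.seed(42)

TRUE_SCORE = 5.0  # the value a perfect measurement would capture (assumed)
SPREAD = 1.5      # person-to-person interpretation error (assumed)

def respond(true_score, spread=SPREAD):
    """One simulated answer: the true score plus random error from
    each respondent interpreting the concept a bit differently."""
    return true_score + random.gauss(0, spread)

for n in (10, 100, 10_000):
    answers = [respond(TRUE_SCORE) for _ in range(n)]
    mean = sum(answers) / n
    print(f"n={n:>6}: sample mean = {mean:.2f} (true = {TRUE_SCORE})")
```

Note that this only works because the error here is random. If the sample skews everyone's interpretation in the same direction, as can happen with a non-probability sample, no amount of additional respondents will wash the bias out.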
The more people we ask, the more we can imagine that, if I think happiness is one thing and you think it's another and a third person thinks it's something else altogether, that incorrectness, how far we are from what the researcher is actually trying to measure, washes out in our analysis if it's random enough. Over enough samples, enough measurements, it becomes unimportant; it becomes random noise. Now, that's most true with probability samples, and as we've talked about, a lot of the samples we'll be using in UX are not probability samples. In non-probability samples, that randomness isn't as protective, so it's even more important that we write really good questions when we're working with non-probability samples. So, moving from concept to question, what are the types of things you want to do? The first thing is to define the concept you're interested in. In this case, with this question, I'm talking about ease of use: "How difficult or easy was it to get to the homepage of WEBSITE from the page you were on?" For WEBSITE, of course, substitute any website you want. Then define what type of response will be useful in your analysis. You can see here I picked a seven-point Likert scale, an ordinal scale, for people to respond on, from extremely easy to extremely difficult. I'm trying to compare two states. I'm interested in the ease of use of this website as my overall concept; I operationalize that in the terms "difficult" and "easy", and then I've created response categories that measure a comparison between those two poles. That helps me get a set of responses and create a consistent experience across all the people who respond to my survey. Here's another format of that question: "Was there anything too difficult to find on WEBSITE?" This is just a binary, yes-or-no question. So it's similar in a way to our last question, and we're still asking about the experience of the website that we're trying to measure.
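Set side by side, the two operationalizations of the same concept can be written down as question text plus a response format. This is just a sketch; the field names and scale labels are hypothetical, not from any particular survey tool.

```python
# Two hypothetical operationalizations of the same concept, "ease of use".

likert_question = {
    "text": ("How difficult or easy was it to get to the homepage "
             "of WEBSITE from the page you were on?"),
    "type": "ordinal",
    # Seven-point scale between the two poles (labels are illustrative).
    "options": [
        "Extremely difficult", "Difficult", "Somewhat difficult",
        "Neither easy nor difficult",
        "Somewhat easy", "Easy", "Extremely easy",
    ],
}

binary_question = {
    "text": "Was there anything too difficult to find on WEBSITE?",
    "type": "binary",
    "options": ["Yes", "No"],
}

# The ordinal question spreads responses across seven categories; the
# binary one collapses the same concept into two.
for q in (likert_question, binary_question):
    print(q["type"], len(q["options"]), "response options")
```

Which format is "right" depends on the analysis you plan to run on the responses, which is exactly the point of the comparison that follows.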
But you can see here that you'd get a very different set of responses. One of the points we're going to keep coming back to is that one way to decide the right way to ask a question is to think about what you're going to use the data for. Think about the responses you'd have if, say, 500 people responded to your survey. With this question, you might get 250 yeses and 250 nos. With the previous question, you'd get a lot more variance. How you actually plan to use the data can define how you operationalize your concept into a question. You also want to pretest some alternatives, and this is of course just good UX process. Would you ship an untested version of your product? No. We want to use small survey pre-tests and focus groups to test our instrument. As a reminder, "instrument" just means our questionnaire. Survey pre-tests are just what you would imagine: you create your survey and float it out to a smaller sample of people so they can take it and you can see what you'd get back. How many people you actually invite to your pre-test really depends on what you're trying to accomplish with it. If you're just testing the flow, making sure people understand the questions and that you're getting data back, you don't need that large a participant pool. If, however, you also want to run your analysis, to make sure you're getting enough data to do the analysis you want, you're going to need a larger sample of people to participate. A trick you can do with survey pre-tests is, of course, to run them as a cognitive walk-through.
You have the person do the survey at a computer in your office or your lab, and you watch them do it. As they make decisions about question choices, you can interview them about the choices they're making, and you get a richer, deeper sense of how the person is experiencing the survey than you would through just a regular pre-test. Another option for really digging into how your concepts match your questions is to do focus groups. You can have a group of people do your survey and then get together in a focus group format. This is nice because in a focus group you get more detailed information about why people believe certain things. As people start to reflect on, for instance, how we all think about the term happiness, that might help you come up with terms or words you can use to operationalize that concept that didn't occur to you. All UX researchers should be suspicious of our own intuition. That's why we do user research: to understand what people think about a topic. A focus group is a great type of user research you can do to make sure the words you're using in your question really match the concepts you're trying to ask about. Of course, you want to iterate. It's hard for me to overstate how important it is not to just send off the first draft of your survey questionnaire. I participated in a survey with Pew Research a couple of years ago, a national survey asking about social media use. I looked back, and this is an example of just one pass of edits that we did. You can see from this pass that there are lots of different people chiming in and we're editing word choices. I counted, and between the four main authors of this survey questionnaire, we did over 80 drafts before we launched it to our sample population.
That kind of iteration, fine-tuning every question, making sure every single question was tuned to the concept we wanted and had the best chance possible of getting the data we needed, was a hugely important process for us, because running a survey of this type was so expensive and such an effort that we wanted to make sure we were launching the best instrument we possibly could out into the world. So, in summary, it's really important that you think about your concept and how you're going to take it from the thing you're interested in all the way through to an operationalized, concrete question that people are going to understand. Again, people have a hard time understanding vaguer concepts, because individuals differ from one another, and even an individual's state can differ from time to time. Some tips for getting started in this process: first, clearly define your research goals. What is it that you want to know? Next, operationalize your concepts: move that vague concept into something you can actually measure and that people will understand. Use focus groups and pre-testing to make sure people actually do understand the questions the way you hope, and then iterate on your design. Keep fine-tuning those questions and launch the absolute best survey questionnaire you can possibly have. In the next module, we're going to talk about how you actually write good questions, and dig deeper into the process of question writing.