7.4 Designing effective questions and questionnaires
Learning Objectives
- Identify the steps one should take to write effective survey questions
- Describe some of the ways that survey questions might confuse respondents and how to overcome that possibility
- Apply mutual exclusivity and exhaustiveness to writing closed-ended questions
- Define fence-sitting and floating
- Describe the steps involved in constructing a well-designed questionnaire
- Discuss why piloting a questionnaire is important
Up to this point, we’ve considered several general points about surveys, including when to use them, some of their strengths and weaknesses, and how often and in what ways to administer surveys. In this section, we’ll get more specific and take a look at how to pose understandable questions that will yield useable data and how to present those questions on a questionnaire.
Asking effective questions
The first thing you need to do to write effective survey questions is identify what exactly you wish to know. Perhaps surprisingly, it is easy to forget to include important questions when designing a survey. Begin by looking at your research question. Perhaps you wish to identify the factors that contribute to students' ability to transition from high school to college. To understand which factors shaped successful students' transitions to college, you'll need to include questions in your survey about all the possible factors that could contribute. How do you know what to ask? Consulting the literature on the topic will certainly help, but you should also take the time to do some brainstorming on your own and to talk with others about what they think may be important in the transition to college. Time and space limitations won't allow you to include every single item you've come up with, so you'll also need to rank your questions to be sure you include those you view as most important. For your study, think back to your work on operationalization. How did you plan to measure your variables? If you planned to ask specific questions or use a scale, those should appear in your survey.
We’ve discussed including questions on all topics you view as important to your overall research question, but you don’t want to take an everything-but-the-kitchen-sink approach by uncritically including every possible question that occurs to you. Doing so puts an unnecessary burden on your survey respondents. Remember that you have asked your respondents to give you their time and attention and to take care in responding to your questions; show them your respect by only asking questions that you view as important.
Once you’ve identified all the topics about which you’d like to ask questions, you’ll need to actually write those questions. Questions should be as clear and to the point as possible. This is not the time to show off your creative writing skills; a survey is a technical instrument and should be written in a way that is as direct and concise as possible. To reiterate, survey respondents have agreed to give their time and attention to your survey. The best way to show your appreciation for their time is to not waste it. Ensuring that your questions are clear and concise will go a long way toward showing your respondents the gratitude they deserve.
Related to the point about not wasting respondents' time, make sure that every question you pose will be relevant to every person you ask to complete it. This means two things: first, that respondents have knowledge about whatever topic you are asking them about, and second, that respondents have experience with whatever events, behaviors, or feelings you are asking them to report. You probably wouldn't want to ask a sample of 18-year-old respondents, for example, how they would have advised President Reagan to proceed when news of the United States' sale of weapons to Iran broke in the mid-1980s. For one thing, few 18-year-olds are likely to have any clue about how to advise a president. Furthermore, the 18-year-olds of today were not even alive during Reagan's presidency, so they have had no experience with the Iran-Contra affair about which they are being questioned. In our example of the transition to college, heeding the criterion of relevance would mean that respondents must understand what exactly you mean by "transition to college" if you are going to use that phrase in your survey and that respondents must have actually experienced the transition to college themselves.
If you decide that you do wish to pose some questions about matters with which only a portion of respondents will have had experience, it may be appropriate to introduce a filter question into your survey. A filter question is designed to identify some subset of survey respondents who are asked additional questions that are not relevant to the entire sample. Perhaps in your survey on the transition to college you want to know whether substance use plays any role in students' transitions. You may ask students how often they drank during their first semester of college. But this assumes that all students drank. Certainly, some may have abstained from using alcohol, and it wouldn't make any sense to ask the nondrinkers how often they drank. Nevertheless, it seems reasonable that drinking frequency may have an impact on someone's transition to college, so it is probably worth asking this question even if doing so means the question will not be relevant for some respondents. This is just the sort of instance when a filter question would be appropriate. With a filter question such as question #10 in Figure 7.1, you can filter out respondents who have not had alcohol from answering questions about their alcohol use.
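If you administer your survey electronically, skip logic like this is typically built into the survey software, but the underlying idea is just a conditional branch. Here is a minimal sketch in Python, assuming hypothetical question wording and numbering rather than the exact items in Figure 7.1:

```python
# A minimal sketch of filter (skip) logic. Question wording and numbering
# are illustrative stand-ins for the items shown in Figure 7.1.

def run_survey() -> dict:
    answers = {}
    drank = input("10. Did you drink alcohol during your first semester "
                  "of college? (yes/no) ").strip().lower()
    answers["drank_alcohol"] = (drank == "yes")
    if answers["drank_alcohol"]:
        # Only respondents who pass the filter see the follow-up item.
        answers["days_per_week"] = input(
            "10a. In a typical week that semester, on how many days "
            "did you drink? ").strip()
    return answers

if __name__ == "__main__":
    print(run_survey())
```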
There are some ways of asking questions that are bound to confuse many survey respondents. Survey researchers should take great care to avoid these kinds of questions. These include questions that pose double negatives, those that use confusing or culturally specific terms, and those that ask more than one question within a single question. Any time respondents are forced to decipher questions that use double negatives, confusion is bound to ensue. Taking the previous question about drinking as our example, what if we had instead asked, “Did you not abstain from drinking during your first semester of college?” This example is obvious, but hopefully it drives home the point to be careful about question wording so that respondents are not asked to decipher double negatives. In general, avoiding negative terms in your question wording will help to increase respondent understanding.
You should also avoid using terms or phrases that may be regionally or culturally specific (unless you are absolutely certain all your respondents come from the region or culture whose terms you are using). A similar issue arises when you use jargon, or technical language, that people do not commonly know. For example, if you asked adolescents how they experience the imaginary audience, they likely would not be able to link that term to the concepts from David Elkind's theory. Instead, you would need to break down that term into language that is easier to understand and common to adolescents.
Asking multiple questions as though they are a single question can also confuse survey respondents. There’s a specific term for this sort of question; it is called a double-barreled question. Using our example of the transition to college, Figure 7.2 shows a double-barreled question.
Do you see what makes the question double-barreled? How would someone respond if they felt their college classes were more demanding but also less interesting than their high school classes? Or less demanding but more interesting? Because the question combines “demanding” and “interesting,” there is no way to respond yes to one criterion but no to the other.
Another thing to avoid when constructing survey questions is the problem of social desirability. We all want to look good, right? And we all probably know the politically correct response to a variety of questions whether we agree with the politically correct response or not. In survey research, social desirability refers to the idea that respondents will try to answer questions in a way that will present them in a favorable light. (You may recall we covered social desirability bias in Chapter 5.) Let’s go back to our example about transitioning to college to explore this concept further.
Perhaps we decide that to understand the transition to college, we need to know whether respondents ever cheated on an exam in high school or college. Cheating on exams is generally frowned upon. So it may be difficult to get people taking a survey to admit to cheating on an exam. But if you could guarantee respondents’ confidentiality, or even better, their anonymity, chances are much better that they will be honest about having engaged in this socially undesirable behavior. Another way to avoid problems of social desirability is to try to phrase difficult questions in the most benign way possible. Earl Babbie (2010) offers a useful suggestion for helping you do this—simply imagine how you would feel responding to your survey questions. If you would be uncomfortable, chances are others would as well.
Finally, it is important to get feedback on your survey questions from as many people as possible, especially people who are like those in your sample. Now is not the time to be shy. Ask your friends for help, ask your mentors for feedback, ask your family to take a look at your survey as well. The more feedback you can get on your survey questions, the better the chances that you will come up with a set of questions that are understandable to a wide variety of people and, most importantly, to those in your sample.
In sum, in order to pose effective survey questions, researchers should do the following:
- Identify what it is they wish to know.
- Keep questions clear and succinct.
- Make questions relevant to respondents.
- Use filter questions when necessary.
- Avoid questions that are likely to confuse respondents—including those that use double negatives, use culturally specific terms or jargon, or pose more than one question at a time.
- Imagine how respondents would feel responding to questions.
- Get feedback, especially from people who resemble those in the researcher’s sample.
Response options
While posing clear and understandable questions in your survey is certainly important, so too is providing respondents with unambiguous response options. Response options are the answers that you provide to the people taking your survey. Generally, respondents will be asked to choose a single (or best) response to each question you pose, though certainly it makes sense in some cases to instruct respondents to choose multiple response options. One caution to keep in mind when accepting multiple responses to a single question, however, is that doing so may add complexity when it comes to tallying and analyzing your survey results.
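To see that complexity concretely, consider a hypothetical "choose all that apply" item about extracurricular activities. Because each respondent may select several options, counts are tallied per option rather than per respondent, and the percentages can sum to more than 100. A short Python sketch, with invented responses:

```python
# Tallying a hypothetical "choose all that apply" item. Because each
# respondent may select several options, counts are per option rather than
# per respondent, and the percentages can sum to more than 100%.
from collections import Counter

responses = [                       # invented sample data
    ["athletics", "student government"],
    ["athletics"],
    ["volunteering", "athletics", "music"],
]

tally = Counter(option for answer in responses for option in answer)
n = len(responses)
for option, count in tally.most_common():
    print(f"{option}: {count} of {n} respondents ({100 * count / n:.0f}%)")
```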
Offering response options assumes that your questions will be closed-ended questions. In a quantitative written survey, which is the type of survey we’ve been discussing here, chances are good that most, if not all, your questions will be closed-ended. This means that you, the researcher, will provide respondents with a limited set of options for their responses. To write an effective closed-ended question, there are a couple of guidelines worth following. First, be sure that your response options are mutually exclusive. Look back at Figure 7.1, which contains questions about how often and how many drinks respondents consumed. Do you notice that there are no overlapping categories in the response options for these questions? This is another one of those points about question construction that seems fairly obvious but that can be easily overlooked. Response options should also be exhaustive. In other words, every possible response should be covered in the set of response options that you provide. For example, note that in question 10a in Figure 7.1, we have covered all possibilities—those who drank, say, an average of once per month can choose the first response option (“less than one time per week”) while those who drank multiple times a day each day of the week can choose the last response option (“7+”). All the possibilities in between these two extremes are covered by the middle three response options.
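Both properties can even be checked mechanically. The Python sketch below treats each option as a half-open numeric range; the bounds for the middle categories are assumptions for illustration (Figure 7.1 shows the actual option set), and adjacent ranges must meet exactly, since any mismatch signals a gap or an overlap:

```python
# Checking that response options are mutually exclusive and exhaustive.
# Each option is a half-open range [lo, hi) of drinking days per week; the
# middle categories are assumed for illustration (see Figure 7.1).

categories = [
    ("less than one time per week", 0, 1),
    ("1-2 times per week", 1, 3),
    ("3-4 times per week", 3, 5),
    ("5-6 times per week", 5, 7),
    ("7+", 7, float("inf")),
]

# Exhaustive: the first range starts at 0 and the last is unbounded above.
assert categories[0][1] == 0 and categories[-1][2] == float("inf")

# Mutually exclusive and gap-free: each range ends exactly where the next begins.
for (label_a, _, hi), (label_b, lo, _) in zip(categories, categories[1:]):
    assert hi == lo, f"gap or overlap between {label_a!r} and {label_b!r}"

print("Options are mutually exclusive and exhaustive.")
```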
Surveys need not be limited to closed-ended questions. Sometimes survey researchers include open-ended questions in their survey instruments as a way to gather additional details from respondents. An open-ended question does not include response options; instead, respondents are asked to reply to the question in their own way, using their own words. These questions are generally used to find out more about a survey participant’s experiences or feelings about whatever they are being asked to report in the survey. If, for example, a survey includes closed-ended questions asking respondents to report on their involvement in extracurricular activities during college, an open-ended question could ask respondents why they participated in those activities or what they gained from their participation. While responses to such questions may also be captured using a closed-ended format, allowing participants to share some of their responses in their own words can make the experience of completing the survey more satisfying to respondents and can also reveal new motivations or explanations that had not occurred to the researcher.
Earlier in this section, we discussed double-barreled questions, but response options can also be double barreled, and this should be avoided. Figure 7.3 provides an example of a question that uses double-barreled response options.
Other things to avoid when it comes to response options include fence-sitting and floating. Fence-sitters are respondents who choose neutral response options, even if they have an opinion. This can occur if respondents are given, say, five rank-ordered response options, such as strongly agree, agree, no opinion, disagree, and strongly disagree. You'll remember this is called a Likert scale. Some people will be drawn to respond "no opinion" even if they have an opinion, particularly if their true opinion is not a socially desirable one. Floaters, on the other hand, are those who choose a substantive answer to a question when really they don't understand the question or don't have an opinion. If a respondent is only given four rank-ordered response options, such as strongly agree, agree, disagree, and strongly disagree, those who have no opinion have no choice but to select a response that suggests they have an opinion.
As you can see, floating is the flip side of fence-sitting. Thus, the solution to one problem is often the cause of the other. How you decide which approach to take depends on the goals of your research. Sometimes researchers specifically want to learn something about people who claim to have no opinion. In this case, allowing for fence-sitting would be necessary. Other times researchers feel confident their respondents will all be familiar with every topic in their survey. In this case, perhaps it is okay to force respondents to choose an opinion. Still other times, researchers can provide a scale with anchors at either end and ask the respondent to indicate where their answer fits between the two anchors. An example would be a question that says, "On a scale from 0 to 10 where 0 is completely disagree and 10 is completely agree, what number would indicate your level of agreement?" There is no always-correct solution to either problem.
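The three designs just discussed can be summarized in a few lines of Python. The option labels follow the text, but the validation logic for the anchored item is simply one plausible way to code such a response:

```python
# The three response-option designs discussed above. Labels follow the text;
# the numeric validation is one plausible handling of an anchored item.

LIKERT_WITH_NEUTRAL = ["strongly agree", "agree", "no opinion",
                       "disagree", "strongly disagree"]  # permits fence-sitting
LIKERT_FORCED = ["strongly agree", "agree",
                 "disagree", "strongly disagree"]        # risks floating

def code_anchored_response(raw: str) -> int:
    """Validate a 0-10 anchored answer (0 = completely disagree,
    10 = completely agree) and return it as an integer."""
    value = int(raw)
    if not 0 <= value <= 10:
        raise ValueError("answer must be between 0 and 10")
    return value

print(code_anchored_response("7"))  # -> 7
```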
Finally, using a matrix is a nice way of streamlining response options. A matrix is a question type that lists a set of questions for which the answer categories are all the same. If you have a set of questions for which the response options are the same, it may make sense to create a matrix rather than posing each question and its response options individually. Not only will this save you some space in your survey, but it will also help respondents progress through your survey more easily. A sample matrix can be seen in Figure 7.4.
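As a rough, text-only illustration of the layout (the item wording below is invented; Figure 7.4 shows an actual matrix), a matrix pairs a column of items with a single shared row of response options:

```python
# A rough text rendering of a matrix question: several items share one set
# of response options. Item wording here is invented for illustration.

options = ["Strongly agree", "Agree", "Disagree", "Strongly disagree"]
items = [
    "My college classes are more demanding than high school.",
    "I study more now than I did in high school.",
    "I feel prepared for my college coursework.",
]

print(" " * 60 + "  ".join(options))
for item in items:
    cells = "  ".join("( )".center(len(option)) for option in options)
    print(f"{item:<60}{cells}")
```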
Using standardized instruments
You may be thinking that writing good survey questions and clear response options is a complicated task with a lot of pitfalls. In many ways it is! The good news is that for many of the constructs you would like to measure, other researchers have already designed and tested survey questions. You may remember from Chapter 5 that there are scales, indices, and typologies to measure variables. Many of these instruments have already demonstrated reliability and validity. If there are validated instruments available, it is always advisable to use them rather than to write your own survey questions. Not only do you save time and effort, but you can have a fair amount of confidence that the validated instruments will avoid many of the question-writing pitfalls discussed above.
Designing questionnaires
In addition to constructing quality questions and posing clear response options, you’ll also need to think about how to present your written questions and response options to survey respondents. Questions are presented on a questionnaire, which is the document (either hard copy or online) that contains all your survey questions for respondents to read and answer. Designing questionnaires takes some thought.
One of the first things to do once you’ve come up with a set of survey questions you feel confident about is to group those questions thematically. In our example of the transition to college, perhaps we’d have a few questions asking about study habits, others focused on friendships, and still others on exercise and eating habits. Those may be the themes around which we organize our questions. Or perhaps it would make more sense to present any questions we had about pre-college life and then present a series of questions about life after beginning college. The point here is to be deliberate about how you present your questions to respondents.
Once you have grouped similar questions together, you'll need to think about the order in which to present those question groups. Most survey researchers agree that it is best to begin a survey with questions that will make respondents want to continue (Babbie, 2010; Dillman, 2000; Neuman, 2003). In other words, don't bore respondents, but don't scare them away either. There's some disagreement over where on a survey to place demographic questions, such as those about a person's age, gender, and race. On the one hand, placing them at the beginning of the questionnaire may lead respondents to think the survey is boring, unimportant, and not something they want to bother completing. On the other hand, these are important pieces of data, and you don't want participants to quit the survey without providing their demographic information. Another thing to consider is the placement of sensitive or difficult topics, such as child sexual abuse or other criminal activity. You don't want to scare respondents away or shock them by beginning with your most intrusive questions.
In truth, the order in which you present questions on a survey is best determined by the unique characteristics of your research—only you, the researcher, hopefully in consultation with people who are willing to provide you with feedback, can determine how best to order your questions. To do so, think about the unique characteristics of your topic, your questions, and most importantly, your sample. Keeping in mind the characteristics and needs of the people you will ask to complete your survey should help guide you as you determine the most appropriate order in which to present your questions.
You’ll also need to consider the time it will take respondents to complete your questionnaire. Surveys vary in length, from just a page or two to a dozen or more pages, which means they also vary in the time it takes to complete them. How long to make your survey depends on several factors. First, what is it that you wish to know? Wanting to understand how grades vary by gender and year in school certainly requires fewer questions than wanting to know how people’s experiences in college are shaped by demographic characteristics, college attended, housing situation, family background, college major, friendship networks, and extracurricular activities. Keep in mind that even if your research question requires a sizable number of questions be included in your questionnaire, do your best to keep the questionnaire as brief as possible. Any hint that you’ve thrown in a bunch of useless questions just for the sake of it will turn off respondents and may make them not want to complete your survey.
Second, and perhaps more important, is the length of time respondents are likely to be willing to spend completing the questionnaire. If you are studying college students, asking them to use their precious fun time away from studying to complete your survey may mean they won't want to spend more than a few minutes on it. But if you have the endorsement of a professor who is willing to allow you to administer your survey in class, students may be willing to give you a little more time (though perhaps the professor will not). The time that survey researchers ask respondents to spend on questionnaires varies greatly. Some researchers advise that surveys should not take longer than about 15 minutes to complete (as cited in Babbie, 2010), whereas others suggest that up to 20 minutes is acceptable (Hopper, 2012). As with question order, there is no clear-cut, always-correct answer about questionnaire length. The unique characteristics of your study and your sample should be considered to determine how long to make your questionnaire.
A good way to estimate the time it will take respondents to complete your questionnaire is to pilot the questionnaire. Piloting allows you to get feedback on your questionnaire so you can improve it before you actually administer it. Piloting can be quite expensive and time consuming if you wish to test your questionnaire on a large sample of people who very much resemble the sample to whom you will eventually administer the finalized version of your questionnaire. But you can learn a lot and make great improvements to your questionnaire simply by pretesting with a small number of people to whom you have easy access (perhaps you have a few friends who owe you a favor). By piloting your questionnaire, you can find out how understandable your questions are, get feedback on question wording and order, find out whether any of your questions are boring or offensive, and learn whether there are places where you should have included filter questions. You can also time respondents as they take your survey. This will give you a good idea of the time estimate to provide when you administer your survey for your study, and whether you have some wiggle room to add items or need to cut a few.
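For the timing piece, a few recorded pilot completion times are enough to produce a sensible estimate. A quick Python sketch, with invented timings:

```python
# Estimating questionnaire length from pilot timings. The completion times
# below (in minutes) are invented sample data.
from statistics import mean, stdev

pilot_minutes = [12.5, 14.0, 11.0, 16.5, 13.0]

print(f"Average completion time: {mean(pilot_minutes):.1f} minutes "
      f"(range {min(pilot_minutes):.1f}-{max(pilot_minutes):.1f}, "
      f"sd {stdev(pilot_minutes):.1f})")
# Well past your target (say, the 15-20 minutes cited above)? Cut items.
# Well under? You may have room to add a few.
```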
Perhaps this goes without saying, but your questionnaire should also have an attractive design. A messy presentation style can confuse respondents or, at the very least, annoy them. Be brief, to the point, and as clear as possible. Avoid cramming too much into a single page. Make your font size readable (at least 12 point or larger, depending on the characteristics of your sample), leave a reasonable amount of space between items, and make sure all instructions are exceptionally clear. Think about books, documents, articles, or web pages that you have read yourself—which were relatively easy to read and easy on the eyes and why? Try to mimic those features in the presentation of your survey questions.
Key Takeaways
- Brainstorming and consulting the literature are two important early steps to take when preparing to write effective survey questions.
- Make sure your survey questions will be relevant to all respondents and that you use filter questions when necessary.
- Getting feedback on your survey questions is a crucial step in the process of designing a survey.
- When it comes to creating response options, the solution to the problem of fence-sitting might cause floating, whereas the solution to the problem of floating might cause fence-sitting.
- Piloting is an important step for improving a survey before actually administering it.
Glossary
- Closed-ended questions: questions for which the researcher offers response options
- Double-barreled question: a question that asks two different questions at the same time, making it difficult to respond accurately
- Fence-sitters: respondents who choose neutral response options, even if they have an opinion
- Filter question: a question that identifies some subset of survey respondents who are asked additional questions that are not relevant to the entire sample
- Floaters: respondents who choose a substantive answer to a question when really they don't understand the question or don't have an opinion
- Matrix question: a question type that lists a set of questions for which the answer categories are all the same
- Open-ended questions: questions for which the researcher does not include response options