Hello! I am Karen, Design Researcher. Today, I’d like to address some common questions about survey design and creation through the lens of cognitive science, and offer practical advice along the way.
Glossary in the context of survey research:
Construct: an abstract idea that one wishes to measure using survey questions.
Representation: the degree to which a subset of a population accurately reflects the characteristics of the larger group.
How does the cognitive process affect the way people answer survey questions?
One of the leading theories within survey methodology is the “Survey Response Process,” a 5-step model of what the human mind goes through as it navigates a survey experience (Perception, Comprehension, Retrieval, Judgment, Response).
Rasinski and Tourangeau explained that the order of the questions influences the way respondents perceive each subsequent question: prior questions activate information in memory, which modifies the interpretation of later questions.
When I am doing survey pre-testing, I find it helpful to think through each of those 5 stages. For example, if a respondent doesn’t know how to report their judgment, they might simply skip the question or guess blindly. The key is to intentionally check how other people are responding to the questions, using the Survey Response Process as a reference framework.
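As a rough illustration, the pre-testing habit described above can be written out as a checklist keyed to the five stages. The stage names come from the model; the check wording and helper names below are my own hypothetical sketch, not a standard tool:

```python
# Hypothetical pre-testing checklist: each stage of the Survey Response
# Process is paired with the failure mode to watch for when observing a
# respondent. Wording is my own illustration, not a standard instrument.

STAGE_CHECKS = {
    "Perception":    "Did the respondent notice the question and every answer option?",
    "Comprehension": "Did they interpret the question the way we intended?",
    "Retrieval":     "Could they recall the relevant information from memory?",
    "Judgment":      "Could they combine what they recalled into an answer?",
    "Response":      "Could they fit that answer into the response format, or did they skip or guess?",
}

def pretest_report(stages_with_issues):
    """List the check for each stage where a respondent visibly struggled."""
    return [f"{stage}: {STAGE_CHECKS[stage]}" for stage in stages_with_issues]

for line in pretest_report(["Judgment", "Response"]):
    print("-", line)
```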
In the end, the researcher should want the question format to match the respondent’s mental model for processing it.
What is the difference between behavioral and attitude questions?
Behavioral questions ask about factual events and behaviors (what, when, and how), for example:
How often have you seen a medical doctor about your own health since [reference period]?
Attitude questions rely on the affective space (likes and dislikes), the cognitive space (what one knows or thinks), and the action space (willingness to act on the attitude object). For example:
What are the 3 things you would like to change about [company name]?
As you can see, behavioral and attitude questions can tap into a similar source of information. The difference is that each asks about different attributes of the construct the researcher is trying to measure.
How should I consider the Total Survey Error framework when designing a survey?
I am going to walk through the illustration below, as it is a tool that helps minimize survey error and can even pinpoint where an error crept into the data estimates.
Each piece on the representation (right) side walks you through who you are getting answers from. For example: if your sampling frame differs from your target population, then you have coverage error.
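Coverage error is easy to demonstrate with a toy simulation. The scenario and numbers below are invented purely for illustration: if the sampling frame systematically excludes part of the target population, even a perfectly random sample from that frame gives a biased estimate.

```python
import random

random.seed(0)

# Invented illustration of coverage error: suppose 40% of a 10,000-person
# target population is "mobile-only" and streams more hours, but the
# sampling frame (e.g., a landline list) excludes mobile-only people.
population = [{"mobile_only": i < 4000, "hours_streamed": 8 if i < 4000 else 4}
              for i in range(10000)]

# The frame only covers people who are NOT mobile-only.
frame = [p for p in population if not p["mobile_only"]]

true_mean = sum(p["hours_streamed"] for p in population) / len(population)

# A random sample drawn from the flawed frame is still biased.
sample = random.sample(frame, 500)
frame_mean = sum(p["hours_streamed"] for p in sample) / len(sample)

print(f"true mean: {true_mean:.1f}, frame estimate: {frame_mean:.1f}")
# The gap between the two means is the coverage error: no amount of
# random sampling from this frame can recover the true population value.
```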
The measurement (left) side [my favorite error to talk about lol] asks you: “are you really measuring what you think you are measuring?” Here, you start with a construct, e.g., “Are people streaming content more during the pandemic?” A lot of people make a mistake in this area because they start writing survey questions first rather than going one step up and asking themselves, “What am I trying to measure?” Measurement error then comes into play when respondents are unable to provide a good, accurate answer to individual questions; the error can live in any of the steps on either the representation or the measurement side.
What a good survey writer often does is use the Total Survey Error framework to drive measurement error down and to guide survey testing and data collection.
How do I design “good” survey questions?
As mentioned above, we sometimes make the mistake of writing questions without knowing what data really needs to be collected, or why it is needed. Always make sure that each question measures what you intend it to measure; that is what keeps a survey grounded and accurate. This is achieved through a working understanding of survey methodology and statistics.
Good survey questions are characterized by drawing on cognitive and social psychology as it pertains to issues like language comprehension, attitudes, subjective judgments, visual heuristics, autobiographical memory, and the communicative dynamics of the survey. This process can be tested and validated using cognitive interviews.
Here are a few tips to consider when we ask respondents to reach back into their memories:
- Use a time period that is easy for respondents to think about (e.g., the last 12 months is easier to remember than the last 10 months)
- Avoid asking about infrequent or minor behaviors
- Be careful not to mix reference periods: once you pick one, stay committed to that time reference throughout the survey.
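To make the last tip concrete, here is a hypothetical lint-style check (my own sketch, not an existing tool) that scans draft question wording for mixed reference periods:

```python
import re

# Matches phrases like "last 12 months" or "past 30 days" in question text.
REFERENCE_PERIOD = re.compile(
    r"(?:last|past)\s+\d+\s+(?:days?|weeks?|months?|years?)",
    re.IGNORECASE,
)

def reference_periods(questions):
    """Return the distinct reference periods found across the draft questions."""
    found = set()
    for question in questions:
        for match in REFERENCE_PERIOD.findall(question):
            found.add(match.lower())
    return sorted(found)

draft = [
    "In the last 12 months, how often did you see a doctor?",
    "In the past 30 days, how many hours did you stream?",
]
print(reference_periods(draft))
```

If the function returns more than one period, the draft mixes reference periods and should be reworked to commit to a single time frame.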
A good word of advice: start from the construct you want to measure, then turn it into survey questions :)
What is a Cognitive Interview (CI)?
Cognitive interviewing is about testing your survey to minimize the errors and confusion a respondent might face.
If you are familiar with UX testing, you might find that survey usability testing is very similar. Some of the common methods to do CI are:
- Think-aloud
- Verbal Probing techniques *Warning: this could have a biasing effect
- Concurrent and/or retrospective probing techniques
- etc.
Whichever method you choose as appropriate for your survey test, I find it useful to go back to the Survey Response Process and create probing questions that test the different stages.
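For example, probes keyed to the stages of the model might look like this. The probe wording below is my own, assumed for illustration; it is not a standard protocol:

```python
# Hypothetical verbal probes mapped to stages of the Survey Response
# Process, for assembling a cognitive-interview script. Wording is my
# own illustration, not a standardized instrument.

PROBES = {
    "Comprehension": "What does that question mean to you, in your own words?",
    "Retrieval":     "How did you remember that? What time period were you thinking of?",
    "Judgment":      "How did you arrive at that answer?",
    "Response":      "Was it easy or hard to fit your answer into the choices given?",
}

def interview_script(stages):
    """Build an ordered probe list for the stages you want to test."""
    return [PROBES[stage] for stage in stages if stage in PROBES]

for probe in interview_script(["Comprehension", "Judgment"]):
    print("-", probe)
```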
Thank you so much for reading, and I hope this information helps broaden your perspective on survey design.