With thanks to Viveka Weiley, the ultimate wielder of lean UX research
Surveys have become a research method that is often avoided or mocked in UX circles. This is often with just cause. Surveys cannot offer the breadth and depth of an interview. Nor can surveys generally reveal much about the behaviours of a user. Even a focus group (a research method I could also defend) can give more contextual depth than a survey, although it may be affected by group think.
I, however, shall defend my use of the humble survey. I do not think it belongs in the ‘have not’ column. My point of difference is that a survey must be used in conjunction with other research methods. I use surveys as a starting point for a conversation, a workshop, user testing or an interview, and I believe you should too. Not only do they save time, they save sanity. Here are my reasons:
1. Surveys can tailor an interview
Creating a survey for your interview participants saves time in the session itself. This is not a money-saving exercise but rather a sanity-saving exercise (it’s all work worth compensating for!). A lot of research during the COVID-19 pandemic has moved online, and my rule has become “can this be answered in a pre-engagement survey?” because no one wants to be on a conference call for longer than necessary. Add 2-3 questions to a pre-engagement survey or recruitment screener that cover the information you would otherwise need to gather in the first 15 minutes of an interview. Background facts you can quickly ascertain, such as what software the participant is currently using or whether they own the latest smart device, are perfect survey material. Make time to go through this information and come up with the “why?” questions for the first five minutes of your interview. Tailoring the interview so the participant explains their answers, rather than starting from scratch, lets them jump right in. You’ve shown you took the time to understand them and their context, and you have more questions ready.
2. Participants are interested in what others, like them, think
Initially I was skeptical of this, but using aggregate results from surveys can provoke interesting discussion from participants. My experience with this has come from recent collaborations with my Data61 colleague, Viveka Weiley. We displayed aggregate survey results on a Miro board and asked participants if they agreed with the distributions. This provoked participants to reflect on how their experiences were different from, or similar to, the norm amongst the pool of answers. They then shared critical stories about their experiences that were invaluable to the project and to understanding the context behind the results. I am incredibly doubtful these insights would have come to light through traditional contextual interviewing. Like user testing, sometimes seeing what the participant thinks is wrong, unexpected or atypical can provoke critical responses.
3. Make your questions fit for direct comparison
Not all questions are fit for surveys. Generally my rule of thumb is that if the question can be answered with “it depends” and cannot be clarified within two sentences, then it doesn’t belong in a survey. Save that question for the interview. If the question is concise, simple and directly comparable with other answers, then it is fit for a survey.
Surveys are a great tool for gathering data from groups of five or more people, and the answers should sit comfortably in a direct comparison table. Survey data should be presented to stakeholders in an accessible way so they can comfortably grasp who your participants were and what was interesting about them. Generally I am against surveys that are too long or too onerous, as I fear people will drop out of them; if a survey must be long, there should be a good core reason for it. The shorter and more concise the survey, the more useful it will be when compiling the final findings of user testing or interviews. Less is more.
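As a rough sketch of what “directly comparable” looks like in practice (the participants, questions and answers below are invented for illustration, not from the original research), short survey answers can be compiled into a plain-text comparison table and aggregated with a few lines of Python:

```python
from collections import Counter

# Hypothetical responses to a short pre-engagement survey.
# Each row is one participant; each key is a survey question.
responses = [
    {"participant": "P1", "software": "Excel", "owns_smart_device": "yes"},
    {"participant": "P2", "software": "Tableau", "owns_smart_device": "no"},
    {"participant": "P3", "software": "Excel", "owns_smart_device": "yes"},
    {"participant": "P4", "software": "R", "owns_smart_device": "yes"},
    {"participant": "P5", "software": "Excel", "owns_smart_device": "no"},
]

def comparison_table(rows, columns):
    """Render rows as a plain-text direct-comparison table."""
    # Column width = widest value in that column (or the header itself).
    widths = {c: max(len(c), *(len(r[c]) for r in rows)) for c in columns}
    header = " | ".join(c.ljust(widths[c]) for c in columns)
    lines = [header, "-" * len(header)]
    for r in rows:
        lines.append(" | ".join(r[c].ljust(widths[c]) for c in columns))
    return "\n".join(lines)

print(comparison_table(responses, ["participant", "software", "owns_smart_device"]))

# Aggregate distribution of one answer, e.g. for sharing back
# with participants on a whiteboard.
software_counts = Counter(r["software"] for r in responses)
print(software_counts)
```

Because each question maps to one short, comparable answer, both the per-participant table and the aggregate distribution fall out of the same data with no extra cleanup.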
For tips on asking effective questions:
For considering the UX of surveys: