At a Glance
Screener questions identify specific participants for your tests. These best practices can help your team write screeners that consistently find the right participants.
- When should you test with your exact target market?
- Guidelines for screener questions
- How to check that your screener is capturing the right users
- Screeners based on familiarity with a product
- Screeners based on frequency of use
- Screeners based on industry or occupation
- Screeners that deal with personal information
Many UX thought leaders encourage researchers not to be too granular about the users included in their studies. After all, the vast majority of products should be clear and intuitive enough that anyone can figure them out.
However, there are many circumstances in which researchers need insights from a particular type of user, because only those users can judge whether a tool would actually be helpful in their work.
If you’re in one of those circumstances, and you’re testing remotely, then you need to use screener questions—multiple-choice questions that can either eliminate users from taking part in your study or give them access to it.
UserTesting’s Research Team knows firsthand how important (and challenging) it is to write solid screener questions, so below are some guidelines and examples that can help you get just the right user for your next remote, unmoderated user test.
Many of the guidelines for writing good screener questions are the same as the guidelines for writing great Multiple Choice questions:
- Always provide a “None of the above”, “I don’t know”, or “Other” option, in case you’ve forgotten to include an answer that applies to the user, or in case the user is confused by the question. This is especially important in screeners: if users don’t have this option and pick an answer at random, they might end up in your study accidentally.
- Provide clear and distinct answers that don’t overlap each other.
- Avoid leading questions and yes/no questions, because users will be inclined to give the answer they think you want instead of the one that really applies to them. Instead of asking a direct question, instruct users to select the option that most closely applies to them, followed by a list of statements. We find this is the most neutral way to phrase most screeners: because it’s less obvious which answer is desired, users are more likely to answer honestly.
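Put together, a screener that follows these guidelines might look like this (a hypothetical example; the activity and statements are placeholders for your own):

```
Please select the statement that most closely applies to you.

- I do all of my grocery shopping online
- I do some of my grocery shopping online
- I never shop for groceries online
- None of the above
```

Note that the question itself doesn’t reveal which answer qualifies, and the “None of the above” option gives confused users a safe exit.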
If you need someone with a particular background (like a medical degree) or someone who is going through a particular experience (like shopping for a new car), we recommend that in addition to screeners, you use the first task of your test to verify this:
“You indicated in the screener questions that you are currently shopping for a new car. Please describe what kind of car you are looking for, where you have looked so far, etc.”
Sometimes, just listening to a user describe their experience can let you know if they’re really the right fit.
One of the most common screener questions captures users’ level of familiarity with a product or a brand. Sometimes researchers need fresh users to test out a new tutorial for their app; other times they want insight from their most frequent users.
Whatever the case, you don’t want to ask point-blank if users fit the mold; people are naturally inclined to say yes, just to please you! Instead, ask users to indicate their familiarity, and then define the different levels of familiarity.
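For instance, a familiarity screener with defined levels might look like this (a hypothetical example; [App Name] is a placeholder for your product):

```
How familiar are you with [App Name]? Please select the option
that most closely applies to you.

- I have never heard of it
- I have heard of it, but have never used it
- I have used it a few times
- I use it regularly
- None of the above
```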
Similar rules apply for the related—and equally popular—frequency-of-use screener. As with experience levels, it’s important to define frequency in concrete terms, not just “rarely”, “sometimes”, “often”, etc.
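For example, a frequency screener defined in concrete terms might read (a hypothetical example; [activity] is a placeholder):

```
How often do you [activity]? Please select the option
that most closely applies to you.

- Every day
- A few times a week
- A few times a month
- A few times a year
- Never
```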
Another common screener related to frequency of use might have to do with how recently a user has participated in a certain activity. For example, many e-commerce product researchers prefer to hear from users who purchase items online fairly often, and many travel product researchers want to hear from those who are planning a trip within the next year.
In those cases, it may be wise to create two screeners: one to confirm that the user purchases items online (or has an upcoming trip), and a follow-up screener to pin down the time frame.
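A hypothetical screener pair for the travel example might look like this:

```
Screener 1: Please select the statement that most closely
applies to you.

- I am currently planning a trip
- I recently returned from a trip
- I am not currently planning a trip
- None of the above

Screener 2: You indicated that you are planning a trip.
When do you expect to travel?

- Within the next 3 months
- Within the next year
- More than a year from now
- None of the above
```

Only users who select the qualifying answer on both screeners would be admitted to the study.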
Another occasion when multiple screeners might be needed to reveal a single characteristic would be when you need users within a particular occupation.
For example, a massage therapy retailer might want to hear from people in the massage therapy industry.
Obviously, massage therapy is a very specific profession, and it would be hard to come up with an exhaustive list of options in a single screener question. But you also want to avoid asking a yes/no question, so you might start by listing broader professional categories, including Health (which would encompass massage therapy), and then in a follow-up screener, have users indicate the role they occupy within the Health industry.
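A hypothetical two-screener sequence for the massage therapy example (the category and role lists are illustrative, not exhaustive):

```
Screener 1: Which of the following best describes the industry
you work in?

- Education
- Finance
- Health
- Retail
- Technology
- None of the above

Screener 2: You indicated that you work in the Health industry.
Which of the following best describes your role?

- Physician or nurse
- Massage therapist
- Administrative staff
- Other
```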
The last type of screener that the UserTesting Research Team relies on frequently involves users providing sensitive information, such as their income, their race, their Facebook profile, or their body type.
If the study requires the participant to disclose sensitive information during the user test, it's important to forewarn them with a screener question. Only accept users who are willing to be open about this personal information.