There's a certain art to crafting a great test plan. Admittedly, it can take a bit of practice. Lucky for you, our Research Team has seen it all. Here are their best tips for avoiding common test plan pitfalls.
- Don't ask for Personally Identifiable Information (PII): Our panelists are never required to share sensitive personal information during a test. To run effective studies while refraining from collecting sensitive PII, instruct study participants to provide fake or “dummy” numbers. If participants must complete a transaction during a study, provide a gift card or promotional code. This document covers UserTesting's PII policy and the difference between basic and sensitive PII. See our Knowledgebase "Best Practices" article to learn how to steer clear of inadvertently collecting PII.
- Provide appropriate test credentials: If your test plan requires users to log in to your site or app, make sure you provide a set of credentials (username and password) for them to use. You can provide that in the text of your task or question at the point they need the information. If no credentials are provided, your test will be placed on hold, slowing down your results. See our Testing with Unique Logins training course to learn more.
- Run a pilot test: Few things in UX research are more discouraging than immediately launching your test to a full group of participants, only to get unusable results because the test was flawed or the participants proved unsuitable. Sidestep such disappointment by first conducting a pilot test with one person. Think of a pilot as a "test of a test": The results of a pilot can help you assess whether the task instructions are clear or confusing, whether the tasks are prompting the level of feedback you need, whether you're capturing your desired audience, and how quickly your sessions will fill.
- Tip: If there are multiple segments or audiences you want to test—say, you're comparing the desktop and mobile experience of a site, or comparing new shoppers to current shoppers—launch to one participant in each audience.
- Tip: If the pilot test prompts significant changes to the test plan, create a similar test from the original, edit the new test, and then launch it. Running the full launch as a new study keeps the order of the tasks consistent in the metrics and when exporting the results through Excel; editing and relaunching the original will throw off the ordering when you export data on the back end.
- Test with a smaller number of participants: Having conducted a pilot, you'll want to stay in that "less is more" mindset when settling on how many participants you'll want for your test. A range of 5–8 participants (per group, or "audience," if your test is made up of multiple audiences) is sufficient for most tests. Testing with a large number of participants means it will take longer to fill the sessions. Keep in mind that UserTesting's chief value is qualitative research—since you're not running statistics on the results (quantitative research), a large sample size is unnecessary. See the UserTesting University's How Many Participants Should Be Included in a Test? course to learn more about getting the optimal number of test participants.
- Tip: Having said all this, if you are targeting multiple audiences for your study, be sure to have adequate representation in each group (5–8 participants) when the test is set for full launch.
- Avoid asking leading questions: For best results, it is crucial to ask balanced, unbiased questions. When participants can easily predict which answer you want from them, they'll be more likely to choose that answer, even if it isn't accurate. Examples of such biased or loaded questions are "How did the design help you complete your task?" (the assumption being that the design DID help) or "In what ways is this design better than what you are using today?" (the assumption being that it IS better).
- Tip: For multiple choice questions, always include an "Other" or "None of the above" option.
- Avoid asking Yes/No questions: Yes/no questions give participants, by default, a 50% chance of guessing the "right" answer—even when they may not, in fact, understand the question. So you want to avoid yes/no questions 100% of the time.
- Tip: This advice applies to both test tasks and screener questions. Instructing users to select the option that most closely applies to them (followed by a list of statements) is the most neutral way to phrase screeners.
- Do not use jargon: Keep it simple. You know a lot more about the design than the participants, so be careful about using words that make sense to your project team, but won't be clear to a participant. A confusing task can create stress for the participant, diminishing their ability to complete the task and compromising the value of your research. This is where a pilot test (see previous tip) is critically important.
- Set expectations about when/where to stop during a task: Not providing clear instructions on how long to work on a task, or at what stage to stop working on it, risks surprising if not frustrating your participants. It can also impact your data: if a participant goes farther than you expected, they may no longer be able to accurately answer some of your follow-up questions. Set clear time and task-completion parameters: "Stop once you’ve added an item to your shopping cart or 5 minutes have passed."
See the UserTesting University's Finding Your Right Audience course to learn more about how to target the right audiences for your test.