Creating a Test Plan: Common Mistakes and How to Avoid Them

At a Glance

Use these tips as a checklist to avoid common test plan pitfalls, increase the success of your tests, collect feedback faster, and review others' tests before approving them.

There's a certain art to crafting a great test plan, and knowing what NOT to do is as crucial to mastering the craft as knowing what steps to take. Here are our top tips for avoiding the pitfalls most common to devising test plans:

  1. Avoid asking for Personally Identifiable Information (PII). Our contributors are never required to share their personal information during a test. However, if you are a covered health entity and have signed a BAA with UserTesting, you may collect Protected Health Information (PHI). If you are not a covered health entity, follow our guidelines for PII.

  2. Do NOT ask contributors on the UserTesting Contributor Network to contact you outside the Platform or to join a customer group. Contributors on the UserTesting Contributor Network are managed by UserTesting, and their information is kept confidential in compliance with our terms and conditions. Asking contributors to contact you outside the UserTesting Platform or to join your customer group exposes their personal information and violates the Platform's terms of use.

  3. Verify access to your test materials.
    • If your test plan requires users to log in to your site or app, provide a set of credentials (username and password) for them to use. Include the credentials in the text of the task at the point contributors need them. If no credentials are provided, your test will be placed on hold, slowing down your results.
    • Ensure any links to your prototypes (e.g., Figma, Adobe, or InVision prototypes) load and are accessible outside your company. Preview each link in your browser's incognito mode to confirm you aren't relying on cached credentials.
    • If you are asking contributors to download an app, ensure the app is available in their country's app store.

  4. Avoid asking leading questions. For best results, it is crucial to ask balanced, unbiased questions. When contributors can predict which answer you want from them, they'll be more likely to choose that answer, even if it isn't accurate. Examples of biased or loaded questions include "How did the design help you complete your task?" (the assumption being that the design DID help) and "In what ways is this design better than what you are using today?" (the assumption being that it IS better).
    • Tip: For multiple choice questions, always include an "Other" or "None of the above" option.

  5. Avoid asking Yes/No questions. It's easier to guess the correct answer when there are only two options. Even if the respondent doesn't quite understand the question, they have a 50/50 chance of "guessing" correctly.
    • Tip: For screener questions, consider multiple-choice questions that lead with, "Which of the following..."

  6. Do not use jargon. Keep it simple. You know far more about the design than the contributors do, so be careful about using words that make sense to your project team but won't necessarily be clear to a contributor. A confusing task creates stress for the contributor, diminishing their ability to complete the activity and compromising the value of their feedback. This is where a pilot test (see the following tips) is critically important.

  7. Set expectations about when/where to stop during a task. Providing clear instructions on how long to work on a task, or at what stage to stop working on it, is a sound best practice. Doing so spares contributors from being caught off guard by a sudden instruction to stop. Properly setting such expectations also keeps contributors from going beyond the parameters of the task, which helps ensure that the test results, and the answers contributors give to your follow-up questions, are accurate and reliable. With all that in mind, set clear time and task-completion parameters: "Stop once you've added an item to your shopping cart or five minutes have passed."

  8. Preview your test. Understand what the contributors will experience when they answer your screener questions and take your test.
    • For each audience in your test, select the Preview screener button to ensure that the questions and answers are written clearly.
    • After entering all the tasks into your test, select the Preview test plan button to walk through your test just as a contributor would.

  9. Run a pilot test. Think of a pilot as a "test of a test": the results can help you assess whether the task instructions are clear or confusing, whether the tasks prompt the level of feedback you need, whether you're capturing the desired audience, and how quickly your sessions will fill.
    • Tip: If there are multiple segments or audiences you want to test (say, you're comparing the desktop and mobile experiences of a site, or comparing new shoppers to current shoppers), launch to one contributor in each audience.
    • Tip: If the pilot test prompts significant changes to the test plan, create a similar test from the original, edit the new test, and then launch it. Doing so keeps the order of the tasks consistent in the metrics and when exporting the results to Excel.
  10. Test with a smaller number of contributors. Having conducted a pilot, stay in that "less is more" mindset when deciding how many contributors your test needs. A range of 5–8 contributors (per group, or "audience," if your test includes multiple audiences) is sufficient for most tests. Keep in mind that a primary objective at UserTesting is to provide qualitative feedback; since you're not running statistics on the results (quantitative research), a large sample size is unnecessary. See the UserTesting University course How Many Contributors Should Be Included in a Test? to learn more about choosing the optimal number of test contributors.
    • Tip: That said, if you are targeting multiple audiences for your study, be sure each group is adequately represented (5–8 contributors) when the test is set for a full launch.

