Creating a Test Plan: Common Mistakes and How to Avoid Them

At a Glance

Use these tips as a checklist to avoid common test plan pitfalls, improve the success of your tests, reduce the time it takes to collect feedback, and review others' tests before approval.

 

There's a certain art to crafting a great test plan, and it can take a bit of practice. Lucky for you, our Research Team has seen it all. Here are their best tips for avoiding common test plan pitfalls. Remember that errors can cause delays, prevent contributors from completing your test, and lead to inaccurate feedback.

  1. Don't ask for Personally Identifiable Information (PII): Our contributors are never required to share sensitive personal information during a test. To run effective studies without collecting sensitive PII, instruct contributors to provide fake or “dummy” numbers. If contributors must complete a transaction during a study, provide a gift card or promotional code. This document covers UserTesting's PII policy and the difference between basic and sensitive PII. See our Knowledgebase "Best Practices" article to learn how to steer clear of inadvertently collecting PII.

  2. Do NOT ask contributors on the UserTesting Contributor Network to contact you outside the platform or join a customer group: Contributors on the UserTesting Contributor Network are managed by UserTesting, and their information is kept confidential in compliance with our terms and conditions. Asking contributors to contact you outside the UserTesting platform, or to join your customer group, exposes their personal information and violates the platform's terms of use.

  3. Verify access to your test materials:
    • If your test plan requires users to log in to your site or app, provide a set of credentials (username and password) for them to use. You can include these in the text of your task at the point contributors need them. Note that if no credentials are provided, your test will be placed on hold, slowing down your results.
    • Ensure any links to your prototypes (e.g., Figma prototypes) load and are accessible outside your company. You may need to preview the link in incognito mode in your browser to ensure you don't have cached credentials.
    • If you are asking people to download an app, ensure the app is available in their country's app store.

  4. Avoid asking leading questions: For best results, it is crucial to ask balanced, unbiased questions. When contributors can easily predict which answer you want, they’ll be more likely to choose that answer, even if it isn’t accurate. Examples of biased or loaded questions include "How did the design help you complete your task?" (the assumption being that the design DID help) and "In what ways is this design better than what you are using today?" A neutral alternative would be "How, if at all, did the design affect your ability to complete the task?"
    • Tip: For multiple choice questions, always include an "Other" or "None of the above" option.

  5. Avoid asking Yes/No questions: A yes/no question gives contributors a 50% chance of guessing the answer you want, even if they don't actually understand the question. Avoid yes/no questions entirely.
    • Tip: This applies to both test tasks and screener questions. The most neutral way to phrase a screener is to instruct contributors to select the option that most closely applies to them, followed by a list of statements.

  6. Do not use jargon: Keep it simple. You know a lot more about the design than the contributors do, so be careful about using words that make sense to your project team but won't be clear to a contributor. A confusing task creates stress for the contributor, diminishing their ability to complete it and compromising the value of their feedback. This is where a pilot test (see tip 9 below) is critically important.

  7. Set expectations about when/where to stop during a task: If you don't provide clear instructions on how long to work on a task, or at what stage to stop, you risk surprising and even frustrating your contributors. It can also skew your data: a contributor who goes farther than you expected may no longer be able to answer some of your follow-up questions accurately. Set clear time and task-completion parameters: "Stop once you’ve added an item to your shopping cart or 5 minutes have passed."

  8. Preview your test: Understand what the contributors will experience when they answer your screener questions and take your test.
  9. Run a pilot test: Few things in UX research are more discouraging than immediately launching your test to a full group of contributors, only to get unusable results because the test was flawed or the contributors proved unsuitable. Sidestep such disappointment by first conducting a pilot test with one person. Think of a pilot as a "test of a test": its results can help you assess whether the task instructions are clear, whether the tasks prompt the level of feedback you need, whether you're capturing the desired audience, and how quickly your sessions will fill.
    • Tip: If there are multiple segments or multiple audiences you want to test (say, you're comparing the desktop and mobile experience of a site, or comparing new shoppers to current shoppers), launch to one contributor in each audience.
    • Tip: If the pilot test prompts significant changes to the test plan, create a similar test from the original, edit the new test, and then run the full launch from that new study. This keeps the order of the tasks consistent in the metrics and in the results you export to Excel; relaunching the edited original instead will throw off the ordering when you export data on the back end.

  10. Test with a smaller number of contributors: Having conducted a pilot, stay in that "less is more" mindset when settling on how many contributors to include in your test. A range of 5–8 contributors (per group, or "audience," if your test is made up of multiple audiences) is satisfactory for most tests. Testing with a large number of contributors means it will take longer to fill the sessions. Keep in mind that UserTesting's chief value is providing qualitative feedback; since you're not running statistics on the results (quantitative research), a large sample size is unnecessary. See the UserTesting University's How Many Contributors Should Be Included in a Test? course to learn more about choosing the optimal number of test contributors.
    • Tip: That said, if you are targeting multiple audiences for your study, be sure to have adequate representation in each group (5–8 contributors) when the test is set for full launch.

 

Learn More

Need more information? Read these related articles.

Want to learn more about this topic? Check out our University courses.
