Use these tips as a checklist to avoid pitfalls common to writing test plans.
This article applies to: UserTesting
Avoid asking for Personally Identifiable Information (PII)
- Our participants are never required to share their personal information during a test.
- If you are a covered health entity and have signed a BAA with UserTesting, you may collect Protected Health Information (PHI).
- If you are not a covered health entity, follow our guidelines for PII.
Don't ask to contact participants outside UserTesting
- Participants on the UserTesting Network are managed by UserTesting.
- Their information is kept confidential and in compliance with our terms and conditions.
- Don't ask participants to contact you outside the UserTesting Platform or to join your customer group, because that exposes their personal information (and violates the platform's terms of use).
- If you need to contact a participant, you may do so through the UserTesting Message Center.
Verify access to test materials
- If your test plan requires users to log in to your site or app, make sure you provide a set of credentials (username and password) for them to use.
- Provide login credentials in your task at the point where they need the information.
- Note that if no credentials are provided, your test will be placed on hold, slowing down your results.
- Ensure any links to your prototypes (e.g., Figma, Adobe prototypes) load and are accessible outside your company.
- You may need to preview the link in your browser's incognito mode to ensure you don't have cached credentials.
- If you are asking people to download an app, ensure the app is available in their country's app store.
Avoid asking leading questions
- For best results, it is crucial to ask balanced, unbiased questions.
- When participants can predict which answer you want from them, they’ll be more likely to choose that answer, even if it isn’t accurate.
- Examples of biased or loaded questions include "How did the design help you complete your task?" (which assumes the design DID help) and "In what ways is this design better than what you are using today?"
Tip: For multiple choice questions, always include an "Other" or "None of the above" option.
Avoid asking yes/no questions
- It's easier to guess the correct answer when there are only two options.
- Even if the respondent doesn’t quite understand the question, they have a 50/50 chance of “guessing” correctly.
Tip: For screener questions, consider multiple-choice questions that lead with, "Which of the following..."
Don't use jargon
- Keep it simple.
- You know a lot more about the design than the participants, so be careful about using words that make sense to your project team but won't necessarily be clear to a participant.
- A confusing task can create stress for the participant, diminishing their ability to complete the activity, and compromising the value of their feedback.
- This is where a pilot test (see the following tips) is critically important.
Set expectations for starting/stopping a task
- Provide clear instructions on how long to work on a task or at what stage to stop working on a task.
- Doing so spares participants from being caught off guard by sudden instructions to stop.
- Setting these expectations also keeps participants within the parameters of the task, which helps ensure that the test results, and participants' answers to your follow-up questions, are accurate and reliable.
- Set clear time and task-completion parameters: "Stop once you’ve added an item to your shopping cart or five minutes have passed."
Preview your test
- Understand what the participants will experience when they answer your screener questions and take your test.
- For each audience in your test, select the Preview screener button to ensure that the questions and answers are written clearly.
- After entering all the tasks into your test, select the Preview test plan button to walk through your test just as a participant would.
Run a pilot test
- Think of a pilot test as a "test of a test."
- The results of a pilot can help you:
- Assess whether the task instructions are clear or confusing
- Determine if the tasks are prompting the level of feedback needed
- Decide if you’re capturing a desired audience
- Estimate how quickly your sessions will fill
- If you want to test multiple segments or audiences (say, you're comparing the desktop and mobile experiences of a site, or comparing new shoppers to current shoppers), launch to one participant in each audience.
- If the pilot test prompts significant changes to the test plan, duplicate the test, edit the new test, and then launch it. Doing so ensures that the order of the tasks remains consistent in the metrics and Excel export.
Test with a smaller audience
- Have a "less is more" mindset when deciding how many participants you want for your test.
- A range of 5–8 participants (per group, or "audience," if your test is made up of multiple audiences) is satisfactory for most tests. Review our sample size recommendations for more information.
- Keep in mind that a primary objective of UserTesting is to provide qualitative feedback.
- Because you're not running statistical analyses on the results (quantitative research), a large sample size is unnecessary.
Tip: That said, if you are targeting multiple audiences for your study, make sure each group has adequate representation (5–8 participants) when the test is set for a full launch.