At a Glance
When you're getting started with UserTesting, a few best practices can help you get the most value out of the feedback from your sessions. Build the habit of following them from the first few sessions you run.
Limit your test to 5 contributors per audience
We recommend a sample size of five contributors per audience, per test. A range of 5–8 contributors (per group, or "audience," if your test is made up of multiple audiences) is satisfactory for most tests. See the UserTesting University's How Many Contributors Should Be Included in a Test? course to learn more about getting the optimal number of test contributors.
Many UX thought leaders encourage researchers not to be too granular about the users included in their studies. After all, a vast majority of products should be clear and intuitive enough that anyone can figure them out. However, there are many circumstances in which researchers need to capture insights from a particular type of user. If you're in one of those circumstances, you need to use screener questions: multiple-choice questions that narrow down your audience. Here are some helpful tips for screening for your audience:
- Always provide a “None of the above,” “I don’t know,” or “Other” option, in case you’ve forgotten to include an answer that applies to the user, or the user is confused by the question.
- Provide clear and distinct answers that don’t overlap each other. For example, if the options are 1–10, 10–20, and 20–30, a user whose answer is 10 or 20 could truthfully select two options. Non-overlapping ranges like 1–10, 11–20, and 21–30 avoid this.
- Avoid asking leading questions or yes/no questions because users will be inclined to give you the answer they think you want instead of the one that applies to them.
- Use screener questions to collect extra information about your participants. This is a great way to better understand the context in which the test was taken. You can use the multiple-select question options to help you accomplish this.
- Validate your screener questions by asking a confirmation question at the start of your test. An example is: “You indicated in the screener questions that you are currently shopping for a new car. Please describe what kind of car you are looking for, where you have looked so far, etc.” Sometimes, just listening to a user describe their experience can let you know if they’re the right fit.
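To make the non-overlapping-ranges tip concrete, here is a minimal sketch (the function name and data shape are illustrative, not part of UserTesting's product) that checks a list of numeric answer ranges for overlaps before you publish a screener:

```python
def ranges_overlap(options):
    """Return pairs of numeric answer ranges that overlap.

    options: list of (low, high) tuples, inclusive on both ends.
    """
    overlaps = []
    ordered = sorted(options)
    for (lo1, hi1), (lo2, hi2) in zip(ordered, ordered[1:]):
        if lo2 <= hi1:  # next range starts before the previous one ends
            overlaps.append(((lo1, hi1), (lo2, hi2)))
    return overlaps

# The overlapping example from above: an answer of 10 or 20 matches two options.
print(ranges_overlap([(1, 10), (10, 20), (20, 30)]))  # two overlapping pairs
# The non-overlapping alternative:
print(ranges_overlap([(1, 10), (11, 20), (21, 30)]))  # []
```

Because the ranges are inclusive, two adjacent options that share a boundary value (10–20 and 20–30) are flagged as overlapping, which is exactly the ambiguity the tip warns about.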
To learn more about screening questions and finding the right audience, check out the Best practices for Screeners course through the UserTesting University.
Write clear questions and tasks
- Avoid leading or biased questions.
- Use rating-scale and other metrics questions to validate and confirm your verbal or task questions. Rating-scale, written, multiple-choice, and multiple-select questions are more quantitative in nature and will help you understand “what” is happening and to what magnitude.
- Use rich qualitative questions to capture the “why” and “how.” This is best accomplished through task-based questions or deep verbal questions, which let you build empathy with your contributors and reach that core human insight.
- If you need to emphasize something in a question or task, try these shortcuts to format emphasis like bolding or italicizing.
- Plan for about 15–20 minutes for an unmoderated session.
Find more information about writing great tasks here.
Preview and pilot your test
Preview your test yourself, then run a pilot with one contributor (per audience). Pilot tests help ensure that all assets included in the study work properly, that all tasks are clear to contributors, and that you get the desired feedback from the study. Review the results of that pilot and make any needed updates before sending the test to a larger number of contributors. Skipping the preview and pilot can result in a wasted test, or a test that gets put on hold because assets weren’t entered correctly. Save time and effort: preview and pilot.
Add more contributors
You can relaunch the same study by adding more contributors. First, in the dashboard, click the 3 dots to the right of the project you want to modify. From the dropdown, select Add contributors.
A modal will appear. Enter the desired number of contributors and select Add contributors to launch the additional sessions.
Duplicate a test
If you want to run a similar test, you can save time by using the Create similar test option from the drop-down menu for any test you'd like to replicate. Once you make a copy, you can make any necessary changes, then launch it to a new set of contributors. Please note: if you need to change a website study to an app study, you'll have to create a new study.
To duplicate a test, select the 3 dots on the right side of the test you want to duplicate, then select Create similar test. This leaves you with an unlaunched copy of the test. It will have the same name, with (Copy) appended to the title.
Save and share drafts
By creating a draft of a test, you can edit, finish, or launch it at a later time. Drafts are saved automatically as you edit your test plan. You can also create drafts to have colleagues review a study before launching it.
You can access drafts from your dashboard, and filter to just drafts by selecting the Drafts filter in the navigation menu.
From there, you can Share your test plan with colleagues or Notify team members (via the Slack integration or email) about the results.
Pro tip: give the study a descriptive title so you can easily find it in your drafts. Avoid editing the same draft in multiple tabs at once, as doing so may prevent your changes from saving.
Watch videos faster
The videos are easily the most fun part of gathering human insights, but watching every single video is time-consuming, and your entire team may not have the bandwidth to comb through each one.
Fortunately, you can watch your videos at up to 3x speed to pick out memorable insights even faster. Just select 1x at the bottom of the video player and choose the speed you'd like to use. If you hear or see something unclear, you can also use the player to slow the video back down.
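For a rough sense of the time savings, here is a quick back-of-the-envelope calculation. It assumes five sessions (the recommended sample size per audience) of 20 minutes each (the upper end of the unmoderated-session guideline above); your actual sessions may vary.

```python
# Total review time for five 20-minute sessions at different playback speeds.
session_minutes = 20
sessions = 5

for speed in (1.0, 1.5, 2.0, 3.0):
    total = session_minutes * sessions / speed
    print(f"{speed}x: {total:.0f} minutes total")
```

At 2x speed, 100 minutes of footage takes about 50 minutes to review; at 3x, about 33.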