Learn how many participants you should recruit for your study.
This article applies to: UserTesting
On this page:
How many participants do I need?
See our short video for information on sample sizes.
Sample size considerations
Consider the following to determine the optimal number of participants for your test:
Test type
- Depending on your test type, you might consider a different number of participants.
- For instance, a benchmarking study might need more participants than an early-stage concept test.
- See the table below for recommendations based on the type of test you are building.
Testing goals
- What are the goals of the test?
- For instance, if you’re trying to identify glaring usability issues, a sample size of 5 may be enough.
- But if you want to go deeper and uncover more issues, use a larger sample size, such as 15 participants.
Decision-makers
- With whom will you be sharing the insights?
- Do you need a certain sample size to be persuasive to your decision-makers?
Confidence levels
- Depending on your approach, you may need enough data to be confident in the themes that surface.
- Read more about how confidence levels and intervals can impact your sample size.
Availability
- The UserTesting Network offers broad coverage in the US, UK, and Canada, including niche audiences.
- In other markets like Australia, France, India, Mexico, Spain, and Germany, reaching niche audiences may be slightly harder.
- Our Audience Solutions team can help you find your target audience through our partner networks, wherever you're testing. Still, you may want to adjust your sample size based on availability.
Surveys
For most surveys, the default of 100 participants is a good starting point, but you should customize it based on your needs. Sample size depends on multiple factors including, but not limited to:
- The type of feedback being collected (for example, observational vs. single metric vs. comparison).
- The total size of the population being tested (for example, general population vs. internal employees).
- Decision-maker expectations.
- Your acceptable confidence level and margin of error (see tables below).
Note: To activate AI themes, at least 20 participants must complete your survey.
Sample size estimates for binary UX metrics (e.g., task success)
| Margin of Error (±) | 90% Confidence | 95% Confidence |
|---|---|---|
| 20% | 15 | 21 |
| 15% | 28 | 39 |
| 10% | 65 | 93 |
| 5% | 268 | 381 |
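As a rough guide to where estimates like these come from, the standard normal-approximation formula for estimating a single proportion is n = z² · p(1−p) / E², with the conservative p = 0.5. The Python sketch below illustrates the calculation; the z-scores are standard two-sided values, and the results differ slightly from the table above, which appears to use a more exact method.

```python
import math

# Two-sided z-scores for common confidence levels
Z = {0.90: 1.645, 0.95: 1.960}

def sample_size_for_proportion(confidence: float,
                               margin_of_error: float,
                               p: float = 0.5) -> int:
    """Normal-approximation sample size for estimating a single proportion.
    p = 0.5 is the conservative choice (it maximizes p * (1 - p))."""
    z = Z[confidence]
    return math.ceil((z ** 2) * p * (1 - p) / margin_of_error ** 2)

for moe in (0.20, 0.15, 0.10, 0.05):
    print(f"{moe:.0%}: 90% -> {sample_size_for_proportion(0.90, moe)}, "
          f"95% -> {sample_size_for_proportion(0.95, moe)}")
```

Tightening the margin of error from 10% to 5% roughly quadruples the required sample, which is why high-precision surveys need hundreds of participants.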
Sample size estimates for comparing metrics (e.g., A/B testing)
| Difference to detect (90% confidence) | Within subjects | Between subjects |
|---|---|---|
| 50% | 17 | 22 |
| 35% | 29 | 64 |
| 20% | 50 | 150 |
| 10% | 115 | 664 |
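For between-subjects comparisons, a common normal approximation for the per-group sample size needed to detect a difference between two proportions is n = (z_α/2 + z_β)² · (p₁(1−p₁) + p₂(1−p₂)) / (p₁ − p₂)². A minimal sketch, assuming a two-sided test at α = 0.05 with 80% power; the table above likely uses different assumptions, so it won't reproduce those exact numbers:

```python
import math
from statistics import NormalDist

def n_per_group(p1: float, p2: float,
                alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate per-group sample size to detect a difference between
    two independent proportions (normal approximation, two-sided test)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # e.g., 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # e.g., 0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil(((z_alpha + z_beta) ** 2) * variance / (p1 - p2) ** 2)

# Detecting a 10-point difference centered on 50% (45% vs. 55%)
print(n_per_group(0.45, 0.55))
```

The smaller the difference you need to detect, the faster the required sample grows, which matches the steep climb in the bottom rows of the table.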
Sample size recommendations
Our research experts at UserTesting recommend the following sample sizes based on your study type, goals, and approach.
| Approach | Description | Ideal sample size | How do we do it at UserTesting? |
|---|---|---|---|
A/B testing: before | Gain confidence in the alternatives you want to test by asking people to interact with and comment on each option. | 5-8 | Participants complete activities as they think aloud, along with rating scale questions to gauge or "measure" their experience. |
A/B testing: after | Understand why one version "won" over another by asking people to interact with and comment on the options tested. | 5-8 | Participants complete activities as they think aloud, along with rating scale questions to gauge or "measure" their experience. |
Best-in-class analysis | Gather insights on highly-rated and well-known experiences to understand how to emulate in your own experience. | 10 | Participants complete key tasks on "great" experiences. |
Card sorting | Understand how and why people group and label related items or information to inform navigational structure and organization. | 30 | Participants "think aloud" as they sort items, concepts, etc. into categories. Tests may be open (participants name the categories), closed (categories are predefined), or hybrid (some categories are predefined, but participants can add more). |
Case study | Build a story around an individual to establish thinking around a potential hypothesis that merits further investigation. | 1-2 | Participants are asked questions about their experiences, impressions, or preferences, or to interact with one or more interfaces to provide feedback. |
Competitive testing | Understand participant preferences between competitive experiences (ease of use, understandability, visual appeal, etc.). | 10-12 | Each participant completes key tasks on a single competitive experience. Responses to rating scale questions (like SUS, SUPR-Q) and performance metrics are compared across competitors. |
Concept testing | Gauge people's reactions to new ideas or concepts with or without a design to view or interact with. | 5-8 | Participants view or comment on a concept. A concept could be just a description or an experience to view or interact with. |
Copy testing | Learn whether copy is clear, effective, and aligned with organizational or brand values and goals. | 8-10 | Participants read copy and are asked to describe, provide commentary on their perceptions, or rate aspects of the language or messaging. Research objectives might include resonance, understandability, or clarity, among others. |
Email copy and design | Get feedback on email subject lines, content, layout, clarity, etc. | 8-10 | Participants view subject lines and email messages and are asked to comment on comprehension, clarity, appeal, tone of voice, visual design, and/or other elements. |
Field studies | Understand behaviors and/or needs in someone's natural environment. | 10 | Participants turn on the front-facing camera using our mobile recorder to record real-life experiences. |
Image testing | Determine whether imagery is understood as intended and whether it effectively communicates value. | 8-10 | Participants view the image and are asked to describe it, provide commentary on their perceptions, or rate aspects of the image. Research objectives might include clarity, visual appeal, comprehension, and/or other attributes. |
Interviews | Gather impressions, attitudes, expectations, and/or goals through a self-guided test or live conversation. | 15 | Participants are asked questions about their impressions, attitudes, expectations and/or goals in a self-guided test or live conversation. |
Longitudinal/Diary study | Understand people's behaviors & needs over a period of time. | 15 | Participants record multiple interactions over a period of time. The experience could be across channels or on a single channel. |
Omnichannel and crosschannel | Capture the experience as participants move between channels to accomplish a goal. | 8-10 | Participants record interactions on more than one channel. May require use of all recording solutions: desktop, mobile (on-screen and front-facing camera), and IPEVO. |
Prototype testing | Observe people as they interact with an early version of a design or experience. | 5-8 | Participants view a prototype on InVision, Axure, Dropbox, or some other publicly accessible URL. The prototype could be static or interactive. |
Qualitative benchmarking | Baseline existing experience and measure progress on a regular cadence over time. | 10-20 | Participants complete key tasks and rate their experience regularly (monthly, quarterly, semi-annually, or annually). Rating questions can be industry-standard (SUS, SUPR-Q) or custom informal metrics defined by an organization. |
Social networks | Gather insights on people's perceptions and behaviors as they view and interact with social posts. | 8-10 | Participants view a brand or company's presence on a social network and are asked to complete tasks and answer questions related to comprehension, clarity, appeal, tone of voice, and/or other elements. |
Surveys | Collect feedback from high sample sizes via a questionnaire. | 100 | Participants complete a questionnaire for observational, comparison, or single metric feedback. Sample size depends on the total size of the population being tested, your acceptable confidence level and margin of error, and decision-maker expectations. |
Tree testing | Learn whether or not people can find what they need within a web navigation. | 30 | Participants "think aloud" as they complete a task that asks them to find a destination within a navigation "tree". The ability of contributors to find the right destination reflects how well the information is organized and named. |
Usability testing | Observe people as they complete tasks to identify what's working well and where they stumble. | 5-8 | Participants complete activities and tasks. A larger sample size can be used to test high-risk or complex applications if previous work has yielded inconclusive results and if you need to ensure that you uncover all potential barriers (15 contributors to find 90% of problems, 50 contributors to find 98% of problems). |
Video testing | Learn whether people understand the content and value being communicated and/or find it appealing. | 8-10 | Participants view video content and are asked to comment on comprehension, clarity, appeal, tone of voice, visual design, and/or other elements. They may also speak to calls to action, expectations versus reality, or complete tasks. |