Anyone who’s done user research knows how easily a participant can stumble off track when tasks aren’t written well. A misunderstood phrase or an overlooked question can derail a participant in completely unexpected ways—and in a remote, unmoderated environment, there’s no way to get them back on track other than by writing a solid study plan in the first place.
Testing out your study
The UserTesting Research Team has learned that one of the key ingredients of a great study is performing a dry run of your script. (This is also known as a “pilot test.”) Just one participant goes through the study, and then the team watches the video, noting any challenges the participant encountered or ways the script could be improved.
A successful dry run is one in which:
- The user answered all of the questions
- The script didn’t accidentally use confusing terms or jargon that made the participant stumble
- The participant evaluated the correct areas of the web page, app, or prototype
If any of these criteria weren’t met, alter the study as needed and try again with another dry run participant. Continue iterating on your study script until test participants can successfully complete the study.
5 things to check for in a dry run study
1. Do your tasks and questions make sense to participants?
In a remote usability study, ultra-clear communication is important. A misphrased task can create stress for the participant, diminishing their ability to complete the task and compromising the value of your research.
While you watch the dry run video, focus on how the participant reads the tasks and instructions.
- Do they understand all the terminology used?
- Are they providing verbal feedback that directly answers your questions?
- Is there ever a time when you wanted them to demonstrate something, but they only discussed it?
Often, a simple edit to your tasks can keep participants on track. If they misunderstand your terms, questions, or assignments, rephrase them until they’re easily understood.
2. Can the participant adequately answer your questions?
Many questions have multiple layers, which can lead participants to answer part of the question and forget the rest. A dry run will quickly identify any tasks or questions that need to be broken up.
Here’s what to look for:
- Do the participant’s answers provide enough detail?
- Does the participant provide quality sound bites that can be clipped and shared with others?
- Does the participant adequately address your goals and objectives for the study?
If not, consider breaking up these questions into individual tasks before launching the full study.
3. Can the participant complete all required steps like logging in to a specific account or interacting with the right pages?
There’s nothing worse than sitting down to watch the recordings, only to discover that all the participants checked out the live site instead of the prototype, or couldn’t log in because of a glitch in your app. Often, just an extra sentence or a well-placed URL can make all the difference.
4. Are all links in the script functioning properly?
A dry run is a perfect opportunity to verify that all the URLs included in your study are functioning properly.
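If your script contains many links, you can also sanity-check them yourself before recruiting the dry run participant. Below is a minimal sketch using only the Python standard library; the helper names and the sample script text are illustrative, not part of any UserTesting feature:

```python
import re
import urllib.error
import urllib.request

URL_PATTERN = re.compile(r"https?://[^\s\"'<>)]+")

def extract_urls(script_text):
    """Pull every http(s) URL out of the study script text,
    stripping trailing punctuation from the sentence around it."""
    return [u.rstrip(".,;:") for u in URL_PATTERN.findall(script_text)]

def check_url(url, timeout=10):
    """Return (url, status): the HTTP status code, or an error string
    if the request could not complete at all."""
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return url, resp.status
    except urllib.error.HTTPError as e:
        return url, e.code          # server answered, but with an error code
    except (urllib.error.URLError, OSError) as e:
        return url, f"error: {e}"   # DNS failure, timeout, refused connection

# Example study script with placeholder URLs:
script = """
Task 3: Open https://example.com/prototype and log in.
Task 5: Compare it with https://example.com/pricing.
"""

for url in extract_urls(script):
    print(check_url(url))
```

Anything other than a 200-range status is worth fixing before launch; note that some sites reject automated HEAD requests, so a reported error still deserves a manual click to confirm.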
5. Are the screener questions capturing the participants you need?
When the type of participant is important to your study, the dry run is a great way to tell if your screener questions are capturing the right demographic.
Add an extra question to your dry run study that asks participants to describe the trait you’re targeting, such as job title or industry.
Continue revising your screener questions until you’re capturing the best demographic for your study.
Making changes to your study
If you need to make changes to your study, select Create similar test to revise your existing draft as needed. We recommend conducting another dry run study after making any significant changes before launching with a larger group of participants.
Please note that dry run studies count toward your quarterly study usage limits. If you are on a plan with quarterly limits, we suggest iterating on the same dry run study and then adding testers to it, rather than creating new studies, to avoid unnecessarily consuming study usage.
Once you’ve completed a successful dry run
After a successful dry run, all you need to do is add more participants to your existing study.
Just select Add testers from the Options drop down menu when you’re in the session view on your dashboard.