At a Glance
Running a pilot test allows you to "test the test" before releasing it to all your contributors. This article describes how to create a pilot test and why it's important.
A contributor can easily get off track if the tasks aren’t written well. A misunderstood phrase or an overlooked question can derail a contributor in completely unexpected ways—and in a remote, unmoderated environment, there’s no way to get them back on track other than by writing a solid test plan in the first place.
Testing out your script
UserTesting's Research Team has learned that one of the key ingredients of a great study is performing a pilot test. In a pilot test, just one contributor goes through the test plan, and then the team watches the video, noting any challenges the contributor encountered and any ways the script could be improved.
A successful pilot test is one in which:
- The contributor answered all of the questions.
- The script didn’t accidentally use confusing terms or jargon that made the contributor stumble.
- The contributor evaluated the correct areas of the webpage, app, or prototype.
If any of these criteria aren't met, alter the study as needed and try it again with another contributor. Continue iterating on your study script until test contributors can successfully complete the study and you collect the feedback you need.
5 things to check for in a pilot test
1. Do your tasks and questions make sense to contributors?
When collecting remote feedback, ultra-clear communication is important. A poorly phrased task can create stress for the contributor, diminishing their ability to complete the task and compromising the value of your research.
While you watch the video, focus on how the contributor reads the tasks and instructions.
- Do they understand all the terminology used?
- Are they providing verbal feedback that directly answers your questions?
- Is there ever a time when you wanted them to demonstrate something, but they only discussed it?
Often, a simple edit to your tasks can keep contributors on track. If they misunderstand your terms, questions, or assignments, rephrase them until they’re easily understood.
2. Can the contributor adequately answer your questions?
Many questions have multiple layers, which can lead contributors to answer part of a question and forget to answer the rest. A pilot test will quickly identify any tasks or questions that need to be broken up.
Here’s what to look for:
- Do the contributor’s answers provide enough detail?
- Does the contributor provide quality sound bites that can be clipped and shared with others?
- Does the contributor adequately address your goals and objectives for the study?
If not, consider breaking up these questions into individual tasks before launching the full study.
3. Can the contributor complete all required steps like logging in to a specific account or interacting with the right pages?
There’s nothing worse than sitting down to watch your recordings, only to discover that all the contributors explored the live site instead of the prototype, or couldn’t log in because of a glitch in your app. Often, just an extra sentence or a well-placed URL can make all the difference.
4. Are all links in the script functioning properly?
A pilot test is a perfect opportunity to verify that all the URLs included in your study are functioning properly and accessible to your contributors.
5. Are the screener questions capturing the contributors you need?
When the type of contributor is important to your study, the pilot test is a great way to tell if your screener questions are capturing the right demographic.
Add a question to your pilot test that asks contributors to describe the trait that aligns with the demographic you’re trying to target, such as job title or industry. Continue revising your screener questions until you’re capturing the best demographic for your study.
Making changes to your test plan
Based on your initial pilot test results, you may want to update your test plan. To do so, you can open the Options menu and select Edit test details.
If you found that the contributor wasn’t quite the right fit for your test, review and edit the screener, as appropriate.
If you need to make significant changes to your study, you can also navigate to the Options menu and select Create similar test to revise your existing draft as needed.
Please note that pilot tests count toward any usage limits on your account. If your plan limits the number of tests, we suggest iterating within a single test and then adding additional contributors.
Once you’ve completed a successful pilot test
After you’ve successfully completed a pilot test, all you need to do is add more contributors to your existing study.
Please note: Be sure to run your pilot with each audience you want to include in your actual test later, including each device type; you won't be able to add audiences or devices to your test afterward. If you have to make substantial changes to your pilot, copy the test, make your changes, and relaunch.
From the Options menu, select Add contributors.
If you have multiple audiences, you'll be able to add the desired number of contributors to each audience.
Enter the desired number of contributors, then select Add Contributors to launch the additional sessions.