Conducting a Pilot Test

Anyone who’s done some user research knows how easily a participant can stumble off track if the tasks aren’t written well. A misunderstood phrase or an overlooked question can derail a participant in completely unexpected ways, and in a remote, unmoderated environment there’s no way to get them back on track other than by writing a solid test plan in the first place.

Testing out your script

The UserTesting Research Team has learned that one of the key ingredients of a great study is performing a pilot test. In a pilot test, just one participant goes through the test plan, and then the team watches the video, noting any challenges the participant encountered or ways the script could be improved.

A successful pilot test is one in which:

  • The participant answered all of the questions
  • The script didn’t accidentally use confusing terms or jargon that made the participant stumble
  • The participant evaluated the correct areas of the web page, app, or prototype

If any of the above isn’t true, alter the study as needed and try it again with another participant. Continue iterating on your study script until participants can successfully complete the study.

5 things to check for in a pilot test

1. Do your tasks and questions make sense to participants?

In a remote usability study, ultra-clear communication is essential. A poorly phrased task can create stress for the participant, diminishing their ability to complete the task and compromising the value of your research.

While you watch the video, focus on how the participant reads the tasks and instructions.

  • Do they understand all the terminology used?
  • Are they providing verbal feedback that directly answers your questions?
  • Is there ever a time when you wanted them to demonstrate something, but they only discussed it?

Often, a simple edit to your tasks can keep participants on track. If they misunderstand your terms, questions, or assignments, rephrase them until they’re easily understood.


2. Can the participant adequately answer your questions?

Many questions have multiple layers, which can lead participants to answer part of the question and forget the rest. A pilot test will quickly reveal any tasks or questions that need to be broken up.

Here’s what to look for:

  • Do the participant’s answers provide enough detail?
  • Does the participant provide quality sound bites that can be clipped and shared with others?
  • Does the participant adequately address your goals and objectives for the study?

If not, consider breaking up these questions into individual tasks before launching the full study.

3. Can the participant complete all required steps, like logging in to a specific account or interacting with the right pages?

There’s nothing worse than sitting down to watch your recordings, only to discover that all the participants checked out the live site instead of the prototype, or couldn’t log in because of a glitch in your app. Often, just an extra sentence or a well-placed URL can make all the difference.

4. Are all links in the script functioning properly?

A dry run is a perfect opportunity to verify that all the URLs included in your study are functioning properly.
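
If your study includes more than a couple of links, it can also help to check them programmatically before recruiting your pilot participant. Below is a minimal sketch in Python that flags any URL that fails to load; the URLs shown are placeholders, so substitute the links from your own test plan.

    # Minimal sketch: confirm every URL in a test plan loads before the pilot launches.
    # The URLs below are placeholders; replace them with the links from your own script.
    import urllib.error
    import urllib.request

    test_plan_urls = [
        "https://www.example.com/prototype",   # placeholder prototype link
        "https://www.example.com/login",       # placeholder login page
    ]

    for url in test_plan_urls:
        try:
            with urllib.request.urlopen(url, timeout=10) as response:
                print(f"OK ({response.status}): {url}")
        except (urllib.error.URLError, ValueError) as error:
            print(f"FAIL: {url} ({error})")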

5. Are the screener questions capturing the participants you need?

When the type of participant is important to your study, the pilot test is a great way to tell if your screener questions are capturing the right demographic.

Add a question to your pilot test that asks participants to describe the trait that aligns with the demographic you’re trying to target, such as their job title or industry.
Continue revising your screener questions until you’re capturing the right demographic for your study.

Making changes to your test plan

Based on your initial pilot test results, you may want to update your test plan. To do so, open the Options menu and select Edit test details.

 

If you found that the participant wasn’t quite the right fit for your test, review and edit the screener, as appropriate.


If you have an Unlimited seat license and you need to make significant changes to your study, you can also navigate to the Options menu and select Create similar test to revise your existing draft as needed. 

Please note that pilot tests count toward your quarterly study usage limits. If you are on a plan that includes quarterly study limits, we suggest iterating on the pilot study and then adding additional participants to it so you don’t consume study usage unnecessarily.



Once you’ve completed a successful pilot test

After you’ve successfully completed a pilot test, all you need to do is add more participants to your existing study.

From the Options menu, select Add more participants.



If you have multiple audiences, you'll be able to add the desired number of participants to each audience.

 



Enter the desired number of participants, then select Add Participants to launch the additional sessions.

Related Resources
Running Pilot Tests (UserTesting University)
Master Class: Test Creation (UserTesting University)

 
