Conducting a Pilot Test

At a Glance

Running a pilot test allows you to "test the test" before releasing it to all your contributors. This article describes how to create a pilot test and explains why it's important.

 

Testing out your script

Five things to check for in a pilot test

Making changes to your test plan

Adding more contributors

Learn More

It can be easy for a contributor to get off track when the tasks aren't written clearly. A misunderstood phrase or an overlooked question can sometimes derail a contributor in completely unexpected ways—and if they are testing in a remote, unmoderated environment, there's no way to get them back on track other than by writing a solid test plan in the first place.

Testing out your script

UserTesting's Research Team has learned that one of the key ingredients of a great study is performing a pilot test. In a pilot test, just one contributor goes through the test plan, and then the team watches the video, noting any challenges the contributor encountered or ways the script could be improved.

A successful pilot test is one in which:

  • The contributor answered all of the questions.
  • The script didn't accidentally use confusing terms or jargon that made the contributor stumble.
  • The contributor evaluated the correct areas of the webpage, app, or prototype.

If any of these criteria aren't met, alter the test as needed and try it again with another contributor. Continue iterating on your test script until contributors can successfully complete the test and you collect the feedback you need.

Note: If you need to change the task order or remove unnecessary tasks, we advise creating a similar test so that your Metrics tab continues to report results accurately.

Five things to check for in a pilot test

1. Do your tasks and questions make sense to contributors?

When collecting remote feedback, ultra-clear communication is important. A poorly phrased task can create stress for the contributor, making them less able to complete the task and compromising the value of your research.

While watching a contributor's video, focus on how they read the tasks and execute instructions:

  • Do they understand all the terminology used?
  • Are they providing verbal feedback that directly answers your questions?
  • Is there ever a time when you wanted them to demonstrate something, but they only discussed it?

Often, a simple edit to your tasks can keep contributors on track. If they misunderstand your terms, questions, or assignments, rephrase them until they can be easily understood.


2. Can contributors adequately answer your questions?

Some questions may have multiple layers, which can result in contributors answering part of the question and forgetting to answer the rest. Running a pilot test with one person will quickly identify any tasks or questions that may need to be broken up.

Here's what to look for. Do the contributors...

  • Provide answers that contain a sufficient amount of detail?
  • Present quality feedback (both in terms of content and sound quality) that can be clipped and shared with others?
  • Adequately address your goals and objectives for the study?

If not, consider breaking up these questions into individual tasks before launching the full study.

3. Are contributors able to complete all required steps (e.g., logging in to a specific account, interacting with the correct pages)?

There's nothing worse than sitting down to watch contributor recordings, only to discover that all of the contributors explored the live site when they should've been reviewing a prototype, or that they couldn't log in because of a glitch in your app. Oftentimes, adding an extra sentence of instructions, or inserting a URL where contributors expect to find it on the site or app, can make all the difference.

4. Are all the links in the script functioning properly?

A pilot test is a perfect opportunity to verify that all the URLs included in your study are functioning properly and accessible to your contributors.

5. Are the screener questions landing the contributors you need?

When the type of contributor is important to your study, the pilot test is a great way to tell whether your screener questions capture the right demographic group.

Add a question to your pilot test that asks contributors to describe the trait that aligns with the demographic group you're targeting, such as a job title or an industry. Continue revising your screener questions until you've recruited the best group for your study.

Making changes to your test plan

Based on your initial pilot test results, you may just need to update a word or two in your test plan. To do so, open the Actions menu on your test results page and select Edit test details.


If you found that the contributor wasn't the right fit for your test, review and edit the screener as needed. (Go to the Sessions tab and click Edit Screener.)

If you need to make significant changes to your study, such as changing the task order or removing unnecessary tasks, navigate to the Actions menu and select Create similar test to revise your existing draft as needed. Creating a similar test when making significant changes to your test plan will help ensure that your Metrics tab accurately captures test results.


Please note that pilot tests will count towards any usage limits on your account. If you are on a plan that limits the number of tests, we suggest reusing a single test and adding additional contributors to it rather than creating new tests. However, if you need to add or remove tasks, we recommend creating a similar test so that your Metrics tab shows the results accurately.

Adding more contributors

After successfully conducting a pilot test, simply add more contributors to the existing study.

Please note: If you have multiple audiences in your test, be sure to run the pilot test with each of those audiences; this includes audiences distinguished by the type of device (e.g., desktop, smartphone) they'll be asked to test with. You won't be able to add additional audiences or devices to your test. In addition, the demographic requirements cannot be changed, only the screener questions. If you have to make substantial changes to your pilot, copy the test, make your changes, and relaunch another pilot test.

To add more contributors, go to the same Actions menu and select Add contributors.

If you have multiple audiences, use the drop-down to select the number of contributors you wish to add to each audience.


Select Add Contributors to launch the additional sessions. 

 

Learn More

Want to learn more about this topic? Check out our University courses.
