Writing Great Tasks

At a Glance

A task is an action or activity that you want your user to accomplish. This article will explain the differences between open-ended and specific tasks, and when to use each one.

Any successful user-experience study needs a well-constructed test plan that guides users through its tasks.

Designing your test plan isn’t always easy. There are no hard-and-fast rules—how you write your tasks will depend on research goals specific to your team. Sometimes, it’s best to leave tasks open-ended; in other situations, you need to be more specific.

Open-ended tasks

Open-ended tasks let you see how contributors explore or respond to a question when they've been given little direction and little to no context (such as a scenario). Answers to these tasks vary widely, but they offer an unvarnished view into contributors' experiences and thoughts.


Imagine that you're testing a fitness app. Here's an example of how an open-ended task might be worded:

  • "Please explore the app as you normally would and provide your first impressions. Spend no more than three minutes on this task." 

When to use open-ended tasks

Use such tasks when you want to…

  • Understand what users are thinking. Open-ended questions/tasks allow contributors to fully explore and describe a genuine experience.  
  • Conduct exploratory research. Open-ended tasks can help you figure out how people actually use your product or service. Exploratory research often uncovers areas of interest that can be studied further, in a more targeted follow-up test. 
  • Identify usability issues. A test featuring open-ended tasks can reveal pain points or areas of friction with the product or prototype that you may not be aware of. 

Potential pitfalls when using open-ended tasks

Things to keep an eye out for when using open-ended tasks include…

  • Not having a clearly defined test objective. When using open-ended tasks, be sure you have a clear test objective and that the tasks you devise support it. Without a well-defined goal, analyzing your test results will likely be more difficult and time-consuming. 
  • Contributors falling silent. Keep contributors talking while they're performing open-ended tasks. They can easily forget to speak their thoughts out loud as they explore, so remind them to explain why they're doing what they're doing.  

Specific tasks

Specific tasks tell contributors which actions to take and which features to comment on out loud. Gear this type of task toward the exact issues you're interested in investigating.


Returning to the same fitness-app scenario described above, here’s an example of how a specific task might be worded:

  • “Open up the heart-rate tracking feature and try to measure your heart rate.”

When to use specific tasks

Here are some scenarios when specific tasks are especially useful:

  • Testing particular features: Specific tasks are ideal for testing the usability of a certain feature or area of your product. Another example of such instructions would be: “Please use the Search bar to find a pair of Size 11 men’s black dress shoes.”

  • Dealing with complex products or scenarios: If you have a very specific product concern or are investigating a complex scenario, specific tasks can provide context and instruct contributors on how to use the product.

  • Optimizing a specific flow (e.g., a shopping funnel): If people are abandoning the site or app, specific tasks can reveal how contributors behave at each stage of the flow, what they are thinking, and how they engage with the product. This will help you better understand why they are leaving.

Potential pitfalls when using specific tasks

  • Giving overly specific instructions: Even when providing specific tasks, you need to strike a balance between guiding contributors and letting them experience the product or prototype on their own. Telling them every single thing to do means you won't learn much, so be judicious with any hand-holding.

Using open-ended and specific tasks together

In one of our University lessons, we discuss how quantitative and qualitative data can be combined to generate meaningful insights. Likewise, using both open-ended and specific tasks will produce more well-rounded, actionable feedback and a more engaging testing experience for contributors.

A best practice when crafting a test plan is to structure your questions like a funnel, working from general to specific. Starting with open-ended tasks and then drilling down to more targeted questions ensures that you capture the information called for in your research objectives. For example, you might ask about first impressions and then follow up with a targeted question about whether contributors noticed a particular feature and what they thought of it.

Learn More

Need more information? Read these related articles.

Want to learn more about this topic? Check out our University courses.
