At a Glance
This article explains when to use card sorts and tree tests and how best to leverage them in your studies. Use card sorts when you want to understand how people categorize information. Conduct tree tests to evaluate how users navigate and find information in a given site structure.
Deciding Between a Card Sort and a Tree Test
What to Consider for Your Card Sort or Tree Test
Best Practices for Card Sorting
Best Practices for Tree Testing
Deciding Between a Card Sort and a Tree Test
As you plan your test, think about the types of questions and tasks you want contributors to complete. Card sorts and tree tests are two methods that help you understand how users categorize information and make sense of its structure. But when should you use a card sort versus a tree test? Here's what you should know.
Card sorting is a test that helps you discover how people understand and categorize information. In a card sort, contributors sort "cards" containing different items into groups. Card sorting has many applications, from figuring out how content should be grouped on a website or in an app to deciding how to arrange the items in a retail store.
Conduct a card sort when you want to...
- Inform or evaluate a site's information architecture (i.e., the structural design of a website or information environment).
- Know if your terminology resonates with users.
- Get ideas as to how something should be labeled.
- Understand how content should be organized.
Tree testing is a test that helps you evaluate the findability of topics on a website. Tree tests are run on text-based versions of websites without navigation aids or design elements (similar to a sitemap). Contributors are asked to indicate where they would find specific items or topics, helping you evaluate the ease of locating content in a given structure.
Conduct a tree test when you wish to...
- Inform or evaluate a site's navigation and information architecture.
- Identify current issues with your site structure and provide data to compare any improvements.
- Get feedback on different versions of a proposed site structure.
- Understand whether your site's terminology resonates with your users.
What to Consider for Your Card Sort or Tree Test
What Are You Trying to Learn?
As with all tests, ask yourself what you hope to learn from contributors who take your test. Are users confused by your current labeling system? How easily do users locate specific information on your website? Develop a hypothesis about what you're trying to learn and use your card sort or tree test to evaluate it.
Finding the Right Contributors
A crucial part of finding the right contributors for your card sort or tree test is writing screener questions that don't exclude potentially valuable contributors.
For tips on framing your screeners, read our "Screener Questions: Best Practices" Knowledgebase article and dive deeper with our University's Best Practices for Screeners course.
Best Practices for Card Sorting
Choosing the Right Test Type (Open, Closed, or Hybrid)
You have two ways you can run a card sort with UserTesting: 1) the classic card sorting app, or 2) the integrated card sorting tool in the UserTesting Platform.
From here, you can run different card sorts depending on what you want to know. These types are...
- Open card sorts: Users place items (cards) into groups and name the categories themselves. This approach is typically used in the early stages of the development cycle. It allows you to capture users' mental models (i.e., their thought processes) for the information architecture.
- Closed card sorts: Users are given both items (cards) and pre-labeled categories. They then sort the cards into those established categories. This approach is typically used to validate an existing information architecture or to re-categorize content within it.
- Hybrid card sorts: Users are given items (cards) and categories that are already labeled, but they can create their own category labels as well.
Consider using these card sort types during these different stages in your development process:
- Discovery: An open card sort helps inform how contributors might understand and categorize information in a new design.
- Build and Design: A closed card sort helps evaluate how the proposed categorization works with design iterations.
- Optimize: A hybrid card sort helps test live designs to capture opportunities for improvement, especially if new elements were added to the existing categorizations.
How Many Contributors Do I Need?
We suggest a sample size of 30-50 contributors. To learn more, read our article about card sorting with UserTesting.
Setting Up Your Test
When you recruit your contributors, inform them in the Other requirements field that this is not a typical usability test and explain what they'll be required to do. It is also essential to give contributors enough time to complete the exercise so that they don't feel rushed to sort the cards.
Avoid similar terms in your cards and categories, which can create bias. For example, are you asking where garlic butter sauce goes when you already have a category called Sauce? People want to match like items, so items with similar terminology will bias contributor answers.
Don't frustrate contributors by giving them 500 cards to sort at a time. Most card sorts have 20–60 cards for contributors to sort. A good rule of thumb is the "30/30 rule"—about 30 contributors per group and 30 cards.
The number of categories for a closed or hybrid card sort is tricky. The main point to consider is that you don't want so many categories that sorting the cards becomes overly challenging. Many card sorts have between four and six categories. For hybrid or open card sorts, you will merge categories in your analysis to uncover common themes.
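If you're wondering what that merging step might look like in practice, here is a minimal sketch (the data and labels are hypothetical, not an export format from the UserTesting Platform). It counts how often contributors placed each pair of cards in the same group, which is a common first step for spotting categories that could be consolidated:

```python
from collections import Counter
from itertools import combinations

# Hypothetical open card sort results: one entry per contributor, mapping each
# category label they created to the cards they placed in it.
results = [
    {"Dips": ["garlic butter", "marinara"], "Sides": ["breadsticks", "salad"]},
    {"Sauces": ["garlic butter", "marinara"], "Extras": ["breadsticks", "salad"]},
    {"Toppings": ["marinara"], "Sides": ["breadsticks", "salad", "garlic butter"]},
]

# Count how often each pair of cards landed in the same group.
pair_counts = Counter()
for contributor in results:
    for cards in contributor.values():
        for pair in combinations(sorted(cards), 2):
            pair_counts[pair] += 1

# Pairs grouped together by most contributors are candidates for a merged category.
for (card_a, card_b), count in pair_counts.most_common():
    print(f"{card_a} + {card_b}: grouped together by {count} of {len(results)} contributors")
```

Cards that most contributors group together are good candidates to live under a single category in your final structure.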
To help you learn the "why" behind the "what," consider asking the contributors which cards were tough to sort and which categories, labels, or cards were unclear/confusing to help develop your insights further.
Note: Card sort tasks and Tree test tasks can't be used inside a Balanced Comparison group.
Best Practices for Tree Testing
Types of Tree Tests
You have two ways you can run a tree test with UserTesting: 1) the classic tree testing app, or 2) the integrated tree testing tool in the UserTesting Platform.
From there, you have two main approaches to take when conducting a tree test.
- Evaluative tree test: Give contributors several tasks to understand the success of a specific navigation structure.
- Comparative tree test: Give contributors a set of tasks and compare the results against an alternative or updated navigation structure.
Tree tests are most often conducted early in the research phase (the “Design and Build” stage) of the development lifecycle, as soon as the basic outline of the navigational structure becomes clear. Tree tests during this phase help evaluate the current design and, in turn, inform subsequent updates to that design.
But tree tests can also be executed in the “Optimize” stage, after the design has been finalized, helping you determine just how effective the implemented design is.
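To give a sense of how a comparative analysis might look once results come in, here is a minimal sketch in Python (the field names and data are hypothetical, not the platform's export format). It tallies per-task success rates for each version of the tree so the two structures can be compared side by side:

```python
from collections import defaultdict

# Hypothetical tree test results: which version of the tree each contributor saw,
# the task they attempted, and whether their final selection was correct.
results = [
    {"tree": "Current nav", "task": "Find store hours", "success": True},
    {"tree": "Current nav", "task": "Find store hours", "success": False},
    {"tree": "Proposed nav", "task": "Find store hours", "success": True},
    {"tree": "Proposed nav", "task": "Find store hours", "success": True},
]

# Tally successes and attempts per (tree, task) so the structures can be compared.
tallies = defaultdict(lambda: [0, 0])  # [successes, attempts]
for row in results:
    key = (row["tree"], row["task"])
    tallies[key][0] += row["success"]
    tallies[key][1] += 1

for (tree, task), (successes, attempts) in sorted(tallies.items()):
    print(f"{tree} | {task}: {successes}/{attempts} succeeded ({successes / attempts:.0%})")
```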
How Many Contributors Do I Need?
We suggest a sample size of 30-50 contributors. To learn more, read our article about tree testing with UserTesting.
Setting Up Your Test
When you recruit your contributors, inform them in the Other requirements field that this is not a typical usability test and what they'll be required to do. It is also important that you provide contributors enough time to complete this exercise so that they don't feel rushed to find the content.
For most tree tests, we recommend asking contributors to complete around five to seven tasks in the session. However, you can be flexible if a complex tree calls for more.
It is recommended that you test 2–4 tiers of your navigation. This should be sufficient to get reliable results and reveal the different ways contributors might locate information on your site or app. While more complex tree tests can sometimes involve going deeper into the navigation structure, be aware that anything more than four tiers risks confusing or tiring out your contributors.
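As a quick illustration of what "tiers" means here, the tree in a tree test is just a text-only hierarchy. The sketch below (with made-up labels) shows one way to represent such a tree and check its depth before you launch:

```python
# Hypothetical navigation tree: each key is a label, each value is its sub-tree
# (an empty dict means the label is a leaf page).
nav_tree = {
    "Products": {
        "Pizza": {"Classic": {}, "Specialty": {}},
        "Sides": {"Breadsticks": {}, "Salads": {}},
    },
    "Locations": {"Find a Store": {}, "Store Hours": {}},
    "About Us": {},
}

def depth(tree: dict) -> int:
    """Return the number of tiers in the tree."""
    if not tree:
        return 0
    return 1 + max(depth(subtree) for subtree in tree.values())

tiers = depth(nav_tree)
print(f"This tree has {tiers} tiers.")
if tiers > 4:
    print("Consider trimming: more than four tiers risks tiring out contributors.")
```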
The length of the session will depend on the number of tasks you want the contributors to complete and the complexity of the tree. As a best practice, launch a pilot test first to evaluate the time it takes for a contributor to complete the session and then set expectations for the remainder of the contributors.
To help you learn the "why" behind the "what," consider asking the contributors probing questions about their experience to get a deeper understanding of what content might have been challenging or confusing to find.
Learn More
Need more information? Read these related articles.
Want to learn more about this topic? Check out our University courses.