At a Glance
Below is an alphabetical list of different ways you can collect insights, types of tests, and research methods. Most of these methods are covered in the Use Case Overview University training course. Additional resources specific to each method are also listed below.
Benchmarking tests focus on understanding and measuring the baseline experience with your design or service so that you can compare future designs against that benchmark. Run a benchmark test on your current experience, then run the same test after you’ve updated the design or when you have a competitor’s design to compare against. Use consistent metrics (such as rating scales or explicit success criteria) so you can make direct comparisons against the benchmark. It’s critical that each test within a benchmark study use the same tasks and metrics; this is what captures a true measurement of design progress over time.
Card sort tests help you understand how respondents would group pieces of content and how they would label those groups. Run a card sort to discover how people understand and categorize information. Doing so will help you reorganize your content. The content organization you design after a card sort can be tested via a tree test.
Comparison test (Preference test, A/B test, Experiment)
Comparison tests provide feedback on one design as compared to other designs.
Run a comparison test during discovery to compare your design against competitor designs. Early in your design process, run a comparison test to compare multiple design options before carrying one forward; later in your design process, use it to compare an updated design to your current design. Some comparison tests may not produce one “winning” design but still provide feedback on the relative strengths and weaknesses of the different designs.
As a follow-up to initial comparison tests, preference tests, A/B tests, and other experiments can be run with a statistically significant sample size to decide on a winning design, though these take more time and effort to execute.
Competitor tests involve running any of the types of tests listed in this document on a competitor’s design, rather than your own design. The key is to understand enough of the competitor’s design to create tasks and questions that apply to that design, while also learning information that will help you improve your own designs or services.
Concept tests help you collect insights on an idea before you build out too much detail. Run a concept test when you have an idea for a design that you can articulate in words, via a short video, or using a non-interactive illustration of a design. Use concept tests to confirm your design direction—or adjust it—before you’ve invested too much time or budget into your solution. (The concept stage is a great time to run Comparison tests, so also see Comparison test above.)
Content tests focus on collecting feedback regarding the content of your design, rather than the navigation or interaction. Run a content test any time you have a non-interactive illustration or words (such as taglines or email subject lines) that you want feedback on.
Discovery interviews allow you to learn about a customer’s background, goals, expectations, and attitudes about current experiences. Run discovery interviews when you are early in the process of defining a product or service or when you need to understand opportunities to fill a gap in needs that customers are currently experiencing.
Ethnography study (light ethnography, customer environment and context)
Ethnography studies allow you to deeply understand users and their environment. “Light” ethnography is the term for doing limited ethnographic research to gain insights without embarking on a full project of extended site visits. Run these studies when you need to understand the environmental factors that influence your customers, such as their physical environment, its constraints, and the people in it (for example, learn how having children or older adults in the home affects the technology they use).
First impression test/first-click test
First impression tests help you get insight into users’ initial reactions to a design. Run a first impression test as the first step in a longer user research activity in order to determine if contributors understand what a design is, who would use it, and what they would use it for. “Who do you think this is designed for?” is a great question to ask in a first impression test.
A first-click test assesses where a contributor would initially click to complete a task.
Longitudinal study (e.g., diary study, multi-touchpoint study)
A longitudinal study is any study in which you interact with the same contributor over a period of time, such as running a test with a contributor when they first download a piece of software and then running a weekly test with that same contributor during their first month of using it. Run longitudinal studies to discover users’ experiences with existing products before you define a new service or product, or when you launch a new product and need to understand how it is used in the real world over time.
Needs assessment studies allow customers to articulate the gaps in the service or product they currently use, or to rank possible features based on how well those features are expected to meet their needs. Run a needs assessment to discover user needs early in the process of defining new offerings or features. Also run a needs assessment when you have identified new offerings or features and need feedback from users about how they expect them to work and which are the highest priorities.
Omnichannel studies collect insights from customers across different modes of interaction. For example, an omnichannel study may collect insights on the customer’s experience researching and purchasing a product online, checking the status of that purchase on their smartphone, and receiving that product in their home. Run omnichannel studies when you have an experience that spans multiple modes of interaction as contributors accomplish a single goal.
Surveys collect feedback from contributors indirectly via a form they fill out on their own, primarily through closed-ended questions but occasionally including open-ended ones. Surveys are usually sent to a statistically significant number of contributors, but they should first be piloted qualitatively by running an unmoderated test in which you record a small number of contributors’ experiences as they follow the link to the survey and complete it. Run a survey when you want to collect a large amount of background information about contributors or want to collect their ratings of an experience, and always pilot the survey before sending it out to the full number of contributors.
Tree tests help you understand if contributors can find content in your design by navigating a text-only version of your content organization. Run a tree test when you want to understand if there are problems with your current content organization, or after you’ve reorganized content based on a card sort.
Usability test (Prototype test, Live site test, Navigation test)
Usability testing collects insights on the aspects of your designs that make accomplishing goals easy or difficult. Run usability tests throughout your product development life cycle—early on with low-fidelity prototypes, during the design and development phases with higher-fidelity prototypes, and with your live sites and apps. A usability test will provide feedback on navigation, interaction, content, and visual design.