What is A/B Testing?

At a Glance

In A/B testing, users are presented with one of two versions of a design (an A version or a B version), and the conversion rates for the two versions are compared to see which performs better. A variation of A/B testing is multivariate testing, in which you test three or more versions of a design.

 

How can A/B and multivariate testing help me?

How can UserTesting assist with my A/B testing?

How does A/B testing differ from preference testing?

 

How can A/B and multivariate testing help me?

A/B and multivariate tests compare multiple versions of the same asset (a landing page, a shopping cart, the labeling of a navigation menu), usually on a live site. Such testing (sometimes referred to as “split” testing) is most valuable when only one element differs between the versions. In this type of study, researchers compare the versions to see which is more effective.

For example, suppose you want to compare two versions of a page in your checkout flow. Both pages are identical except for the call-to-action (CTA): Version A displays an "Add to cart" button, while Version B shows a "Buy now" button. An A/B test is quantitative: if visitors who saw Version A completed the checkout flow at a higher rate, you can conclude the difference was due to the CTA.
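
To make that quantitative comparison concrete, here is a minimal sketch of a two-proportion z-test in Python. The conversion counts are invented for this example, and the calculation is a standard statistical test rather than anything the UserTesting Platform runs for you:

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical conversion counts (illustrative numbers only, not real data).
# Half of the visitors saw each version of the checkout page.
conversions_a, visitors_a = 230, 5000   # Version A: "Add to cart"
conversions_b, visitors_b = 275, 5000   # Version B: "Buy now"

rate_a = conversions_a / visitors_a     # 4.6%
rate_b = conversions_b / visitors_b     # 5.5%

# Two-proportion z-test: is the difference in conversion rates larger than
# what random variation alone would explain?
pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
std_err = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
z = (rate_b - rate_a) / std_err
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"Version A: {rate_a:.1%}   Version B: {rate_b:.1%}")
print(f"z = {z:.2f}, p-value = {p_value:.3f}")   # roughly z = 2.05, p = 0.04
```

In this sketch, a p-value below 0.05 would suggest the difference between the two CTAs is unlikely to be due to chance alone.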

How can UserTesting assist with my A/B testing?

UserTesting does not conduct traditional A/B or multivariate testing, which is inherently quantitative. However, we can help inform your current or planned tests in the following ways:

  • Generate ideas for what to test with an A/B or multivariate test: Run a usability test through the UserTesting Platform to quickly collect contributor feedback that identifies weaknesses or areas of confusion on your site. Such feedback can point you to the specific elements that would be best served by A/B or multivariate testing.
  • Illuminate why one design performed better than another during an A/B or multivariate test: An A/B or multivariate test will tell you which version of a site had a higher conversion or success rate. Pairing this numbers-driven data with the qualitative insights found in the test recordings leads to a fuller understanding of why one version outperformed the others.

Your A/B test tells you WHICH version has a higher conversion rate; testing through UserTesting can tell you WHY that version was more successful. 

How does A/B testing differ from preference testing?

Because both methods involve testing two or more options, it’s easy to think that A/B testing and preference testing are interchangeable. But there are some key differences:

  • An A/B test is objective and numbers-driven (quantitative): whether one version performs better is the key measurement, and user preference is not a criterion. A preference test is subjective and driven by user preference (qualitative): it measures which of two or more options a user prefers.

  • In an A/B test, half of the contributors see Design A and the other half see Design B. In a preference test, contributors see both designs and evaluate which one they prefer.

  • An A/B test demands a larger number of contributors to ensure a statistically reliable sample (see the sample-size sketch below this list). A preference test can be conducted with fewer contributors, but the results carry less statistical weight.

  • A/B testing is typically conducted on a live site or finished product. Preference testing can be conducted at any stage of product development and is often most valuable early in the design process.
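
To make the sample-size difference concrete, here is a minimal sketch, in Python, of the standard sample-size estimate for comparing two conversion rates. The baseline rate, target lift, significance level, and power are assumptions chosen purely for illustration:

```python
from math import ceil
from statistics import NormalDist

# Hypothetical planning assumptions (chosen for illustration only):
# a 4.6% baseline conversion rate, a target of detecting an absolute lift
# to 5.5%, a 5% significance level, and 80% statistical power.
p1, p2 = 0.046, 0.055
alpha, power = 0.05, 0.80

z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # about 1.96
z_power = NormalDist().inv_cdf(power)           # about 0.84

# Standard per-group sample-size estimate for comparing two proportions.
n_per_group = ((z_alpha + z_power) ** 2
               * (p1 * (1 - p1) + p2 * (1 - p2))
               / (p1 - p2) ** 2)

print(f"About {ceil(n_per_group):,} visitors per version")   # roughly 9,300 each
```

Because a preference test asks contributors to compare the designs directly, it does not require traffic on this scale; a much smaller panel is usually enough to surface a clear preference.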

To dive deeper into preference testing, see our What Is Preference Testing? article.


