A/B Testing — Tips for Success

Aman Gupta
4 min read · May 17, 2021

This article assumes that you have a basic understanding of A/B Testing and statistical tests. Here we will discuss some tips to ensure the success of A/B Tests.


A/B testing (also known as split testing or bucket testing) is a method of comparing two versions of a webpage or app against each other to determine which one performs better. A/B testing is essentially an experiment where two or more variants of a page are shown to users at random, and statistical analysis is used to determine which variation performs better for a given conversion goal.
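To make the "statistical analysis" step concrete, here is a minimal sketch of how the conversion rates of a control (A) and a variant (B) could be compared with a two-proportion z-test. The function name and the counts are illustrative assumptions, not data from any real test.

```python
# Minimal sketch: comparing two conversion rates with a two-proportion z-test.
# The counts below are made-up illustration data.
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return the z statistic and two-sided p-value for the difference
    between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

z, p = two_proportion_z_test(conv_a=480, n_a=10_000, conv_b=560, n_b=10_000)
print(f"z = {z:.2f}, p-value = {p:.4f}")  # declare B the winner only if p < alpha
```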

A/B Testing Terminology

  1. Variant: A variant is any new version of a landing page you include in your A/B test. Though you’ll have at least two variants in your A/B test, you can conduct these experiments with as many pages as desired.
  2. Champion: You can think about A/B testing like gladiatorial combat. Two (or more) variants enter, but only one page leaves. This winner (the page with the best conversion performance, typically) is crowned the champion variant.
  3. Challenger: When starting a test, you create new versions (variants) to challenge your existing champion page. These are called challengers. If a challenger outperforms all other variants, it becomes the new champion.

Tips for Success

I will share some suggestions for running A/B tests successfully. These are purely based on my personal experience, and you may have different perceptions and experiences. Let’s begin:

  1. High confidence level — Try to get as close to a 99% confidence level as possible to minimize the probability of reaching the wrong conclusion.
  2. Be patient — Don’t jump to conclusions too soon, or you’ll end up with premature results that can backfire. Stop peeking at the data as well! Wait until the predefined sample size is reached (a sketch of how that size can be computed follows this list).
  3. Run continuous or prolonged tests for additional validations — If you don’t trust the results and want to rule out any potential errors to the test validity, try running the experiment for a longer period of time. You’ll get a larger sample size, which will boost your statistical power.
  4. Run an A/A test — Run a test with two identically segmented groups exposed to the same variation. In almost all cases, if one of the variations wins with high statistical confidence, it hints that something may be technically wrong with the test setup.
  5. Get a larger sample size or fewer variations — If you can run the test on a larger sample, or split your traffic across fewer variations so that each one gets more visitors, you will get higher statistical power, which leads to more accurate and more reliable results.
  6. Test noticeable changes — Testing minor changes to elements on your site may leave you far from any statistically significant conclusion. Even if you’re running a high-traffic site, test prominent changes.
  7. Don’t jump into behavioral causation conclusions — As marketers, we often base decisions on our intuition regarding the psychology of the visitor. A/B testing comes in to help us rely a bit less on our instincts and a bit more on concrete evidence.
  8. Don’t believe everything you read — Although reading case studies and peer testing recommendations is great fun, find out what really works for you. Test for yourself. Remember that published statistics sometimes tend to be over-optimistic and not representative.
  9. Keep your expectations real — More often than not, following the end of a successful A/B test, there is an observed drop in the performance of the winning variation (a regression to the mean). So, to avoid drawing the wrong conclusions, lower your expectations once a test is over.
  10. Test continuously and never stop thinking and learning — The environment is dynamic, and so should your ideas be. Evolve and think forward! Remember that the downside of all traditional A/B testing tools is that, eventually, these tools direct you to make static changes to your site, which may or may not fit all of your potential users in the long run.
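As a companion to tips 1, 2, and 5, here is a rough sketch of how the predefined sample size per variant could be computed up front from a desired confidence level, statistical power, baseline conversion rate, and minimum detectable effect. It uses the standard two-proportion approximation; the specific rates and thresholds are assumptions for illustration only.

```python
# Rough sketch: approximate sample size per variant for a two-proportion test,
# chosen before the experiment starts. Baseline rate and minimum detectable
# effect below are illustrative assumptions.
from statistics import NormalDist

def sample_size_per_variant(p_baseline, min_detectable_effect,
                            confidence=0.99, power=0.80):
    """Approximate number of users needed in each group."""
    p1 = p_baseline
    p2 = p_baseline + min_detectable_effect
    z_alpha = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return int(((z_alpha + z_beta) ** 2 * variance) / (p2 - p1) ** 2) + 1

# e.g. baseline conversion of 5%, detecting an absolute lift of 1 point
print(sample_size_per_variant(0.05, 0.01, confidence=0.99))
```

Note how the required sample grows as the confidence level rises or the detectable effect shrinks, which is exactly why small changes on low-traffic pages rarely reach significance.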

Clearing the Air

A/B Testing is sometimes also called Split Testing. For most purposes the two terms can be used interchangeably, but there are nuances to each that need to be understood:
A/B refers to the two web pages or website variations that are competing against each other.
Split refers to the fact that the traffic is equally split between the existing variations.

Like A/B testing, split testing can evaluate small changes to a single website element (such as a different image, header, call to action, button color, signup form, etc.) or be run between two completely different styles of design and web content.
All available users will be split into groups (without their knowledge) and half of them will see the original version (the control) while the other half will see a new version (the variation). Split tests are typically conducted on landing pages or product pages (if you’re an ecommerce company), though you can split test any page on your website. Once the test has reached a statistically significant sample size, the design and optimization team will investigate differences in behavior and declare a winner (or an inconclusive test result if no measurable differences are obtained).
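For illustration, here is a minimal sketch of one common way the 50/50 split can be done deterministically, by hashing each user ID together with an experiment name so that the same user always lands in the same group without knowing it. The experiment name and user IDs are hypothetical.

```python
# Minimal sketch: deterministic 50/50 bucketing of users into control/variation.
# Experiment name and user IDs are hypothetical.
import hashlib

def assign_bucket(user_id: str, experiment: str = "landing-page-test") -> str:
    """Hash the user id with the experiment name and split the space in half."""
    digest = hashlib.md5(f"{experiment}:{user_id}".encode()).hexdigest()
    return "control" if int(digest, 16) % 2 == 0 else "variation"

for uid in ["user-101", "user-102", "user-103"]:
    print(uid, assign_bucket(uid))
```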

I hope I was able to provide crisp suggestions to ensure the effectiveness and success of your A/B Tests. Please feel free to share your experiences and suggestions in the comments, to help the readers get a broader perspective on this amazing statistical technique.

Thanks for reading! Stay Safe!


Aman Gupta

A pantomath and a former entrepreneur. Currently, he is in a harmonious and symbiotic relationship with Data.