March 14, 2018

Most A/B Tests Don't Produce Significant Results

Experimenting is a crucial part of media and marketing, but many internal tests fail to produce statistically significant results.

About two-thirds of brand marketers use A/B testing to improve conversion rates, according to research by Econsultancy and Red Eye. Marketers also rely on A/B testing to optimize the landing pages of their ads. And publishers use A/B testing to personalize content for users and find headlines and images that drive traffic.

While A/B testing is common among publishers and marketers, most A/B tests fail to produce statistically significant results. According to a survey of 3,900 professionals worldwide by UserTesting, fewer than 20% of respondents reported that their A/B tests produce significant results 80% of the time.
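For readers unfamiliar with the term, a "statistically significant" result is one where the gap between the A and B variants is large enough that it is unlikely to be random noise, commonly judged with a two-proportion test at a 95% confidence level. The Python sketch below illustrates that check with hypothetical conversion numbers; it is not tied to any tool or study mentioned in this article.

    from math import erf, sqrt

    def two_proportion_z_test(conversions_a, visitors_a, conversions_b, visitors_b):
        """Return the z statistic and two-sided p-value for an A/B conversion test."""
        p_a = conversions_a / visitors_a
        p_b = conversions_b / visitors_b
        pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
        std_err = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
        z = (p_b - p_a) / std_err
        p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided normal tail
        return z, p_value

    # Hypothetical test: 200 of 10,000 visitors convert on A, 230 of 10,000 on B
    z, p = two_proportion_z_test(200, 10_000, 230, 10_000)
    print(f"z = {z:.2f}, p = {p:.3f}")  # p is about 0.14, so not significant at 95%

In this illustrative case the B variant converts about 15% better in relative terms, yet the test still falls short of significance, which foreshadows the sample-size issue discussed later in this article.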

[Chart: Percent of Their A/B Tests That Produce a Statistically Significant Result]


A similar analysis by AppSumo concluded that only one of every eight A/B tests leads to a significant change. Although many A/B tests don't produce significant results, it'd be irresponsible of marketers to eliminate A/B testing from their media plans, according to John Donahue, chief product officer of programmatic platform Sonobi.

"The benefits of A/B testing are undeniable," Donahue said. "Developing any creative project, there are a lot of assumptions; A/B testing allows you to remove those assumptions."

In some instances, A/B testing call-to-action features and ad headlines can save marketers 40% of their media budget on ad platforms like Facebook, according to Donahue. Part of the reason the listicle publisher Ranker is able to make money buying traffic from Facebook is that it frequently tests which audience targets it can reach at a low price.

Of course, it'd be unrealistic to expect every A/B test to produce meaningful results. And similar to how scientists learn from their failed experiments, marketers can learn from A/B tests that didn't yield anything.

While some tests fail due to bad design, another reason many A/B tests don't produce significant results is that the sample of traffic powering the test isn't large enough to provide conclusive evidence, according to Mani Gandham, CEO of content marketing company Instinctive. Gandham said that if the size of the test's sample isn't properly put into context, marketers can end up with experiments that "result in rather fuzzy results, and a tiny relative difference in performance can easily be mistaken for a clear signal."
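Gandham's point can be made concrete with a standard sample-size calculation: the smaller the relative lift a marketer hopes to detect, the more visitors each variant needs before a difference can be trusted. The sketch below uses the common 95% confidence / 80% power rule of thumb; the 2% baseline conversion rate and the lifts are illustrative assumptions, not figures from the article.

    from math import ceil

    Z_ALPHA = 1.96  # 95% confidence, two-sided
    Z_BETA = 0.84   # 80% power

    def visitors_per_variant(baseline_rate, relative_lift):
        """Approximate visitors needed in each variant to detect the given relative lift."""
        p1 = baseline_rate
        p2 = baseline_rate * (1 + relative_lift)
        p_bar = (p1 + p2) / 2
        # Classic two-proportion sample-size approximation
        n = ((Z_ALPHA + Z_BETA) ** 2 * 2 * p_bar * (1 - p_bar)) / (p2 - p1) ** 2
        return ceil(n)

    for lift in (0.30, 0.10, 0.05):
        needed = visitors_per_variant(0.02, lift)
        print(f"Detecting a {lift:.0%} lift needs roughly {needed:,} visitors per variant")

Halving the detectable lift roughly quadruples the required traffic, which is why underpowered tests so often produce the "fuzzy" results Gandham describes.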


Copyright 2018 eMarketer Inc. All rights reserved. From https://www.emarketer.com. By Ross Benes.
