
5 A/B Testing Pitfalls Marketers Often Fall Into

Chad White
October 15, 2020

A/B testing allows marketers to determine whether their audience prefers version A of something or version B, and thankfully, it’s easy to do. However, it seems so easy that marketers sometimes don’t realize they’ve invalidated their results, missed out on golden opportunities, or—worst of all—confidently come to the wrong conclusions.

The truth is that A/B testing is only powerful if done right, and that means avoiding the many pitfalls that can undermine your testing. Here are 5 pitfalls to avoid.

Misunderstanding whether you're testing to learn or testing to win

Both testing to learn and testing to win have an important place in a marketer's arsenal, but they call for different approaches to setup and analysis. When testing to learn, you test themes, tactics, layouts, and other elements whose results you can apply broadly to future campaigns. Examples include testing whether your audience responds better to lifestyle images or product images, or to dollar-off or percentage-off offers.

However, when testing to win, the results of the campaign are usually only applicable to that campaign, Wade Hobbs, Senior Strategy Consultant for Strategic & Analytic Services at Oracle CX Marketing Consulting, said in a recent Oracle Modern Marketing Blog article. “This type of testing is perfect for bigger campaigns—think Black Friday, product launches, and other high-priority sends—where an incremental lift of 10% to 20% in open or click-through rate can be material,” he said.

Misunderstanding whether you're testing to find a new local maximum or global maximum

Are you looking for an incremental improvement or a paradigm shift? That's the difference between seeking a new local maximum and a new global maximum, respectively.

For instance, testing different colors for your call-to-action button could help you find a new local maximum, as you're making just a small tweak to your approach. But testing short-form versus long-form copy could help you find a new global maximum, since you're radically changing the experience. The former is a low-risk, low-reward scenario, while the latter is a high-risk, high-reward scenario. Both have their place, but don't make small changes and expect radically superior results.

Only testing one-time campaigns

A/B testing automated email, SMS, push notifications, and other digital marketing campaigns can be significantly more involved than testing one-time campaigns, but the payoffs are much greater because the ROIs of triggered campaigns are generally much higher than those of one-offs. However, according to Litmus's 2019 State of Email Workflows report, 37% of brands rarely or never test their broadcast or segmented email campaigns; that figure rises to 41% for triggered emails and jumps to 50% for transactional emails.

In the same Oracle Modern Marketing Blog article, Helen Lillard, Principal B2B Consultant at Oracle CX Marketing Consulting, states, “Testing definitely isn’t just for one-time promo campaigns. Set up audience journey programs like welcomes, nurtures, and reengagements with testing so that you can test your content, offer, and other elements for weeks at a time and get really solid results.”

Choosing a victory metric that's not aligned with your campaign's goal

Marketers can lose sight of the true goal of their campaign when A/B testing. Sometimes they misunderstand how the element they're testing affects the funnel. For example, many marketers mistakenly use open rates as the victory metric for subject line tests rather than clicks or conversions, not recognizing that the subject line determines who opens the email, which in turn shapes the rest of the interaction funnel.

Other times, they compromise on their victory metric, settling for a top-of-the-funnel metric so they can reach statistical significance more quickly. However, being quickly certain about an indicator that only loosely reflects your goal isn't the path to becoming a data-driven marketer.
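To see why that shortcut is so tempting, consider a rough sketch (not from the article, with purely hypothetical numbers) of a two-proportion z-test, the kind of significance check many A/B testing tools run under the hood. With the same audience size and the same 10% relative lift, the open-rate difference looks highly significant while the conversion-rate difference does not, simply because opens are far more plentiful than conversions:

from math import sqrt, erf

def two_proportion_p_value(successes_a, n_a, successes_b, n_b):
    """Two-sided two-proportion z-test using the normal approximation."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Standard normal CDF via erf, doubled for a two-sided test
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

n = 10_000  # recipients per variant (hypothetical)
# Variant B gets the same 10% relative lift at both funnel stages:
print(two_proportion_p_value(2_000, n, 2_200, n))  # opens: 20% vs. 22% -> p ≈ 0.0005
print(two_proportion_p_value(100, n, 110, n))      # conversions: 1.0% vs. 1.1% -> p ≈ 0.49

In other words, the top-of-the-funnel metric "wins" the significance race only because it has more events to count, not because it's a better measure of your campaign's goal.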

Failing to verify your winners

In the world of marketing, victory isn't eternal. Winners must be re-tested periodically to ensure they're still winners. Two factors drive this. First, the novelty effect can give a change a short-term positive boost that fades or turns negative over time as people get used to the change. And second, the composition and expectations of your audience constantly change, which means the same test can produce a different result if repeated six months later.

For a deeper dive, including a look at 14 more A/B testing pitfalls to avoid, check out A/B Testing Pitfalls: How Marketers Can Avoid Costly Mistakes on Oracle’s Modern Marketing Blog.

Chad White
Chad S. White is the Head of Research at Oracle CX Marketing Consulting. He’s also the author of “Email Marketing Rules” and more than 3,000 articles and posts about email marketing. Chad was named the Email Experience Council’s 2018 Email Marketer Thought Leader of the Year. Find him on Twitter and LinkedIn @chadswhite.