Is it time to break up with your A/B testing strategy?

You’re probably strapped for time and resources to understand what’s wrong with your A/B testing strategy. There are some common mistakes you should consider before heading to divorce court.
Sophie Lamarche
1 April 2018
Omni-Channel Marketing Campaign
4 min 30

It happens to the best of couples... er, marketers. You set goals to boost subscribers and sales. You decide to set up an omni-channel action plan to achieve those goals. You go as far as devising an A/B testing strategy for your website, landing pages, PPC ads and the entire gamut, to measure, optimize and score.

After all, isn’t that what’s in the books when digital marketing pundits dish out conversion love advice?

Alas, after a while—a few months, maybe even a year—you look at the stats and notice that, perhaps after a few quick wins, your conversion rates lack their initial spark or have not evolved into something more meaningful for your business (read: more sales).

And then you’re hit with the stark realization that you’ve lost that loving feeling with your A/B testing strategy.

That light breeze you just felt is the result of marketers around the world leaning back in their chairs and nodding in agreement.

It is something we often see with results-driven marketers who long to embark on a full-time relationship with their A/B testing strategy—but without understanding how much commitment it takes.

A/B testing cannot be approached as a fling or Tinder date; you can’t simply set up a control, put together a few tests over the course of a few months, make itty bitty tweaks to colours, images, copy and design layout, and call it a day.

As top digital marketing experts will tell you, you've got to always be testing. And you've got to dig deeper than skimming a few stats to glean true insight into what works and what doesn't.

Unless you have a full-time data scientist or an army of analysts (and kudos to you if you do!), you're probably strapped for time and resources to fully understand what's wrong with your A/B testing strategy.

But there are some common A/B testing mistakes you should consider before heading to divorce court.

You’re calling A/B test results too early

How many times have you conducted A/B tests using a control and a variant, where the variant surges to become king of the mountain for a few weeks or months? Did you declare a winner and roll out? What happened in the long term? Did you consistently increase your conversion rates?

If not, the reason is simple: you called the test results too early. Remember, website traffic and visit behaviour can vary from day to day, week to week, sales period to sales period. Stopping tests too soon, without the right sample size, will likely skew results.

Also, make sure you run tests until you have at least a 95% confidence level for the winning variation, meaning there is only a 5% chance the difference you observed is due to random variation. Your results need to be statistically significant over time to ensure you are implementing the right changes. If not, you become a victim of your website traffic's peaks and troughs. Try an A/B split and multivariate test duration calculator to determine the best length for your A/B tests.
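To make that 95% threshold concrete, here is a minimal sketch of the underlying check in Python, a one-sided two-proportion z-test. The visitor and conversion counts are hypothetical, and most testing tools run this calculation for you.

```python
from math import sqrt
from statistics import NormalDist

def confidence_variant_beats_control(control_visitors, control_conversions,
                                     variant_visitors, variant_conversions):
    """One-sided confidence that the variant's true conversion rate
    is higher than the control's (two-proportion z-test)."""
    p_c = control_conversions / control_visitors
    p_v = variant_conversions / variant_visitors
    # Pooled rate under the null hypothesis of "no real difference"
    p = (control_conversions + variant_conversions) / \
        (control_visitors + variant_visitors)
    se = sqrt(p * (1 - p) * (1 / control_visitors + 1 / variant_visitors))
    z = (p_v - p_c) / se
    return NormalDist().cdf(z)

# Hypothetical numbers: 5,000 visitors per variation
conf = confidence_variant_beats_control(5000, 200, 5000, 240)
print(f"Confidence that the variant wins: {conf:.1%}")  # ~97.4%
```

In this example, a 4% versus 4.8% conversion rate reaches about 97% confidence after 10,000 total visitors; with a tenth of that traffic, the same rates would land around 73%, nowhere near significant.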

Your sample size is way too small

An offshoot of conducting A/B tests over too short a timeframe is that your sample size, or number of conversions, is probably too small, which may lead you to jump to conclusions too quickly.

For example, just because one variation edged out the other by a few conversions over three months does not necessarily mean you should pop open the champagne. It could be that you ran your test during a major promotion or other marketing campaign.

According to ConversionXL, there is no magic number of conversions you need to hit before finalizing your A/B tests. However, a standard rule of thumb is to aim for at least 100 conversions per variant, and ideally more.

Not sure what sample size your A/B test needs? Use an online sample size calculator. While the science may not be exact, you can at least get a ballpark number as a gauge.
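If you're curious what those calculators do under the hood, here is a rough sketch of the standard two-proportion sample-size formula; the baseline conversion rate and target lift below are hypothetical, and the result is a ballpark rather than an exact power analysis.

```python
from statistics import NormalDist

def visitors_per_variation(baseline_rate, relative_lift,
                           alpha=0.05, power=0.80):
    """Approximate visitors needed per variation to detect a relative
    lift over the baseline rate, at significance alpha and power 0.80."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided, 95%
    z_power = NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return int(variance * (z_alpha + z_power) ** 2 / (p2 - p1) ** 2) + 1

# Hypothetical: 3% baseline conversion rate, hoping to detect a 20% lift
n = visitors_per_variation(0.03, 0.20)
print(f"Visitors needed per variation: about {n:,}")  # ~13,900
```

Dividing that per-variation number by your expected daily traffic per variation then gives a rough test duration in days.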

You’re testing too much at the same time

Another huge mistake is failing to conduct each A/B test from scratch—and with only one variant.

Some marketers start off with one test and one variant, and then, over time, add contrasting variants to the same test. For example, you use a different headline in your original test versus your control. Then, you add a completely new form field to that same test. After a while, you switch up the test's design.

When you go on such a testing spree, how on Earth can you determine which change actually made the difference between your test and control? Stay focused on one variant per test.

You're forgetting your funnels

Your sales funnel is one of the most important factors that can influence the validity of your A/B tests.

Here’s why. Imagine you are trying to A/B test a landing page and a product page. For each page, you concurrently run one or two variants against the control pages. You congratulate yourself for running multiple tests at the same time.

Too bad they are both in the same funnel. There could be a huge overlap of traffic between the tests, making it nigh on impossible to figure out the ideal path to conversion.

In order to mitigate the chaos, fully map out your paths to conversion by determining every page that falls along each path. Work from the end to the start, since changes at the bottom of the funnel don't affect the pages above them as much. By doing so, you create a controlled test environment in which you are more likely to isolate the good from the bad.
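As a toy illustration of that mapping, the sketch below represents each path to conversion as an ordered list of pages and flags any funnel where more than one active test competes for the same traffic. All page and funnel names are hypothetical.

```python
# Hypothetical funnels: each path to conversion as an ordered list of pages
funnels = {
    "purchase": ["ppc_ad", "landing_page", "product_page", "cart", "checkout"],
    "newsletter": ["home", "blog_post", "signup_form"],
}

# Pages that currently have an A/B test running (also hypothetical)
active_tests = {"landing_page", "product_page"}

for name, pages in funnels.items():
    tested = [page for page in pages if page in active_tests]
    if len(tested) > 1:
        # Overlapping traffic means the tests' results contaminate each other
        print(f"Funnel '{name}' has overlapping tests: {tested}")
```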

You don’t create competitive variants

So, you have identified the right A/B testing period, secured the right sample size, mapped out your paths to conversion, and pinpointed one variant to test, whether it be copy, a call-to-action (CTA) or a design element. And blast off! The testing commences.

While you should only test one variant at any given time, make the change between it and your control more significant and sexy. If you are testing copy, use completely different language. If you are testing the overall layout, switch things up more than just placing your checkout button a tad higher on the page. Research your competitors or other successful companies to see what they do with the specific element you are testing, and put their ideas to the test.

A relationship with your A/B testing strategy takes a lot of work, as there are many things you can test: messaging, CTAs, forms, layout, gated vs non-gated content, and so much more. However, by keeping these best practices in mind, you're sure to make progress and reignite your A/B testing flame.

Find out how your company can benefit from Dialog Insight.
