Is it time to break up with your A/B testing strategy?

You’re probably strapped for time and resources to understand what’s wrong with your A/B testing strategy. There are some common mistakes you should consider before heading to divorce court.
Sophie Lamarche
1 April 2018
Omni-Channel Marketing Campaign

It happens to the best of marketers. You set goals to boost subscribers and sales. You decide to set up an omni-channel action plan to achieve those goals. You go as far as devising an A/B testing strategy for your website, landing pages, PPC ads, and the entire gamut to measure, optimize and score.

After all, isn’t that what’s in the books when digital marketing pundits dish out conversion love advice?

Alas, after a while—a few months, maybe even a year—you look at the stats and notice that, perhaps after a few quick wins, your conversion rates lack their initial spark or have not evolved into something more meaningful for your business (read: more sales).

And then you’re hit with the stark realization that you’ve lost that loving feeling with your A/B testing strategy.

That light breeze you just felt is the result of marketers around the world leaning back in their chairs and nodding in agreement.

It is something we often see with results-driven marketers who long to embark on a full-time relationship with their A/B testing strategy—but without understanding how much commitment it takes.

A/B testing cannot be approached as a fling or Tinder date; you can’t simply set up a control, put together a few tests over the course of a few months, make itty bitty tweaks to colours, images, copy and design layout, and call it a day.

As top digital marketing experts will tell you, you've got to always be testing. And you've got to always delve deeper than grazing a few stats to glean true insight into what works and what doesn't.

Unless you have a full-time data scientist or a harem of analysts (and kudos to you if you do!), you're probably strapped for time and resources to fully understand what's wrong with your A/B testing strategy.

But there are some common mistakes to A/B testing you should consider before heading to divorce court.

You’re calling A/B test results too early

How many times have you conducted A/B tests using a control and a variant, where the variant literally surges to become king of the mountain for a few weeks or months? Did you declare a winner and roll out? What happened in the long term? Did you consistently increase your conversion rates?

If not, the reason is simple: you called the test results too early. Remember, website traffic and visit behaviour can vary from day to day, week to week, sales period to sales period. Stopping tests too soon, without the right sample size, will likely skew results.

Also, make sure you run tests until you have at least a 95% confidence level for the winning variation, which means there is only a 5% chance your results are a fluke. Your results need to be statistically significant over time to ensure you are implementing the right changes. If not, you become the victim of website traffic's peaks and troughs. Try an A/B split and multivariate test duration calculator to determine the best length for your A/B tests.
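To make the 95% threshold concrete, here is a minimal sketch of the statistic most calculators compute under the hood: a two-proportion z-test comparing the control and variant conversion rates. The function name and the example numbers are illustrative, not from the article.

```python
import math

def ab_significance(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test. Returns the z-score and whether the
    difference is significant at the 95% confidence level."""
    p_a = conv_a / n_a          # control conversion rate
    p_b = conv_b / n_b          # variant conversion rate
    # Pooled rate under the null hypothesis (no real difference)
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return z, abs(z) >= 1.96    # 1.96 = two-sided 95% threshold

# Hypothetical example: control 200/10,000 vs variant 260/10,000
z, significant = ab_significance(200, 10_000, 260, 10_000)
```

With these made-up numbers the lift clears the 95% bar, but the same absolute difference on a tenth of the traffic would not—which is exactly why calling results early on a small sample is risky.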

Your sample size is way too small

An offshoot issue of conducting A/B tests over too short a timeframe is that your sample size, or number of conversions, is probably too small, which may tempt you to jump to conclusions too quickly.

For example, just because your control page generated a few more conversions over three months does not necessarily mean you should pop open the champagne. It could be that you ran your test during a major promotion or other marketing campaign.

According to ConversionXL, there is no magic number of conversions you need to hit before finalizing your A/B tests. However, aiming for at least 100 conversions per variant—or even more—is a standard rule of thumb.

Not sure what sample size your A/B test needs? Use a sample size calculator like this one or this one. While the science may not be exact, you can at least get a ballpark number as a gauge.
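If you're curious what those calculators are doing, here is a hedged sketch of the standard power-analysis formula for two proportions: it estimates the visitors needed per variant to detect a given lift at 95% confidence and 80% power. The function name and example rates are assumptions for illustration.

```python
import math

def sample_size_per_variant(baseline_rate, min_detectable_effect):
    """Approximate visitors needed per variant (two-sided test,
    95% confidence, 80% power), using the normal approximation."""
    z_alpha = 1.96   # critical value for two-sided 95% confidence
    z_beta = 0.84    # critical value for 80% power
    p1 = baseline_rate
    p2 = baseline_rate + min_detectable_effect  # rate you hope to reach
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
          + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
         / (p2 - p1) ** 2)
    return math.ceil(n)

# Hypothetical example: 2% baseline conversion, detecting a lift to 3%
n = sample_size_per_variant(0.02, 0.01)
```

For this made-up scenario the answer is a few thousand visitors per variant—a useful reality check before you declare a winner off a week of traffic.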

You’re testing too much at the same time

Another huge mistake is not conducting each A/B test from scratch—and using only one variant.

Some marketers start off with one test and one variant, and then add contrasting variants to the same test over time. For example, you use a different headline in your original test versus your control. Then, you add a completely new form field to that same test. After a while, you switch up the test's design.

When you go on such a testing spree, how on Earth can you determine what change actually made the difference between your test and control? Stay focused on one variant per test.

You're forgetting your funnels

Your sales funnel is one of the most important factors that can influence the validity of your A/B tests.

Here’s why. Imagine you are trying to A/B test a landing page and a product page. For each page, you concurrently run one or two variants against the control pages. You congratulate yourself for running multiple tests at the same time.

Too bad they are both in the same funnel. There could be a huge overlap of traffic between the tests, making it nigh on impossible to figure out the ideal path to conversion.

In order to mitigate the chaos, fully map out your paths to conversion by determining each page that falls into each path. Work from end to start, as the bottom of the funnel doesn’t affect the top of the funnel as much. By doing so, you are creating a controlled test environment in which you are more likely to isolate the good from the bad.

You don’t create competitive variants

So, you have identified the right A/B testing period, got the right sample size, mapped out your paths to conversion, and pinpointed one variant to test, whether it be copy, call-to-action (CTA) or a design element. And blast off! The testing commences.

While you should only test one variant at any given time, make the change between it and your control more significant and sexy. If you are testing copy, use completely different language. If you are testing overall layout, switch things up more than just placing your checkout button a tad higher up on the page. Conduct research with your competitors or other successful companies to see what they do with the specific variant you are testing and test their ideas.

A relationship with your A/B testing strategy takes a lot of work, as there are many things you can test, including messaging, CTAs, forms, layout, gated vs non-gated content, and so much more. However, by keeping these best practices in mind, you're sure to make progress and reignite your A/B testing flame.

Find out how your company can benefit from Dialog Insight.
