A/B testing for startups

July 14, 2020

Generally, we tell clients that they need 1,000 conversions per variation for a reliable A/B test. (There are proper sample-size calculators you can use, too, but this is a handy rule of thumb.)

That means if you have, for example:

  • A landing page with 50,000 visitors every month, and a 5% conversion rate (= 2,500 conversions)
  • Or an ad with 250,000 impressions per month, and a 1% click-through rate (= 2,500 clicks – the “conversion” for an ad test)

You’re going to be able to run one reliable test each month (splitting 2,500 conversions across two variations gives each one at least 1,000). Of course, there’s a lot more to the story, and we recommend using an online sample-size calculator to know for sure. One other big factor is the size of the uplift: the larger the difference between the test group (say, a 10% conversion rate) and the control group (say, a 1% conversion rate), the easier it is to detect. There are other parameters you can play with as well.
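If you’d rather compute the number than lean on the rule of thumb, here’s a minimal sketch of the standard two-proportion sample-size formula – the kind of math those calculators run. The 5% baseline, 6% target, 5% significance level, and 80% power are illustrative assumptions, not recommendations:

    # Sample size per variation for a two-proportion z-test
    # (normal approximation, two-sided). All parameters are illustrative.
    from math import ceil, sqrt
    from statistics import NormalDist

    def sample_size_per_variation(p_control, p_variant, alpha=0.05, power=0.80):
        """Visitors needed in EACH variation to detect p_control -> p_variant."""
        z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # 1.96 for alpha = 0.05
        z_beta = NormalDist().inv_cdf(power)            # 0.84 for 80% power
        p_bar = (p_control + p_variant) / 2
        top = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
               + z_beta * sqrt(p_control * (1 - p_control)
                               + p_variant * (1 - p_variant))) ** 2
        return ceil(top / (p_control - p_variant) ** 2)

    # A 5% baseline and a hoped-for 6% (a 20% relative lift):
    print(sample_size_per_variation(0.05, 0.06))   # ~8,158 visitors per variation

Notice how the required sample size explodes as the expected lift shrinks – which is exactly why small tweaks take so long to prove out.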

But the overall point is that it takes a lot of traffic. As you grow larger, you can run more tests! A million views of your homepage every month, with a 5% conversion rate, means 50,000 conversions – enough for roughly 25 tests a month at 1,000 conversions for each of two variations – and you can really get into things like buttons, form fields, copy, and more.
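For the back-of-the-envelope version of that throughput math (assuming two variations per test, run one after another – both simplifications for illustration):

    # Rough monthly test capacity from the rule of thumb above.
    monthly_visitors = 1_000_000
    conversion_rate = 0.05
    per_variation = 1_000      # conversions needed per variation
    variations = 2             # a simple A/B test: control + variant

    monthly_conversions = monthly_visitors * conversion_rate         # 50,000
    print(int(monthly_conversions // (per_variation * variations)))  # 25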

But what if you don’t have that much traffic? Is an A/B test still worthwhile, and how can you make it count?

Maximizing the usefulness of A/B testing

Given how few reliable A/B tests most marketers can run at a time, we suggest a few important practices to make sure those tests count.

1) Test big. Testing slightly different landing page copy, or button size, or font color, is interesting! But ultimately, these tests often yield smaller improvements that take a long time to show up. Worse, by the time you’ve completed the test, or shortly thereafter, you’re embarking on a redesign or a new campaign that means you have to throw out your test and start again.

Instead, test an entirely different landing page design across all of your landing pages simultaneously. Try a completely different value prop on your homepage. Hide or show pricing in your nav bar. Hide or show live chat. Try to make big changes, see what happens, and use the results as evidence not just for marginal improvements in performance, but for significant changes in how you talk about, position, or promote your product.

2) Test all the way through. Your ads are a great place to test – it’s super-easy to try different copy, you learn instantly about what resonates, and usually, click-through rate accumulates data faster than form conversions do.

But in addition to testing click-through rates, you probably have a goal of converting your visitor. So you need to track conversions as well, to see whether your ad copy is simply drawing in lower-intent visitors more efficiently, or whether it’s truly doing a better job of positioning you to prospects who would be interested – the sketch below shows the comparison. (You don’t have to A/B test your landing page in addition to your ad, though testing an ad in combination with a landing page might give you a more powerful signal.)

(If you’re an e-commerce business, this is a lot simpler, of course – and effective e-commerce tests do generally track all the way through to revenue. This point is directed mostly at B2B companies with a more complex sales cycle.)
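Here’s a minimal sketch of what testing all the way through looks like. The numbers are made up to illustrate the trap: variant B “wins” on click-through rate but loses on end-to-end conversion, because its extra clicks are lower-intent:

    # Hypothetical funnel counts for two ad variants (illustrative only).
    variants = {
        "A": {"impressions": 100_000, "clicks": 1_000, "signups": 100},
        "B": {"impressions": 100_000, "clicks": 1_500, "signups": 90},
    }

    for name, v in variants.items():
        ctr = v["clicks"] / v["impressions"]          # what the ad platform shows
        end_to_end = v["signups"] / v["impressions"]  # what your business feels
        print(f"{name}: CTR {ctr:.2%}, impression->signup {end_to_end:.2%}")

    # A: CTR 1.00%, impression->signup 0.10%
    # B: CTR 1.50%, impression->signup 0.09%  <- higher CTR, worse outcome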

3) Get the fundamentals in place before you test. A/B testing is useful, but talking directly to customers – and perhaps even showing them a landing page and soliciting their feedback – might be worth prioritizing. (And that approach will definitely give you more useful feedback.) There may be other fundamentals you need to work on first, too. How’s your design? Is your page showing up in search? Does it have a clear value proposition?

4) If you are going to A/B test, do it as a program, instead of as a one-off. Bake it into your process to always test your email subject lines, for example – send two versions to small samples of your list, then send the winner to everyone else. By doing this, you’ll get better at testing, you’ll learn more, and your ultimate results will be a lot better.
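As a sketch of what that program can look like for email – the 10/10/80 split below is a common convention, not a rule, and the helper names are ours, not any particular email tool’s API:

    # Hypothetical subject-line test: send A and B to two 10% samples,
    # then send the winning subject line to the remaining 80%.
    import random

    def split_for_subject_test(recipients, sample_frac=0.10, seed=42):
        """Return (sample_a, sample_b, holdout) from a list of addresses."""
        shuffled = recipients[:]
        random.Random(seed).shuffle(shuffled)
        n = int(len(shuffled) * sample_frac)
        return shuffled[:n], shuffled[n:2 * n], shuffled[2 * n:]

    def pick_winner(opens_a, sent_a, opens_b, sent_b):
        """Compare open rates once both samples hit your predeclared size."""
        return "A" if opens_a / sent_a >= opens_b / sent_b else "B"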

5) Don’t hack your own test. Choose a timeline or an endpoint for the test – let’s say 1,000 actions – and don’t stop until you reach that point. Ending tests prematurely when a desirable outcome shows up, even if that outcome looks statistically significant at the time, is a major reason why marketers get false results from their A/B testing programs.
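You can see why this matters with a quick A/A simulation – a sketch with made-up parameters, where both arms share the same true 5% rate, so every “significant” result is a false positive:

    # Peeking simulation: checking after every batch and stopping at the
    # first "significant" result inflates false positives well past 5%.
    import random
    from math import sqrt

    def z_stat(conv_a, conv_b, n):
        """Pooled two-proportion z statistic (equal sample sizes)."""
        p = (conv_a + conv_b) / (2 * n)
        se = sqrt(p * (1 - p) * (2 / n))
        return 0.0 if se == 0 else (conv_a - conv_b) / (n * se)

    def run_one(rate=0.05, batch=200, batches=20, peek=True, z_crit=1.96):
        conv_a = conv_b = n = 0
        for _ in range(batches):
            conv_a += sum(random.random() < rate for _ in range(batch))
            conv_b += sum(random.random() < rate for _ in range(batch))
            n += batch
            if peek and abs(z_stat(conv_a, conv_b, n)) > z_crit:
                return True              # stopped early: a false "winner"
        return abs(z_stat(conv_a, conv_b, n)) > z_crit

    random.seed(0)
    trials = 1_000
    peeked = sum(run_one(peek=True) for _ in range(trials)) / trials
    fixed = sum(run_one(peek=False) for _ in range(trials)) / trials
    print(f"false positives with peeking:  {peeked:.1%}")   # well above 5%
    print(f"false positives at fixed stop: {fixed:.1%}")    # close to 5%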

6) Track your test. We don’t just mean keeping track of the results of the test, though of course that’s important! We also mean – what did you learn from each test? Why did you run it? What did you expect to see (your hypothesis), and what actually happened? This can add another layer of learning, since you don’t just learn from the test, you see how it compared with your thought process before you ran the test.

What kinds of A/B tests are useful?

In general, A/B tests should focus where learning will be most beneficial – and that isn’t necessarily where you have the most conversions.

  • For example, if you have a page that lets users sign up for a demo, test two substantially different versions of the page – different value propositions, perhaps a description of what happens during the demo, added social proof, and so on.
  • Consider A/B testing different page templates, not just individual blog posts or landing pages.
  • Make A/B testing part of an ongoing program, particularly for marketing emails, email outreach, and, if you have enough conversions, for paid advertising.

Conclusion

A/B testing is a powerful method for improving performance, and even if you have less traffic, there are techniques you can use to make your A/B tests count. In addition to ensuring a statistically valid test, make your tests bigger: more significant changes, a more thorough view of the entire sales funnel, and more consistent testing as part of the work you do every day.

