Startup people, please do me a favor and stop talking about A/B testing. Please.
You might wonder: Thomas, why are you so grumpy today? Let me tell you. It has nothing to do with being grumpy. It’s just the truth. Sorry, startup people.
Before deciding I’ve gone crazy, make sure to read the following thoughts first.
What is A/B testing?
A/B testing, also known as split testing, is the practice of comparing two different versions of a web page to evaluate which one performs better.
Or as Wikipedia states:
“In marketing and business intelligence, A/B testing is a term for a randomized experiment with two variants, A and B, which are the control and variation in the controlled experiment.”
You get there by showing two different versions of a web page to visitors of your site during a specific period of time. The two versions might differ only in the color of the call-to-action button, or, in some cases, you could A/B test two completely different design drafts.
Basically, you can run A/B tests on almost anything on your website that affects visitors’ behavior.
The version that gains better conversion rates wins.
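As a sketch, here is how the split itself is often implemented. The function name and hashing scheme are illustrative assumptions, not a specific tool's API:

```python
import hashlib

def assign_variant(visitor_id: str, experiment: str = "cta-color") -> str:
    """Deterministically bucket a visitor into variant A or B.

    Hashing instead of random.choice keeps the assignment sticky:
    the same visitor sees the same variant on repeat visits. The
    experiment name salts the hash so separate tests split visitors
    independently of each other.
    """
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"
```

On each page view you would look up (or set) the visitor's ID in a cookie, call the function, render the matching version, and log conversions against that variant.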
Why run A/B tests?
Running A/B tests can help you make decisions about design drafts, copy ideas, or the arrangement of certain sections or features on your site.
You can focus on one idea at a time, collect data about the impact of each change, and make better decisions on the basis of the A/B test.
Sounds easy, huh?
However, there are a couple of constraints that should be considered.
All about conversion rates
In theory, it sounds great. A/B tests help you make better decisions and improve conversion rates on your site. Isn’t that awesome? That’s what we all want, right?
As someone who thinks about conversion rates on a daily basis, I see the clear benefit of looking for ways to improve the conversion rate of your website or online shop.
And A/B tests might be the way to do that.
The challenges of A/B tests for startups
However, there are a couple of things that people, especially in the startup ecosystem, need to consider when running A/B tests.
The problem of quantitative data
I think the most important thing when it comes to A/B testing is data and the amount of data available.
As I mentioned before, there's nothing wrong with running A/B tests as long as you have enough data to work with. My guess is that as a startup that has just released a new product or landing page, you probably don't have that many visitors on your site.
As both versions must be tested simultaneously, you need a certain number of visitors on your site in order to measure relevant results.
An A/B test is a proper tool for testing variants on your site if you have a couple of thousand daily visitors. It's not a proper tool if you just have a couple of daily users. Since most startups fall into the second scenario, I won't bet my money on those A/B tests.
The problem of significance
Most A/B tests run for a limited time. It’s important to define that period in advance in order to create meaningful results.
I absolutely recommend that everyone use a calculator (like this one) to help determine exactly how long to run a test before giving up.
If you are dealing with a low number of daily visitors, you'll be surprised how long your test needs to run.
Here’s an example:
Let’s say you have 250 daily visitors on your site and you’d like to run an A/B test on all your visitors.
You have an existing conversion rate of 5% and you expect a minimum improvement in conversion rate of 15%.
All in all, you will need 162 days to run your A/B test in order to get a significant result. And 162 days can be a long period for startups.
Yep – that is a long time. Give this calculator a try and see how long an A/B test must run in your case.
In this blog post, you can learn more about statistical analysis and A/B testing.
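For the curious, the arithmetic behind such calculators can be sketched with a standard two-proportion power calculation (normal approximation). Exact figures differ between calculators depending on their assumptions (power, one- vs. two-sided tests), so this sketch won't reproduce the 162-day figure exactly:

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_per_variant(baseline, relative_lift, alpha=0.05, power=0.8):
    """Visitors needed per variant to detect the lift (normal approximation)."""
    p1 = baseline
    p2 = baseline * (1 + relative_lift)
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # significance threshold
    z_b = NormalDist().inv_cdf(power)           # desired statistical power
    p_bar = (p1 + p2) / 2
    numerator = (z_a * sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

n = sample_size_per_variant(0.05, 0.15)  # 5% baseline, +15% relative lift
days = ceil(2 * n / 250)                 # two variants, 250 visitors per day
print(f"{n} visitors per variant, roughly {days} days")
```

Either way, the required sample runs well into five figures, which is months of traffic at 250 visitors per day.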
The problem of your non-defined target group
In the early stages of your startup and/or product, chances are high that you do not know a lot about your (potential) customers. Maybe you aren’t even sure what your ideal customer profile is or what it should be.
You may have just released a first version of your product and gained some traction among certain niches or people. Running A/B tests in this early stage of your product might bring misleading results. Do yourself a favor and start thinking about the bigger picture here.
Without a proper definition of your Ideal Customer Profile (ICP), running an A/B test won’t get you anywhere.
The problem of not being a scientist
To startups and marketers, A/B tests may sound sexy. That appeal, however, shouldn't keep you from thinking like a scientist.
Running an A/B test might be easy, but a scientist will tell you that it's not about the test run itself. Just like scientists, you should care mostly about one thing: the result must be correct.
However, correct results can only be achieved by putting in enough time and effort, which many startups and marketers do not have.
For additional reference, I recommend this article on How Not To Run An A/B Test.
The problem of your MVP
Having just released your minimum viable product, chances are high that your product itself will change over time. As your product changes, your website and website visitors will change as well.
A/B testing might be problematic when the results of certain tests are influenced by your changing product.
An alternative to A/B testing for startups
I can’t emphasize enough how great A/B testing can be if well executed. However, it just might not be the golden egg that some startup founders and people expect it to be.
Instead of running A/B tests for your startup, I recommend creating so-called experiments and collecting qualitative feedback from your customers and other stakeholders. As (most) startups have limited resources (time and money), a well-prepared and executed experiment funnel can be your way to growth.
Here’s how we are running experiments at Usersnap.
(No, we didn’t invent this on our own – we were inspired by this blog post on Growth Experiments at SoundCloud. I won’t go into too much detail here so please check out the blog post if you want to know more.)
Step 1: Brainstorming experiments
In order to improve certain outputs (like conversion rate, sign-ups, or new customers), you need to think about the input required to achieve that. As a first step, you need to come up with ideas on how to make the output happen.
All ideas should go into a backlog – in our case, a simple Google spreadsheet.
Step 2: Prioritize
Having collected a few ideas, you should start prioritizing. Take a look at your list of experiment ideas and prioritize by considering the following three things:
- The probability of success
- The impact it will have if successful
- The resources required to test and implement
Step 3: Test
Start testing your experiment. It doesn’t matter if you’re testing various call-to-action buttons or conducting high-level experiments. Always document your progress.
Step 4: Analyze
I think we can agree that this is the most important step. Make sure to answer the following questions when analyzing the results of your experiments.
- Impact – What are the results of your experiment?
- Accuracy – How close are the results to your hypothesis?
- Why – Why are you seeing these results?
Collecting results from your experiments and gathering feedback from your first website visitors absolutely pays off. Feedback tools like Usersnap help you to gather comprehensive feedback which you would not get through A/B testing.
Wrapping it up.
Don’t get me wrong. A/B testing is a great way to grow your business and improve user experience on your website. However, it requires certain conditions which can be difficult to manage for startups and small companies.
Instead of spending a lot of time and resources on A/B testing, I recommend that startups conduct experiments instead.
Experiments are a more efficient way of getting customer insights and growing your business.
This article was brought to you by Usersnap – a visual feedback & bug tracking software for every web project.