The importance of a water-tight hypothesis in your CRO experiments
We have a rule at Slipstream Digital that no experiment should be put live if it doesn't have a hypothesis.
But what exactly is a hypothesis? And why is a hypothesis so important?
A hypothesis can be defined as a statement predicting the outcome of a defined change. It can be confirmed or rejected with data.
The benefits of building a hypothesis for each of your experiments are numerous:
- It focuses everyone involved on the expected result
- It can help with brainstorming experiment ideas
- It shows why you decided to run the experiment
- It provides a discussion point before the experiment runs (stakeholders may challenge the hypothesis)
- It clearly articulates the business problem the experiment is trying to solve
- It reduces the likelihood that the experiment is run on a 'hunch'
- It ultimately holds the experiment accountable to a measurable result
So often, though, we come across flaky hypotheses that serve no useful purpose. And this is why it's so important that a hypothesis in CRO is as robust as possible.
A good hypothesis should have the following structure:
'If... then... because...'
Like a good objective it should be specific, measurable, actionable and results-focused. An example of a good hypothesis might be:
If we remove the global navigation from our checkout process, then we will see a 10% reduction in funnel drop-off, because visitors will be less distracted and more likely to complete the form.
It follows on logically from a problem statement, which should form the basis of any experiment brainstorming session. The problem statement is essentially the expression of the problem that customers are facing on your website, and which is leading to a conversion 'blocker'.
The beauty of including a concrete metric (such as the 10% reduction in the example) is that it forces you to pin the hypothesis back to a data insight.
Analyzing a funnel report may show that there is already an issue with customers dropping off at these pages, and the existing drop-off rates will allow you to benchmark much more accurately what an expected reduction might look like.
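The arithmetic behind that benchmark is worth making explicit. As a minimal sketch (the 12.5% baseline and 10% relative reduction below are illustrative figures, not real benchmarks), a predicted relative reduction translates into a concrete target drop-off rate like this:

```python
# Benchmark an expected reduction against existing funnel data.
# All figures here are illustrative assumptions for the sketch.

def expected_drop_off(baseline_drop_off: float, relative_reduction: float) -> float:
    """Return the drop-off rate implied by a relative reduction.

    baseline_drop_off: current drop-off rate, e.g. 0.125 for 12.5%
    relative_reduction: predicted relative change, e.g. 0.10 for a 10% reduction
    """
    return baseline_drop_off * (1 - relative_reduction)

baseline = 0.125  # 12.5% drop-off observed in the funnel report
target = expected_drop_off(baseline, 0.10)
print(f"Predicted drop-off: {target:.4f}")  # prints "Predicted drop-off: 0.1125"
```

Having the target expressed as an absolute rate (11.25% rather than "10% better") makes it much easier to judge, after the experiment, whether the hypothesis was confirmed or rejected.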
Don't worry about proving your hypothesis wrong. This is just as valuable as proving it right, and can provide insights that are just as meaningful, sometimes even more so.
The whole purpose of testing is to learn about how your visitors interact with your web pages and journeys, so anything that you learn from an experiment is beneficial.
The hypothesis of a completed experiment should then form the basis of the next related experiment in the series (we will be writing a post about experiment series soon).
Worry more about inconclusive experiments, as these are more often the result of test variants that are not sufficiently different from each other.
So to recap, here are the 5 steps to creating a water-tight hypothesis:
- Follow the format 'if... then... because...'
- Make sure your hypothesis is SMART, i.e. specific, measurable, actionable, results-focused and, if appropriate, time-based
- Make sure it has been created on the back of a clear data insight (e.g. 'the drop-off rate between fields X & Y on our application form is a huge 12.5%')
- Include data to illustrate the predicted change in behaviour (e.g. 'If we remove the global navigation from our checkout process, then we will see a 10% reduction in funnel drop-off, because visitors will be less distracted and more likely to complete the form').
- Link your hypothesis to a problem statement.
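If your team tracks experiments in code or a backlog tool, the recap above can be captured as a structured record so that every experiment carries the same fields. This is a minimal sketch; the field names are illustrative assumptions, not a standard CRO schema:

```python
# A hypothetical template that enforces the five recap steps as fields.
from dataclasses import dataclass

@dataclass
class Hypothesis:
    change: str             # the "if": the defined change being made
    prediction: str         # the "then": the measurable expected outcome
    rationale: str          # the "because": why the change should work
    data_insight: str       # the observation that motivated the experiment
    problem_statement: str  # the customer problem this links back to

    def statement(self) -> str:
        """Render the hypothesis in the 'if... then... because...' format."""
        return f"If {self.change}, then {self.prediction}, because {self.rationale}."

h = Hypothesis(
    change="we remove the global navigation from our checkout process",
    prediction="we will see a 10% reduction in funnel drop-off",
    rationale="visitors will be less distracted and more likely to complete the form",
    data_insight="the drop-off rate between fields X & Y on our application form is 12.5%",
    problem_statement="customers abandon the checkout before completing the form",
)
print(h.statement())
```

Making the data insight and problem statement required fields means a 'hunch' experiment simply cannot be written down, which is the point of the checklist.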
Your hypothesis underpins your experiment in the same way that foundations support a building. Follow the above steps and you won't risk having a testing program that collapses around you.