October 22nd 2016

5 essential tools for a successful optimisation program

Written by Andy McKenna

Managing any project effectively requires documentation to help you plan it, track it, and communicate its progress. Your testing and optimisation program is no different. In this blog post we outline five tools that we have been using with great success to help our clients with their CRO efforts.

Test prioritisation planner

This document provides a great way to plan your program of A/B tests and to prioritise them in order of importance. The format can be a Google Sheets or Excel spreadsheet, and it should contain the following fields as a minimum:
  • Business objective
  • Problem statement
  • Hypothesis
  • Type of test (A/B; MVT)
  • Test name
  • Brief description of test
  • Target start date
  • Target end date
  • Success metrics
  • Predicted revenue
  • Prioritisation score
It should work as an overview of all your planned experiments, and the spreadsheet format allows you to quickly change the priority order based on business needs. Having all this information in one place makes it much easier to manage your testing program. Having it as a shared document will make it much easier to update, but we recommend you have an overall owner who can be the arbiter of priority!
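The planner's prioritisation score can be calculated however suits your business; the post doesn't prescribe a method, but one common approach (an assumption here, not something mandated above) is a PIE-style score that averages Potential, Importance, and Ease ratings. A minimal sketch, with illustrative test names and a 1–10 scale:

```python
# Sketch of a PIE-style prioritisation score for the planner.
# PIE (Potential, Importance, Ease) is one common scoring framework;
# the test names, field names, and 1-10 scale here are illustrative.

def pie_score(potential, importance, ease):
    """Average of three 1-10 ratings; higher means run the test sooner."""
    return round((potential + importance + ease) / 3, 1)

planned_tests = [
    {"name": "Checkout button copy", "potential": 6, "importance": 9, "ease": 9},
    {"name": "Homepage hero redesign", "potential": 8, "importance": 7, "ease": 3},
    {"name": "PDP social proof", "potential": 7, "importance": 8, "ease": 7},
]

for test in planned_tests:
    test["score"] = pie_score(test["potential"], test["importance"], test["ease"])

# Sort the backlog so the highest-scoring test comes first.
planned_tests.sort(key=lambda t: t["score"], reverse=True)
for test in planned_tests:
    print(f'{test["score"]:>4}  {test["name"]}')
```

Because the score lives in a spreadsheet column in practice, re-prioritising is just a re-sort, which is exactly why the spreadsheet format works well here.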

Test requirements document

This document provides all the information about a specific test/experiment. In many ways we view it as the most important document of all, as it catalogues everything that needs to be included in an experiment. Anyone should be able to pick this document up and understand what the test involves from both a strategic and an implementation perspective. For instance, a developer should have a clear idea of exactly what is expected of them when setting the test up, without having to ask the test lead any further questions. It should serve as both a clear brief and a functional specification. After numerous iterations, we have found that the format that works best for our test requirements document is as follows:
  • Summary & milestones: Strategic objective, hypothesis, test description, milestone summary
  • Technical Delivery: Success criteria, entry criteria, testing tool, what will happen post experiment
  • Experiences: Screenshots of all test variants (designs if possible, otherwise wireframes)
  • Test analysis planning: Questions for the data analyst to consider
  • QA: Brief overview of quality assurance process, who is responsible for it etc.
However, we are constantly testing new versions of the document, and it's important that you use a version that suits your business, so you may well need to tailor it to your own needs.

Test results sheet

This can be a simple spreadsheet that lays out, in chronological order, all the tests you carry out in your testing program, with their results. It's generally a good idea to include comments about the experiment, as well as agreed next steps. It's amazing how easy it is to forget these details down the track if they aren't documented somewhere. We would recommend that you include at least the following fields in the spreadsheet:
  • Test ID
  • Test description
  • Success criteria (& maybe hypothesis)
  • Experiment start and end dates
  • Winning variant
  • Comments, including uplift/downturn %
  • Agreed next steps
This has served us well as a way to present results to stakeholders and senior management teams at a 'macro' level. However, you might also want to consider some data visualisation to support it, as the spreadsheet format can come across a little dry on its own.
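The uplift/downturn % column in the sheet is simply the relative change in conversion rate from control to the winning variant. As a quick sketch (the rates below are made up):

```python
# Sketch of the uplift/downturn % figure recorded in the results sheet:
# the relative change in conversion rate from control to variant.
# The example rates are illustrative only.

def uplift_pct(control_rate, variant_rate):
    """Positive = uplift, negative = downturn, as a percentage of control."""
    return (variant_rate - control_rate) / control_rate * 100

# e.g. control converts at 5%, winning variant at 6%:
print(round(uplift_pct(0.05, 0.06), 1))  # 20.0 (a 20% relative uplift)
```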

Test report template

This should be used to track the results of individual experiments, providing top-level success metrics including winning variant and uplift (if applicable), segment splits and confidence score/statistical significance. We tend to split this report into 3 sections:
  • Test outline (strategic objective, problem statement, hypothesis, audiences, result)
  • Screenshots of experiment variations
  • Test result (winner, uplift % by success metric, increased revenue, results by segment, recommendations and next steps)
We've found the test report to be a good way of focusing attention on what didn't go so well with an experiment as much as what did, and it ensures that the learnings from an experiment are taken into account in future.
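The confidence score/statistical significance figure in the report usually comes straight out of your testing tool, but as a sketch of what it represents, here is the standard two-proportion z-test for comparing two conversion rates (the visitor and conversion counts are made up):

```python
import math

# Sketch of the statistical significance figure in the test report:
# a two-sided, two-proportion z-test comparing conversion rates.
# The visitor/conversion numbers below are illustrative only.

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Standard normal CDF expressed via the error function.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Control: 100 conversions from 1,000 visitors; variant: 150 from 1,000.
p = two_proportion_p_value(100, 1000, 150, 1000)
print(f"p-value: {p:.4f}")  # well below 0.05, so significant at 95% confidence
```

A p-value below 0.05 corresponds to the 95% confidence threshold most testing tools report by default; in practice you would read this figure out of the tool rather than compute it by hand.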

Test QA document

Creating new variants for an experiment often means changing the codebase on your website, and if that's not thoroughly tested the impact can be crippling. Changes implemented for A/B or multivariate tests should be treated no differently to any other code changes on your website, and should be subject to the same rigorous quality assurance process. There are some specific things that you should look out for though:
  • If you are running a single page A/B/n experiment, QA the functionality and design of all variants in various browsers and devices
  • Then QA all pages that follow that page in the customer journey, in case the changes have had a knock-on effect on them (known as regression testing)
  • If you are running a multi-page experiment, carry out the above two steps for each of the variants in the experiment funnel
  • Carry out a full end-to-end journey for all variants, to give you confidence that no bugs have been introduced
QA can often be the most time-consuming part of the whole A/B testing process, but we cannot overstate its importance. Every website's QA requirements will vary, but we recommend you adapt any testing plan templates that your website QA team use specifically for your A/B testing program.