By now you might have heard the story of how Google used experiments to decide on just the right shade of blue for its affiliate links. And how that shade of blue drove an 'extra $200m a year in ad revenue', according to Google UK's managing director Dan Cobley. Or maybe you've noticed different artwork for the same show on your Netflix home page when compared to someone else's (here's why). Running experiments is a big part of modern business, yet many companies struggle to make experimentation part of how they operate. We've got a few tips on how to frame experiments within a business, the benefits of running them and some steps to get you started.
Framing an experiment
The easiest way to think of running an experiment in a business context is as a way to buy information. This immediately puts some value on knowing what impact a change has had on the business. If things are changed in a business and an experiment isn't run, it can be difficult to attribute the impact to the change. For example, if we drop the price of a product and sales go up - is it because we dropped the price? What if the product is seasonal, for example, ice cream in summer? Part of the increase in sales will be because of the price, but how much of it can we attribute to the price versus the rise in temperature? These are important questions we can answer by running an experiment. So in the case of ice cream pricing in summer, we might drop the price in one store and compare it to another store with similar summertime sales to measure the effect of the price.
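The comparison described above is often called a difference-in-differences estimate: compare the change in the store where we dropped the price against the change in a similar store where we didn't. A minimal sketch, with entirely hypothetical sales numbers:

```python
def did_estimate(treat_before, treat_after, control_before, control_after):
    """Difference-in-differences: the treatment store's change minus the
    control store's change. Subtracting the control store's change strips
    out effects that hit both stores (like hot weather), leaving an
    estimate of the effect of the price drop alone."""
    return (treat_after - treat_before) - (control_after - control_before)

# Hypothetical weekly ice cream sales (units), before and after the price drop.
effect = did_estimate(treat_before=500, treat_after=650,
                      control_before=480, control_after=560)
print(effect)  # 70 extra units per week attributable to the price change
```

The treatment store gained 150 units and the control store gained 80, so around 70 units of the uplift can be credited to the price rather than the season.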
How we run an experiment determines whether we can accurately measure the size of a change and how certain we can be about it. This is known as experimental design. We can think of experimental design as the process that maximises the information from an experiment and minimises the cost. The early days of experimental design were led by Sir Ronald Aylmer Fisher, who applied it primarily to agriculture around 1920. He looked at real data collected from 1842 and came up with approaches to experiments that would test things like the best ways to maximise crop yields. Today, businesses can run thousands of simultaneous experiments, testing things like pricing, marketing materials and website design, thanks to the internet (although experimentation certainly isn't limited to online businesses).
Benefits of running an experiment
One of the main reasons we want to run an experiment is to properly measure change. If a business is changing things without understanding the effect of these changes, it can be difficult to know with any certainty which actions are improving (or hurting) the overall business. Being able to estimate the effect of a marketing campaign, for example, could determine whether it should be run again next year or on a larger scale. Knowing the effect of a change can also help determine the amount of resources devoted to refining that process.
The other great thing about running experiments is the ability to establish causality. This means we know which of the things we changed actually had an impact on the outcome. Having a structured approach to making changes (i.e. experimental design) means that, even if we are changing multiple things at once, we are still able to correctly attribute which change led to which outcome. This prevents the situation where there is ambiguity about what the real drivers (or levers) of change in the business are.
Steps for running an experiment
Below is the general process I have for running an experiment within a business. With the process below, it's best, if possible, to run the experiment in a small controlled setting before rolling out the best version to the rest of the population. For example, testing a marketing campaign at a suburb level before rolling out a state-wide or nationwide campaign.
Decide on what you want to change - what is the thing you will be changing, and what number/metric will you be capturing to record that change?
Think about the impact you believe it will have - do you believe the metric will go up or down? By how much do you think it will go up or down, i.e. what is the size of the effect you think the change will have in one group compared to another?
Identify any other factors that could affect the results - are there any other things that might have an impact on the outcome besides the change itself? For example, will a marketing campaign be more or less effective for different age groups?
Determine the groups you will experiment on - this will depend on what you are changing (step 1), the size of the effect you are trying to measure (step 2) and any other factors that could alter the results (step 3). It is important to have one group that isn't changed at all; this is known as the control group. This is what we compare the results of the experiment against to measure the effect of the change. Determining the size and composition of the groups is really where experimental design comes into play.
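How big the groups need to be depends on the size of the effect you expect (step 2): small effects need large groups to detect. A rough back-of-the-envelope sketch for a conversion-rate experiment, using the standard normal-approximation formula at roughly 5% significance and 80% power (the function and numbers here are illustrative, not from the original):

```python
from math import ceil

def sample_size_per_group(p_base, uplift, z_alpha=1.96, z_beta=0.84):
    """Rough sample size needed in each group to detect an absolute
    uplift in a conversion rate, at ~5% significance (z_alpha=1.96)
    and ~80% power (z_beta=0.84), using a normal approximation."""
    p_new = p_base + uplift
    variance = p_base * (1 - p_base) + p_new * (1 - p_new)
    return ceil((z_alpha + z_beta) ** 2 * variance / uplift ** 2)

# e.g. baseline conversion of 10%, hoping to detect a 2-point uplift:
print(sample_size_per_group(p_base=0.10, uplift=0.02))  # 3834 per group
```

Halving the expected uplift roughly quadruples the required sample size, which is why being honest about effect sizes in step 2 matters so much.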
At this point it might be worth having a conversation with an analyst or statistician - if you have an analyst or statistician in your organisation, this is a good point to discuss the idea with them. It will be difficult to bring them on after this point if you need them to analyse the data.
Run the experiment - exciting! Although we won't draw any conclusions before the end of the experiment, it's important to observe how things are tracking while it runs. This is mainly to ensure the changes introduced have been implemented correctly.
Collect the data - if possible, it's best to collect data as the experiment is being run. This prevents any problems with data collection at the end of the experiment and means the analysis can be done closer to when the experiment is finished.
Analyse the results - carry out the analysis to see whether there was an effect on the metrics and what the size of those effects was. At this point it's good to decide whether it's worth expanding the roll-out to the larger population. It's worth keeping in mind that in some cases, even though you might have a statistically significant effect (we are fairly certain that a difference between groups was due to the change we made), you might not have a practically significant effect (the experiment shows such a small uplift that it's not worth rolling out to the rest of the population).
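The statistical-versus-practical distinction in the last step is easy to see with numbers. Below is a sketch of a two-proportion z-test (a standard large-sample test, not something prescribed by the original steps) on hypothetical conversion counts: with a million customers per group, even a 0.1 percentage-point uplift comes out statistically significant, but it may still be too small to justify a roll-out.

```python
from math import sqrt, erfc

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test (normal approximation, fine for
    large samples). Returns the absolute uplift and the p-value."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under 'no effect'
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided tail probability
    return p_b - p_a, p_value

# Hypothetical: control converts 100,000 of 1,000,000; variant 101,000 of 1,000,000.
uplift, p = two_proportion_ztest(conv_a=100_000, n_a=1_000_000,
                                 conv_b=101_000, n_b=1_000_000)
print(f"uplift={uplift:.4f}, p={p:.4f}")
# Statistically significant (p < 0.05), but the uplift is only 0.1 percentage
# points - whether that is practically significant is a business call.
```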
Some other tips for running experiments
Crawl before you walk - try simple A/B testing before more advanced methods. Basically, change one thing at a time and measure the result.
Start small - practice with things like wording and messaging in an email to customers before trying with larger and more complicated things like a television campaign.
At the very least, measure - if you can't experiment, then at the very least collect data on the process before and after the change. It might not be possible to accurately measure what caused the change, but you might be able to identify things that are correlated with it.
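For the simple A/B testing suggested above, one practical detail is how to split customers into groups. A common trick (sketched here with a hypothetical experiment name) is to hash each customer's id together with the experiment name, so the same customer always lands in the same group and different experiments get independent splits:

```python
import hashlib

def assign_variant(customer_id: str, experiment: str = "subject-line-test") -> str:
    """Deterministically assign a customer to variant A or B by hashing
    their id with the experiment name. Stable per customer, and different
    experiment names produce independent splits."""
    digest = hashlib.sha256(f"{experiment}:{customer_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# Sanity check: the split should come out close to 50/50.
groups = {"A": 0, "B": 0}
for i in range(10_000):
    groups[assign_variant(f"customer-{i}")] += 1
print(groups)  # roughly a 50/50 split
```

A deterministic split like this also means you can reconstruct group membership later from the raw customer list, which helps at the "collect the data" step.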