Why 90% of your experiments shouldn't last longer than 2 weeks

Posted by David Arnoux


(If you're too lazy to read...

tl;dr: Experiments should be time-boxed to 2 weeks, or to 1 month for larger or more rigid organisations.)


Time-boxed experiments

We run our company with an experiment-driven approach. I also like to call it an “evidence-based approach”. We treat almost everything as an experiment:

  • New business models
  • Hiring
  • Customer acquisition
  • Referral programmes
  • New pricing models
  • Conversion rate optimization
  • Revenue models*
  • New products*

*Note: It might seem impossible to test something like a new revenue model or a new product in 2 weeks. We often rely on early signals (also known as early indicators) to validate the riskiest aspects of new ideas. Of course this sometimes leads to false positives or false negatives, so experiments that rely on early signals are followed up with deeper "strong signal" experiments. Why do we do this? Because early signals allow us to make decisions faster.


What does a year look like?

We believe that most experiments should be time-boxed to last 2 weeks maximum. Some exceptions exist, but 90% of experiments fall within this rule.

And here is why:

There are 365 days in a year


Let's assume your company generally doesn't work on weekends, so let's remove those roughly 104 days.


G20 countries have an average of 12 public holidays per year, so let's remove those:


G20 countries allow for an average of 21 paid vacation days, so let's remove those too:


According to the top 6 results I found on Google, people take approximately 5 paid sick days per year, so let's remove those as well.


There are also a bunch of days where you're working but not working on experiments. Other stuff monopolises your schedule, whether it's team building, preparing quarterly reviews, setting up OKRs or helping onboard new colleagues. I estimate these at an optimistic 8 days in total (2 days per quarter). Let's remove those:

The average company has 215 working days a year available to run experiments.
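The subtraction above can be tallied in a couple of lines. This is just a sketch using the figures quoted in this post; your own calendar will differ:

```python
# Days available for experiments, using the estimates from this post.
days_in_year = 365
weekends = 52 * 2        # roughly 104 weekend days
public_holidays = 12     # G20 average
paid_vacation = 21       # G20 average
sick_days = 5            # rough average of paid sick days
other_work = 8           # reviews, OKRs, onboarding: ~2 days per quarter

experiment_days = (days_in_year - weekends - public_holidays
                   - paid_vacation - sick_days - other_work)
print(experiment_days)  # 215
```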


Why 2 weeks and not 1 week, you might ask?

Experiments take time to organize and run. To smooth out "weekend" effects and get a large enough sample size, we usually need 2 weeks to run through all the steps of our GROWS process: Gathering ideas, Ranking them, Outlining the experiment design, Working hard to execute the experiment, letting it run for a week, and finally Studying the results.

Having established that the average company has 215 working days a year available for experiments, we end up with a bandwidth of 21.5 "2-week slots" (of 10 working days each) per year to run experiments in.


Now let's play around with the numbers...


What percentage of experiments actually succeed?

  • 10% ?
  • 30% ?
  • 50% ?
  • 70% ? (you're lying; and if you're not, please contact me ASAP because I want to understand how you do it)

How many experiments does your team run per 2-week interval?

  • 1 ?
  • 3 ?
  • 5 ?
  • 10 ?
  • 20 ?
  • 200+? (You're Facebook, Booking.com or Amazon)


Now let's run some basic simulations:

Let's combine the number of experiments with the average success rate of experiments and get rid of the decimals:


Now let's take a benchmark from the high-frequency testing companies I've witnessed amongst the couple hundred companies we've trained, helped, worked in or coached: 10 experiments per 2-week slot and a 30% success rate.


According to this, it should be possible to run 65 successful experiments per year. That's a great number, as long as you make sure you're testing things that matter and that will have a strong impact on your organization.
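As a sanity check, here's that arithmetic spelled out, using the slot count derived earlier and the benchmark figures above:

```python
# 215 working days / 10 working days per 2-week slot = 21.5 slots per year.
slots_per_year = 215 / 10
experiments_per_slot = 10   # the high-frequency benchmark above
success_rate = 0.30

successful_per_year = slots_per_year * experiments_per_slot * success_rate
print(successful_per_year)  # 64.5, i.e. ~65 once rounded
```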

Now let's look at a corporate

Corporates (even those that have undergone an agile transformation) tend to be slower, bigger animals. They can't run significant experiments in 2 weeks because of the very departments and organizational structures that make them so powerful in the first place:

  • Bureaucracy
  • Politics (egos and/or current business models)
  • Legal
  • Branding
  • Agencies
  • Lack of resources in the team
  • Compliance
  • Governance
  • and a bunch more....

So in a corporate scenario, their year usually looks something more like this:

Their experiments tend to take 1 month to run, and you'll see that I've added a bunch more "green days" (non-experiment days) to their year.


Let's run some simulations for a corporate:

Now let's take the average we've witnessed with our best corporate clients: a 50% success rate* and 5 experiments running simultaneously:

*You'll note that corporates have a higher success rate. When they use their business intelligence correctly, they are capable of making fewer mistakes than startups. They'd better, as their slower cadence only leaves room for about 25 successful experiments per year.
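The corporate numbers work out the same way. The post doesn't state the slot count explicitly, so the 10 one-month slots below are my inference from the 25-experiments-a-year figure:

```python
# Corporate cadence: 1-month experiment slots instead of 2-week ones.
slots_per_year = 10        # assumed: ~10 usable months after the extra "green days"
experiments_per_slot = 5   # 5 experiments running simultaneously
success_rate = 0.50

successful_per_year = slots_per_year * experiments_per_slot * success_rate
print(successful_per_year)  # 25.0
```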


Conclusion:

Experiments should be time-boxed to 2 weeks, or to 1 month for larger or more rigid organisations.



So I'm curious... How long do you take to run experiments?



Written by David Arnoux