Mark Zuckerberg famously said, “Move fast and break things. Unless you are breaking stuff, you are not moving fast enough.”
Since then, startups and growth marketers have latched onto the statement. “Move fast and break things” has become a way of life, an ideal for aspiring entrepreneurs who just want to hustle all day, hustle all night like Gary Vaynerchuk.
But how true is that statement, which Mark made many, many years ago?
Does it apply to testing and experimentation? The philosophy of high velocity testing, made popular by a number of different testing and growth experts, certainly makes the case that it does.
What Is High Velocity Testing?
High velocity testing, also known as high tempo testing, is the philosophy that rapid testing and experimentation is the key to major growth. Seems pretty simple, right? To speed up growth, test more things faster.
Sean Ellis of GrowthHackers.com, who has championed the philosophy in recent months, explains…
In practice, however, the philosophy is more complicated. For starters, our 2016 State of the Industry Report found that most respondents run fewer than 5 tests a month. 43% only run 1-2 tests per month. Not exactly high tempo.
In fact, only 5% of respondents are running 21+ tests a month, which is roughly 5 tests a week.
Here are just a few other reasons that high velocity testing is easier said than done…
- Optimization budgets are restrictive, resulting in small teams and lower prioritization.
- Optimization is no small task, making most optimizers very busy people.
- Many optimizers are new to the industry (nearly 20% of all respondents have been working in their CRO role for less than a year) and still learning.
But just because something is difficult doesn’t mean it’s not worth pursuing.
Why Is High Velocity Testing Important?
In recent history, you’ve seen the impact of high velocity testing multiple times. Think of the last company that seemingly appeared out of nowhere overnight. For me, it’s Airbnb. One day, I had never heard anyone even mention the name and the next everyone was talking about it.
At ConversionXL Live, Morgan Brown of Inman confirmed that the key to unlocking that type of growth is high velocity testing…
While Twitter’s growth is currently disappointing investors, it grew rapidly from 2010 to 2012. Why? It had a lot to do with the fact that they exponentially increased their testing velocity. Twitter moved from 0.5 tests per week to 10…
Morgan adds that Twitter’s growth wasn’t the result of some wild growth hacking sorcery…
In fact, GrowthHackers.com did something similar to Twitter.
In an article, Sean explains that they had hit a monthly active users (MAUs) plateau. In the first year, they had 90,000 MAUs. Without spending a dollar or increasing the size of their team by even an intern, they grew to 152,000 MAUs in just 11 weeks by dedicating themselves to high velocity testing.
Why is high velocity testing so effective?
As Claire Vo of Experiment Engine explained at ConversionXL Live, it has a lot to do with shifting the focus from testing program outputs (wins and case studies) to inputs (speed and quality)…
What Goes Into High Velocity Testing?
High velocity testing means moving through the growth process at a rapid pace. Morgan shared a basic outline of the growth process with us…
So, to master high velocity testing, you’ll need to go through each stage and optimize for speed.
1. Constant Ideation
If you want to run a lot of tests, you’re going to need a lot of test ideas. Sean explains why that too is sometimes easier said than done…
We all know that the best way to come up with test ideas is to conduct conversion research. Another way to ensure there’s constant ideation is to involve the entire company in the process. Ask the engineers, ask customer support… get ideas from every corner of the company.
Of course, you can also generate ideas based on every step of the funnel. Here’s a look at pirate metrics (AARRR) as designed by Dave McClure of 500 Startups…
For optimizers, those descriptions are a bit different…
- Acquisition: Optimizing emails, PPC ads, etc.
- Activation: Optimizing for that first conversion.
- Retention: Optimizing for the second, third, fourth conversion.
- Revenue: Optimizing for actual money. (This is especially relevant for SaaS and lead gen sites.)
- Referral: Optimizing for current customers who are willing to tell a friend.
When most people think of optimization, they’re usually just thinking of the activation stage. That is, getting an email or a sale or whatever it might be. Fortunately, you have four other stages of the funnel to optimize as well.
If you’re conducting research, involving the entire company and expanding beyond optimizing for activation, you should have no shortage of ideas. In fact, your real issue will be idea overload.
2. Strategic Prioritization
Having a huge backlog of ideas doesn’t exactly sound conducive to speed. If those ideas are prioritized in a meaningful way, however, it can be.
Now, prioritizing your ideas can be done in a number of ways. To better understand the process, it’s best to examine how other companies are prioritizing and the frameworks they’ve developed.
There are a lot of different models for prioritization out there. While we found most of them helpful, we found each one was lacking in one way or another.
We wanted something that forced a yes or no, binary decision to remove subjectivity. Here’s what we ended up with…
We prefer this framework for three reasons:
- It makes the “potential” or “impact” rating more objective.
- It makes the “ease” rating more objective.
- It helps to foster a data-informed culture.
The model demands that everyone bring data to the prioritization discussion:
- Is it addressing an issue discovered via user testing?
- Is it addressing an issue discovered via qualitative feedback?
- Is the hypothesis supported by mouse tracking, heat maps or eye tracking?
- Is it addressing insights found via digital analytics?
Sean and his team at GrowthHackers.com have built their prioritization framework, known as ICE, into Projects, a testing and experimentation program tool. Here’s how it works…
- Impact: If it works, how much of an impact will it have on KPIs and revenue?
- Confidence: How sure are you of that estimated impact?
- Ease: How easy is it to launch the test or experiment?
You give each of the three categories a number from 0 to 10, which spits out a number for the idea as a whole. For example…
- Impact: 7
- Confidence: 10
- Ease: 10
…results in a rating of 9. You’ll want to test that idea that’s rated 9 before moving on to an idea that’s rated 3.
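The scoring above (7, 10 and 10 averaging out to 9) can be sketched in a few lines of code. This is a minimal illustration of ICE-style ranking, not the actual Projects tool; the idea names and ratings below are invented.

```python
# ICE scoring sketch: each idea gets 0-10 ratings for impact,
# confidence, and ease, and the ICE score is their simple average.
# Ideas and ratings here are hypothetical examples.

def ice_score(impact, confidence, ease):
    """Average the three 0-10 ratings into a single ICE score."""
    return (impact + confidence + ease) / 3

ideas = [
    ("Shorten signup form", 7, 10, 10),  # (7 + 10 + 10) / 3 = 9.0
    ("Rewrite pricing page", 9, 4, 2),   # 5.0
    ("Change CTA color", 2, 5, 2),       # 3.0
]

# Test the highest-scoring ideas first.
ranked = sorted(ideas, key=lambda i: ice_score(*i[1:]), reverse=True)
for name, impact, confidence, ease in ranked:
    print(f"{ice_score(impact, confidence, ease):.1f}  {name}")
```

Ranking the whole backlog this way keeps the "test the 9 before the 3" rule automatic rather than a judgment call made in a meeting.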
Bryan Eisenberg’s TIR
Bryan Eisenberg uses yet another framework for prioritizing ideas, which focuses on three factors…
- Time: How long will it take to execute?
- Impact: What’s the revenue potential, the anticipated outcome?
- Resources: What’s the cost of running the test or experiment?
Each factor is assigned a score from 1 to 5, 5 being the best. So, for example, if a project won’t take long, it would be given a 5 for the “Time” factor.
Next, multiply the three factors. So, the best possible score is 125 (5 x 5 x 5). The higher the score, the better, so start with the ideas that come closest to 125.
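The TIR arithmetic is just as easy to automate. Here’s a hedged sketch of the multiply-and-rank step described above; the example ideas and their ratings are made up for illustration.

```python
# TIR prioritization sketch: rate Time, Impact, and Resources from
# 1 to 5 (5 is best), then multiply. Best possible score: 5 * 5 * 5 = 125.
# Idea names and ratings below are hypothetical.

def tir_score(time, impact, resources):
    """Multiply the three 1-5 ratings; higher is better."""
    return time * impact * resources

ideas = {
    "Simplify checkout": (4, 5, 4),   # 4 * 5 * 4 = 80
    "New homepage hero": (2, 4, 2),   # 16
}

# Start with the ideas that come closest to 125.
ranked = sorted(ideas, key=lambda name: tir_score(*ideas[name]), reverse=True)
print(ranked)
```

Because the factors multiply rather than average, a single 1 (say, a test that ties up scarce developer resources) drags the whole score down hard, which is the point of the framework.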
3. Smart Testing Management
Here are some more unfortunate statistics from our 2016 State of the Industry survey…
- 26% of respondents meet with their optimization team to discuss CRO “only when necessary”. Another 23% meet no more than once every two weeks.
- 41% of respondents say there is no one directly accountable for conversion optimization at their company.
What that tells us is that testing programs need some management help… fast.
Morgan explains why accountability matters and how to achieve it…
In fact, he suggests blocking off an entire hour of your time every week to…
- Review your KPIs and update your growth focus.
- Look at how many tests were launched, how many were not.
- Discuss key learnings from the tests run the previous week.
- Choose tests from the backlog for the upcoming week.
- Create a list of your favorite upcoming tests for future weeks.
- Recognize how many new ideas were submitted and the top contributor for the previous week.
Aside from meeting regularly, it’s important to manage resources and be realistic about your testing velocity…
4. Insights > Wins
After the tests have been run, you need to archive the results for future learning. In my opinion, there are three core benefits to maintaining a thorough archive…
- You won’t repeat tests by accident. (This is a very real issue for big teams and extensive programs.)
- It’s easier to communicate wins and learnings to clients, bosses and co-workers.
- You’ll emphasize learning from all tests, thus improving your knowledge and the quality of future tests.
If you’d like to learn more about archiving your test results, we’ve written on it extensively in Archiving Test Results: How Effective Organizations Do It.
At ConversionXL Live, Claire mentioned Hotwire‘s learn rate. Instead of focusing on the number of tests that result in a win, Hotwire focuses on the number of tests that result in an insight. See, a test doesn’t need to win for you to learn something about your audience or site.
That’s what an archive is all about; prioritizing learning and sharing those insights across the company.
Where Does High Velocity Testing Go Wrong?
In practice, you’re likely to run into a few issues with high velocity testing. It’s your responsibility to anticipate and prepare for these issues in advance.
While there are many, there are three obvious issues you’ll have to deal with: your culture might not support it, validity threats will creep in, and quality could begin to decline.
1. There’s No Culture of Experimentation
Your goal from day one should be to establish a culture of experimentation and data-driven growth. We’ve written an entire article, 6 Clever Nudges To Build a Culture of Experimentation, on how to do just that.
Some examples include…
- Ensuring optimization updates and insights are shared throughout the entire company.
- Encouraging, even gamifying, optimization at all levels.
- Celebrating failure and prioritizing exploration / learning.
Whatever you have to do, do it and do it often. Without a culture of experimentation, high velocity testing will fall flat.
Example: 1% Experiments
Josh Aberant of SparkPost, formerly Twitter, shared the concept of 1% experiments at eMetrics in San Francisco. Essentially, everyone at Twitter is authorized to run 1% experiments (i.e. experiments on 1% of the traffic), not just the growth team. You don’t need approval of any kind.
In fact, if you show up to a meeting with an executive without an insight from a recent 1% experiment, it’s a major faux pas.
Now, you likely don’t have 100 million users that you can run valid 1% experiments on, so I’m not encouraging you to start doing that. What I’m saying is simply that high velocity testing has to be a company-wide commitment.
2. Tests Are Called Too Soon & Other Validity Threats
If you read ConversionXL regularly, you’re familiar with the concept of validity threats and sample pollution. If not, take the time to read more about how to minimize A/B test validity threats and how to manage sample pollution. It’ll be worth it, I promise.
You can use a sample size calculator to calculate how many people you need to reach before you can call your test one way or the other. Remember to test in full week increments to ensure you have a representative sample (e.g. day of week and time of day can have a major impact on results).
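If you’re curious what such a calculator is doing under the hood, here’s a rough sketch using the standard normal-approximation formula for comparing two conversion rates at 95% significance and 80% power. The baseline rate and lift below are invented examples; use a proper calculator for real decisions.

```python
import math

# Sketch of sample-size math for a two-variant conversion test.
# Assumes a two-sided 95% significance level and 80% power
# (z values 1.96 and 0.84); illustrative only.

def sample_size_per_variant(baseline_rate, minimum_relative_lift):
    """Visitors needed per variant to detect a relative lift."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + minimum_relative_lift)
    z_alpha = 1.96  # two-sided 95% significance
    z_beta = 0.84   # 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = ((z_alpha + z_beta) ** 2) * variance / (p1 - p2) ** 2
    return math.ceil(n)

# e.g. a 3% baseline conversion rate, detecting a 20% relative lift
print(sample_size_per_variant(0.03, 0.20))  # 13896 visitors per variant
```

Notice how quickly the required sample grows as the detectable lift shrinks; that’s why low-traffic sites struggle to run many tests per month.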
Of course, if you go too far in the opposite direction (i.e. call your test too late or wait for months to call it because of low traffic), you run into a similar problem. Ton puts it well…
When you’re moving quickly, it’s easier for validity threats and sample pollution to rear their heads. Be sure you remain quick, but vigilant.
3. Quality Begins to Suffer
Earlier, I mentioned two inputs: quantity and quality. Typically, when you shift focus to one, the other begins to suffer. If you want your high velocity testing program to work, you’ll need to maintain both. To be frank, it doesn’t matter how quickly you test bullshit ideas.
When it comes to quality, Claire talks about three metrics you’ll want to focus on…
So, it’s less about the quality of individual tests and more about the quality of the testing program over time. After all, would you want the entire ConversionXL community judging your CRO know-how based solely on the results of your last test?
Before you start your high velocity testing program, chart your quality based on the metrics above. Then, be aware of how quality is trending over time. You’ll know as soon as it begins to dip, so you can take immediate action.
How to Increase Your Testing Velocity
So, how do you go about increasing your testing velocity responsibly? According to Claire, it comes down to three factors: testing capacity, testing velocity and testing coverage. She explains…
Here’s how you can answer those very important questions that Claire is asking.
1. Testing Capacity
Your testing capacity is pretty simple to calculate. There are 52 weeks in a year, so you divide that by your average required test duration (in weeks). Then you multiply that number by the number of different pages / funnels that you can test at one time.
So, for example, if my traffic level typically indicates that I need to run tests for two weeks and I have ten different lead gen pages that I can test simultaneously, my testing capacity is 260 (52 / 2 * 10). That’s five tests a week.
If, for any reason, you are not using your full testing capacity, you’re losing money. So, calculate it and commit to a testing velocity that will ensure you’re not wasting your capacity.
2. Testing Velocity
How you measure your testing velocity depends on just how rapid your testing and experimentation is, Claire explains…
A key thing to note here is the trend over time. Is your velocity staying the same? Decreasing? Increasing? It’s not enough to know how many tests you’re running every month, you need to know whether that number is higher than it was the previous month. If you’re not getting better, you’re getting worse.
3. Testing Coverage
Once you know how many tests you could be running (testing capacity) and how many tests you are running (testing velocity), all that’s left is your testing coverage. Your testing coverage answers an important question: On what percent of testable days are you running a test?
How many days have gone by with zero tests running? When you’re not running a test, you have to ask yourself why. Why are you wasting time and traffic not testing? You don’t get that traffic back when you’re finally ready to launch a test.
The goal, of course, is to have 100% testing coverage. Actually calculating waste can be eye-opening and inspire that culture of experimentation that we talked about above.
Should you be moving fast when it comes to testing and experimentation? Absolutely. Should you be breaking things? No, quality is just as important as its sister input, quantity. Even Zuckerberg revised his philosophy to “move fast with stable infrastructure”.
To implement a high velocity testing program, you have to commit to…
- Constant ideation, which stretches across the entire company and covers the entire funnel.
- Strategic idea prioritization using one of the many frameworks available today.
- Smart testing management, which means weekly meetings to create accountability and managing resources effectively / realistically.
- Prioritizing learning and sharing thoroughly archived insights across the company.
- Avoiding the various pitfalls of high velocity testing (e.g. quality reduction, validity threats, etc.)
You’ll also need to know three very important metrics…
- Testing capacity, which is how many tests you can possibly run.
- Testing velocity, which is how many tests you are running weekly / monthly.
- Testing coverage, which is the percentage of testable days on which you’re running a test.