
“Don’t Build a Growth Team?”: 9 Reasons Why They Might Fail

I love running growth teams.

It’s everything I could want from a job. It directly impacts the company, is fairly autonomous, works great with a few high-caliber folks, and involves a ton of A/B tests.

I’ve spent years running these teams—but I don’t know if I’ll ever build one again. I doubt that I’ll even have a growth team at any company I’m managing in the future.

In fact, I believe most companies should not have a growth team. In the rest of this post, I’m going to try to talk you out of building one.

How I define “growth”

The term “growth” gets used loosely these days. A lot of folks treat it as a synonym for online marketing.

I use a more restrictive definition: A growth team is a product tech team that’s focused on acquisition instead of core product features. In other words, it’s a team of designers and engineers.

And the most common type of work I’ve done with my growth teams is using A/B tests to optimize an existing funnel.

The growth teams that I’ve built

I’ve had the opportunity to build growth teams across multiple companies:

KISSmetrics

I joined as employee 14 and spent a few years at this analytics startup, founded by Hiten Shah and Neil Patel. I had the great fortune of learning growth from Hiten, one of the original growth hackers. He was literally in the room when Sean Ellis coined the term “growth hacker.”

After working as an individual contributor for a while, I had the opportunity to build my first growth team. We had a designer, a front-end engineer, and a data scientist.

We ran A/B tests around the clock on our free-trial funnel for nine months and got these wins:

  • Quadrupled monthly lead volume in one year;
  • Tripled the conversion rate from visitor-to-trial signups on our homepage.

I Will Teach You to Be Rich

I was hired to level up the marketing team. When I joined, it was a typical lead-gen team: all marketers, focused on lead generation. I then evolved it into a growth team by hiring a designer and two engineers.

Using the same playbook that I developed at KISSmetrics, we ran non-stop A/B tests on our email subscription funnel, driving 480,000 leads in 2016 and smashing our lead goals for the year.

We expanded to four growth teams, each assigned to different parts of the funnel. Two of the teams focused on the two main sources of revenue, one team on inbound leads, and the last team on site conversion rates. Every team included a mix of designers, engineers, marketers, and copywriters.

Things…did not go well.

My whole growth system fell apart, and I learned a lot of tough lessons about the limits of growth programs. Hopefully, the insights below will help you avoid the same mistakes that I made.

9 reasons why growth teams fail

After years of building and managing growth teams, I’ve come across nine difficulties.

1. Probability is very counter-intuitive.

What do you expect if I say there’s an 80% chance of Version A winning over Version B? Most people assume it’s practically a sure thing. I don’t. Version B still has a one-in-five chance of winning.

I even consider a 95% chance to carry too much uncertainty for long-term A/B testing. Because we need to rely on our funnels over the long term, even small amounts of volatility can erase all our previous gains. We earn gains one inch at a time, but we can lose them all with one bad test.

Non–data scientists often overestimate the certainty of test results.

We all have a hard time intuitively understanding volatility, which increases the odds that we’ll make a bad call and erase our gains.

Take 95% certainty compared to 99%. Because 95 is pretty close to 99, it feels like the difference should be minimal. In reality, there’s a gulf between those two benchmarks:

  • At 95% certainty, you have 19 people saying “yes” and 1 person saying “no.”
  • At 99% certainty, you have 99 people saying “yes” and 1 person saying “no.”

It feels like a difference of four people when, in reality, it’s a difference of 80. That’s a much bigger difference than we expect.
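To make that gap concrete, here’s a minimal simulation sketch of A/A tests (control vs. control, so both versions share the same true conversion rate and any “winner” is pure noise). The traffic and conversion numbers are illustrative assumptions, not data from any of my programs:

    # A/A simulation: both "variants" convert at the same true rate,
    # so every declared winner is a false positive.
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(42)

    def false_winner_rate(certainty, n_tests=20_000, visitors=10_000, cr=0.03):
        """Fraction of A/A tests that cross a one-sided significance threshold."""
        a = rng.binomial(visitors, cr, size=n_tests)   # conversions, version A
        b = rng.binomial(visitors, cr, size=n_tests)   # conversions, version B
        pooled = (a + b) / (2 * visitors)
        se = np.sqrt(2 * pooled * (1 - pooled) / visitors)
        z = ((b - a) / visitors) / se
        p_value = norm.sf(z)                           # P(B "wins" by chance alone)
        return np.mean(p_value < (1 - certainty))

    print(false_winner_rate(0.95))   # ~0.05 -> about 1 bogus "winner" in every 20 tests
    print(false_winner_rate(0.99))   # ~0.01 -> about 1 bogus "winner" in every 100 tests

Run a year of tests at the 95% bar and one of those false winners will eventually land in your funnel, which is exactly the kind of volatility that quietly erases earlier gains.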

Most folks never get a deep grasp of how this works, and even the ones who do need a good six months of mentoring and supervision to get there. We all want to cut corners on testing because it feels like there’s less risk than there really is.

In my experience, only data scientists have an intuition for this stuff. I have yet to come across a designer, engineer, or marketer who intuitively understood probability on day one.

This makes it very difficult to scale up teams that do lots of A/B testing, one of the primary tasks for a growth team. Because of this, I would expect it to take me a good 3–5 years to build multiple growth teams in the future. That’s not an easily scalable strategy.

If you want to dig deeper, this whitepaper completely changed how I approach A/B testing. I still review it regularly.

2. Most department heads don’t want data.

Finding winners was never my biggest problem when running a growth program. It was trying to avoid having other executives kill my previous wins.

I wish I were joking.

I spent more time advocating to keep verified wins live in our funnels than I did looking for new wins. Take our winning homepage at KISSmetrics:

Testing determined this homepage was a winner. Too bad executives hated it.

This thing crushed sign-up conversions. After a year of testing, we had this page almost perfectly optimized. The headline, the URL box, the call-to-action (CTA) button copy, the secondary CTA toward the bottom, even the random stock-photo dude. This page converted triple the number of signups compared to more generic SaaS homepages.

The thing is, multiple executives hated this page. I had to defend it regularly to keep it live. After I left, the homepage immediately changed to something more generic.

Now that I have more experience, I get it. When things aren’t going well, folks reach for easy scapegoats. An aggressive homepage makes for an easy scapegoat.

Remember this: Anything can be rationalized away. If a test or strategy goes against the instincts of the core people at the company, it will get nixed sooner or later.

A methodical program built on data and testing won’t spare you from the internal conflicts that happen at every company. Getting buy-in, building political capital, and wielding power internally are still essential for any growth program. Data isn’t a shortcut.

3. Following the data gets the growth team out of sync.

I have a rule-of-thumb for picking A/B test winners: Whichever version doesn’t make any sense or seems like it would never work, bet on that. More often than I like to admit, the dumber or weirder version wins.

This is actually how I tell if a company is truly optimizing their funnel. From a branding or UX perspective, it should feel a little “off.” If it’s polished and everything makes sense, they haven’t pushed that hard on optimization.

Keeping a growth team in sync with the rest of the company is hard. It’s easy to lose that alignment—and break down.

I believe this happens because we stumble across an insight about our market that breaks our previous frameworks and understanding. So what appears to make no sense to us makes perfect sense to our users.

At first, this is a quirky realization. “Haha, that variant seems ridiculous, but it worked—let’s ship it.” After you’ve been managing teams and working with executives for a while, this becomes a huge liability.

Executives won’t care that you found a piece of data that potentially invalidates the entire positioning, company strategy, or brand. A single A/B test doesn’t carry enough weight internally to change the direction of other teams. From an executive’s perspective, this makes sense. It’s just one data point, and most data is flawed. So why bet the company on it?

As a growth manager, that puts you in a bind. Your funnels will get out of whack with the direction the rest of the company is going. Headlines won’t align with positioning, branding will be slightly off, and UX won’t be consistent.

All these items leave the growth team vulnerable to criticism from other teams. Design, Product, Marketing, and Sales get pissed that you’re not in sync.

It took me a long time to realize this, but they’re right. The perfect plan executed poorly isn’t as good as a good plan executed well. There are many ways to win, but all require teams to work together. Being in sync is more important than optimizing my own area.

Take product positioning. I now believe that “good enough” positioning pushed consistently across the whole company will make more of an impact on company growth than “perfect” positioning used inconsistently across marketing assets. Markets absorb messages only if they’re delivered extremely consistently. 

While it is possible to run an optimization program that stays in sync, it takes a lot of judgment and experience to know when to follow the data and when to pull back.

4. Growth programs are really easy to screw up.

One bad test can force you to throw out months of wins.

Multiple times in my career, I’ve had to throw out 3–6 months’ worth of tests. Every winner, every data point, every insight had to be scrapped. One small bug in my testing environment threw off all my results.

One time, we ran a bunch of tests on our drip email campaigns over a five-month period. Our new email subscribers would receive email campaigns for different products. We were optimizing revenue from these campaigns by testing different versions.

Suddenly, our revenue from these campaigns dropped by 50%. It was a sharp drop in our conversion rates—like something changed in the funnel—not a gradual decline from a slowing trend.

I tore through our data and testing to find the problem. We re-ran tests. I personally looked for patterns in hundreds of email subscriber data profiles. We QA’d everything to death. We reversed changes.

We never got the conversions back. And I never found the answer for why the conversion rate dropped.

Maybe the drop in conversion was outside my control. Maybe not. I did find a number of serious bugs and infrastructure flaws in our testing during my audit. These discoveries gave me zero confidence that our program was airtight. So I threw out all our data, and we started over. 

Every time I’ve started testing with a new set of tools, I’ve gotten hit with a problem like this. At best, I have to throw out test data. At worst, I (may have) tanked the funnel. Even with QA and control-vs.-control (A/A) testing, I wasn’t able to avoid errors like this.

5. Most companies don’t have the data.

The cold, hard truth: Most companies don’t have enough data for a full-time optimization team. 

In the companies where I’ve worked, we had hundreds of thousands or millions of visitors per month. That volume was just enough to optimize sign-up flows. We didn’t even have enough to A/B test against a revenue event. Total purchase volume was simply too low.

Yes, if you work at Facebook or Amazon, you’ll have plenty of data. Consumer tech companies tend to have enough data since consumer markets (and purchase volume) tend to be large. 

But for startups or enterprise tech companies, there’s just not enough volume for testing.

Even if you have enough volume to optimize your main funnel, you’ll run out of data once you hit a ceiling on your main funnel and need to optimize other flows.

This is exactly what happened to me. I had just enough data to optimize the main funnel, found a bunch of wins, then hit a hard wall and needed to rebuild the entire distribution strategy from scratch to keep growing.

My rules of thumb on testing volume:

  • I need 20,000 people per month seeing the asset I want to optimize.
  • I also prefer to have 1,000+ revenue conversions per month on that same funnel.

I know folks in this space say that you can test with 100 conversions per variant. But when I look back on my optimization programs, my most notable wins always had 20,000 people moving through the funnel per month. When we tested lower-volume assets, we just couldn’t find wins fast enough to make a real difference.
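If you want to pressure-test these thresholds yourself, here’s a rough sketch using the standard two-proportion sample-size formula. The 3% baseline, 20% relative lift, and 99% significance level are illustrative assumptions, not numbers from any of my funnels:

    # Visitors needed per variant to detect a relative lift at a given
    # significance level and statistical power (two-sided test).
    from scipy.stats import norm

    def visitors_per_variant(baseline, relative_lift, alpha=0.01, power=0.80):
        p1 = baseline
        p2 = baseline * (1 + relative_lift)
        p_bar = (p1 + p2) / 2
        z_alpha = norm.ppf(1 - alpha / 2)
        z_beta = norm.ppf(power)
        numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                     + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
        return numerator / (p2 - p1) ** 2

    n = visitors_per_variant(baseline=0.03, relative_lift=0.20)
    print(round(n))                    # ~20,700 visitors per variant, ~41,000 total
    print(round(2 * n / 20_000, 1))    # ~2 months of runtime at 20,000 visitors/month

At that traffic level, a single test on a meaningful lift already takes a couple of months, which is why lower-volume assets rarely produce wins fast enough to matter.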

6. Growth teams are expensive.

Growth teams get expensive fast. Even a bare-bones team needs 4 people:

  • Growth manager;
  • Designer;
  • Two engineers.

I’ve found that one engineer isn’t quite enough bandwidth to keep up with a full-time designer and growth manager. Two engineers keep the whole team moving without downtime.

None of these roles is cheap. Even if you recruit in lower-cost areas and look for more junior folks, you’re looking at $150K each, fully loaded. That’s an optimistic estimate, too, so we’re at $600K in labor per year.

Now add tools and data infrastructure, easily another $20–50K per year. A/B testing tools used to be cheap—then the vendors realized that the only companies that can legitimately use them are large ones, and raised their prices substantially. Decent data tools don’t come cheap, either.

Let’s call it $650K per year in total budget. It generally takes me six months to get a growth program started. This includes onboarding, approvals for everything, setting up tools, running a bunch of preliminary tests to verify that they’re working, and getting the team comfortable working together.

Then, another 12 months of non-stop testing to drive conversions in a funnel. After about 12 months, I can usually double conversion rates. If I’m lucky, maybe even triple them.
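As a back-of-the-envelope check (every line item here is one of the rough estimates above, not an actual from any specific company):

    # Rough 18-month budget sketch using the estimates above.
    team = 4 * 150_000      # growth manager, designer, two engineers, fully loaded
    tools = 50_000          # A/B testing + data tooling, top of the $20-50K range
    annual_budget = team + tools              # ~$650K per year
    program_months = 6 + 12                   # ~6 months of setup, then 12 months of testing
    total_cost = annual_budget * program_months / 12
    print(f"${total_cost:,.0f}")              # ~$975K -- just short of a million dollars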

Want a growth team? Make sure you have a six- or seven-figure budget.

That’s a year and a half of work (i.e. $950K) to double conversions in one funnel. That’s a lot of coin. For $950K, I could also build a blog—from scratch—to hundreds of thousands of visitors per month. Or cover every major event in my industry for several years. Or blitz Facebook with enough branding campaigns that everyone will know the company after a few months.

Looking back, I sure wish I had spent that money elsewhere. It would have served those companies much better. If a million dollars is a rounding error in your budget, go for it. Otherwise, consider using those marketing dollars to scale up your core distribution channel first.

7. Growth teams are not fungible.

After building a “growth” team, we can just assign that team another KPI to grow and they’ll figure it out, right? That has not been my experience.

Most people need a playbook of some kind. They’re great at incrementally improving a playbook but really struggle when trying to build something from scratch. I’ve found this to be true across all disciplines and departments.

Of course, there are exceptions. But the exceptions are rare and tend to be founders.

I trained my growth teams to optimize funnels and run A/B tests. When that wasn’t a priority anymore—and I needed top-of-funnel growth—those same teams were completely unprepared to solve a different problem. We went back to the bottom of the learning curve.

Whatever you assign your “growth” team, they’ll get good at that one thing. Switching gears means starting over and learning a whole new process from scratch. While the term “growth” may be flexible, the team won’t be.

8. Growth teams are only as good as their managers.

I love the concept of a fully functional team. Get every discipline into one team and cut them loose. These are the teams that I prefer to manage myself.

This is part of what makes growth teams so effective. There’s no need to wait on design or engineering to complete a project. The team already has all the design and engineering resources they need. Simply make a decision as a team on what needs to get done, then ship it.

When I scaled one of my programs to four growth teams, I discovered that fully functional teams have a major weakness: The team is only as good as the manager.

We all know that management is a tough job. This isn’t news. But being a manager of a fully functional team is even more difficult, which caught me by surprise.

Managing a high-performing team is hard. Even the right candidate may take a full year—with lots of mentoring—to settle into the role.

In hindsight, it seems obvious. Someone stepping into a growth team manager role needs to know how to work with designers and engineers. They need to know how to guide the team through technical hurdles. They need to own a KPI. They need to find alternative options when plans fail. They need to coordinate with other managers and teams. That’s on top of having a knack for improving conversions.

Most folks require many years of management experience to handle a team like this. Even for an ideal candidate, it takes me at least six months of coaching before they start to find their feet—then another six months of guidance before they can truly run the show.

If someone isn’t extremely coachable, loves learning multiple disciplines, empathizes with their teammates, and relentlessly pursues a goal, it’ll take much longer.

Whether you’re building a growth team from scratch or attempting to scale multiple teams, finding the right managers is a serious bottleneck.

9. Growth teams have limited revenue potential.

I’ve left the biggest problem for last. Most companies can get only so much revenue growth from an optimization program. After all, conversion rates can go only so high.

My rule is that I can double or triple the conversion rates in a funnel within 9–12 months. I’ve done this multiple times across multiple companies. For that year, the metrics look amazing. Conversions are up. Leads are up. Revenue is up.

But then what? Once you hit a wall on conversions, what do you do?

A conversion rate can go only so high. If you have just one funnel, revenue growth—and the ROI for your growth team—will plateau.

There’s all this data infrastructure, a fully staffed team, and lots of experience with testing. (Remember: The team isn’t fungible; they can’t easily switch into another workflow.)

If you’re a massive company with a ton of funnels, you move on to the next user flow, and that’s great. Most of us are not in that position. We have one primary funnel. Once conversion increases hit a wall, there’s not much else to do.

Contrast this with a marketing team going after any particular channel—events, paid, SEO, community, PR, whatever. The long-term game completely changes:

  • At scale, these channels can 10X the size of most companies. There’s plenty of room to grow.
  • When you hit a ceiling on a given channel, there are plenty of wins around process efficiency to lower acquisition costs.
  • As long as the channel is profitable, it’ll continue to throw off profit year after year. Smart management and process design keep the channel running smoothly for years on end.
  • Once it’s mostly automated, that frees up your time to go build another channel. You never have to dissolve the team because you run out of opportunities.

A properly scaled marketing channel can 10X your business. Best case for a growth team that optimizes your funnel? 3X. I’d rather spend my time on a 10X strategy than a 3X one.

One major exception

Some businesses are driven primarily through user loops: more users bring in more users.

For example, take Facebook, the very company that popularized the concept of growth teams. 

Their growth came from users, and they had true virality. A new user comes in, activates, then invites other users.

With funnels like this, a growth team can drive core business growth. The user acquisition funnel is such an enormous lever that it’s well worth the cost of a growth team. 

Consumer tech companies like Facebook, Twitter, Skype, Snapchat, and Pinterest all have this option. Many marketplaces like Uber and Airbnb also have strong user loops worth optimizing. 

In these situations, I wholeheartedly recommend building a growth team.

Just remember that true virality is really rare. Most businesses don’t have a user loop that drives their core growth. They rely on standard distribution strategies to grow revenue.

What do we do instead?

So if it’s not worth investing a million dollars to double conversion rates in our funnel, should we worry about conversion rates at all? Yes.

There are still plenty of things we can do:

1. Use your funnel as a guide for a healthy business.

The overall conversion rates in your funnel give you a feel for how well your business model is working. If conversion rates suck at multiple points in your funnel, work on the foundational parts of your business, like product-market fit, positioning, and your offer.

For anyone in a senior marketing role, you should absolutely audit your positioning. There are almost always gaps, and it’s the single most important variable that applies to all your marketing assets.

Follow the recommendations from the book Obviously Awesome by April Dunford. It’s extremely practical and the best resource on positioning that I’ve found. I really wish she had written it 10 years ago. It would have saved me a lot of heartache.

2. To fix your funnel, eyeball iterations that go after big wins.

If you have one step in your funnel that’s broken, iterate on that step without A/B tests.

Collect qualitative feedback from users through user tests, heat maps, surveys, and interviews. Find the biggest objections and opportunities, then design a few new versions for that step of the funnel.

Focus on major changes—don’t test small stuff. Launch a new version, run it full-time for a month, and eyeball the impact on your conversions.

Even if your funnel is fairly healthy, run 5–10 user tests on the onboarding funnel to pick up any glaring problems that need to be addressed. Don’t worry about A/B testing here. Find points of friction and get rid of them. When you find a real winner, you’ll feel it.

It’s not advanced testing, but eyeballing major changes can get you a few wins. And basing your iterations on qualitative research dramatically increases the odds that you’ll find a winner.

Qualitative research is often the key to huge wins.

Wait, doesn’t this contradict my testing philosophy of 99% statistical significance on A/B tests? It may seem like they conflict, but in practice both philosophies excel in different circumstances.

The honest truth is that most massive businesses grew without ever running an A/B test. CEOs and founders don’t run A/B tests. Most teams don’t run A/B tests. And yet we all try different ideas, eyeball the results, then act accordingly. This is how most of us pursue progress.

The trick is knowing which philosophy you’re using. To see results, you have to chase wins that could have major impacts on your business. A 10% improvement doesn’t matter; you’re looking for 100% wins and above. 

A/B testing works beautifully when chasing 1–30% wins. These wins are too small to feel but can be detected with methodical testing. Our intuition is no longer a useful guide.
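Here’s a back-of-the-envelope sketch of why that split holds. The 3% baseline and 20,000 monthly visitors are illustrative assumptions, and it only accounts for sampling noise, not real-world swings from traffic mix, promotions, or seasonality:

    # How big is a given lift relative to one month of pure sampling noise?
    import math

    visitors = 20_000
    baseline = 0.03

    se = math.sqrt(baseline * (1 - baseline) / visitors)   # ~0.12 percentage points

    for relative_lift in (0.10, 1.00):
        shift = baseline * relative_lift
        print(f"{relative_lift:.0%} lift = {shift / se:.1f}x the monthly sampling noise")

    # ~2.5x noise for a 10% lift (easy to lose in everyday variance)
    # vs. ~25x noise for a 100% lift (impossible to miss).

A 100% win jumps out of the monthly numbers on its own; a 10% win needs a controlled test to separate it from everything else going on.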

3. Test homepage headlines.

Even if your funnel is healthy from top to bottom, try out some different headlines on your homepage.

Headlines are the one variable on every marketing asset that always has a massive impact. Finding better headlines has routinely given me 30%+ conversion rate lifts on my funnels.

No need to A/B test anything here, either. Once you have a good handle on your positioning, sit down and write 3–5 really strong—but completely different—headlines. My favorite resource on headlines is the book Great Leads.

Once you have 3–5 strong headlines, launch one at a time on your homepage and run each for a month. When you find a winner, you’ll feel the impact in your monthly signups, leads, purchases, etc.

Could you quickly A/B test these with a free tool like Google Optimize? Yes, but I still consider it a waste of time. For me, it’s about knowing the game I’m playing. Eyeballing with intuition is one approach. A full program of A/B testing to statistical significance is another. I generally don’t like to mix the two.

When you find a really great headline, you’ll feel it. Multiple KPIs will be up, prospects will repeat it back to you, your team will start using it on their own. You’ll get multiple signals that it hit a nerve.

And if you don’t feel it, the headline wasn’t a big enough improvement to matter. Try something else instead.

4. Check off the conversion best practices.

I tend to hate best practices. I find them uninspired, and most don’t work as well as everyone claims. But I’ve come around over the years. They have their place.

As a business, you’ll truly innovate in only a few core areas. And that’s all you need to win. For everything else, pull the best practices off the shelf, implement, and move on.

Best practices can also make a difference when you install them quickly. As a single improvement, most are a waste of time. But in aggregate, they can make a big difference. I can spike conversions on most sites simply by running full-speed through my conversion checklist.

Here are the items that have made it on my conversion checklist:

  • Whatever headline or offer works best, use it across the entire site (homepage, pop-ups, sidebars, etc.). A great offer tends to work over and over again.
  • Throw up a chat box like Drift. If this doesn’t work, you probably have product/market fit problems.
  • Install a pop-up. I know we all hate them, but they still work beautifully.
  • Get your site speed up. Make sure you’re on a good web host, remove any unused marketing scripts on your site (the worst offenders), and optimize your images.
  • Make sure product and pricing pages communicate your positioning really well.
  • Build dedicated landing pages for every paid campaign that you run.
  • If anything needs to be set up during onboarding flows, find a way to automate it.

There are tons of lead-gen and conversion guides out there, like this one. Read a few, combine them with my checklist, then launch all the recommendations as quickly as you can. Half the suggestions won’t do anything, but that won’t matter if you launch everything within a few weeks. You’ll easily make an impact on your conversions.

Conclusion

The recommendations above are exactly how I run my business today. We’re moving faster than I ever have before—at a fraction of the budget.

That’s why I doubt I’ll ever build a growth team again:

  • Few folks understand probability, and most executives don’t care about the data—regardless of what it says.
  • Testing encourages growth teams to get out of sync with company strategy, and it’s easy to screw up the data, which forces you to throw away months of testing.
  • Even if you get past all this, you probably don’t have enough data to work with anyway.
  • A 1.5-year growth program will cost you just short of a million dollars.
  • Once you hit a wall on your conversions and need your growth team to do something else, they’ll have to start from scratch to learn another workflow.
  • And it all depends on finding an amazing growth manager to run the team.

Difficult? Yes. Impossible? No. But that’s an awful lot of work when the upside is limited to doubling or tripling the conversion rate in your funnel.

Yes, massive businesses and businesses with user-driven growth are the exceptions. It’s absolutely worth it for them. For the rest of us, I’d rather spend that money and time building a marketing team that can continue to grow my business for years to come.

Join the conversation

  1. We are on the verge of launching a growth team inside a large enterprise org that is riddled with silos and its fair share of bureaucracy. This article was one of the most informative reads (along with the linked resources) that I have encountered in a long time. I don’t print to paper often, but when I do, the article has to be a keeper. This is a keeper.

    1. Glad you enjoyed it Jason. :)

  2. That’s a terrific piece of content. Honest and direct. One point made me think – number 3. Wouldn’t you think that findings from growth initiatives should be used as an engine for other departments?

    1. Thanks Chris!

      In theory, they should be used as insights for other departments. In practice (at least for me), it doesn’t work out like that.

      First, lots of tests produce weird results. And when things don’t make sense, people will dismiss them.

      Second, every team lead is balancing feedback and data from lots of sources. A Growth program is only one source of many. No team lead can prioritize every data point that comes in, so it’s easy to de-prioritize the data that doesn’t match the prevailing theories of the company. This is totally understandable, and I’ve done it myself with data from other teams.

      Third, we all have an instinct to reject insights from other teams. It’s the classic “not invented here” syndrome. It takes a lot of maturity as a leader to readily adopt ideas from other teams.

      None of these problems are insurmountable; they just make the job much more difficult in practice than in theory.

    2. Thanks. I was curious because I’ve been seeing the same situation in my own projects, where an insight would only be used for a single tactic/asset and never really picked up for wider application. Then again, getting an insight is a lot of effort for something so short-lived.

      I was thinking about collecting those insights and feeding them into customer research to test further. This would bring a second proof point or dismiss the insight. This wouldn’t sit in your usual growth team, but in a traditional analytics department in an enterprise.

  3. Great post, Lars! Spot on re “A/B testing tools realized only bigger companies get value from them, so they jacked pricing.”

  4. Thanks Ryan! Glad you enjoyed it. :)

  5. I think you should look into the history of Booking.com. It’s amazing that nobody in the US seems to know about it.

  6. Maybe I am missing this, but is this an argument against optimization positions or just growth teams in general? Most of the former work under the marketing team, so I’m curious what your thoughts are.

    1. I’m totally fine with optimization positions that plug into other teams like marketing.

      A good example is a paid marketing specialist who spends all their time optimizing ads on Facebook or Google Ads. The bulk of their work would be A/B tests, and the role would have plenty of ROI for the business.

      So there are definitely optimization roles that skip a lot of the problems that I outlined above. Most of those problems come into play with a fully staffed Growth team.

  7. I really enjoyed your article and will try and keep in mind some of the things I have read.

    I think, though, that the comparison you suggest between 95% and 99% is very misleading. That is not to say it is not a huge difference, but you state that:

    | At 95% certainty, you have 19 people saying “yes” and 1 person saying “no.”
    | At 99% certainty, you have 99 people saying “yes” and 1 person saying “no.”
    | It feels like a difference of four people when, in reality, it’s a difference of 80.

    It really isn’t a difference of 80 if the number you’re taking the percentage of isn’t the same. As an example, take your sample and do both with the same percentage:

    | At 95% certainty, you have 19 people saying “yes” and 1 person saying “no”.
    | At 95% certainty, you have 95 people saying “yes” and 5 people saying “no”.

    Wow, a difference of 76 (or 76 + 4 if you want to take the “no”s into account as well), even though the percentage is the same.

    To do a fair comparison, you have to base it on the same-sized population; otherwise it’s just a trick. That is not to say that a 4% difference cannot be a huge difference, but the way to prove it here is just wrong, IMO.

    1. Not necessarily.

      Those examples are equivalent only if the sample size is the same between the 95% statistical significance result and the 99%.

      In practice, the sample sizes are almost never equal.

      Very few folks ever calculate the required sample size ahead of time. They just start collecting data and watch the statistical significance. Usually, they’ll call a winner as soon as it hits 95%, which is possible at a much smaller sample size than 99%. By its nature, 99% forces a bigger sample size and more data.

      Regardless, the core point still stands: probability is difficult to understand and counterintuitive.

  8. The third point of the article got me puzzled – it says “Test homepage headlines”, then it says “No need to A/B test anything here, either”. I get the idea you wanted to convey but why don’t you want to A/B test when you can?

    “When you find a really great headline, you’ll feel it. Multiple KPIs will be up..” – is that really the case? I think you cannot attribute all the increase to the headline. Maybe it’s seasonality? Maybe one of your competitors went bankrupt? I get it, in case it’s a super win, you will see it (maybe), but what if it isn’t? Maybe it’s a small win? If it’s a small win, you could still leave it and search for the big win -> gaining value in the long term. And you can only find these small wins by using A/B tests.

    1. I used to A/B test every change. Now I rarely run A/B tests.

      1. Setting up the A/B testing infrastructure is way too much work and too expensive for just a few tests. Especially if you want to do it right and A/B test against revenue.

      2. Any good executive can grow a business without A/B tests. This is very counter-intuitive to what many of us are taught in online marketing. Data-driven decisions can still be made without rigid A/B tests.

      3. Yes, there’s always a chance of correlation with some unknown variable. That’s a part of life, though; it never goes away. BTW, I’ve only come across seasonality impacting my work twice in my entire career. It’s pretty rare in B2B. And B2C seasonality tends to be fairly well understood in each respective industry.

      4. Small wins do matter but only if you can stack them. And you need a disciplined A/B testing program in order to do that, which I’ve done many times. For me, it’s not worth the expense and time unless you’re at a very large company or a business with a genuine user loop that’s driving growth.

      I’ve found that you can usually feel wins of 30% or greater without A/B tests. A well-run business should be focusing the bulk of its time on these types of wins anyway if it’s going to reach its full potential.

