Join the Conversation

  1. Hey Lance!
    Terrific post. Thanks so much for sharing all of this. It was really interesting to read about your decision to make changes to the funnel without first A/B testing. I’ve been working on a project where the client went ahead and implemented the layout and copy changes based on the research I did for the funnel. Sign-ups have increased by 17% compared with the previous month (based on goals set up in GA), but without split testing I wonder if it’s a case of comparing apples to oranges. Am I overthinking this, or should I be teasing apart the data more before coming to any conclusions?

    1. Hey Jen — if you have no choice but to measure the impact of changes longitudinally (due to cost, availability of development resources, etc.), then I’d compare 1 week prior to 1 week following, 2 weeks prior to 2 weeks following, 1 month prior to 1 month following, 1 quarter prior to 1 quarter following… you get the picture. I’d also look at year-over-year data, although organic growth could make that tricky. OTOH, if you can “extract” the natural organic growth rate from the “following” conversion rate metrics, you could get a decent read on things.
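
      If it helps, here’s a rough sketch of that kind of adjustment (all numbers are hypothetical, and `organic_growth` is whatever rate you estimate, e.g. from your year-over-year data):

      ```python
      # All numbers hypothetical -- substitute your own GA goal counts.
      before_visitors, before_conversions = 10_000, 1_100   # matched window before the change
      after_visitors, after_conversions = 10_400, 1_350     # same-length window after

      before_rate = before_conversions / before_visitors
      after_rate = after_conversions / after_visitors

      # The raw lift confounds the effect of your changes with natural growth.
      raw_lift = after_rate / before_rate - 1

      # Discount the "following" rate by the organic growth you'd have
      # expected with no changes at all.
      organic_growth = 0.06
      adjusted_lift = (after_rate / (1 + organic_growth)) / before_rate - 1

      print(f"raw lift: {raw_lift:.1%}, growth-adjusted lift: {adjusted_lift:.1%}")
      ```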

  2. Hi Lance, thanks for this thorough article. I wanted to ask about the big drop in signed-up users. There is roughly a 35% drop in sign-ups (from 3,110 to 2,008). The final conversion number is also lower: 1,314 after introducing the changes vs. 1,751 on the “control” version.

    You claimed that the volume and quality of your traffic across both periods were the same. I understand that you spent exactly the same amount of money on traffic acquisition, but generated 437 fewer conversions.

    It seems that you actually harmed your conversion rate. The completion rate of the funnel is better, but maybe the reason is that you got better leads at the top of the funnel (fewer people signed up, so they must be more interested in your solution). My question is: has your business really gained revenue thanks to the changes you made?
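
    Just to lay out the arithmetic behind those numbers:

    ```python
    # Numbers as quoted from the article's before/after funnel data.
    before_signups, before_conversions = 3110, 1751
    after_signups, after_conversions = 2008, 1314

    signup_drop = 1 - after_signups / before_signups            # ~35.4% fewer sign-ups
    lost_conversions = before_conversions - after_conversions   # 437 fewer conversions

    before_completion = before_conversions / before_signups     # ~56.3%
    after_completion = after_conversions / after_signups        # ~65.4%

    print(f"sign-up drop: {signup_drop:.1%}, lost conversions: {lost_conversions}")
    print(f"completion rate: {before_completion:.1%} -> {after_completion:.1%}")
    ```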

    1. Hi Damian!

      Thanks for commenting.

      The “Signed Up” event simply represents the number of visitors who clicked the “Sign-up” button on our home page, and for this funnel optimization (and analysis), we made no changes to the home page design or copy.

      Additionally, just as an FYI, we weren’t spending anything on acquiring traffic.

      However, you did catch a data issue. The “before” data, showing 3,110 sign-ups, include sign-up events from two different pages, whereas the “after” data include sign-up events from just the home page.

      When I re-ran the pre-optimization data to include only the home-page sign-up button, we got 1,976 sign-ups and a 54.88% completion rate, so the net conversion lift actually improves a little, from 17.3% to 18.09%.
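
      In rough pseudocode, that re-run is just a filter on the sign-up event’s source page before computing the completion rate (the event schema below is made up for illustration, not our actual tracking setup):

      ```python
      # Illustrative only -- event and field names here are hypothetical.
      events = [
          {"user": "u1", "event": "signed_up", "page": "/home"},
          {"user": "u1", "event": "completed_funnel", "page": None},
          {"user": "u2", "event": "signed_up", "page": "/pricing"},
      ]

      # Count only sign-ups from the home-page button, so the "before" period
      # is measured the same way as the "after" period.
      home_signups = {e["user"] for e in events
                      if e["event"] == "signed_up" and e["page"] == "/home"}
      completed = {e["user"] for e in events if e["event"] == "completed_funnel"}

      completion_rate = len(home_signups & completed) / len(home_signups)
      print(f"home-page completion rate: {completion_rate:.2%}")
      ```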

    2. Hi Lance!

      Thanks for your answer; it certainly clarifies the case a little. I’m still confused about the changes you introduced, though. Could you share before-and-after screenshots of the subsequent steps in the funnel? I think it would be interesting for all readers to see exactly what influenced the conversion rate.

      I’m also wondering why you included only the sign-ups that occurred after clicking the button on the home page. Why didn’t you take into account sign-ups from other pages, like pricing or features?

      Could you also share some tips on getting all this traffic you have without spending any money? It looks like a profitable strategy ;-)
