The Endless Suck of Best Practice and Optimisation Experts
So what’s this all about? Unicorns, Useful Best Practices and Optimisation Experts — all rather rare and mystical beasts.
If I were to tell you one – what would you actually do with that information? Honestly, think about it and see if you can answer that.
At CXL Live 2015 we had an amazing A/B split-testing panel featuring statistics and testing gurus Lukas Vermeer from Booking.com, Matt Gershoff from Conductrics, and Yuan Wright from Electronic Arts. The audience asked some of the toughest testing questions we’ve ever heard – and every one of them got answered.
Free trials are to SaaS applications as keys are to locks. The right one gets you in right away, but sometimes you have to try a few before you find the one that works.
For many SaaS and cloud service providers, 100% of customers sign up for a free trial as part of the sales process. Yet one study suggested that even best-in-class SaaS marketers were losing a staggering 75% of free-trial signups before those users ever entered their credit card details.
While there are no universal rules in conversion optimization, there are some things that tend to work more often than not. This article will give you some of these tactics to test for yourself.
Elite Camp is a 3-day traffic and (mainly) conversion event – among the very best CRO events in Europe and the world. This year marked its 6th edition, and the format has proven so successful that it has been replicated in many other countries.
Elite Camp 2015 had an enviable line-up of heavy hitters and rock stars. Here are top insights from every speaker of this year’s event.
Here’s another presentation from CXL Live 2015 (sign up for the 2016 list to get tickets at pre-release prices).
While optimization is fun, it’s also really hard. We’re asking a lot of questions:
Why do users do what they do? Is X actually influencing Y, or is it mere correlation? The test bombed – but why? Yuan Wright, Director of Analytics at Electronic Arts, will lead you through an open discussion about the challenges we all face – optimizer to optimizer.
CXL Live 2016 is coming up next March (get on the list to get tickets at pre-release prices). We’re going to publish video recordings of the previous event, and here’s the first one.
You run A/B tests – some win, some don’t. Whether those tests actually have a positive impact largely depends on whether you’re testing the right things. Testing trivial changes that make no difference is by far the biggest reason tests end in “no difference”.
You have a hypothesis and run a test. The result: no difference (or even a drop in conversions). What should you do now? Test a different hypothesis?
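A “no difference” verdict is ultimately a statistical call, not a gut feeling. As a rough illustration – a minimal sketch using a standard two-proportion z-test with made-up numbers, not data from any test mentioned here – this is how a small lift ends up being indistinguishable from noise:

```python
from math import sqrt, erfc

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)            # pooled rate under H0 (no difference)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = erfc(abs(z) / sqrt(2))                    # two-sided p-value from the normal CDF
    return z, p_value

# Hypothetical A/B test: 10.0% vs 10.4% conversion, 5,000 visitors per variant.
z, p = two_proportion_z_test(500, 5000, 520, 5000)
print(f"z = {z:.2f}, p = {p:.3f}")  # a lift this small, at this sample size, is not significant
```

With these numbers the p-value lands well above 0.05, so the honest readout is “no detectable difference” – which is exactly what testing a change that barely matters tends to produce.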
This is the methodology I have developed over 12 years in the industry, working with over 300 organizations. It is also the methodology behind a near-perfect testing streak – only 6 test failures in 5.5 years – even if most people find that stat hard to believe.