I’ve been to a lot of conferences. A lot. Most of them are mediocre. I made it my personal goal to deliver an amazing conference.
I took inspiration from the good things I've seen at other conferences and made sure to avoid the bad ones – and so ConversionXL Live 2015 was born. We had 245 attendees from 23 countries. The crowd was diverse and very high-level (~75% of attendees were CRO professionals), so we needed high-level content to match the audience and deliver a kickass experience.
So how did it go?
Here's what Andre Morys, CEO of Web Arts, thought:
— André Morys Web Arts (@morys) March 12, 2015
Here’s what the Conversion Scientist told me over email:
And lots of other feedback along these lines. Thank you to everyone who took part in our very first ConversionXL Live. It meant a lot to me. You made it a success.
Check out the photos from the event here.
What were some of the insights shared by the speakers?
I will go through the agenda and cherry-pick a few interesting points made by each of the speakers.
Peep Laja: This I Believe
- If you’re focusing on tactics, you’re doing it wrong. Conversion optimization is a process. You have to be able to describe what you’re doing as a process.
- Forget best practices, focus on principles when evaluating design screens. Don’t copy your competitors, they don’t know what they’re doing either. Websites are highly contextual.
- The most important thing in conversion optimization is the discovery of what matters. If you don’t know what specific elements on your site might have an impact when you change / test them, you’re wasting everybody’s time.
- Conversion optimization is hard, grueling work. Testing is just the tip of the iceberg. If you're testing the wrong things, you're wasting time. Do your research.
- Everybody wants a win, but few want to do the heavy lifting – conversion research. But that’s the only way to get sustained wins over a long period of time.
Andre Morys: The Million Dollar Optimization Strategy
- Zalando has seen huge growth in Europe, overtaking many established, big ecommerce companies. Why? They have ~80 optimizers on staff. You need to have dedicated people working on growth.
- Reinvest the money you are earning from optimization back in optimization to keep growing. Do optimization continuously, for years. Optimization is about growth.
- Bigger conversion rate increases come from changing people's motivation, not just the structure of the site! Before launching another test, ask yourself: will this fundamentally change the behavior of users?
- A higher conversion rate doesn't mean you earn more money! When an experiment shows a good conversion rate, you turn more traffic on and the conversion rate drops. Measure return rate – you need to account for whether users are actually happy when they receive the product. Morys built a cohort report for the users who ordered from each template, and also measured the profit contribution of each variant.
- Ask these questions before setting your test live:
- Is the variation bold enough that people will notice it?
- Will the test affect the behavior of users?
- Is the page part of the sales funnel?
- Am I using motivational triggers?
Michael Aagaard: Your Test Is Only as Good as Your Hypothesis
- A hypothesis is not a guess – it's based on research, on data that comes from observing and talking to real people. The test itself is not the main point; the optimization it drives toward the end goal is.
- Every hypothesis has to fit this template: by changing A into B, I can get more prospects to C and thus increase D.
- Communication is key with clients and bosses: you need to explain why you are doing research and why you are spending so much effort on setting up the right A/B tests. Don't be a split-test junkie.
- Always estimate ahead of time how long your test will take. If it will take too long (more than 4–6 weeks), you need to re-evaluate whether it's worth the cost of your time.
- Don't come up with hypotheses just so you have something to test – come up with hypotheses that actually make the site better.
Oli Gardner: The Landing Page Manifesto
- Never start a marketing campaign without a dedicated landing page
- Context is important as people land on the page with specific context in mind.
- You need attention-driven design. Look at the attention ratio: the number of things you can do on a landing page vs. the number of things you should be doing. As the attention ratio goes down, your conversion rate goes up. Exception: multiple links with the same goal.
- Sometimes more form fields work better. Unbounce data shows that two form fields produce, on average, a 33% higher mean conversion rate than a single form field.
- Worship information hierarchy. What do people want? Put that first and draw attention to it. Remove everything that is not important.
Michael Summers: Effective User Research Methods
- What people refer to as ‘best practices’ are mostly common practices. Not best.
- You need to watch user behavior on competitors' websites too. Part of your research must involve competitors – gather info on how users consume and experience your competitor's site. Your competitor is your first prototype. And remember: we are not the target market – every experience decision we make depends on the customers we're targeting.
- In some cases videos truly help conversion, even when the video isn't wow or super polished – though HOW you do it matters much more than IF you do it. Some users find videos a nicer, easier way to get an overview. Summers once created a really bad video for PayPal, and even then those who watched it were 7 times more likely to sign up.
- Error messages need to tell people what to do to fix the error. When you want people to use a specific format, give them an example of what the format should look like – right there, next to the field. Keep in mind: UI creator does not equal UI evaluator.
- Always remember the four “r”s:
- RIGHT RECRUITING – test with real users (not the semi-professional testers you find on marketplaces): not the idealized profile of who you think your user is, but who your user actually is (real human beings)
- RIGHT METHODOLOGY – talking vs. doing (real behavior): are you layering on motivation and cuing the tester, or letting them really experience what you're testing?
- RIGHT ANALYSIS – watch the footage: record it and play it back later to fully absorb the feedback and results
- RIGHT POINT IN THE PROCESS – did we plan the time to do something with the results?
Amy Africa: Selling on a 2” x 4”: Proven Techniques for Mobile Optimization
- Action directives: the bigger and bolder, the better. This applies to everything ATC (add to cart) and ATL (add to lead) related, especially the search box and your action directives (per page). With action directives, the rule of thumb is one per view. What is there to do? What is there to click on?
- Navigation accounts for 80–85% of mobile success. You have one view to make the magic happen – the page they come in on. It boils down to 3–8 items and a text search. Pages with weak navigation are dead ends. Mobile navigation should not mirror the traditional site; the tasks users are willing to do on mobile are different.
- Email traffic is increasingly mobile traffic. Develop specific mobile landing pages for email.
- Prioritize your site search results (e.g. on an ecommerce site) – the order in which results are presented is typically far more important than the search itself. Too many results are a hindrance on mobile.
- On mobile we decide in less than half a second whether to stay on a page or leave, so visual match and word connect matter even more there.
Brian Massey: Finding Wins Before You Test
- Averages lie. Break down your segments – the general picture of site activity is irrelevant; you must drill down into the details, and that's how you find potential areas to improve and grow.
- GA samples data for custom segments, so be careful when using those reports (if you need to remove sampling and don't have access to Google Analytics Premium, you can use Analytics Canvas).
- LISTEN to your sales and customer support people. They know the problems your customers are experiencing.
- Search and discoverability improvements (popovers, a better search button) can deliver quick, high conversion lifts.
- One source can be enough to define a hypothesis, but when prioritizing hypotheses you may need some extra info:
- how much proof do we have that this is a problem?
- how much impact might the execution of the hypothesis have?
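Those two questions – how much proof, how much impact – lend themselves to a simple scoring sketch for ranking hypotheses. The hypothesis names and 1–5 scores below are hypothetical:

```python
# Minimal prioritization sketch: score each hypothesis by evidence (proof
# that the problem is real) times potential impact. All entries hypothetical.
hypotheses = [
    {"name": "simplify checkout form", "evidence": 5, "impact": 4},
    {"name": "rewrite homepage headline", "evidence": 2, "impact": 5},
    {"name": "add trust badges", "evidence": 3, "impact": 2},
]

for h in hypotheses:
    h["score"] = h["evidence"] * h["impact"]  # more proof x more impact wins

ranked = sorted(hypotheses, key=lambda h: h["score"], reverse=True)
# ranked[0] is the hypothesis to test first
```

Multiplying rather than adding means a hypothesis with strong evidence but trivial impact (or vice versa) falls down the list, which matches the spirit of asking both questions rather than either one alone.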
Yehoshua Coren: Improving Conversion Rate Using Digital Analytics
- Use GA to answer business questions. Track everything that is important.
- Be aware of where users are in their journey with your product (e.g. on mobile they may be researching, which leads to the ultimate conversion on desktop) – don't write off a device for being that device; look at the whole picture of how it impacts the journey.
- Track site errors – identify where, why, and what kind of errors people get, so you can fix them. Nobody wants to see error messages.
- Use enhanced ecommerce and product performance metrics like propensity to purchase – which products are most viewed and engaged with? Should those move to the front of the page, the top of the page, etc.?
- Set up a goal for each step in the funnel – it opens up more options in analytics. You can then segment the funnels, create any kind of custom report, and look at the funnel horizontally.
Lukas Vermeer: Testing Strategy: Bandit vs Scientist
- The multi-armed bandit problem: you have a limited number of options and want to maximize gains. How do you decide which option (variation) to use to make the most profit?
- The decision problem → the HiPPO bandit: whatever your boss says is most important for revenue is where you focus your energy – but you don't know whether that option will really be the road to actual profit.
- The concept of regret: the cost of trying to figure out which variation is best.
- If you decide upfront how much money you are willing to invest in finding out which machine works best, you can run into 2 problems:
- You already know which one is going to win, but you keep on spending to reach statistical significance (lost opportunity cost)
- You may spend the whole amount, but because the difference is small, by the time the test ends you have spent a lot of money and still don't know which was better.
- Tests are more about minimizing regret than maximizing statistical confidence. If you’re not running tests, that’s regret by default.
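The regret idea can be made concrete with a toy epsilon-greedy bandit simulation. This is a generic sketch, not the strategy Vermeer described; the conversion rates and parameters are made up.

```python
import random

def epsilon_greedy(true_rates, pulls=10_000, epsilon=0.1, seed=42):
    """Toy multi-armed bandit: explore a random arm with probability epsilon,
    otherwise exploit the arm with the best observed rate. Returns cumulative
    (expected) regret: conversions lost versus always playing the best arm."""
    random.seed(seed)
    counts = [0] * len(true_rates)
    wins = [0] * len(true_rates)
    best = max(true_rates)
    regret = 0.0
    for _ in range(pulls):
        if random.random() < epsilon:
            arm = random.randrange(len(true_rates))      # explore
        else:
            observed = [w / c if c else 0.0 for w, c in zip(wins, counts)]
            arm = observed.index(max(observed))          # exploit
        counts[arm] += 1
        wins[arm] += random.random() < true_rates[arm]   # simulated conversion
        regret += best - true_rates[arm]                 # expected loss this pull
    return regret

# Two variations converting at 3% and 4%: regret is the cost of finding out
# which one is better while still serving real traffic.
r = epsilon_greedy([0.03, 0.04])
```

Playing the worse arm for all 10,000 pulls would cost 100 expected conversions; the bandit's regret stays well below that because it shifts traffic toward the better arm as evidence accumulates – which is exactly the "minimize regret" framing above.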
Matt Gershoff: AB Testing, Predictive Analytics, and Behavioral Targeting
How to connect customers to experiences:
- Decision rules as a framework
- if [this] then [that].
- A logic that links observations to actions
- Observational data (we passively collect it, e.g. day of the week, device, user age) and causal action (the button, price, sales offer)
- There’s no causation without manipulation
- HiPPOs know their field, so listen to them as well.
- Picking rules
- Segmented A/B: Texas vs. not-Texas. If you run a segmented test, you end up with targeting rules afterwards. If we know that people in Texas don't like rice in their burritos, we won't serve them that option (i.e. a rule determines which variations are shown to whom)
- Predictive models: Why & When
- Use a model to combine segment preferences. A model tries to predict what will work for whom
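The "if [this] then [that]" framing is small enough to sketch directly – the Texas/burrito example becomes a targeting rule once the segmented test has run. Function, field, and variation names here are hypothetical:

```python
# A decision rule links passively observed context (state, device, day of
# week) to a causal action (which variation to serve). Names are hypothetical.
def pick_variation(user: dict) -> str:
    """if [this] then [that]: observations in, action out."""
    if user.get("state") == "TX":
        return "burrito_no_rice"   # segmented test showed Texans dislike rice
    return "burrito_default"

# A predictive model generalizes this: instead of hand-written rules, it
# scores every (observation, action) pair and picks the best action per user.
```

The hand-written rule and the predictive model are two ends of the same spectrum: the rule is a model with one feature and two outputs, while a learned model combines many segment preferences at once.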
Yuan Wright: The Hard Life of an Optimizer
- The more you test, the more good insights, innovation, learnings and conversion rate lifts you get. The goal is to find out why some of your tests are not working. And if an A/B test is not going to influence a decision (whether to roll out the winner) – don't run it.
- The odds are not in the optimizer's favor – 70% of tests don't win or make a major difference. The win rate is even lower (<10%) if you've been running tests on the same site for many, many years.
- Calling a winner has an expiration date, so even after you replace the control with the "winner", monitor how it really affects ROI over the long term.
- Scaling A/B testing programs:
- test in like markets – if you're running a global testing program, use behaviorally similar markets (e.g. the U.S. and Canada) to run more tests and accelerate learning. Once you have a winner in one market, you can apply it to the other.
- the devil is in the details – make sure you can apply learnings at both small and large scale (design, process, etc.)
- marketing drivers – email, affiliates, PPC, etc. – watch that the segmented traffic doesn't cannibalize the rest of the site
- personalization – bring the experience to where the user left off (another form of segmentation)
- make use of data science – it helps you find correlations, which helps you work smarter, not harder
- QA is crucial – if the HiPPO finds an error in your test setup, even one that doesn't affect the final result, it will ruin your credibility.
Bryan Eisenberg: Buyer Legends
- Conversion Optimization belongs in the C-suite (it is the CEO’s responsibility)
- CRO is not about growth hacking – the focus is on the CUSTOMER. Testing is about research and development, about understanding things. Customer experience and relationships are about the long term – where you're going and will continue to go over the next decade (input/output).
- Learn to tell stories from the customer's point of view. They are the hero, and we need to focus on their journey – everything should be built around and toward their experience. When launching anything new, write a narrative of what you want to accomplish from the perspective of your customer.
- Which business are you in? The business of selling your products, or helping your customers buy your products? Amazon knows the difference.
- Buyer Legends are essentially a long hypothesis written in the form of a narrative.
Brooks Bell: The Conversion Maturity Model
- 6 essential components of a successful optimization program
- the prerequisite of a CRO program is an executive who sponsors the program, a champion
- A dedicated developer is critical to the success
- Custom testing team
- Tools & System
- start with the data (analytics) first
- integrate your data (voice of customer, offline, analytics)
- the more data, the better
- qualitative tools – UserTesting, ClickTale
- every single test should be discussed by a cross-disciplinary team: analyst, engineer, test strategist. Who are the segments/customers being targeted? What are their needs? What are the obstacles? What are the potential solutions?
- develop (gather data) – produce 3 documents for every test: experiment design, concepts (wireframes), final results
- produce assets, set up the campaign, write front-end code, do QA (the most critical element of the process) – both pre- and post-launch
- risk-reduction stage – testing validates intuitive thinking; it's used to prove points and give backing to develop a feature
- buckshot/popcorn approach – you use a testing tool but have no idea how to actually use it; you test what you can and use features blindly
- driving testing for wins – you have someone who specializes in optimization and test for the sake of wins, but don't spend enough time learning why something works and what to do next
- winning is not as important as learning and thinking about the customer and their needs
- discovering your customer – move to specific customer needs: who are your customers and how can you segment them? (personas and personalization)
- run tests for at least 2 weeks to achieve validity (dependent on traffic volumes)
- report 2 numbers, conservative + speculative, when measuring what was gained:
- conservative – what was gained during the test and for 1 month after
- speculative – projected over the course of a whole year
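The conservative/speculative split is just arithmetic over the same weekly lift. A quick sketch, with an entirely hypothetical $5,000/week lift from a two-week test:

```python
# Hypothetical figures: a winning test ran for 2 weeks and lifted
# revenue by $5,000 per week.
weekly_lift = 5_000
test_weeks = 2

conservative = weekly_lift * (test_weeks + 4)   # during the test + 1 month after
speculative = weekly_lift * 52                  # projected over a whole year

# conservative → 30000, speculative → 260000
```

Reporting both numbers keeps expectations honest: the conservative figure is what you actually observed, while the speculative one assumes the lift holds for a full year, which winners with an expiration date often don't.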
- Internal conflicts are the biggest obstacle to optimization in large enterprises. CRO is usually done internally, and convincing big companies to outsource it is complicated.
- 3 tips for the future:
- get good at stats
- have an analyst drive your testing, so it's data-driven
- be skeptical of your tests (watch for false positives)
Anita Andrews: How the Fastest-Growing Companies in the World Use Data to Drive Success
- If you're optimizing for conversion rate, you're actually optimizing for a local maximum, not for the long haul. You really have to focus on profit, not just revenue and conversions.
- Perform a regular analytics and data audit. Does everyone know how we’re defining metrics? Is a single metric consistent across all departments? Are we using all the reports we’re running?
- Have a single source of truth, a central place for all your data.
- Business health is the sum of all teams, so goals and metrics must be aligned and uniform across them.
- Use data to define and focus effort. Typically, 80% of incoming traffic goes to the top 10% of landing pages. Use data to drive hypotheses.
ConversionXL Live 2015 was a success, and we'll do it again next year – probably around the same time, in March. We'll make sure you find out when we announce it.