
#8: From Data to Test Hypotheses

Every conversion project starts with conversion research. During the research process you apply everything we've covered in the previous lessons: digging through Google Analytics, analyzing mouse-tracking data and heat maps, running qualitative surveys, and conducting user tests.

Once you go through all of these, you will identify issues – some severe, some minor.

Next: Allocate every finding into one of these 5 buckets:

Test

If there is an obvious opportunity to shift behavior, expose insight, or increase conversions, this is the bucket for it. If a page has both traffic and leakage, the issue belongs here.

Example: people struggle to complete your quote request form – most need multiple tries due to usability problems and too many fields. We should test a simpler, more usable form.

Instrument

If an issue is placed in this bucket, it means we need to beef up the analytics reporting. This can involve fixing, adding, or improving tags and event tracking in the analytics configuration. We instrument both structurally and to gain insight into the pain points we've found.

Example: there is no event tracking set up for 'add to cart' button clicks or for filter usage on ecommerce category pages. These are instrumentation issues.
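As a rough illustration of what closing that gap could look like with GA4's gtag.js, here is a minimal sketch – the CSS selectors, the parameter values, and the custom filter_used event name are assumptions for illustration, not a prescribed setup:

```typescript
// Minimal sketch: adding the missing event tracking with GA4's gtag.js.
// Assumes the standard GA4 snippet is already installed on the page.
// Selectors, values, and the custom "filter_used" event name are illustrative.

declare function gtag(command: "event", eventName: string, params?: object): void;

// Track "add to cart" clicks using GA4's recommended add_to_cart event.
document.querySelectorAll<HTMLButtonElement>(".add-to-cart").forEach((button) => {
  button.addEventListener("click", () => {
    gtag("event", "add_to_cart", {
      currency: "USD",
      value: 49.99, // illustrative price of the added item
    });
  });
});

// Track filter usage on category pages as a custom event.
document.querySelectorAll<HTMLSelectElement>(".category-filter").forEach((filter) => {
  filter.addEventListener("change", () => {
    gtag("event", "filter_used", {
      filter_name: filter.name,
      filter_value: filter.value,
    });
  });
});
```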

Hypothesize

This is where we've found a page, widget, or process that's just not working well, but we don't see a clear single solution. Since we need to shift behavior at this crux point, we'll brainstorm hypotheses. Driven by evidence and data, we'll create test plans to answer our questions and move the conversion or KPI figure in the desired direction.

Example: you’re selling pool parts, and qualitative research tells you that the main thing holding people back is that they’re not sure if the parts they’re looking at will fit their pools.

We know the problem, but there is no single obvious solution. So we'll need to brainstorm many different solution ideas and test them (run an A/B/C/n test).

Just Do It – JFDI 

This is a bucket for issues where the fix is easy to identify or the change is a no-brainer. Items marked with this flag can be deployed either in a batch or as part of a controlled test. Stuff in here requires low effort or represents micro-opportunities to increase conversions, and should simply be fixed.

Example: people complain that the font size you use is too small to read. Just make it bigger!

Investigate

This bucket is for issues where you need to test with particular devices or gather more information to triangulate a problem you spotted. If an item is here, you need to ask questions or do further digging.

Example: the conversion rate for iOS devices is really low. We have to investigate cross-browser compatibility and mobile UX issues.
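One way to start that digging is to confirm and size the anomaly with a quick segmentation – a minimal sketch, where the Session shape and the sample data are illustrative assumptions, not real analytics output:

```typescript
// Minimal sketch: sizing a device-level anomaly before digging deeper.
// The Session shape and sample data are fabricated for illustration;
// in practice these numbers would come from your analytics tool.

interface Session {
  device: "iOS" | "Android" | "Desktop";
  converted: boolean;
}

function conversionByDevice(sessions: Session[]): Record<string, string> {
  const totals: Record<string, { n: number; conv: number }> = {};
  for (const s of sessions) {
    const t = (totals[s.device] ??= { n: 0, conv: 0 });
    t.n += 1;
    if (s.converted) t.conv += 1;
  }
  const rates: Record<string, string> = {};
  for (const [device, t] of Object.entries(totals)) {
    rates[device] = `${((100 * t.conv) / t.n).toFixed(1)}% (n=${t.n})`;
  }
  return rates;
}

// Illustrative sessions only - real data would have thousands of rows.
const sessions: Session[] = [
  { device: "iOS", converted: false },
  { device: "iOS", converted: false },
  { device: "Android", converted: true },
  { device: "Desktop", converted: true },
  { device: "Desktop", converted: false },
];

console.log(conversionByDevice(sessions));
// If iOS clearly lags the other segments, investigate browser
// compatibility and mobile UX on those devices.
```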

Next: score and rank the issues.

We can't do everything at once, so we need to prioritize. Why?

  • Keeps you / the client away from shiny things
  • Keeps the focus on the biggest money / lowest-cost delivery
  • Helps you achieve bigger wins earlier in projects
  • Gives you / the client potential ROI figures
  • Keeps the whole team grounded

Once we start optimizing, we start with high-priority items and leave low-priority ones for last – but eventually all of it should get done. There are many different ways you can go about it. A simple yet very useful way is to use a scoring system from 1 to 5 (1 = minor issue, 5 = critically important).

In your report you should mark every issue with a star rating to indicate the level of opportunity (the potential lift in site conversion, revenue or use of features):

★★★★★
This rating is for a critical usability, conversion, or persuasion issue that will be encountered by many visitors to the site or has a high impact. Implementing fixes or testing is likely to drive significant change in conversion and revenue.

★★★★
This rating is for a critical issue that may not be seen by all visitors or has a lesser impact.

★★★
This rating is for a major usability or conversion issue that will be encountered by many visitors to the site or has a high impact.

★★
This rating is for a major usability or conversion issue that may not be seen by all visitors or has a lesser impact.

★
This rating is for a minor usability or conversion issue; although it is low in potential revenue or conversion value, it is still worth fixing at a lower priority.

There are two criteria that matter more than others when assigning a score:

  • Ease of implementation (time / complexity / risk). Sometimes the data tells you to build a feature, but it would take months to do – so it's not something you'd start with.
  • Opportunity score (a subjective opinion on how big a lift you might get). Let's say you see that the completion rate on the checkout page is 65%. That's a clear indicator that there's lots of room for growth, and because this is a money page (payments are taken here), any relative growth in percentage terms translates into a lot of absolute dollars.

Essentially: follow the money. You want to start with things that will make a positive impact on your bottom line right away.

Be more analytical when assigning a score to items in the Test and Hypothesize buckets – a sketch of how the two criteria might combine into a single ranking follows below.
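To make that concrete, here is a minimal sketch of one way the two criteria could be folded into a single ranking – the Issue type, the weighting, and the sample issues are illustrative assumptions, not a prescribed formula:

```typescript
// Minimal sketch: ranking issues by opportunity and ease of implementation.
// The 1-5 scales mirror the star ratings above; the weighting and the
// sample issues are illustrative assumptions, not a prescribed formula.

interface Issue {
  name: string;
  bucket: "Test" | "Instrument" | "Hypothesize" | "JFDI" | "Investigate";
  opportunity: number; // 1-5: subjective estimate of the potential lift
  ease: number;        // 1-5: 5 = trivial change, 1 = months of work
}

// Weight opportunity higher - follow the money - and break ties by ease.
function priorityScore(issue: Issue): number {
  return issue.opportunity * 2 + issue.ease;
}

const issues: Issue[] = [
  { name: "GA script loaded twice", bucket: "Instrument", opportunity: 4, ease: 5 },
  { name: "Missing value proposition", bucket: "Hypothesize", opportunity: 3, ease: 3 },
  { name: "65% checkout completion", bucket: "Test", opportunity: 5, ease: 2 },
];

// Work the backlog top-down: highest combined score first.
issues
  .sort((a, b) => priorityScore(b) - priorityScore(a))
  .forEach((i) => console.log(`${priorityScore(i)}  [${i.bucket}] ${i.name}`));
```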

Now create a table / spreadsheet with 7 columns:

Issue | Bucket | Location | Background | Action | Rating | Responsible
Google Analytics bounce info is wrong | Instrument | Every page | Google Analytics script is loaded twice! Lines 207 and 506 of the home page | Remove double entry | ★★★★ | Jack
Missing value proposition | Hypothesize | Home page | Give reasons to buy from you | Add a prominent value proposition | ★★★ | Jill

Most conversion research projects will easily identify 50 to 150 issues. "What to test" is no longer a problem – you will have more than enough.

Translating issues into hypotheses

Let's get on the same page about what a hypothesis is. Here's a definition I like:

A hypothesis is a proposed statement made on the basis of limited evidence that can be proved or disproved, and is used as a starting point for further investigation.

Every good test is based on a hypothesis. Whether a test wins or loses, we're validating a hypothesis – hence testing is essentially validated learning. And learning leads to insight, which leads to better hypotheses, which in turn lead to better results.

The better our hypothesis, the higher the chances that our treatment will work and result in an uplift.

Here’s a good format for writing your hypothesis (credit to Craig Sullivan):

We believe that doing [A] for people [B] will make outcome [C] happen. We’ll know this when we see data [D] and feedback [E].

With a hypothesis, we’re matching identified problems with identified solutions while indicating the desired outcome.

Identified problem: “It’s not clear what the product is, what’s being sold on this page. People don’t buy what they don’t understand.”

Proposed solution: “Let’s re-write product copy so it would be easy to understand what the product is, for whom, and what the benefits are. Let’s use better product photography to further improve clarity.”

Hypothesis: “By improving the clarity of the product copy and overall presentation, our target audience (e.g. middle-aged bikers) can better understand our offering, and we will make more money. We will know this by observing revenue per visitor.”
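If it helps to see the [A]-[E] slots labeled explicitly, here is a sketch that treats the format as a structured record and pours the earlier pool-parts example into it – the Hypothesis type and its field names are purely illustrative scaffolding:

```typescript
// Illustrative sketch: the [A]-[E] slots of the hypothesis format as a record.
// The type and field names are made up for illustration only.

interface Hypothesis {
  change: string;    // [A] what we'll do
  audience: string;  // [B] who it's for
  outcome: string;   // [C] the result we expect
  data: string;      // [D] the quantitative signal that will confirm it
  feedback: string;  // [E] the qualitative signal that will confirm it
}

function render(h: Hypothesis): string {
  return (
    `We believe that doing ${h.change} for people ${h.audience} ` +
    `will make outcome ${h.outcome} happen. We'll know this when ` +
    `we see ${h.data} and ${h.feedback}.`
  );
}

// The pool-parts example from earlier, poured into the template:
console.log(render({
  change: "adding a 'will it fit my pool?' compatibility checker",
  audience: "shopping for replacement pool parts",
  outcome: "more completed purchases",
  data: "a lift in revenue per visitor",
  feedback: "fewer pre-sale questions about part compatibility",
}));
```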

All hypotheses should be derived from your conversion research findings. Don't test without a hypothesis. This is basic advice, but its importance can't be overstated. There is no learning without proper hypotheses.

Read the next lesson, or download the whole guide as a PDF. All lessons in the series:

  • #1: Mindset of an Optimizer

    You seek to understand your customers better - their needs, sources of hesitation, conversations going on inside their minds.
  • #2: Conversion Research

    Would you rather have a doctor operate on you based on an opinion, or careful examination and tests? Exactly. That's why we need to conduct proper conversion research.
  • #3: Google Analytics for Conversion Optimization

    Where are the problems? What are the problems? How big are those problems? We can find answers in Google Analytics.
  • #4: Mouse Tracking and Heat Maps

    We can record what people do with their mouse / trackpad, and can quantify that information. Some of that data is insightful.
  • #5: Learning From Customers (Qualitative Surveys)

    While quantitative data tells you what, where, and how much, qualitative tells you 'why'. It often offers much more insight than anything else for coming up with winning test hypotheses.
  • #6: Using Qualitative On-Site Surveys

    What's keeping people from taking action on your website? We can figure it out.
  • #7: User Testing

    Your website is complicated and the copy doesn't make any sense to your customers. That's what user testing can tell you - along with specifics.
  • #8: From Data to Test Hypotheses

    The success of your testing program depends on testing the right stuff. Here's how.
  • #9: Getting A/B Testing Right

    Most A/B tests that are run are meaningless, because people don't know how to run tests. You need to understand some basic math and statistical concepts. And you DON'T stop a test once it reaches significance.
  • #10: Learning from Test Results

    So B was better than A. Now what? Or maybe the test ended in "no difference". But what about the insights hidden in segments? There's a ton of stuff to learn from test outcomes.
  • Conclusion

    Conversion optimization is not a list of tactics. Either you have a process, or you don't know what you're doing.