Which Visual Cues Work Best To Drive Attention? [Original Research]

Designers and conversion optimization specialists use visual cues to guide users in a particular direction on a web page. Maybe you want a user to continue scrolling, or to look at a value proposition, so you add a visual cue to subtly guide them there.

However, when you consider the vast amount of different kinds of visual cues that are available, things become complicated.

You could use arrows, lines, photos of people, borders, pointing fingers, bright banners, exclamation points, check marks… The list goes on.

Which brings us to the real question: Are some visual cues more effective than others? This ConversionXL Institute study explores that question.

Results summary

  • The visual cues did differentially impact how much users pay attention to the form.
    • The hand-drawn arrow resulted in the longest amount of time on the form.
    • The human looking away from the form resulted in the shortest.
  • There was no difference in the speed at which users first noticed the form.
  • The visual cues did not differentially impact how well viewers remembered the form.

How do I apply this research?

  • Test hand-drawn directional objects (e.g. an arrow) for guiding the attention of users.
  • If you use an image of a human as a visual cue, have this person looking in the direction of the CTA or key feature. While this variant didn’t significantly differ from the control, the human looking away from the form resulted in the lowest fixation duration on the form.


Visual Cues Report: Which Cues Are Effective and Memorable?

Study Setup

Data Collection Methods and Operations:

1. We used eye-tracking to quantify user behavior after manipulating the homepage for the law firm Lemon Law Group with six different types of visual cues (along with one control condition, which had no visual cue).

We placed the visual cues strategically on the page, to try to get users to look at the signup form. To maintain consistency, all visual cues were placed in the same spot.

Participants were given 15 seconds to browse the page as if they were considering the law firm’s services.

Task Question: “Imagine you’re in need of legal help. Please browse the following law firm’s web page as you normally would to assess their quality of service.”

Visual Cues Used:

Visual cue treatment with a human looking away from the form (towards the user).
Visual cue treatment of a human looking towards the form.
Visual cue treatment of a ‘hand-drawn’ arrow pointing towards the form.
Visual cue treatment of a broad, triangular-shaped arrow pointing towards the form.
Visual cue treatment of a line leading from the text under the value proposition to the form.
Visual cue treatment of a ‘prominent’ form (darker with a subtle yellow outline).
The control. This is the original look of the landing page that was manipulated to create the other treatments.

Analyzing eye-tracking data allows us to run stats to see how much people paid attention to the form and how that differed among cues.

The stats we were concerned with were:

  • the average time spent fixating on the form
  • the average time to first fixation on the form.

2. A post-task questionnaire measured the efficacy of each visual cue by asking users how they would contact the law firm. This measured recall.

Task Question: “Considering the web page you just saw, what would your next step be in getting in touch with this law firm?”

If participants answered that they would fill out the form to get in contact with the firm, the visual cue was considered effective at directing attention to the form and thus increasing the probability of recall.

Number of participants per treatment.

Findings

1. The visual cues do not differentially impact the speed at which users first notice the form.

A simple one-way ANOVA tells us that the average time to first fixation on the signup form does not vary significantly among the treatments [F(6, 237) = 0.7947, p = 0.5748].

After thinking about the results, this makes some sense. Take a look at the means:

Summary statistics for time to first fixation of the form for all treatments.

Remember, these means are not ‘significantly’ different from one another, but that is at a fairly conservative standard (alpha = 0.05).

There is still an interesting pattern to see.

The visual cue itself appears to take some time to process. The control resulted in the shortest mean time to first fixation, followed by the next least conspicuous treatment (triangular). We see the pattern continue with: prominent, arrow, line, human looking at form, and then human looking away from form.

This pattern is intuitive, if not backed by significance at an alpha of 0.05. If we were to set the treatments on a scale from least to most conspicuous, this might be the order we’d get.

But what about the amount of time users look at the form on average? This measure might get at how the visual cues differentially drive information processing via engagement (i.e. actually reading the text and processing the information).

2. The visual cues do differentially impact how much a user pays attention to the form.

Analysis of variance indicates that the average amount of time viewing the form area does vary significantly among the treatments [F(6, 237) = 2.3108, p = 0.0346].

Here are the average and standard deviation stats:

Summary statistics for amount of time fixating on the form for all treatments.

The arrow drew the most attention to the form, and the human looking away from the form drew the least. A post-hoc Tukey test showed that these two treatments differed significantly at p < .05.
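
For readers who want to reproduce this kind of analysis, here is a minimal sketch in Python using scipy. The fixation times below are simulated (the study’s raw per-participant data is not published here), using only three illustrative groups with a deliberately large arrow vs. looking-away gap; the Tukey HSD p-value for that pair is built by hand from the studentized range distribution.

```python
import numpy as np
from scipy.stats import f_oneway, studentized_range

rng = np.random.default_rng(42)
n = 35  # roughly the per-treatment sample size in the study

# Illustrative fixation times in seconds (not the study's data).
groups = {
    "arrow":        rng.normal(4.0, 1.0, n),
    "looking_away": rng.normal(2.0, 1.0, n),
    "control":      rng.normal(3.0, 1.0, n),
}

# Omnibus one-way ANOVA across the treatments.
f_stat, p_omnibus = f_oneway(*groups.values())

# Tukey HSD for one pair, from the studentized range distribution:
# q = |mean_i - mean_j| / sqrt(MSE / n), with MSE pooled across groups
# (the simple mean of group variances is valid because n is equal).
k = len(groups)
df_error = sum(len(g) for g in groups.values()) - k
mse = np.mean([np.var(g, ddof=1) for g in groups.values()])
q = abs(groups["arrow"].mean() - groups["looking_away"].mean()) / np.sqrt(mse / n)
p_tukey = studentized_range.sf(q, k, df_error)

print(f"ANOVA: F = {f_stat:.2f}, p = {p_omnibus:.4f}")
print(f"Tukey arrow vs. looking-away: q = {q:.2f}, p = {p_tukey:.4f}")
```

With real data you would run the Tukey comparison over every pair of treatments; the study found only the arrow vs. looking-away contrast significant.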

Here’s a histogram of the data. The red bars indicate the two means that are significantly different from one another.

Histogram of the mean time fixating on the form for each treatment. Red indicates significant differences at an alpha of 0.05.

Takeaways? Well, don’t use a human looking away from where you want a person to look, that’s for sure.

At least in this study, users shown the human looking away spent, on average, about half as much time considering the form as users in the control. The simple line, the prominent form, and the human looking at the form all did pretty well, but not as well as the arrow, which led the pack in total time spent looking at the form.

Based on our pairwise tests, we can’t say at a 95% confidence level that the arrow resulted in a different amount of time spent compared with most of the others, but it still provides support for further testing of this hypothesis.

These stats are fun to geek out over, but what about the specific patterns of people’s gaze? Specifically, what are the visual patterns of viewers and how does this differ among the cue treatments?

For this type of insight, the eye-tracking heatmaps provide something that the statistics obscure. That is, we can see exactly where people are looking, in what order, and for how long.

Visual cue treatments with aggregate heatmap displayed.

The heatmaps provide a supplemental perspective for the visual perception of viewers as they consume the page. And they tell a pretty clear story.

The arrow focuses the viewer’s gaze with the most precision, guiding user attention quite specifically in the direction it’s pointing. This pattern surely explains some of the results.

The cue of the human looking away from the form seems to make people actively avoid it and anything to the right. The triangular cue treatment didn’t stand out particularly with the statistics above, but here we see it did result in guiding attention to the form.

3. The visual cues do not differentially impact how viewers remember the form.

Following the website stimulus, we asked each user: “Considering the web page you just saw, what would your next step be in getting in touch with this law firm?”

This was to test the short-term memory effects among the different treatments.

Here is a table of the number of participants who recalled the email capture form and the number who didn’t:

Number of participants who recalled and didn’t recall the form as a means to get in touch with the law firm, answered in a follow-up questionnaire.

We performed a Chi-Squared test on this data and found non-significance [X2 (5, N = 232) = 8.942, p = 0.111]. However, note that the prominent treatment did have a noticeably low number of people recall it.
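
A chi-squared test of this kind of recall table is straightforward with scipy. The counts below are hypothetical placeholders (the study’s actual counts are in the table above), arranged as six treatment rows by two outcomes so the degrees of freedom match the reported df of 5.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical (recalled, did not recall) counts per treatment.
table = np.array([
    [30,  8],  # arrow
    [25, 12],  # line
    [22, 15],  # prominent
    [28, 10],  # human looking at form
    [24, 13],  # human looking away
    [27, 18],  # triangular
])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2({dof}, N = {table.sum()}) = {chi2:.3f}, p = {p:.3f}")
```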

Overall, these results were not insightful, and it is likely we need a larger sample size to detect differences. Given an average sample size of 35 per treatment, a sample size calculator indicated that we should have expected significant differences at a confidence level of 90% if the critical difference between proportions was 30%.
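
That sample-size estimate can be reproduced with the standard normal-approximation formula for comparing two proportions. The 90% confidence level and 30% critical difference come from the text above; the baseline proportion of 0.5 and 80% power are our assumptions for illustration.

```python
import math
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.10, power=0.80):
    """Approximate per-group n to detect p1 vs. p2 with a two-sided z-test."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * var / (p1 - p2) ** 2)

# 30% critical difference at a 90% confidence level (assumed baseline 0.5):
print(n_per_group(0.5, 0.8))  # about 29 participants per treatment
```

Since the study averaged 35 participants per treatment, a 30% difference in recall should have been detectable under these assumptions; smaller real differences would need larger groups.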

Limitations

There are thousands of different visual cues we could have tested (e.g. the type of human used). Maybe he’s not lawyerly enough? Or too much so?

These results are limited in their transferability, but they do provide ideas and hypotheses for further testing. For example, we might implement some lessons learned here in a follow-up study that will test visual cues to get people to scroll down a page.

The arrow did well, but all arrows surely won’t perform the same. We hypothesize that it did well because of its ‘hand-drawn’ nature. Thoughts?

The post-survey questionnaire wasn’t insightful and it’s likely that the question needs to be more precise (less open-ended) or our sample size needs to increase… or both. To us, this shows the value of eye-tracking compared to survey designs in getting more objective results, even if they are only visual perception results.

The study also might have been better if, instead of a form, we had used some kind of copy, like a value proposition. The post-survey questions might have been more insightful then as well.

Conclusion

People paid the most attention to the form when a hand-drawn arrow was used as a visual cue; they paid the least when the cue was a human facing away from the form.

There are infinite iterations of each type of visual cue you could use, but this does provide insight into how visual cues impact attention. Notably, the results imply you shouldn’t use a human looking away from a form and that you should try testing out hand-drawn arrows.



Join the Conversation

  1. I’m loving these studies. Takes me back to university. I knew those stats courses would come in handy. :)

    I’d be interested to see this study repeated for form completion rates.

    My thoughts:

    I wonder if the hand-drawn arrow increased average time looking at form because fewer users spent time reading the headline and bullet points? That is, I wonder if more people in the arrow group had to think long and hard about whether or not they’d complete the form, given their lack of information.

    And I’d be curious to see whether the triangular outperforms the rest when it comes to form completion. There seems to be a nice balance in that treatment. Users read the headline, bullet points, and then shift their focus to the form. They arrive at the form with enough information to make a decision.

    Just a few thoughts. Keep pumping this research out!

  2. Ben Labay

    Yo Josh, thanks for the comment, good observations on the study. I do think there’s a tradeoff to be noted in where the eyeballs go and don’t go with different cues. Never thought about the balance with the triangle, worth testing I think.

    If anyone has A/B case studies on visual cues, we’d love to work with you to publish them alongside this! Could be a nice series.

    Thanks for the encouragement, the research will keep coming for sure!
    Cheers,
    Ben

  3. Not at all what I would have expected, particularly with the heat maps and the eye-tracked time on form.

    Thank you for the hard data, and for the excellent results write-up!

  4. Ben Labay

    Thanks for the comment Anne, and let us know if there’s anything you want data on that we can maybe do a study with. Cheers, Ben

  5. It would also be interesting to test an arrow that is included as part of the call-to-action button to see if this increases conversions.

  6. If possible could you test visual cues to get people to scroll down a page?

  7. Ben Labay

    Hi Darryl, good idea. We’re working on a variation of this for vertical scrolling: how to get people further down a long-form page. But other treatments with the same approach could be good to do as well. Thanks for the comment, Ben

    1. Ben Labay

      Hi Barry, thanks for the comment. Conversion rate wasn’t our metric of interest here, and we don’t have any A/B tests for these treatments. Understanding underlying patterns of user perception can help set tests up, refining endless numbers of hypotheses. We were simply interested in getting some data on this visual cue case study and in illustrating the kinds of UX research possible before any kind of A/B test. That said, anyone that has A/B tested visual cues like this, feel free to share and we’ll publish it as a follow-up. Cheers, Ben

  8. Moral of the story? People need to notice action goals when and only when they’re needed, in the exact flow they are needed. Arrows are a great way to direct interest and action on landing pages, and what I also use most often.

    1. Ben Labay

      Hey Corey, thanks for the feedback, let us know if you want to see any particular studies! Cheers, Ben
