How People Read Short Articles [Original Research]

When people read short articles and blog posts online, what percentage of the content actually gets read? Do people read image captions? How many readers finish an entire article?

Background

David Ogilvy famously wrote in his Confessions of an Advertising Man (1963) that "five times as many people read the headline as read the body copy." Does this hold true today?

In 2008, a study found that users read only about half the information on pages with 111 words or fewer.

In our study, we wanted to answer a few specific questions:

  1. How are short articles read?
  2. How much of the article gets read?
  3. Do people read image captions?
  4. Do older internet users read articles the same way younger users do?

Study Report

A short article on astronaut training was used as the research stimulus:

We chose an interesting but brief National Geographic article

The article was short (approximately 300 words) and included a title, a featured image, a side banner ad, and varying font sizes. Although the article had little content, it spanned three folds. We wanted to study how far participants read, and whether reading rates dropped once they had to actively scroll.

Data Collection Methods and Operations

The same article was shown to two groups: younger participants aged 18-30 and older participants aged 50-60.

All participants were prompted with this scenario:

You are interested in reading about astronaut space training. Please read the following web article about this subject.

They then had 30 seconds to read the article.

Participants

Usable eye-tracking data was collected for 62 participants in group 1 (ages 18-30).

Usable eye-tracking data was collected for 33 participants in group 2 (ages 50-60).

NOTE: This is a smaller sample size than we normally like to use (~50), but it took almost 3 weeks to get even this number of participants; the panels we use don't have many people in this age group.

Findings

Reading behaviors between the two age groups were quite similar.

To study what participants looked at, and for how long they looked at it, we created "areas of interest" (AOIs) on the article page:

AOIs were placed over the article's main elements.
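
For readers curious about the mechanics, here is a minimal sketch of how fixation data is typically aggregated into AOI metrics such as "% of participants who viewed an element" and "average dwell time." The AOI coordinates, fixation records, and field names below are hypothetical examples, not the study's actual data or tooling.

```python
# Minimal sketch: aggregating eye-tracking fixations into per-AOI metrics.
# AOI rectangles and fixation records are hypothetical, not the study data.
from dataclasses import dataclass

@dataclass
class Fixation:
    participant: str
    x: float          # gaze position on the page, in pixels
    y: float
    duration: float   # seconds

# AOI name -> (left, top, right, bottom) in page pixels (illustrative values)
aois = {
    "title":    (40, 20, 760, 80),
    "subtitle": (40, 90, 760, 140),
    "image":    (40, 150, 560, 480),
    "caption":  (40, 485, 560, 510),
    "body":     (40, 520, 560, 1600),
    "banner":   (580, 150, 760, 900),
}

def aoi_for(fix, aois):
    """Return the name of the AOI containing this fixation, or None."""
    for name, (left, top, right, bottom) in aois.items():
        if left <= fix.x <= right and top <= fix.y <= bottom:
            return name
    return None

def aoi_metrics(fixations, aois, n_participants):
    """Per AOI: percentage of participants who looked at it, and mean dwell time."""
    dwell = {name: {} for name in aois}              # AOI -> participant -> seconds
    for fix in fixations:
        name = aoi_for(fix, aois)
        if name is not None:
            per_person = dwell[name]
            per_person[fix.participant] = per_person.get(fix.participant, 0.0) + fix.duration
    results = {}
    for name, per_person in dwell.items():
        viewers = len(per_person)
        results[name] = {
            "viewed_pct": 100.0 * viewers / n_participants,
            "mean_dwell_s": sum(per_person.values()) / viewers if viewers else 0.0,
        }
    return results

# Two hypothetical participants, four fixations:
fixes = [
    Fixation("p1", 300, 50, 1.4), Fixation("p1", 200, 700, 2.1),
    Fixation("p2", 350, 60, 1.5), Fixation("p2", 100, 500, 0.4),
]
print(aoi_metrics(fixes, aois, n_participants=2))
```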

Using these AOIs we were able to quantify the following results:

97% of people read the title 

Almost everyone read the title (but not 100% of people!), spending on average 2.9 seconds on its 7 words.

The sub-title was seen by 98% of participants, but they spent on average only 2.8 seconds on those 21 words, which suggests they were glancing at it rather than reading it.
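
A quick way to interpret those dwell times is to convert them into an implied reading speed and compare it to a rough silent-reading benchmark of about 250 words per minute. The benchmark is our assumption; the word counts and dwell times are the ones reported above.

```python
# Back-of-the-envelope: implied reading speed per element.
# Word counts and average dwell times are taken from the figures above;
# the ~250 wpm silent-reading benchmark is an assumed rule of thumb.
blocks = {
    "title":     {"words": 7,  "avg_seconds": 2.9},
    "sub-title": {"words": 21, "avg_seconds": 2.8},
}

TYPICAL_WPM = 250  # assumed typical silent-reading rate

for name, b in blocks.items():
    implied_wpm = b["words"] / b["avg_seconds"] * 60
    verdict = "consistent with reading" if implied_wpm <= TYPICAL_WPM else "closer to glancing/skimming"
    print(f"{name}: ~{implied_wpm:.0f} wpm implied -> {verdict}")

# title:     ~145 wpm implied -> consistent with reading
# sub-title: ~450 wpm implied -> closer to glancing/skimming
```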

“Which elements of the page were looked at the most?”


“How much of the article was read?”


When we look at how many seconds people spent on the different content blocks and compare that to the word count, it's obvious that people were skimming rather than reading word by word.

“How many people read the image caption?”


“How many people paid attention to the ads?”


Limitations

It's possible that the participants adapted their behavior because they knew they were being studied. Perhaps they read more of the article than they usually would, or, conversely, read the article more slowly in anticipation of survey questions that might follow.

There's also the possibility that some participants simply didn't have enough time to read the entire article. Because the testing platform allows a picture (in this case, the article) to be shown for a maximum of 30 seconds, at least a few people likely ran out of time before finishing the whole story.

Conclusion

What we already knew: people don’t read.

  • 97% read the article title.
  • 98% looked at the sub-title, but it was more of a glance than a read.
  • 60% finished the article, but the time spent on content shows they were skimming rather than reading everything.
  • 91% read the image caption.
  • The banner ad got less than 1.5 seconds of attention.


Join the Conversation

  1. Why was a 30 second time limit given?

    Why not say you can stop whenever you want?

    Of course they are going to rush through if they only have 30 seconds.

    Reply
    1. Diego Chacon

      Hi Joe Mama,
      We've addressed your concerns about the 30-second time limit and potential reading-pattern discrepancies in the limitations section. The software and type of test used has a 30-second maximum. Sharing the same 30-second limit across all participants also keeps conditions constant, which is what allows for measurable differences between the two groups.
      Thank you for reading,
      Diego

  2. Thanks for the research…
    Who has time to read? I must have scrolled up and down a dozen times to review the results…
    I would be interested to see the same study completed where the participants pre-register their interest for a particular topic and the test re-run using an article that matched their interests.
    It would be easy to assume that the 50-60 age group has a short attention span or cognitive impairment, considering this group as a whole read less of the article.

    Reply
    1. Diego Chacon

      Hi Amajjika,
      Introducing preference and intention is an interesting factor to consider in eye-tracking studies. I am interested in seeing those results as well. I could see the study you're proposing being a great exploration of segmented user groups. I'll add your notes to our idea bank here.
      Thank you for reading,
      Diego

    2. Cognitive impairment! That’s quite a leap!

      One might also postulate that readers 50+ don't really care about a 30-second time limit. They'll get to what they get to instead of "putting themselves under the gun".

      I don’t qualify in that demo, but I definitely see my habits/actions trending toward that attitude. If I can’t finish something, then it isn’t meant for me, or isn’t meant to be finished in that timeframe.

      It could also be that attrition is due to them being experienced enough to know when the information is no longer useful or interesting to them, and therefore more decisive in managing their information load. Or the article just wasn't written well and they didn't want to waste their time.

      There are so many uncontrolled variables here that there is no telling.

    1. Diego Chacon

      Hi Antonia,
      I got 303 and have updated.
      Thank you for reading,
      Diego

  3. Hard for me to trust a study which includes only 33 participants in a group.

    Reply
    1. Diego Chacon

      Hi Nitin,
      I hear you; 33 participants can appear to be a small sample size. However, 30 participants in a qualitative eye-tracking study is a strong enough sample size to draw conclusions from. I invite you to explore online sample size calculators for eye-tracking. You might be surprised to see some go under 30, depending on your study design.
      Thank you for reading,
      Diego
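
      As a rough illustration of how such calculators behave, here is a sketch using a standard two-sample power calculation (via statsmodels); the effect sizes plugged in are hypothetical, not estimates from this study. The larger the effect you expect between groups, the fewer participants per group you need, which is why some designs come in under 30.

      ```python
      # Sketch: required participants per group for a two-sample t-test,
      # at alpha = 0.05 and 80% power, for a few hypothetical effect sizes.
      # Effect sizes are illustrative only, not estimates from this study.
      from statsmodels.stats.power import TTestIndPower

      analysis = TTestIndPower()
      for effect_size in (0.5, 0.8, 1.2):  # Cohen's d: medium, large, very large
          n = analysis.solve_power(effect_size=effect_size, alpha=0.05, power=0.8)
          print(f"d = {effect_size}: ~{n:.0f} participants per group")

      # d = 0.5: ~64 participants per group
      # d = 0.8: ~26 participants per group
      # d = 1.2: ~12 participants per group
      ```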

    2. Thank you for your response, Diego. I’d love to see how you calculated the sample size for the research based on your study design.

    3. You can't know whether a study is trustworthy or not based on sample size alone. There's a reason a significant number of psychological experiments have fewer than 100 participants, and many have fewer than 10: the anticipated effect sizes between sample groups are large.

      Imagine a test that measures the average marathon time of two out-of-shape desk jockeys [254 minutes average] vs. the average times of two members of the Kenyan Olympic marathon team [173 minutes average]. With only two participants per group, this test ends with a significance level of over 99%.

      Why? Because the intention of a statistical test is to identify the probability of our observed result occurring given a particular distribution. To put it another way, what we're really asking is: "How often would our overweight desk jockeys run a world-record marathon time?" Using basic common sense we can confirm that this is extremely unlikely (but not impossible, hence the .01%).

      The issue of replicability in science has much less to do with sample size, and much more to do with things like optional stopping and alpha inflation.
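
      To make that runner analogy concrete, here is a small sketch with hypothetical finishing times (in minutes), invented only to match the group averages above; the point is simply that a huge between-group difference relative to within-group spread produces a very small p-value even with n = 2 per group.

      ```python
      # Sketch of the runner analogy: two participants per group, huge effect.
      # Finishing times (minutes) are hypothetical, chosen to average roughly
      # 254 (desk jockeys) and 173 (elite runners).
      from scipy import stats

      desk_jockeys  = [250, 258]   # out-of-shape office workers
      elite_runners = [170, 176]   # Olympic-level marathoners

      t_stat, p_value = stats.ttest_ind(desk_jockeys, elite_runners)
      print(f"t = {t_stat:.1f}, p = {p_value:.4f}")  # p well below 0.01 despite n = 2 per group
      ```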

    4. Thank you for the reply, Chad.

      Regarding the example you've used of the out-of-shape desk jockeys and the Kenyan athletes, I think it's quite extreme. Firstly, the two desk jockeys are overweight; secondly, the Kenyan athletes are professional runners. No wonder the result is going to be statistically significant. Suppose we were to do a study to find out who the better marathon runners are: the entire Kenyan population or all the desk jockeys of the world. Do you think a sample size of 4 would suffice?

      I have this concern because the CXL study extrapolates the result from a sample size of 30 users to the entire population of web users in the world. Now that’s a big jump.

      That being said, you probably have more experience with such studies and their sampling processes, and I understand that my question might appear juvenile to you.

    5. Hi Nitin,

      Some good questions. However, you've slightly missed the point of my runner analogy. The key is that sample size has very little bearing when the difference between groups is very large. The reason a study such as this is done is that we hypothesize the difference between groups will be large. If the true difference is NOT large, we simply would not see such a result unless we were very lucky.

      Therefore the question of whether or not we need a large sample size when evaluating desk jockeys vs. Kenyan runners doesn't really make sense. If we sampled a small amount of data and there was no difference between these two groups, we would observe nothing. If the difference was indeed large, we would observe something. As long as your decisions are made in advance, your rate of error does not change.

      > I have this concern because the CXL study extrapolates the result from a sample size of 30 users to the entire population of web users in the world. Now that’s a big jump.

      It certainly would depend on who and how they sampled. However, extrapolation from a true random sample is perfectly fine regardless of sample size. True random samples are what we call "unbiased predictors," meaning that if you took samples of 30 out of the entire population (whether it's millions or even billions) over and over again, millions of times, 95% of the time the confidence intervals generated from those samples would contain the true mean value of the population. The larger your sample, the smaller (and more accurate) your margin of error becomes. The smaller your sample, the larger your intervals.

      Hence, when applied correctly, we are always equally confident in our result whether we have 2 people in a sample or ten million.
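
      That coverage claim is easy to check with a quick simulation; the synthetic population and repetition count below are made up for illustration. Draw many random samples of 30 from a large population, build a 95% confidence interval from each, and count how often the interval contains the true population mean.

      ```python
      # Sketch: coverage of 95% confidence intervals from repeated samples of 30.
      # The synthetic population and repetition count are illustrative only.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      population = rng.normal(loc=50, scale=12, size=1_000_000)  # synthetic population
      true_mean = population.mean()

      n, reps, covered = 30, 10_000, 0
      t_crit = stats.t.ppf(0.975, df=n - 1)

      for _ in range(reps):
          # with a population this large, sampling with replacement is a fine approximation
          sample = rng.choice(population, size=n)
          margin = t_crit * sample.std(ddof=1) / np.sqrt(n)
          if abs(sample.mean() - true_mean) <= margin:
              covered += 1

      print(f"{100 * covered / reps:.1f}% of intervals contained the true mean")  # ~95%
      ```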

  4. You cannot fake an interest in a topic.

    The results are all dependent on the affinity the reader has with the topic and the design of the content.

    “What we already knew: people don’t read” is a massive generalisation. If you’ve engaged 5%-10% of your traffic with the content, that still has huge value for a business.

    Weak study

    Reply
    1. Diego Chacon

      Hi William,
      I hear you that true affinity varies amongst study participants. Given the technical foundation of the study, the tasks assigned to recruited participants, and the results reported, the takeaway is not so much the granular details of how articles about astronauts are perceived, but what we can learn about how digital content is digested, the patterns involved, and what we can do to optimize the experience.
      Thank you for reading,
      Diego

  5. I wonder what the practical uses are of the conclusion derived from this study. On a more simplistic note, should one scrap long-form content and come up with something shorter yet visually appealing? If yes, will this be compelling enough that your target audience will take an action (like clicking those banners)? I hope you can conduct more studies in this area. Thanks!

    Reply
  6. I personally feel it may be hard to postulate whether one prefers longer or shorter content; what I know for sure is that shorter visual content is more likely to be appealing than longer content.

    Reply
  7. Very interesting data. Well, I think images are really important for attracting users’ attention.

    Reply
  8. Yes, this is true. Mostly, people read short articles by looking at the images, headings, and background.

    Reply
