Designers and conversion optimizers use visual cues to guide users in a particular direction on a web page. Maybe you want a user to continue scrolling, or to look at a value proposition, so you add a visual cue to subtly guide them there.
However, when you consider the vast array of visual cues available, things get complicated.
You could use arrows, lines, photos of people, borders, pointing fingers, bright banners, exclamation points, check marks… The list goes on.
Which brings us to the real question: Are some visual cues more effective than others? This CXL Institute study explores that question. Key findings:
- The visual cues did differentially impact how much a user pays attention to the form.
- There was no difference in the speed at which users first noticed the form.
- The visual cues did not differentially impact how well viewers remembered the form.
How do I apply this research?
- Test hand-drawn directional objects (e.g. an arrow) for guiding the attention of users.
- If you use an image of a human as a visual cue, have this person looking in the direction of the CTA or key feature. While this variant didn't significantly differ from the control, the human looking away from the form resulted in the lowest fixation duration on the form.
Visual Cues Report: Which Cues Are Effective and Memorable?
Data Collection Methods and Operations:
1. We used eye-tracking to quantify user behavior after manipulating the homepage for the law firm Lemon Law Group with six different types of visual cues (along with one control condition, which had no visual cue).
We placed the visual cues strategically on the page to try to get users to look at the signup form. To maintain consistency, all visual cues were placed in the same spot.
Participants were given 15 seconds to browse the page as if they were considering the law firm’s services.
Task Question: “Imagine you’re in need of legal help. Please browse the following law firm’s web page as you normally would to assess their quality of service.”
Visual Cues Used:
Analyzing the eye-tracking data allows us to run statistics on how much people paid attention to the form and how that differed among cues.
The statistics we were concerned with were:
- the average time spent fixating on the form
- the average time to first fixation on the form.
2. A post-task questionnaire measured the efficacy of each visual cue by asking users how they would contact the law firm. This measured recall.
Task Question: “Considering the web page you just saw, what would your next step be in getting in touch with this law firm?”
If participants answered that they would fill out the form to get in contact with the firm, the visual cue was considered effective at directing attention to the form and thus increasing the probability of recall.
1. The visual cues do not differentially impact the speed at which users first notice the form.
A one-way ANOVA tells us that the average time to first fixation on the signup form does not vary significantly among the treatments [F(6, 237) = 0.7947, p = 0.5748].
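As a rough sketch of this kind of analysis (this is not the study's actual code or data), a one-way ANOVA on time-to-first-fixation can be run in a few lines with scipy. The condition names mirror the study, but the fixation times below are invented for illustration:

```python
# Illustrative one-way ANOVA on time-to-first-fixation (seconds).
# The samples are made up; the real study had 7 conditions and 244 observations.
from scipy import stats

control = [1.1, 0.9, 1.4, 1.2, 1.0]
arrow = [1.5, 1.8, 1.3, 1.6, 1.4]
human_away = [2.0, 1.7, 2.2, 1.9, 2.1]

f_stat, p_value = stats.f_oneway(control, arrow, human_away)
print(f"F = {f_stat:.3f}, p = {p_value:.4f}")
# A p-value above 0.05 means we fail to reject the null hypothesis
# that the group means are equal.
```

In the study itself, the resulting p-value (0.5748) was well above 0.05, so no significant difference in time to first fixation was found.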
After thinking about the results, this makes some sense. Take a look at the means:
Remember, these means are not ‘significantly’ different from one another, but that is at a fairly conservative standard (alpha = 0.05).
There is still an interesting pattern to see.
The visual cue itself appears to take some time to process. The control resulted in the shortest mean time to first fixation, followed by the next least conspicuous treatment (triangular). We see the pattern continue with: prominent, arrow, line, human looking at form, and then human looking away from form.
This pattern is intuitive, if not backed by significance at an alpha of 0.05. If we were to set the treatments on a scale from least to most conspicuous, this might be the order we’d get.
But what about the amount of time users look at the form on average? This measure might get at how the visual cues differentially drive information processing via engagement (i.e. actually reading the text and processing the information).
2. The visual cues do differentially impact how much a user pays attention to the form.
Analysis of variance indicates that the average amount of time viewing the form area does vary significantly among the treatments [F(6, 237) = 2.3108, p = 0.0346].
Here are the average and standard deviation stats:
The arrow drew the most attention to the form, and the human looking away from the form drew the least. A post-hoc Tukey test showed that these two treatments differed significantly at p < .05.
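A post-hoc Tukey HSD test of this sort can be sketched with statsmodels. Again, the fixation durations below are invented for illustration, not the study's data:

```python
# Illustrative Tukey HSD post-hoc comparison of fixation durations (seconds).
# Group labels mirror the study's conditions; the values are made up.
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

durations = np.array([4.1, 3.8, 4.5, 4.0,   # arrow
                      2.9, 3.1, 2.7, 3.0,   # control
                      1.6, 1.9, 1.4, 1.7])  # human looking away
groups = (["arrow"] * 4) + (["control"] * 4) + (["human_away"] * 4)

result = pairwise_tukeyhsd(endog=durations, groups=groups, alpha=0.05)
print(result)  # one row per pair: mean difference, adjusted p, reject?
```

The test adjusts for multiple comparisons, which is why a pair can look different in the raw means yet fail to reach significance, as most pairs did in the study.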
Here’s a histogram of the data. The red bars indicate the two means that are significantly different from one another.
Takeaways? Well, don’t use a human looking away from where you want a person to look, that’s for sure.
At least in this study, users shown the human looking away from the form spent less time (by about half), on average, considering the form compared to the control. The simple line, prominent form, and human looking at the form all did pretty well, but not as well as the arrow, which led the pack in total time spent looking at the form.
Based on our pairwise tests, we can’t say at a 95% confidence level that the arrow resulted in a different amount of time spent than most of the other treatments, but the results do support further testing of this hypothesis.
These stats are fun to geek out over, but what about the specific patterns of people’s gaze? Specifically, what are viewers’ visual patterns, and how do they differ among the cue treatments?
For this type of insight, the eye-tracking heatmaps provide something that the statistics obscure. That is, we can see exactly where people are looking, in what order, and for how long.
The heatmaps provide a supplemental perspective for the visual perception of viewers as they consume the page. And they tell a pretty clear story.
The arrow focuses the viewer’s gaze with the most precision, guiding user attention quite specifically in the direction it’s pointing. This pattern surely explains some of the results.
The cue of the human looking away from the form seems to make people actively avoid it and anything to the right. The triangular cue didn’t stand out in the statistics above, but the heatmaps show it did guide attention to the form.
3. The visual cues do not differentially impact how viewers remember the form.
Following the website stimulus, we asked each user: “Considering the web page you just saw, what would your next step be in getting in touch with this law firm?”
This was to test the short-term memory effects among the different treatments.
Here is a table of the number of participants who recalled the email capture form and the number who didn’t:
We performed a Chi-Squared test on this data and found non-significance [X2 (5, N = 232) = 8.942, p = 0.111]. However, note that the prominent treatment did have a noticeably low number of people recall it.
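A chi-squared test of independence on a recall table like this can be sketched with scipy. The counts below are invented for illustration; the article's table holds the real numbers:

```python
# Illustrative chi-squared test: does recall of the form depend on condition?
# Rows are conditions, columns are [recalled, did not recall]; counts are made up.
from scipy.stats import chi2_contingency

table = [
    [20, 15],   # control
    [22, 13],   # arrow
    [14, 21],   # prominent form
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, dof = {dof}, p = {p:.3f}")
# A p-value above the chosen alpha means recall and condition
# appear independent (no significant recall difference).
```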
Overall, these results were not insightful, and we likely need a larger sample size to detect differences. Given an average sample size of 35 per condition, a sample size calculator indicated that we should have expected to detect significant differences at a 90% confidence level only if the difference between proportions was at least 30%.
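The kind of sample-size reasoning above can be sketched with statsmodels' power tools for comparing two proportions. The 30-point difference and 90% confidence level come from the text; the 0.50 baseline proportion is an assumption for illustration:

```python
# Illustrative sample-size calculation for detecting a 30-point difference
# in recall proportions (0.50 vs. 0.80 is an assumed baseline, not study data).
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

effect = proportion_effectsize(0.80, 0.50)  # 30-point difference
n_per_group = NormalIndPower().solve_power(effect_size=effect,
                                           alpha=0.10,   # 90% confidence
                                           power=0.80,
                                           ratio=1.0)
print(f"~{n_per_group:.0f} participants needed per condition")
```

Smaller true differences between conditions would require considerably larger groups, which is why the recall question may simply have been underpowered here.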
There are thousands of different visual cues we could have tested (e.g. the type of human used). Maybe he’s not lawyerly enough? Or too much so?
These results are limited in their transferability, but they do provide ideas and hypotheses for further testing. For example, we might implement some lessons learned here in a follow-up study that will test visual cues to get people to scroll down a page.
The arrow did well, but all arrows surely won’t perform the same. We hypothesize that it did well because of the ‘hand-drawn’ nature of it. Thoughts?
The post-survey questionnaire wasn’t insightful and it’s likely that the question needs to be more precise (less open-ended) or our sample size needs to increase… or both. To us, this shows the value of eye-tracking compared to survey designs in getting more objective results, even if they are only visual perception results.
The study also might have been better had we used copy, such as a value proposition, instead of a form. The post-survey questions might have been more insightful then as well.
People paid most attention to the form when a hand-drawn arrow was used as a visual cue; they paid least attention to the form when a human was used and was facing away from the form.
There are infinite iterations of each type of visual cue you could use, but this does provide insight into how visual cues impact attention. Notably, the results imply you shouldn’t use a human looking away from a form and that you should try testing out hand-drawn arrows.