Last qualitative bit in this series: user testing.
The premise is simple: observe real people using and interacting with your website while they narrate their thought process out loud. Pay attention to what they say and what they experience.
User testing gives you direct input on how real users use your site. You may have designed what you believe is the best user experience in the world, but watching real people interact with your site is often a humbling experience. Because you are not your user.
You can do this in person or remotely. When you do it in person – either you go to the test users or have them come to you – make sure you film the whole session. Doing it remotely with online user testing tools is definitely the cheapest and fastest way to do it.
Creating user testing protocols
User testing starts with creating a test protocol – tasks that you want your test users to complete.
Most online user testing tools limit one session to 15–20 minutes, so don't try to cram too many tasks into one test. Depending on your site, 4–5 tasks per test is typically enough.
What kind of tasks should they complete?
The main thing you want to assess is completing key actions, such as signing up for something and buying something. You want to create scenarios that actual users would follow, and aim to identify all the friction they experience in the process.
Maybe they’re not able to find something, or can’t figure out how to do XYZ, or make mistakes when filling out forms.
Remember – when a user makes a “mistake”, it’s not because they’re stupid, it’s because your website sucks. When watching user testing videos it’s easy to say “I can’t believe these idiots don’t see that button”. But the real idiot is you, for putting that button somewhere people don’t look. That’s okay – you can fix it!
In most cases you want to include 3 types of tasks in your test protocol.
- A specific task
- A broad task
- Funnel completion
So let’s say you run an ecommerce site that sells clothes. Your tasks might look as follows:
- Find dark jeans in size 34 under $50 (specific task)
- Find a shirt that you like (broad task)
- Buy the shirt (funnel completion)
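If you run these protocols often, it can help to keep them as simple structured data rather than loose notes. Here’s a minimal sketch – the field names and success criteria are illustrative, not from any particular testing tool:

```python
# Hypothetical test protocol for the clothing-store example: one entry
# per task, with the task type and an observable success criterion.
protocol = [
    {"type": "specific", "task": "Find dark jeans in size 34 under $50",
     "success": "Product page for matching jeans is reached"},
    {"type": "broad", "task": "Find a shirt that you like",
     "success": "Any shirt product page is reached"},
    {"type": "funnel", "task": "Buy the shirt",
     "success": "Order confirmation page is reached"},
]

def describe(protocol):
    """Render the protocol as a numbered task list for a session script."""
    return "\n".join(
        f"{i}. [{t['type']}] {t['task']}" for i, t in enumerate(protocol, 1)
    )

print(describe(protocol))
```

Keeping the success criterion explicit forces you to define, up front, what “completed the task” actually means when you review the videos later.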
Some users know exactly what they want; others are just browsing around. This test protocol accounts for both. Funnel completion is the most important part – you want purchasing to be as easy and obvious as possible.
Make sure you have them use dummy credit cards to complete the purchase. If you don’t let them complete the full checkout process, you’re missing out on critical insight.
If your platform does not allow dummy credit cards, you might want to run user tests on a staging server (if available), or get some prepaid credit cards and share their details with your testers. Once they’ve completed the test, just cancel the order and refund the money.
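As a side note on why dummy cards work: payment sandboxes such as Stripe’s test mode accept well-known test numbers like 4242 4242 4242 4242, which pass the standard Luhn format check that most checkout forms run, but are only honored in test mode and never charge real money. A quick sketch of that Luhn check:

```python
def luhn_valid(card_number: str) -> bool:
    """Luhn checksum: double every second digit from the right,
    subtract 9 from any result over 9, and require sum % 10 == 0."""
    digits = [int(d) for d in card_number if d.isdigit()]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:  # every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# Stripe's well-known test card passes format validation...
print(luhn_valid("4242 4242 4242 4242"))  # True
# ...while a number with a wrong check digit does not.
print(luhn_valid("4242 4242 4242 4243"))  # False
```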
Tasks to avoid
A typical rookie mistake is to form tasks as questions – “Do you feel this page is secure?” or “Would you buy from this site?”. That’s complete rubbish, utterly useless.
The point is to OBSERVE the user. If they comment on security voluntarily, great. If they don’t, it’s likely not an issue. Don’t ask for their opinion on anything, just have them complete tasks and pay attention to the comments they volunteer and to how they (try to) use the website interface.
Asking whether they would buy or not is completely useless as humans are not capable of accurately predicting their future actions. It’s one thing to say that you hypothetically would buy something, and it’s a completely different thing to actually take out your wallet and part with your money.
Test users know that they’re not risking their actual money – so their behavior is not 100% reflective of actual buyer behavior.
Once I ran user testing for an expensive hotel chain. Test users had no problem booking rooms that cost over $500 per night. I seriously doubt they’d pay that much so easily in “real life”.
Another common mistake is telling them exactly what to do. For instance “use filters to narrow down the selection”. Don’t do that. You just give them the goal (e.g. find stores near you), and watch what happens.
Your testers should come from your target audience (although ANY random tester is better than no tester) – people who understand your offer and resemble the people you’re actually trying to sell to.
Also – it should be the very first time they’re using your site. So you can’t use past customers as testers. They’re already familiar with your site, and have learned to use it even if it has a ton of usability issues.
If your service/product is for a wide audience (e.g. you sell shoes or fitness products), you have it easy. You can turn to services like usertesting.com or TryMyUI.com and recruit testers from their pool – or even Craigslist. I use usertesting.com all the time, with every client.
If you have a very niche audience (e.g. software quality assurance testers or cancer patients on a vegan diet), it can get more complicated. You can reach out to dedicated communities (e.g. forums for software testers or people with cancer), use your personal connections (friends of friends) or dedicated recruiting services (expensive).
If you do custom recruiting, you absolutely need to pay your testers – typically $25 to $50 per tester, depending on how niche they are, or much more if they’re very hard to find.
How many to recruit
In most cases, 5 to 10 test users are enough – 15 max. The law of diminishing returns kicks in after that.
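Those diminishing returns aren’t just a gut feeling. Jakob Nielsen and Tom Landauer’s classic model estimates the share of usability problems found by n testers as 1 − (1 − L)^n, where L ≈ 0.31 is the share of problems a single tester uncovers on average. A quick sketch shows why 5 testers already get you most of the way:

```python
def problems_found(n_testers: int, l: float = 0.31) -> float:
    """Nielsen & Landauer model: expected share of usability problems
    uncovered by n testers, each finding ~31% of problems on average."""
    return 1 - (1 - l) ** n_testers

for n in (1, 5, 10, 15):
    print(f"{n} testers: {problems_found(n):.0%}")
# 1 testers: 31%
# 5 testers: 84%
# 10 testers: 98%
# 15 testers: 100%
```

Note that L varies by site and task complexity, so treat the exact percentages as a rough guide, not a guarantee.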
You should conduct user testing every time before you roll out a major change (run tests on the staging server), or at least once a year. Definitely at the start of every optimization project.
Once you have all the videos, review them in one sitting. Go through each video and take notes on every single issue.
Fix the obvious problems and test everything else. If needed, recruit another 5 test users to see if the issues were solved or any new ones were created in the process.
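When taking notes, it pays to log one entry per tester per issue, then count how many testers hit each issue – frequency across testers is a rough but useful severity proxy for deciding what to fix first. A hypothetical sketch (the issue labels are made up for illustration):

```python
from collections import Counter

# Hypothetical notes from reviewing five session videos:
# one (tester, issue) pair per observation.
observations = [
    ("tester1", "missed size filter"),
    ("tester2", "missed size filter"),
    ("tester3", "confused by coupon field"),
    ("tester4", "missed size filter"),
    ("tester5", "form error message unclear"),
]

# Rank issues by how many testers hit them.
issue_counts = Counter(issue for _, issue in observations)
for issue, count in issue_counts.most_common():
    print(f"{count}/5 testers: {issue}")
```

An issue three out of five testers hit is almost certainly real; an issue one tester hit may still be worth fixing, but is a better candidate for “test everything else”.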