If you’re not following form design best practices, you’re leaving a lot of money on the table.
While forms aren’t the sexiest part of conversion optimization, they tend to be the closest to the money: the macro-conversions. Spending a little time on form optimization can be some of the most important work you do.
Of course, best practices don’t work the same on all sites. It’s contextual. But generally, implementing form design tactics that work more often than not is a good way to get started.
You put tons of time into creating your product, experimenting with acquisition channels, and honing your messaging.
Yet here I am, about to tell you that consumers are often swayed by nudges as subtle as the order in which you present your products, a bias known as the “serial position effect.”
A/B testing splits traffic 50/50 between a control and a variation. A/B split testing is a new term for an old technique—controlled experimentation.
Yet for all the content out there about it, people still test the wrong things and run A/B tests incorrectly.
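To make the mechanics concrete, here’s a minimal sketch of how a 50/50 split test is typically evaluated with a two-proportion z-test. The function name and the sample numbers are illustrative assumptions, not taken from any particular tool:

```python
import math

def ab_test_significance(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test for an A/B test: control (A) vs. variation (B)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# e.g. control: 120/2,400 conversions; variation: 150/2,400 (made-up numbers)
z, p = ab_test_significance(120, 2400, 150, 2400)
print(f"z = {z:.2f}, p = {p:.4f}")  # "significant" at the usual threshold if p < 0.05
```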
When should you use bandit tests, and when is A/B/n testing best?
Though there are some strong proponents (and opponents) of bandit testing, there are certain use cases where bandit testing may be optimal. Question is, when?
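For illustration, here’s a minimal epsilon-greedy sketch, one of the simplest bandit algorithms. Unlike an A/B/n test’s fixed split, a bandit reallocates traffic toward the winning variation as results come in. The conversion rates and the epsilon value below are illustrative assumptions:

```python
import random

def epsilon_greedy(epsilon, trials, true_rates):
    """Epsilon-greedy bandit: explore a random arm with probability epsilon,
    otherwise exploit the arm with the best observed conversion rate."""
    counts = [0] * len(true_rates)      # pulls per arm
    rewards = [0] * len(true_rates)     # conversions per arm
    for _ in range(trials):
        if random.random() < epsilon:
            arm = random.randrange(len(true_rates))          # explore
        else:
            rates = [rewards[i] / counts[i] if counts[i] else 0.0
                     for i in range(len(true_rates))]
            arm = rates.index(max(rates))                    # exploit
        counts[arm] += 1
        rewards[arm] += random.random() < true_rates[arm]    # simulated conversion
    return counts, rewards

# Simulate three variations with hidden conversion rates
counts, rewards = epsilon_greedy(0.1, 10_000, [0.04, 0.05, 0.065])
print(counts)  # traffic shifts toward the best-performing arm over time
```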
At a certain point, the results from your A/B testing will likely slow down. Even after dozens of small iterations, the needle just won’t move.
Reaching diminishing returns is never fun. But what exactly does that mean? In most cases, you’ve probably hit a local maximum.
So the question is, what do you do now?
Heat maps are a popular conversion optimization tool, and for good reason. When leveraged correctly, they’re a powerful way to better understand your audience and deliver more value.
So what can heat maps answer?
In digital analytics, it’s all about asking the right questions.
Sure, in the right context, you can probably get by doing what Avinash Kaushik refers to as “data puking,” but you won’t excel as an analyst or marketer.
In addition, you’ll consistently come up short on bringing true business value to your company.
How you design a survey or a form will affect the answers you get. This includes the language you use, the order of the questions, and, of course, the survey scale: the default values and ranges you use.
One thing many people forget when dealing with data: outliers.
Even in a controlled online A/B test, your data set may be skewed by extremities. How do you deal with them? Do you trim them out, or is there another way?
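One common alternative to trimming is winsorizing: capping extreme values at a percentile instead of deleting them. Here’s a minimal sketch; the order values are made up to show how a single whale order can distort an average:

```python
def winsorize(values, pct=0.05):
    """Cap extreme values at the given percentiles instead of deleting them,
    so outliers stop dominating the mean without shrinking the sample."""
    data = sorted(values)
    lo = data[int(pct * (len(data) - 1))]
    hi = data[int((1 - pct) * (len(data) - 1))]
    return [min(max(v, lo), hi) for v in values]

order_values = [23, 41, 19, 35, 28, 1200, 31, 26]  # one whale order skews the mean
print(sum(order_values) / len(order_values))       # ~175: distorted by the outlier
capped = winsorize(order_values, pct=0.10)
print(sum(capped) / len(capped))                   # ~30: closer to a typical order
```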
Your users will make mistakes. It’s inevitable. That’s what error messages are for—but so many companies fail to follow best practices, and they’re pissing off potential customers in the process.
So, how can we better design error messages to improve the user experience and increase conversions?
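As a small illustration of the principle, here’s a hypothetical email validator that returns specific, actionable messages instead of a generic “Invalid input.” The copy and the regex are assumptions for demonstration, not a recommended production pattern:

```python
import re

EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def validate_email(value):
    """Return None if valid, otherwise a human-friendly, actionable error message.
    A good message says what went wrong and how to fix it."""
    if not value.strip():
        return "Please enter your email address so we can send your receipt."
    if not EMAIL_RE.match(value):
        return f'"{value}" doesn\'t look like an email address. Try the format name@example.com.'
    return None

print(validate_email(""))              # explains why the field matters
print(validate_email("bob@site"))      # shows the expected format
print(validate_email("bob@site.com"))  # None: no error
```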