What we know today as a “website redesign” isn’t what it used to be.
The ‘radical’ website redesign – a ‘big-bang’ overhaul of the entire site – is becoming less common these days, which is generally a good thing. There are a number of reasons why.
Pretty much every UX/CRO expert and company will advise you to tread lightly when it comes to radical redesigns.
The Common Pitfalls of Radical Redesigns (and Some Possible Fixes)
Many suggest you should replace radical redesigns with what’s known as Evolutionary Site Redesign (ESR).
First, it is simply far less risky to iterate on an existing website than to start from scratch.
It’s also generally cheaper, faster, and more measurable – ESR allows companies to make educated, incremental changes to their existing websites that can be validated.
Perhaps most importantly, the ESR approach tends to be rooted in data insight. Radical redesigns traditionally aren’t.
I’ve lost count of the number of times I have worked with companies that launched slick-looking, brand-new websites only to see their conversion rates plummet at launch (not to mention the impact on search engine rankings and traffic volumes).
However, this doesn’t mean that radical website redesigns don’t still happen.
Sometimes ESR isn’t an option, because the decision has already been made by senior executives or the CEO.
Radical site redesigns are often the result of:
- ‘digital transformation’ or re-platforming
- an existing website not being mobile-friendly
- an outdated look and feel
All is not lost if you find yourself in this situation, though. There are things you can do to minimize the potential negative impact on performance that a radical site migration might have.
Using CRO Research in the Website Planning Process
ESR has been rightly called “conversion optimization on a large scale.” But even a radical redesign approach can incorporate CRO research methods.
By adopting conversion rate optimization methodologies you can plan how to iterate on your new designs as soon as you launch – and not lose control over the performance of your new site.
Just as SEO needs to be considered during the scoping and planning phases, the foundations for CRO need to be laid at the same time.
And there is great opportunity to use insights from an existing live site to inform the experiments that you can run as soon as the new site is launched.
Too many times we have seen companies address conversion only after the website has gone live. By then it’s too late, and the result is usually unwieldy design changes further down the track that could easily have been avoided. There’s a better way to handle these things.
A 10-Step Process for De-Risking Website Redesigns
You shouldn’t underestimate the importance of planning and prioritization in the overall CRO process.
In our methodology at Slipstream Digital, steps 1-8 are all about planning and prioritizing: gathering data insights, user surveys, user testing results, site drop-off reports etc. The actual testing and analysis of results constitute only the last two steps.
Scrimp on the data gathering at your peril.
In our experience the data gathering phase for CRO can sit nicely alongside keyword research for SEO and content in the pre-site build timeline.
In fact, many of the insights that this research yields are important for the site scoping process in general, for instance:
- Customer survey data
- UX reports
- User testing results
- Heuristic studies and research
- Page and form field abandonment data
- Live chat transcripts
- Heat mapping data
- Session replay recordings
This is all invaluable information that can feed straight into the wireframing/prototyping phase of the redesign project.
How This Approach Fits Into the Overall Redesign Process
Our recommended process can be seen in the two diagrams below.
The first shows the various steps within the scoping phase of the project, broken down by discipline (e.g. Search, CRO, content, UX).
The second outlines the development phase of the redesign project, with the CRO disciplines included throughout the timeline. This shows how experiment ideation and prioritization, tool set-up, QA, and implementation can happen at the same time as the development sprints.
Agree and Prioritize Your Areas of Focus
With so many options open to you, you have to decide which areas you’re going to focus on, and then prioritize them.
Considerations and questions you should ask yourself are:
- What is the primary objective of this website redesign?
- What are the most highly trafficked pages on the site?
- What are the highest value pages on the site? (these are often checkout pages and product details pages)
- What are the areas we already know that visitors have difficulties with?
Once you have identified the answers to these questions, you’ll be in a much better position to plan your attack.
Minimizing the Risk of a Disaster (plus a Case Study)
I have worked on website redesign projects where we essentially ran an A/B test between the old site and new, rather than releasing a beta version.
This is an effective way of de-risking the site launch, since the site can be served to a small proportion of the total audience, and issues can be rapidly identified and fixed.
An example of this approach was with the UK general insurance company Direct Line.
Most enterprise-level A/B testing tools (we used Adobe Target) give you the ability to scale up the size of the audience that you expose the new site to. So you can start with, say, 5% and throttle up on a weekly basis as bugs are resolved.
If the issues on the new site are major, there is always the option of ‘throttling back’ the new site to 0% (effectively switching it off).
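Under the hood, this kind of throttling usually works by bucketing each visitor deterministically, so the same person always sees the same version even as the percentage is ramped up. Here’s a minimal sketch in Python – the hashing scheme and visitor IDs are illustrative only, not how Adobe Target works internally:

```python
import hashlib

def serve_new_site(visitor_id: str, new_site_pct: float) -> bool:
    """Deterministically bucket a visitor into the new-site audience.

    Hashing the visitor ID means the same visitor always lands in the
    same bucket, so raising new_site_pct week by week only ever adds
    visitors to the new site - it never flips existing ones back.
    """
    digest = hashlib.md5(visitor_id.encode()).hexdigest()
    bucket = int(digest, 16) % 100  # a stable number from 0 to 99
    return bucket < new_site_pct

# Week 1: expose roughly 5% of visitors; throttle up as bugs are fixed.
exposed = sum(serve_new_site(f"visitor-{i}", 5) for i in range(10_000))
```

The key property is stability: throttling from 5% to 10% keeps the original 5% on the new site and simply adds more visitors to it.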
Direct Line saw a 0.4% increase in car insurance quotes and a 0.1% increase in home and pet insurance quotes.
While this doesn’t sound like a resounding success, it was actually a great result.
Remember that this exercise has two purposes.
- The first is to check that the site is functionally sound.
- The second is to guard against any significant drop in your primary metrics (such as sales conversion). Be aware that overall sales conversion may well drop at first, as customers get used to the new design and layout.
Your A/B testing tool should also be able to show you how the site is performing by segment. So if you have created a mobile-friendly website (which you should have done!) you’ll be able to view the performance metrics for all mobile visitors.
What Are the Success Criteria?
A measurement framework should underpin the objectives of your website, whether that’s the existing site or the new one. This framework is made up of the KPIs (Key Performance Indicators) that give you a clear idea of how well you are doing.
When you are testing the success of the old site against the new, it’s important that you don’t track too many KPIs.
I learned the hard way.
A few years ago I worked on a site redesign project where we were asked to report on all metrics within our measurement framework. The idea was to provide as detailed a report as possible of how we were tracking.
It quickly became an impossible exercise, though, as subtle differences in site design meant that in a number of instances we weren’t able to compare like with like…and the results were meaningless.
Focus on the primary metrics you usually report on. For an ecommerce business these might be:
- Sales conversion rate
- Newsletter subscription rate
- Average order value
Running Pre-launch Experiments
Once you are comfortable that the new website is performing at an acceptable level, it’s worth already having some initial experiments up your sleeve to test against the new control.
This effectively means you will be running A/B/C tests, with:
- A being the old site
- B being the new site
- C being the new site including a slight variation (thoroughly researched, ideated, hypothesized & prioritized).
Isolate areas that are known to be high impact, high importance, and focus on them first. These are usually the pages that have the most traffic, such as product pages or the homepage.
Again, focus solely on one or two primary metrics, since differences in page design and the user journey will make it hard to compare too many KPIs.
By this stage you should have a new website that is:
- Bug free and lightning quick to load.
- Performing at least as well as the old site from a primary conversion perspective.
- Already enhanced by some quick and easy improvements.
You’ll then be in a position to switch off the old website and focus solely on the new.
The sooner you can do this the better, as it removes the need for two separate sites and/or content management systems to be maintained.
Running Post-launch Experiments
It’s a good idea to have a shortlist of A/B test candidates at the ready for launch.
These should be tests that you feel will make the biggest difference in terms of conversion uplift.
Most companies will have far more experiment ideas than time to run the experiments themselves – so it pays to be choosy, and to prioritize.
You should have a fairly good idea of initial test candidates already. Score them by ease of implementation and potential revenue uplift (or a similar prioritization framework).
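As a simple illustration, an ICE-style score (impact × confidence × ease, each rated 1–10) is often enough to rank a backlog of test candidates. The experiment names and scores below are made up for the sketch:

```python
# Hypothetical test candidates scored 1-10 on each ICE dimension.
candidates = [
    {"name": "Simplify checkout form", "impact": 8, "confidence": 7, "ease": 5},
    {"name": "New homepage hero",      "impact": 5, "confidence": 4, "ease": 8},
    {"name": "Trust badges on PDP",    "impact": 6, "confidence": 6, "ease": 9},
]

# Multiply the three dimensions to get a single ICE score per idea.
for c in candidates:
    c["ice"] = c["impact"] * c["confidence"] * c["ease"]

# Highest score first: this is your testing roadmap.
ranked = sorted(candidates, key=lambda c: c["ice"], reverse=True)
```

Any similar framework (PIE, RICE, weighted scorecards) works the same way – the point is to make the trade-off between effort and expected uplift explicit rather than arguing it out in meetings.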
With general insurance websites, an effective way to estimate the potential value of an experiment was to take into account the data we already had about the profitability of our sales funnel.
We knew the value of each part of the purchase funnel in terms of average sales revenue (or gross written premium to be more precise), and could estimate the number of visitors who would be bucketed into the experiment based on analytics data, so we could get a decent estimate of the possible uplift.
The question we would ask was: If we were to see a nominal conversion uplift of 1% from this experiment, what would that equate to in terms of additional revenue?
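That back-of-envelope calculation looks something like this – all of the figures below are illustrative assumptions, not Direct Line data:

```python
# Back-of-envelope estimate of what a nominal 1% relative conversion
# uplift would be worth. Every figure here is an illustrative assumption.
monthly_visitors = 40_000      # visitors bucketed into the experiment
baseline_cvr = 0.03            # current funnel conversion rate
avg_revenue_per_sale = 250.0   # e.g. average gross written premium

baseline_sales = monthly_visitors * baseline_cvr     # 1,200 sales/month
uplifted_sales = baseline_sales * 1.01               # +1% relative uplift
extra_revenue = (uplifted_sales - baseline_sales) * avg_revenue_per_sale
# 12 extra sales a month, worth 3,000 in additional monthly revenue
```

Crude as it is, this gives every experiment idea a currency value, which makes prioritization conversations with stakeholders far easier.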
Take Baby Steps
You may find it useful to follow these ‘rules of thumb’ when you launch your first experiments:
- Consider just running one test at a time to start off with. You don’t want to have too many variables, and you want to clearly see which changes are having an impact.
- You might want to ring-fence a small proportion of traffic for this experiment too (e.g. 10%). This helps keep risk to a minimum, although you need to be sure to send enough traffic to the variant to make the experiment worthwhile and valid.
- Think very carefully about which changes are going to result in the biggest upswing in conversions, and focus on them. You want to minimize wastage as much as you can – of time, cost and effort – so it’s critical that your approach is as lean as it can be.
- Be very clear about how long you will run your initial experiments. If your website doesn’t get large volumes of traffic, you’ll need to run each test long enough to reach a statistically significant, valid result.
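To get a feel for how long “long enough” is, you can run a standard sample-size approximation before launching. This is the textbook two-proportion formula, not what any particular testing tool uses internally, and the traffic figures are illustrative:

```python
import math

def required_sample_size(baseline_cvr: float, relative_uplift: float) -> int:
    """Approximate visitors needed per variant to detect a relative
    uplift, two-sided test at alpha = 0.05 with 80% power.
    Standard two-proportion z-test approximation.
    """
    p1 = baseline_cvr
    p2 = baseline_cvr * (1 + relative_uplift)
    z_alpha = 1.96  # two-sided, alpha = 0.05
    z_beta = 0.84   # power = 0.8
    pooled = (p1 + p2) / 2
    n = ((z_alpha * math.sqrt(2 * pooled * (1 - pooled))
          + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
         / (p2 - p1) ** 2)
    return math.ceil(n)

# e.g. a 3% baseline conversion rate, hoping to detect a 10% relative uplift:
n = required_sample_size(0.03, 0.10)
weeks = n / 5_000  # at an assumed ~5,000 visitors per variant per week
```

Numbers like these are sobering: at a 3% baseline, detecting a 10% relative uplift needs tens of thousands of visitors per variant, which is exactly why low-traffic sites should test fewer, bolder changes.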
Track and Monitor Success as You Go
I mentioned above that you need to be selective with the metrics you use to track the success of your redesign project when you first launch your new website. You also need to do this on an ongoing basis, as you launch your initial A/B tests and incrementally improve the content and functionality.
However, depending on the type of tests you run, there will be different success metrics that you will need to measure.
It’s important to trace the objectives of your experiments back to the core business objectives of your website – and of your organization. If ancillary product penetration is a primary KPI for your company (as it was for us at Direct Line Group), then you need to make sure your experiments focus on this outcome.
It sounds obvious, but you would be amazed how often this gets missed.
My recommendation would be to use a test results document to log the results of each of the experiments you run.
This can be a simple spreadsheet that lays out, in chronological order, all the experiments you carry out in your testing program, with their results. It’s generally a good idea to include comments about the experiment, as well as agreed next steps.
It’s amazing how easy it is to forget these details down the track if they aren’t documented somewhere.
We would recommend that you include at least the following fields in the spreadsheet:
- Test ID
- Test description
- Success criteria (& maybe hypothesis)
- Experiment start and end dates
- Winning variant
- Comments, including uplift/downturn %
- Agreed next steps
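If you’d rather keep this log in plain files than a spreadsheet app, the same schema is easy to maintain as an append-only CSV. The field names mirror the list above; the example row is hypothetical:

```python
import csv

# Schema mirroring the recommended fields; one row per experiment.
FIELDS = ["test_id", "description", "success_criteria",
          "start_date", "end_date", "winning_variant",
          "comments", "next_steps"]

# A made-up example row for illustration.
row = {
    "test_id": "EXP-001",
    "description": "Simplified checkout form",
    "success_criteria": "Sales conversion rate",
    "start_date": "2024-03-01",
    "end_date": "2024-03-21",
    "winning_variant": "B",
    "comments": "+2.1% conversion uplift on mobile",
    "next_steps": "Roll out to 100%; test field ordering next",
}

with open("test_results.csv", "a", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    if f.tell() == 0:        # new file: write the header row first
        writer.writeheader()
    writer.writerow(row)
```

Appending in chronological order keeps the file doubling as the historical record the next section argues for.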
This has served as a great way for us to present results to stakeholders and the senior management team. However, you might also want to consider some data visualization to support it, as the spreadsheet format can come across as a little dry on its own.
Wherever possible we would advocate taking an existing website design and improving it iteratively. Push back on the urge to start from scratch, as performance will suffer as a result (especially if the decision to launch a redesign is based on nothing but gut feel).
If you are left with no other option than a radical site redesign, though, do your best to devise a conversion plan that can be adopted as part of the overall project schedule. This will help you to align your redesign to business goals, giving you the best chance of success (and the lowest possibility of catastrophic failure).