You’re running lots of tests? Great stuff.
Now, here’s a piece of the puzzle you may not have thought about: what is the most appropriate way to archive test results?
Surely, any mature organization can use insights from past test results as an indicator of where to go next. Trouble is, there is no single correct way to do this, and hardly anyone is talking about how to do it well.
Table of contents
- Why Archive Old Test Results?
- How Mature Organizations Archive Test Results
- Tools To Help You Out
- Limitations of Learning From Past Tests
Why Archive Old Test Results?
If optimization is, as Matt Gershoff put it, about “gathering information to inform decisions,” then part of that process is documenting what you’ve learned. In fact, this isn’t just applicable for test results. It’s important for qualitative research, such as customer surveys, and it’s equally important for anything that brings insight to your decision making.
Really, there are two main tangible benefits to archiving old test results. The first has to do with regular reporting and is more applicable to communication. As Manuel da Costa put it, “You have an audit trail of everything you have done for that client – so you can show them the value of your optimization efforts.”
The second is about knowledge management, supporting test ideas, and evolutionary learning.
You Need To See It To Believe It
A scenario you might be familiar with:
You’ve got the initial buy-in for a testing program (or you’ve just started working on a client’s site), and you’ve made some substantial lifts. You’re learning more and more about your customer base, and each test is bringing you more insight, which will lead to more revenue.
The problem: how do you communicate these results, clearly, to executives?
We recently published a great post by Annemarie Klaassen and Ton Wesseling that detailed their journey of visualizing A/B test results for clearer communication. You can (and should) read the whole post later, but for quick reference, here’s what they started with:
And where they ended up:
The more clearly you can communicate ROI, the more organizational buy-in you’ll receive, leading to a stronger testing culture.
Joanna Lord gave a great speech at CTA Conference this year, where she talked about fostering a better culture around optimization. Her third point homed in on the need for reporting, because, as she said, “you need to see it to believe it.”
At Porch, she says, the team holds weekly test roundups, and each report leads with insights, which rank above even revenue. As Joanna said:
So their reporting accumulates, and even tests from a year ago can bring insights to current test ideas.
Knowledge Management and Evolutionary Learning
Now there’s the second side of reporting: using the archives as a database of accumulated knowledge.
As Manuel da Costa from Digital Tonic put it, “Ultimately, documenting also serves as your own testing library that you can dip in and out of when brainstorming in other projects. It creates accountability and also helps maintain trust with the clients you work with.”
Martijn Scheijbeler, Director of Marketing at The Next Web, echoed a similar sentiment, placing emphasis on the fact that all of the knowledge can be put in one place where everyone can benefit from it:
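Whatever tool ends up holding it, an archive like this is essentially a collection of records your whole team can query later. As a rough sketch (the record shape and tags are my own assumptions, not any particular team’s schema), a brainstorm-time lookup can be as simple as:

```python
# Each archived test is a small record; tags make past insights
# findable when you're brainstorming new test ideas.
archive = [
    {"name": "Checkout trust badges", "tags": ["checkout", "trust"],
     "insight": "Trust badges near the pay button lifted completions"},
    {"name": "Homepage hero video", "tags": ["homepage", "media"],
     "insight": "Autoplay video hurt engagement on mobile"},
]

def find_tests(archive, tag):
    """Return every archived test carrying the given tag."""
    return [t for t in archive if tag in t["tags"]]

for test in find_tests(archive, "checkout"):
    print(test["name"], "->", test["insight"])
```

The specific storage (spreadsheet, Trello board, dedicated tool) matters less than the fact that old tests stay retrievable by topic.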
How Mature Organizations Archive Test Results
There’s no one way to do it. While one organization may prefer Excel and Trello, another may have a built in process complete with a custom tool to track all tests.
We mentioned Porch above, which spends Sundays documenting results, insights, and other pertinent information in a database of past test results. Though I’m not sure of the exact tools they use, it seems like a more manual process than at some other organizations.
That’s what’s interesting about archiving test results: there’s no correct way to do it. All that matters is what works best for the efficiency of your team.
The Next Web
The Next Web is a powerhouse in tech news, and their growth/optimization team is efficient. Here’s how Martijn Scheijbeler described their reporting and archiving process:
GrowthHackers
GrowthHackers.com recently outlined a growth study on how they began high tempo testing and how it revived their growth. This entailed three experiments a week, including new initiatives, product feature releases, and, of course, A/B tests.
What is high tempo testing? As Sean Ellis put it:
So as much as I hate the Ducks for beating Wisconsin in the Rose Bowl a few years ago, you can see how archiving results would be beneficial in their case. Running tests at this volume and velocity, it’s important to fuel your tests with as much insight as possible, so as not to waste any valuable time or traffic.
The above image is from GrowthHackers’ new tool, Canvas, which supports the whole process (including archiving results). Not only does it archive results, but also ideas, hypotheses, and more. This makes it easier for team members to extract insights from past tests, and it lets new members onboard quickly by reviewing what has and hasn’t been tested in the past. Here’s how Sean put it:
Here’s Manuel da Costa explaining how their reporting process has evolved:
So, in summary: in an effort to track everything and report it back efficiently, they created a tool that saves the time once lost to manual reporting with a patchwork of common tools.
Data for Decks
While this is more of a reporting solution for executive understanding, it’s also a solid way to archive test results for learning. Chris Tauber, chief analyst at Data for Decks, wrote a post on Monetate’s blog outlining a simple five-step reporting process that uses PowerPoint to explain results:
1. Capture full screenshots of “A” and “B.”
2. Highlight what’s being tested.
3. Align the hypothesis to the metrics.
4. Show only the metrics that matter.
5. Put these pieces on one, and only one, slide.
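The five steps above amount to one structured record per test. As a minimal sketch (the field names are my own, not Tauber’s), the same record could be captured in code before it ever reaches a slide deck:

```python
from dataclasses import dataclass, field

@dataclass
class TestSlide:
    """One archived A/B test, mirroring the one-slide format above."""
    name: str
    screenshot_a: str              # path to full screenshot of "A"
    screenshot_b: str              # path to full screenshot of "B"
    tested_element: str            # what's being tested (highlighted)
    hypothesis: str                # aligned to the metrics below
    metrics: dict = field(default_factory=dict)  # only the metrics that matter

    def summary(self) -> str:
        """One-line digest of the slide, for a quick archive overview."""
        metrics = ", ".join(f"{k}: {v}" for k, v in self.metrics.items())
        return f"{self.name} | {self.tested_element} | {self.hypothesis} | {metrics}"

# Illustrative example record (all values invented)
slide = TestSlide(
    name="Homepage CTA test",
    screenshot_a="shots/home_a.png",
    screenshot_b="shots/home_b.png",
    tested_element="CTA button copy",
    hypothesis="Action-oriented copy lifts signups",
    metrics={"signup rate": "+8.2%"},
)
print(slide.summary())
```

Keeping the record itself structured like this means the slide is just a rendering of the archive, not the archive itself.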
And of course, you can save these slides as a high-level overview of test results, and possibly combine this visual approach with other tools mentioned above (and more that I’ll list below).
Tools To Help You Out
Iridion
Built by konversionsKRAFT with the purpose of organizing the entire testing process, Iridion is a sophisticated tool for archiving test results. One of the benefits listed on their site is that the tool can “Record all of your test results in a constantly growing archive. Make sure that new team members immediately know what has been tested previously and how successful these tests were. Use these findings for follow-up tests.”
Here’s how Andre Morys describes the tool:
So Iridion is aimed at improving the quality of tests as well as workflow. As Andre told me, “I don’t share the idea of ‘high speed testing’ – high impact and success rate is economically much more important than high frequency.”
Effective Experiments
As mentioned above, Manuel da Costa built Effective Experiments to help with conversion optimization project management. Here’s how Manuel describes the tool:
So, it’s an all-in-one workflow tool that will make reporting and archiving much, much easier.
Trello & Excel
No one said archiving test results had to be fancy. In fact, Excel is probably (though I have no data to back this) the most common way organizations archive test results. Josh Baker wrote an in-depth post on how he documents A/B test results using Excel, along with what exactly he documents.
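If a full spreadsheet template feels heavy, even a plain CSV covers the basics. Here’s a minimal sketch (the columns are my own guess at what’s worth keeping, not Baker’s exact template):

```python
import csv
import io

# Columns are illustrative; adapt them to whatever your team actually reviews.
FIELDS = ["test_name", "start_date", "end_date",
          "hypothesis", "result", "lift", "insight"]

# One invented example row
rows = [
    {"test_name": "Pricing page layout", "start_date": "2015-03-02",
     "end_date": "2015-03-23",
     "hypothesis": "Fewer plans reduce choice paralysis",
     "result": "winner", "lift": "+5.1%",
     "insight": "Visitors compare at most three plans"},
]

# Write to an in-memory buffer; point this at a real file in practice.
buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=FIELDS)
writer.writeheader()
writer.writerows(rows)
print(buffer.getvalue().strip())
```

A file like this opens directly in Excel, which keeps the barrier to starting an archive close to zero.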
We use Trello for certain projects at CXL. It’s also possible to use a combination of Trello, Excel, and, say, the Data for Decks PowerPoint example above, which will give greater visual clarity to non-optimization team members and executives.
Your Testing Tool
There are a multitude of ways to integrate your documentation process with your testing tool. Here, Leonid Pekelis from Optimizely explains:
There’s a whole discussion on Optiverse about archiving test results. Read it if you’re looking for ideas for your own organization.
VWO also has ways to archive test results. Here’s Paras explaining:
Limitations of Learning From Past Tests
Archiving past results, and particularly managing and analyzing them, is time consuming. As with any time investment, you’d hope the ROI is positive. One of the main questions you’ll ask yourself when it comes to learning from past tests is, “How relevant are the learnings from last year’s tests?”
Martijn Scheijbeler says that, though there are some limitations to learning from past tests, in general the benefits outweigh them. Here’s what he had to say:
Manuel da Costa agreed, mentioning that learnings from past tests are valid, yet they have to be taken with a grain of salt due to external validity factors:
Seasonality, traffic sources, PR, and other external factors are things you need to worry about no matter what; it’s not just in analyzing past results that they matter. If you note these factors in your reports, you can account for them in your analysis.
Steven Pesavento doesn’t see these things as ‘limitations,’ necessarily. Even though a channel or tactic may change, learning from past tests is a necessity for the GrowthHackers team:
Archiving test results is important because it allows for clearer reporting and communication, and because it gives you a knowledge database from which you can extract insight.
However, unlike A/B testing statistics, the rules of execution are bendable when it comes to archiving results. There is no one way to do it, and most mature organizations do it just a little differently. As long as you’re tracking the right data, the data that is pertinent to your growth, then the method by which you do so is of secondary importance.
Some have developed sophisticated in-house tools to solve the problem, some use their testing tool, some purchase external tools, and some are still using good ol’ Excel. In the end, it’s up to you and what works best for your team.
Since this article is more of a discussion than a how-to, I want to ask: how does your team document and archive test results? What kind of struggles and bottlenecks do you face in the process?