At TelTech, it took our product-marketing organization more than a year to get to something that resembled a true growth team running high-tempo testing.
So, if you are struggling to implement the growth hacking methodology, I get it. We assembled a team, achieved product-market fit, and identified our growth levers, but got stuck when we tried to put process behind our testing.
If you’re at a similar stage in your development, you’ll probably get stuck there too. Eventually, we found some practical methods to help us succeed.
You can benefit from some of the things we learned along the way:
- You can’t get around it; you have to be relentless in your efforts to drive growth.
- Developing documents and practices to organize your testing process can really help.
- You can’t be stupid about data; you need the right data people and tools to grow.
Don’t substitute passion for practical solutions – you need both!
Being relentlessly driven in our growth efforts was the easy part, and if you need a pep talk in this area, you're probably not the cheerleader for growth that your team needs.
Blog posts, speakers at conferences, and anecdotes in books often make the growth process sound formulaic and easy, but it’s hard and you know it!
If you are not going to put your head down and work at it every day, none of this is going to help you, so I'm not going to focus on that.
What will help you is organizing tests around a process that forces you and your team to make sure the fundamentals of your experiments are sound.
For us, “vision documents” cemented the consistent process we desperately needed to get out of our own way.
In a bit, I will explain what a vision document is, how to get value from it, and all the things it can do to ensure that you start with a proper hypothesis, success metrics that can actually be measured, and a solid plan for getting something valuable from each test.
Don’t substitute anything for professional data analysis
Even if you are disciplined with documentation and structure, you can’t substitute that for the data analytics skill set you need to run the growth hacking process.
There’s a huge chasm between aspiring to be data-driven and actually being data-driven. If you don’t know the difference between predictive and inferential analysis, you aren’t a data analyst.
That skill set is not optional in the growth methodology, so you need to either hire, borrow, or kidnap the right people, and appropriately leverage them as part of your team.
Develop tools that force your team to organize around growth
You need a system to manage and prioritize tests, and you should consider purpose-built tools like GrowthHackers Projects once your process has matured.
Managing the experiment flow is secondary to defining the experiments themselves in meaningful ways that you and your team can rally around. For that, you need your own tool, and for us, that tool is a vision document.
Before we had vision documents, we defined experiments on cards in our product management tool (probably Trello at that point).
We tried a couple of different formats, but mostly we focused on a summary, start and end dates, and what we thought at the time was a hypothesis.
The problem was that this was good for moving the test around a board, but not for properly planning experiments.
Learn to ‘speak in hypotheses’ that will shape your experiments
We just didn’t have a good hypothesis format to follow, which meant that most of our tests were just exploratory in nature: “By changing X, we expect to see something happen.”
That may sound kind of stupid to you, but take a look at your tests and see if your hypotheses are really any better.
The problem is that with this kind of lazy hypothesis something might happen, but even if it does, you may not be able to really learn anything from it.
When I attended CXL Live 2015, Michael Aagaard of Unbounce defined a good hypothesis as, “By changing [element] into [variation], I can get more prospects to [take action], and thus increase [metric],” and for me, that was a pivot point.
Aagaard suggested that this would force you to know what you were changing, how it would affect users, and the impact it could have. It did that for sure, but two other positive things came out of it as well:
1. We began to see where our data collection and analysis were too deficient to support a proper test. This was especially evident in measuring app installs from web to mobile, then tracking those installs to conversions. The hypothesis format forced us to address these weaknesses, and ultimately we turned to solutions like Segment.io and Mode Analytics to address these problems.
2. We stopped shaping results to fit our narratives and started shaping our narratives to fit our results. Before we adopted this hypothesis format, we mistakenly believed that our marketers and product owners could own the data analysis process. But this format makes it obvious when you are shaping, instead of measuring, results. That is when I knew I had to hire data analysts, which I will discuss momentarily.
If you start “speaking in hypotheses” that fit Aagaard’s format, you and your team will challenge each other to start experimenting only after you know you have the data available to answer the questions at hand.
You will force yourselves to define what success should look like, and from there you can build a roadmap of where you want to go based on the results.
This made a huge difference for us, especially when we started codifying the answers to those questions as success metrics within our vision documents.
A good hypothesis informs good success metrics
Here’s an example of a hypothesis we used for TrapCall (our app which unmasks blocked caller ID), that helped us properly set up an experiment:
“By changing TrapCall’s test call into a live practice call, users will learn the call unmasking process earlier in their journey, thereby reducing short-cycle cancellations by 75%.”
We knew there were many things we could learn from testing this hypothesis: does a practice-call education process reduce immediate user cancellations?
Would a better onboarding experience improve month-one retention? Could the improved onboarding process reduce tech support tickets?
From there, we started challenging each other to see which of these questions were actually measurable and whether our data could support that learning. We then were able to formulate coherent and measurable success metrics, such as:
1. Reduction in cancellations will meet or exceed our 75% goal.
2. Month-one retention will improve by 4–7%.
3. We will see a 9–12% reduction in tech support tickets.
Good hypotheses inform good success metrics, and together they’re the foundation of a solid vision document. We keep our vision documents to just over a page, and we require them for every test (within reason). The information includes:
1. A Summary of the Experiment — Limited to one or two short paragraphs.
2. Hypothesis — Following the formula described above.
3. Success Metrics — Always tied to specific numbers and vetted by our data analysts.
4. Timeframe — An ETA to get started and an estimation of how long the test must run.
5. Tasks — High-level project needs that help each stakeholder know their role.
6. Notes — Usually questions that need to be thought about before the test starts.
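To make the outline above concrete, here is a minimal sketch of a vision document expressed as a data structure. This is purely illustrative; the field names and example values are my own assumptions, not a prescribed format, and the example hypothesis echoes the TrapCall test described earlier.

```python
from dataclasses import dataclass, field

@dataclass
class VisionDocument:
    """One-page experiment plan; field names here are illustrative."""
    summary: str                # one or two short paragraphs
    hypothesis: str             # follows the "By changing X into Y..." formula
    success_metrics: list[str]  # always tied to specific numbers
    start_eta: str              # when the test can begin
    duration: str               # how long the test must run
    tasks: list[str] = field(default_factory=list)  # high-level stakeholder roles
    notes: list[str] = field(default_factory=list)  # open questions before launch

# Hypothetical example based on the TrapCall experiment above
doc = VisionDocument(
    summary="Replace TrapCall's test call with a live practice call during onboarding.",
    hypothesis=(
        "By changing TrapCall's test call into a live practice call, users will "
        "learn the call unmasking process earlier in their journey, thereby "
        "reducing short-cycle cancellations by 75%."
    ),
    success_metrics=[
        "Cancellations reduced by at least 75%",
        "Month-one retention up 4-7%",
        "Tech support tickets down 9-12%",
    ],
    start_eta="next sprint",
    duration="4 weeks",
)
```

Whether you keep this in a doc, a wiki, or a tracker card matters less than making every field mandatory before a test starts.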
When your whole team is working around vision documents, everyone has a stake in making each test a success.
Getting everyone involved in the testing process makes it work
The growth hacking methodology is as close to a prescription for growth as you’re going to find, but getting all the pieces to fit together is much tougher than it sounds if your growth organization is still maturing.
Rigorously adhering to vision documents helped us make our tests more viable. In many cases, simply by getting us all talking about the tests, they helped reduce their scope from full-scale development projects to simpler MVP-style tests.
However, vision documents alone didn’t make our high-tempo testing efforts successful.
Be prepared for the obstacles between you and high-tempo testing
We really struggled with testing cadence and consistency. If we had three tests planned, inevitably we ran into issues:
- One might get delayed because we had to restart it after a week when we realized we had failed to properly set up a goal on our testing platform or consider a database attribution limitation.
- Another might not net results as quickly as expected because we forgot to consider how few people saw a particular screen in a given time frame.
- And one might fail because an engineer unilaterally made a decision. This happened to us once while testing a free one-week upgrade for users that was supposed to start three days prior to a user’s renewal. Because it was an easier database change, the engineer started the test eight days before the user’s renewal date and completely screwed up the test.
Vision documents definitely helped reduce some of these issues, but getting tests off the ground was always tricky. It would be easy to suggest simply lowering the bar, but in reality, if you want to grow, you can’t limit your testing to button and color changes. You have to be able to do bigger, more meaningful tests, too.
Everyone must embrace their role in growth
What helped us get past the inconsistency was getting our whole team – from product owners and marketers to designers and developers – more involved in the growth process.
When I started building a growth team, Sean Ellis told me, “The most important thing you can do is to help everyone in the organization learn their role in growth.” That has definitely proven itself to be true.
Getting people on the team to own tests that they themselves are passionate about drives individual growth that is contagious within our organization.
When testing is one person’s responsibility, tests become assignments, but when testing is everybody’s responsibility, tests become a team sport.
Now we have data analysts leading pricing tests, product owners driving feature experiments, and everyone collaborating to make each test a success. This has had a profound effect on our testing cadence and consistency.
Build your data approach around team education
Just as everyone has a role in growth, everyone has a role in making sure your organization is data-driven.
But not everyone is a data analyst, and until we invested in that skill set and hired two of them, we were just scratching the surface of experimentation.
We all believed in the value of data, and we all tried to make data-driven decisions, but we were amateurs. Fortunately, when we got serious about hiring data specialists, we did two things right:
1. We said that data wasn’t going to be one person’s job. The right data person would be someone who would teach us all how to better integrate and use data.
2. We made our data analysts a part of the testing process from end-to-end. We decided that data analysis would begin at the hypothesis stage of our process.
The only way testing will provide any value is if it asks a question that can be quantified. Data analysts are really good at telling you if you have the tools to answer the questions you are asking before you waste time and resources.
Once tests are in progress, data analysts become instrumental in ensuring that experiments stay on track and run to statistical and logical completion. Finally, if all goes well, they can help you account for both expected results and unintended consequences.
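To illustrate what “running to statistical completion” means in practice, here is a minimal two-proportion z-test sketch using only the standard library. The conversion counts are made-up numbers for illustration, not our actual results, and a real analyst would also account for sample-size planning and peeking.

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # pooled rate under the null hypothesis of no difference
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical example: cancellations in control vs. variant
z, p = two_proportion_z_test(conv_a=120, n_a=1000, conv_b=30, n_b=1000)
significant = p < 0.05  # checked only once the planned sample size is reached
```

The key discipline the analysts enforced was the comment on the last line: a test stops when it reaches its planned sample, not the first time the p-value dips below the threshold.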
The benefits of employing data analysts in this manner are most evident for us with pricing tests. Pricing is complicated when you have multiple plans, durations, and set-up fees. Before hiring data analysts, we were guessing.
We could see the short-term effect on revenue or signups when we made a change, but we couldn’t see the effect on other variables such as lifetime value.
Our data analysts helped design tests and employ machine learning tools to identify how any one lever affected another, and this has helped us optimize pricing more effectively than we thought possible.
Don’t try to do it in pieces – keep adapting
Part of what makes implementing the growth methodology so difficult is that it’s a bit of an all-or-nothing proposition.
Since it’s an organizational mindset and structure, you can’t just do a little bit of testing, or have part of your team “trialing” the concepts. You have to get a full buy-in, get your team excited about the vision, and introduce the tools and ideas before you get going.
Of course, with so many things being implemented simultaneously, your own process will inevitably break down, especially in the beginning.
Relentless drive and passion for growth do help, but to get through the breakdowns you’ll have to do what we did: constantly adapt and change based on where your team is effective or deficient.
What I’ve described here represents some of the larger corrections we had to make along the way to find success. By applying our learnings to your organization, you’ll likely streamline the process and build a true growth team that runs effective high-tempo testing.