Last week, 400+ data-driven marketers from 20+ countries met on the outskirts of San Antonio, Texas for CXL Live 2017. Everyone, including the 20+ speakers, stayed together at the same secluded resort for three full days.
People began arriving in sunny San Antonio…
Made it to San Antonio! Excited for #cxllive (and also lots of ☀️). https://t.co/yTA10TjIUw pic.twitter.com/g46Kb7tHF7
— Nicole Mintiens (@Nicole_Mintiens) April 5, 2017
Getting ready for #CXLLive! pic.twitter.com/GkEfQQyB75
— Sara McGuire (@sara_mcguire) April 5, 2017
Just got to #cxllive! And now to stand around awkwardly until I see someone I kind of know…
— Misha Hettie (@uncommonlymisha) April 5, 2017
Michael Aagaard, our event emcee, quickly kicked things off in style…
Getting ready to kick off #CXLLIVE – This. Is. Going. To. Be. Awesome! pic.twitter.com/T2OjzFm2U8
— Michael Aagaard (@ContentVerve) April 5, 2017
On day one, we heard from UX and digital analytics experts. Day two was all about optimization / testing strategies and processes. (Featuring a classic game of CRO Feud.) Finally, day three covered all things customer success and growth.
Overall, it seems like people had a good time…
Thank you to the entire team @conversionxl for an amazing #CXLLive! See you next year. pic.twitter.com/cNzSJ0JPWN
— Joe Martin (@hijoemartin) April 7, 2017
Thanks, San Antonio and #CXLLive: one of the best conferences I’ve attended in a while. I am a believer. #cheers
— James Valentine (@ValentineJames) April 8, 2017
Flying back to SF after a great week in San Antonio for #CXLLive, great event, even better people. Thanks for the fun!
— Krista Seiden (@kristaseiden) April 7, 2017
Thank you to everyone who made this year’s event our best yet! To those who couldn’t make it, we hope you’ll join us next year. But, in the meantime, here’s a recap of all of the CXL Live 2017 talks.
(We’re still waiting on the official photos, but you can take a look at attendee photos on Twitter. Videos of the 20+ talks will be available to CXL Institute subscribers in about a month.)
Jared Spool: Is Design Metrically Opposed?
- Are we measuring the right things?
- It’s not which variant collects more email addresses, it’s which variant sells more books.
- Email addresses are observations.
- The meaning we bring to the observations is an inference.
- Observations -> Inferences -> Design Decisions
- What we saw, why we think it happened, what we will do based on that info.
- You may have one observation and then multiple inferences, which would result in multiple design options. So, you need to know which inference is correct.
- The best designers never stop at the first inference. “What else could it be?” Conducting additional research adds observations, which can help clarify the correct inference. “Research turns inferences into observations.”
- Useless measures and silly metrics.
- Metric: A measure we track (usually over time).
- Measure: Something we can count.
- Analytic: A measure software can track.
- E.g. Average time on page; we don’t know how it’s collected, we don’t know exactly what it’s measuring, it doesn’t tell us anything… but we still use it.
- A metric should tell you what you will do differently.
- Conversion rate is a ratio and ratios are a huge problem.
- 10K / 1M = 1%. It could become 2% with 20K / 1M or with 10K / 500K. Are both created equal? No, the first 2% is more valuable in terms of revenue. (See the sketch at the end of this section.)
- We don’t know how the systems we use calculate conversion rate. If a person visits four times and purchases on the fourth visit, is that 100% or 25%?
- Measure experience.
- Map the steps a customer takes to complete a task (e.g. book a hotel), then chart the customer’s happiness / frustration at each step. That’s where ideas for what to do differently come from.
- Make a list of things the user is finding frustrating.
- E.g. People find error messages frustrating. “Phone numbers can’t have dashes or spaces.”
- Combine qualitative usability research and quantitative custom metrics (not metrics that come out of the box).
- Who measures?
- Design: The rendering of intent.
- Design must drive metric collection, not the other way around.
- Moving beyond satisfaction.
- Gallup CE11: 11 statements that you agree with, disagree with, or are unsure about. E.g. “I’m always proud to be a customer.” and “I can’t imagine a world without this product’s company.”
- Take the survey at the beginning and the end.
- Metrics must drive us to eliminate frustration and deliver delight.
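To make the ratio problem concrete, here’s a minimal Python sketch of Spool’s book example; the book price and the visit counts are hypothetical stand-ins, not numbers from the talk:

```python
# Two "2% conversion rates" with very different revenue implications,
# plus the per-visit vs. per-visitor ambiguity. Numbers are illustrative.

BOOK_PRICE = 25.00  # hypothetical average order value

def conversion_rate(conversions, visits):
    return conversions / visits

baseline = conversion_rate(10_000, 1_000_000)   # 1%

# "Improvement" A: conversions doubled, traffic unchanged.
variant_a = conversion_rate(20_000, 1_000_000)  # 2%
# "Improvement" B: conversions unchanged, traffic halved.
variant_b = conversion_rate(10_000, 500_000)    # also 2%

print(f"A: {variant_a:.1%}, revenue ${20_000 * BOOK_PRICE:,.0f}")
print(f"B: {variant_b:.1%}, revenue ${10_000 * BOOK_PRICE:,.0f}")
# Same 2.0% rate; A earns twice the revenue of B.

# The definition problem: one person visits four times, buys once.
per_visit = conversion_rate(1, 4)    # 25%
per_visitor = conversion_rate(1, 1)  # 100%
print(f"Per visit: {per_visit:.0%}, per visitor: {per_visitor:.0%}")
```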
Anna Dahlstrom: Using Storytelling to Craft Multi-Device Experiences That Convert
- The problem is that we focus on trying to fix things vs. trying to architect them.
- CRO usually starts at the bottom of the funnel, with measurement, and when it sits only at the bottom of the funnel, it encourages silos.
- The entire funnel is all about asking and finding the answers to the same fundamental questions. Who? What? Why? How? When?
- 7 principles of good storytelling: plot, character, idea, speech, chorus, decor, spectacle.
- Plot
- Dramaturgy: knowing how to apply and structure elements to tell a story.
- Three act structure: setup, confrontation, resolution.
- In act one, the customer starts to consider. In act two, they look further into it. In act three, they buy (or not).
- Use dramaturgy to help define and visualize the plot of the experiences we design.
- Characters
- The protagonist, the main characters, the supporting characters, sub-plot characters, one string characters.
- Traditionally, we think about end users. But it’s also about the different devices we could be using and the role those devices will play. Voice, bots, machine learning, etc.
- Identify and define the different characters that play a role in the experience and what role they play and when.
- Decor
- Everything happens for a reason in a good story… the “red thread”.
- We no longer control the journey. We don’t even control the messaging.
- Define the ecosystem. What’s the role for each? Are they owned, earned or paid? What’s the messaging for each? Where do they drive to? How are they connected?
- Map, define and design the environment in which the experience that we create takes place.
- Everything must work together, every part of the business.
- Be clear and accurate from the start. Everyone should be telling the same story. Don’t focus on fixing.
Abby Covert: How to Make Sense of Any Mess
- Information doesn’t equal content. Information architecture is how we arrange the parts to be understandable as a whole.
- Language matters.
- How many duplicative nouns does your team deal with?
- How do those nouns relate to each other and how might users perceive them?
- What about verbs? Once the nouns are sorted out we can start looking at the verbs.
- The goal is not to simplify, the goal is to know what you mean when you say what you say. Talk about language.
- There’s no right way.
- There’s no right way to organize your words. The only thing that matters is reaching your goals.
- Show an alternative way of organizing something. This will help you break out of the “same old, same old” model.
- We need pictures.
- Take the time and space needed to collect and iterate.
- When visualizing something that’s very hard to explain, be careful of reductionism.
- Show the process, not just the result.
Jaime Levy: Conducting Methodical Guerrilla User Research
- Traditional usability research is lame.
- Typically happens at the end of the product lifecycle.
- In an uncomfortable format… in a usability lab with nerds watching over your shoulder.
- Planning Phase (2 Weeks)
- What’s the most important thing you need to learn?
- Prepare the questions.
- Scout the venue and map out logistics.
- Find participants.
- Screen the participants and schedule time slots.
- Interview Phase (1 Day)
- Stop them at the door and bring them straight to the cafe table, for example.
- Make them feel comfortable.
- Pay them upfront so they don’t tell you what they think you want to hear in order to get the money.
- Extract succinct notes in real time instead of recording. Have a dedicated note taker.
- Analysis Phase (1-4 Hours)
- By this time, you’ll often already know the answer to your most important question.
- Look at the answers across the grid to spot patterns.
- Validated user research is about getting measurable results.
Karl Gilis: How to Make Sure Your New Website Won’t Be a Failure
- Is a redesign always a bad idea?
- Redesigns can work as long as you conduct research.
- When a redesign is just a redesign, don’t do it. It’s not the look that’s the problem, it’s the content / the value prop.
- When do you need a new website?
- You don’t have a website.
- Your site sucks donkey balls.
- Content or structure are a disaster. Sites are huge, layers and layers of content.
- Crappy or outdated technology. FrontPage, Flash, etc.
- You’ve hit a local maximum.
- What method do we use to make sure it doesn’t fail?
- Analyze data to find problem areas.
- User research to find out why it’s a problem.
- Find a solution and implement / test.
- Measure and refine.
- Look at the least visible pages. Don’t be afraid to prune your site.
- Heatmaps, session replays, etc.
- Observe users using your site. Watch where they succeed, where they fail, etc.
- 1-3 question targeted surveys.
- Don’t change what’s working. Only change what needs to be fixed.
- Structure, mockups, lots of user testing.
- Test your navigation.
- Most people who create mockups don’t use real words. Use real text and real images. Make it as close to the final version as possible.
- You can do click testing on your mockup.
- Don’t do what your clients want, don’t follow trends (like big banners).
Krista Seiden: Best Practices for Testing, Adapting, and Personalizing the User Experience
- Site content and personalization matter. 89% of companies competed solely on customer experience in 2016.
- Personalization at mass scale.
- If you understand who is coming to your site in the first place, you don’t actually need to do that much work to personalize your site.
- Local-level personalization.
- Preferences, intents, etc. change based on location and culture. You can personalize just by understanding the markets.
- Personalizing at the traffic level.
- E.g. Message match / headline match from PPC ad to landing page.
- Action-level personalization.
- E.g. Someone who hasn’t purchased an event ticket gets: “Sorry, sold out.” Someone who did purchase gets: “Looking forward to seeing you!”
- Personalizing in real time.
- E.g. Someone is very interested in a particular product. They’ve been to the page three or four times, but not purchased. Now, when they return to the site, they get new personalized copy.
- Personalizing with Google Optimize.
- Inject code to create popup offers.
- Identify key segments of users to target. E.g. location, loyalty.
- Create a unique offer and target those groups. (See the sketch at the end of this section.)
- Insert line of code.
- Then, they trigger the targeting rule and get a personalized offer in a popup.
- Use GA Audience for remarketing and targeting.
- The same audiences you create remarketing campaigns for can be created and used for targeting in Google Optimize (GO).
- E.g. People like cameras, but haven’t purchased one. You can remarket to them using audiences. Then you can use that same audience for GO targeting.
- Combine remarketing + personalization for best results.
- Get more out of reporting in GA.
- The experiment ID (eID) is automatically sent to GA with every GO hit.
- Segment, or add secondary dimensions (e.g. variant), in your reports.
- Create audiences and segments based on previous test behavior, target to future test experiments.
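As a rough illustration of the targeting logic Seiden describes (not Google Optimize’s actual API, which is configured through its interface and a page snippet), here’s a hypothetical Python sketch of segment-to-offer rules; every rule and offer below is invented:

```python
# Map key segments (location, loyalty, recent behavior) to a unique
# offer; fall back to the default experience when no rule matches.

def pick_offer(user):
    if user.get("loyalty_tier") == "gold":
        return "Early access to the spring sale"      # loyalty segment
    if user.get("country") == "DE":
        return "Kostenloser Versand ab 50 EUR"        # local-level message
    if user.get("product_page_views", 0) >= 3 and not user.get("purchased"):
        return "Still thinking it over? Here's 10% off."  # real-time rule
    return None  # no personalization: show the default experience

print(pick_offer({"loyalty_tier": "gold"}))
print(pick_offer({"product_page_views": 4, "purchased": False}))
```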
Michele Kiss: Mastering Analytics for Optimization Success
- Hypothesis writing. Ask people to phrase it: “I believe that ___. If I’m right, then I will ____.”
- Data can help you prioritize.
- Create a revenue forecast before you even run the experiment.
- Layer on your assumptions to estimate the impact. You can use your existing data: current traffic, conversion to product pages, % that view images, current conversion rate, current conversion rate for image viewers. (See the sketch at the end of this section.)
- Template.
- Better control the uncontrollable. Statistical significance is not enough.
- Sometimes you can reach significance in a few hours, and the results can still change.
- Problem #1: Seasonality. Forecast what you expect your results plus seasonal curves to look like. E.g. Black Friday.
- Problem #2: Drift. You might repeat your tests to ensure your results hold true.
- Ever doubted your testing tool? E.g. users falling into multiple test variations.
- As much as we don’t like it, some things will not be tested.
- Confirm the validity of test results.
- E.g. bots get into a variation and throw off the results. If you are not monitoring, you might miss that and call the wrong winner.
- Run a year-over-year test. 2015 site vs. 2017 site. What’s the lift?
- Analytics and testing should live together in an organization. It allows for individual expertise, avoids duplication of work, aligns priorities.
- Summary: Find opps, estimate impact, dig into results, take testing further.
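Here’s a minimal sketch of the pre-test forecast idea, layering hypothetical assumptions onto equally hypothetical baseline data:

```python
# Estimate revenue impact before the experiment runs by layering an
# assumed lift onto existing funnel data. All inputs are invented.

monthly_visits = 500_000
to_product_rate = 0.40     # share of visits reaching product pages
image_view_rate = 0.55     # share of product visits that view images
cr_image_viewers = 0.050   # current conversion rate, image viewers
cr_everyone_else = 0.020   # current conversion rate, everyone else
aov = 80.00                # average order value

assumed_lift = 0.10  # hypothesis: new gallery lifts image-viewer CR by 10%

viewers = monthly_visits * to_product_rate * image_view_rate
others = monthly_visits * to_product_rate * (1 - image_view_rate)

baseline = (viewers * cr_image_viewers + others * cr_everyone_else) * aov
forecast = (viewers * cr_image_viewers * (1 + assumed_lift)
            + others * cr_everyone_else) * aov

print(f"Baseline revenue: ${baseline:,.0f}/mo")
print(f"Forecast revenue: ${forecast:,.0f}/mo")
print(f"Estimated impact: ${forecast - baseline:,.0f}/mo")
```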
Joanna Wiebe: How to Be Specific: From-the-Trenches Lessons in High-Converting Copy
- Copy is your online salesperson. If the copy won’t sell them, what will? Copy either sells or it doesn’t. It’s not about 10 second copy or 15 second copy.
- “Long copy” doesn’t mean “long form sales page”. There’s a time and place for long form sales pages. Just give your copy some breathing room.
- Images shouldn’t do all the heavy-lifting. Neither should video.
- Don’t cram all of your copy into an unreadable lump.
- Do not summarize everything in as few words as possible.
- What happens if you try to convert just one person and forget the copy restriction rules?
- There are so many messages every day that people struggle to find something relevant to them.
- 1986: 2K ads a day. 2016: 5K ads a day. We recognize 50. We remember 4. 10 second copy makes you one of the many, many forgettable ones. You and your business are not average.
- Zooming in (we’re currently at summary level).
- It’s not about long vs. short. You can get more specific without changing length.
- Try zooming in on the pain you’re solving. E.g. hero image is pain-free vs. hero image is pain-filled.
- PAS framework: problem, agitation, solution.
- This works on email, too.
- Good email copy needs to stop a busy person from doing whatever they were doing before.
- In almost every case, there was 2-3X more copy in the winning variant. People will read it.
- “I’m not afraid to read. Just don’t make me figure it out.”
- Zoom in, test your words, they’re free and in unlimited quantities.
Andrew Anderson: The Discipline-Based Testing Methodology
- Echo chambers. What do we really know?
- Can you prove that it wasn’t the alien space lizards?
- We find reasons to believe something.
- Changing the conversation.
- Hypothesis – BETA (No more hypothesis because that’s what you believe.)
- Justification – Activity (The goal is to create activity.)
- Sacred Cows – Must Challenge (Whatever it is, this is the first thing you want to challenge.)
- Review – Trust Your Users (Any marginal change you make, you don’t know if it’s good or bad.)
- Discomfort – Success (You need to challenge your team to move beyond their comfort zone.)
- You are the least important part of the process.
- Simple rules.
- All test ideas are fungible.
- More tests does not equal more money.
- It is always about efficiency.
- Type 1 errors are the worst possible outcome.
- Plan the tests around the resources.
- No storytelling. What do you really know?
- Maximize your outcomes, don’t maximize feeling good.
- Framework for framing the discipline.
- Never test fewer than 4 experiences.
- Prioritize by the number of options and the beta of the options, not by confidence.
- Make sure as many experiences as possible make you uncomfortable.
Chris Goward: The Better Way to Optimize: How to Get Business-Impacting Insights from Your Growth Program
- Combine the brand marketing manager and the data analyst. They’re the Yin and Yang. Optimization champions are zen marketing masters.
- LIFT Model
- Value proposition.
- Relevance – to the client’s needs right now, external environment and seasonality.
- Clarity – copy, communication of the value proposition.
- Anxiety.
- Distraction.
- Urgency.
- Use this model to analyze each page of the site systematically.
- Cognitive biases can help you with the LIFT Model.
- Design your experiments so you maximize for growth and insights.
- Tips:
- Senior-level buy-in is critical.
- You need to track the right revenue goals.
- Trust the process.
- Don’t be afraid of the left-field variations.
- Analyze results for personalization insights.
David Nye: How to Manage a Large-Scale Testing Program
- 2008 process.
- Idea -> Build -> Test -> Analyze -> Finalize
- Test ideas came from HiPPOs, tests were often broken, development cycle was 6 weeks, only one analyst.
- Needed investment in a “Test and Learn” program.
- Arguments: Manage risk through testing, ROI, internal competition, prioritization, knowledge building.
- To upgrade, needed: people (more resources), process (decision makers, understanding value of the program), and tools (pipelining and process management).
- Lessons learned.
- Measure the health of your program. Save tests and results, monitor conversions, tag tests, report on results.
- Tools used: JIRA, SiteSpect, Apptimize, Adobe Analytics, Amazon Web Services.
- 2017 process.
- Observe -> Investigate -> Insights -> Hypothesis -> Experiment -> Conclude
- Problems: Calling tests early, optimizing the optimization program itself, finding time to retest, investing in data quality, learning to test everything.
Julie Grundy: UX of Form Design: How to Design an Effective Form
- Why do we care about forms?
- Web forms are often the last and most important mile in a long journey.
- How much can a bad form cost you?
- Expedia? $12M.
- #1: Reduce cognitive load.
- Top-align labels, put labels close to fields, avoid all caps.
- Don’t use labels as placeholder text.
- Take caution with floating labels (label is in-field and then floats to above the field when you start to fill it out).
- Put the primary action on the right. Different styles for primary and secondary actions.
- Your brain can process about five items in a list at once. If you have 6-25 items, put them in a select dropdown. More? Search dropdown.
- Show progress. Make sure it doesn’t look like navigation.
- #2: Help prevent errors.
- Specify errors in-line and validate per field. (See the sketch at the end of this section.)
- Helpful messages.
- Show password option helps, especially on mobile.
- Leave select lists blank; static default values are easy to skip. Smart fields are ok.
- Show basic help text underneath the input box.
- #3: Make it human.
- Clear, simple language and CTAs. Form text should be clear, explain what to do and be written for the user.
- Don’t make people work harder; limit typing.
- Make forms accessible. All users have equal access to information and functionality. Better for everyone.
- Ask only if you need to.
- Make each question as engaging as you can.
- Consider the context. Empathize, picture the user when planning the form, situations matter.
- Field length should match estimated text size.
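As one illustration of error prevention, here’s a small Python sketch that normalizes phone input instead of rejecting dashes and spaces (the error message Spool mocked earlier); the 10-digit US format is an assumption for the example:

```python
# Strip formatting the user typed and validate only what matters,
# instead of erroring with "Phone numbers can't have dashes or spaces."
import re

def normalize_phone(raw):
    digits = re.sub(r"\D", "", raw)        # drop dashes, spaces, parens
    if len(digits) == 11 and digits.startswith("1"):
        digits = digits[1:]                # drop a leading country code
    return digits if len(digits) == 10 else None

for raw in ["415-555-0132", "(415) 555 0132", "+1 415 555 0132", "555"]:
    result = normalize_phone(raw)
    print(f"{raw!r} -> {result or 'inline error: please enter 10 digits'}")
```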
Momoko Price: How to Listen to Customers Effectively
- Business advice always comes back to “listen to your customers”.
- The problem? We suck at listening.
- We ask our customers a ton of questions and do nothing with the answers. Then we test based on hunches and opinions about what they want.
- Askholes, all of us.
- How do we stop being askholes?
- Good listeners ask good questions.
- Stop asking questions on autopilot.
- Creating higher-converting value props with just 3 questions…
- Ask homepage visitors: What matters most to you?
- Ask customers: What’s the #1 benefit?
- Ask customers: Why us and not someone else?
Stefania Mereu: Fast User Segmentation for a Better Conversion Strategy
- Of all your users, how do you know who to write for, build for, design for?
- Lots of people choose demographics. Doesn’t work. They’re just stereotypes.
- 1 Hour
- Pay attention to the analytics. Top keywords, for example, can reveal hidden audiences and segments.
- 2 Weeks
- Look through user-level data you have sitting around.
- You can do cluster analysis to have the data tell you who your users are. (See the sketch at the end of this section.)
- 1 Month
- Add survey to user-level data.
- Talk to them yourself, get new data.
- You can have better messaging, better marketing, etc. You will learn more from tests because you will be able to come up with more specific hypotheses.
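Here’s a minimal sketch of the cluster-analysis step using scikit-learn; the behavioral features and the data are invented for illustration:

```python
# Let user-level behavioral data define the segments, instead of
# demographic stereotypes. Requires numpy and scikit-learn.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical user-level data: sessions/mo, pages/session, orders/mo.
rng = np.random.default_rng(0)
users = np.vstack([
    rng.normal([2, 3, 0.1], 0.5, (200, 3)),   # light browsers
    rng.normal([12, 8, 2.0], 1.0, (200, 3)),  # heavy buyers
])

X = StandardScaler().fit_transform(users)  # scale features first
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

for k in range(2):
    print(f"Segment {k}: n={np.sum(labels == k)}, "
          f"mean sessions={users[labels == k, 0].mean():.1f}")
```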
Jakub Linowski: Searching for Repeating Conversion Patterns: What Can Multiple Tests Tell Us?
- Will someone else’s test results reproduce on your site?
- If it does, lower effort, faster results, higher win rate.
- You can measure reproducibility.
- Look at many tests that are similar. How well do the results reproduce?
- Measure medians: they give you a sense of what the effect might be and help mitigate outlier data. (See the sketch at the end of this section.)
- Highest certainty first because patterns can predict.
- Insignificance matters. If you add two insignificant tests of the same pattern together, the evidence is worth more.
- Inverted losers. When talking about patterns, it’s possible to turn losers into winners.
- Patterns do lose, of course, but they are quite accurate.
- 23 partner-based projects. They wanted to beat 50/50. They ended up with 70/30 in favor of positive results.
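A tiny sketch of why medians help when pooling lifts from similar tests; the lift numbers are made up:

```python
# Pool observed lifts from tests of the same pattern across sites.
# The median is less sensitive to outliers than the mean.
from statistics import mean, median

lifts = [0.12, 0.03, -0.02, 0.08, 0.65, 0.05, 0.01]  # 0.65 is an outlier

wins = sum(1 for x in lifts if x > 0)
print(f"Win rate: {wins}/{len(lifts)}")
print(f"Mean lift:   {mean(lifts):+.1%}")    # dragged up by the outlier
print(f"Median lift: {median(lifts):+.1%}")  # a steadier expectation
```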
Chris Out: The Margin of Aggressiveness, the Game That Optimizers Should Really Be Playing
- The CEO doesn’t care about the website; they care about its valuation (i.e. future cash flow / money). So, how can CRO improve future cash flow?
- #1: More clients.
- #2: Higher average spend.
- #3: Increase purchase frequency.
- Margin of Aggressiveness: the percentage of earnings that you re-invest to boost LTV.
- Increasing AOV and frequency.
- How? Research and A/B testing.
- How can you do this?
- Calculate your current margin of aggressiveness.
- Calculate your current LTV and CAC per customer segment. (See the sketch at the end of this section.)
- Which segment has the most room for growth?
- Focus on high LTV impact.
- Reverse engineer lifetime value study.
- What is the lifetime value of the variation?
- Sell CRO by calculating the valuation increase.
- 3 month ROI sheet.
- How much input?
- LTV increase?
- Valuation increase?
- ROI on CRO?
- What game are we really in?
- Not in increasing conversion rate. In increasing business value.
- Use CRO to improve future LTV.
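A minimal sketch of the arithmetic, using the definition above (margin of aggressiveness = the share of earnings re-invested to boost LTV) plus hypothetical per-segment LTV and CAC figures:

```python
# Margin of aggressiveness, then LTV/CAC per segment to find the
# segment with the most room for growth. All numbers are invented.

earnings = 2_000_000
reinvested_in_ltv = 300_000
margin_of_aggressiveness = reinvested_in_ltv / earnings
print(f"Margin of aggressiveness: {margin_of_aggressiveness:.0%}")

segments = {
    "SMB":        {"ltv": 900,    "cac": 400},
    "Mid-market": {"ltv": 4_000,  "cac": 1_200},
    "Enterprise": {"ltv": 25_000, "cac": 15_000},
}

for name, s in segments.items():
    print(f"{name}: LTV/CAC = {s['ltv'] / s['cac']:.1f}")
# Healthy LTV/CAC with headroom is where aggressive re-investment
# (AOV and purchase-frequency work) pays off most.
```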
Abi Hough: Hypothesize This: How Do You Find $100M of Lost Revenue Without Creating a Single Experiment?
- Top of the pyramid down: Persuasive -> Intuitive -> Usable -> Accessible -> Functional
- What is functionality?
- Live chat, customer surveys, stakeholder interviews, visitor surveys, user testing, heuristics, eye tracking, heat mapping: these are the leaves of the tree.
- Analytics is the soil.
- The roots: performance (how well it works), usability, and functionality.
- If you don’t look after the roots, the tree dies.
- Look at the site and try to break it.
- Test on physical devices in your hand, or on physical devices through a cloud-based service.
- Simulators are unreliable.
- If you’re not doing functionality testing, you’re not optimizing.
- Test functionality before you try to harvest fruits.
- You also need to check for function issues with your tests.
- Tips:
- Build your own device lab and keep it well-stocked.
- Use “virgins”: testers having their very first interaction with the site.
- Manual testing for qualitative results.
- Automated testing for quantitative results. (See the sketch at the end of this section.)
- Track and log everything you find.
- Toolkit.
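As one way to automate functional checks, here’s a hedged sketch using Selenium; the URLs and the specific checks are placeholders, not Abi’s actual toolkit:

```python
# A smoke test that tries to "break" key pages: load each one and
# verify it rendered with something actionable on it.
from selenium import webdriver
from selenium.webdriver.common.by import By

PAGES = ["https://example.com/", "https://example.com/checkout"]

driver = webdriver.Chrome()  # a cloud device grid works here too
failures = []
try:
    for url in PAGES:
        driver.get(url)
        if "error" in driver.title.lower():
            failures.append((url, "error page title"))
        if not driver.find_elements(By.CSS_SELECTOR, "button, [type=submit]"):
            failures.append((url, "no actionable button found"))
finally:
    driver.quit()

# Track and log everything you find.
for url, problem in failures:
    print(f"FUNCTIONAL ISSUE at {url}: {problem}")
```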
Leho Kraav: A Better Way to Prioritize Your A/B Tests
- Subjectivity is the enemy of good prioritization.
- Good prioritization makes maximizing the percentage of winning tests and uplift per test easier.
- After research, you’ll have a long list of ideas.
- You need early wins to prove your program. Where do you start?
- PXL.
- Asks you to bring more data to the table.
- Score all variables 0, 1 or 2. (See the sketch at the end of this section.)
- First, address the visibility of the change (the more visible the change, the more likely it is to bring about change).
- Second, score research data.
- Third, look at ease of implementation.
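Here’s a minimal sketch of PXL-style scoring and ranking; the variables shown are a simplified subset of the real framework, and the ideas and scores are hypothetical:

```python
# Score each idea 0, 1 or 2 per variable (visibility of the change,
# supporting research, ease of implementation), rank by total score.

ideas = {
    "Rewrite hero headline": {"visibility": 2, "research": 2, "ease": 2},
    "New checkout flow":     {"visibility": 2, "research": 1, "ease": 0},
    "Footer link reorder":   {"visibility": 0, "research": 0, "ease": 2},
}

ranked = sorted(ideas.items(), key=lambda kv: sum(kv[1].values()),
                reverse=True)
for name, scores in ranked:
    print(f"{sum(scores.values())}  {name}  {scores}")
```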
Joel Harvey: Essential Ecommerce Optimization Techniques
- Where should you test?
- Break down your site into these sections: home, other category, sub-category, product, cart, checkout.
- Measure these metrics for all: % and # of visitors, % and # landing, % and # revenue, % and # conversion, RPV, % and # previous step, CPA, total spend.
- Then segment by device and new vs. returning visitors.
- This will tell you where testing will have the biggest impact and where you have enough volume to run simultaneous tests. (See the sketch at the end of this section.)
- What should you test?
- Ask questions.
- Where did these visitors come from?
- Did they land here?
- Was there an offer that brought them here?
- Are we clearly communicating value at the right time?
- What friction points do we have in our UX?
- Every visitor has a click budget: the maximum number of interactions they are willing to make.
- Everything you do in ecommerce reflects your value proposition.
Note: see our comprehensive ecommerce report for 247 guidelines specifically for ecommerce.
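A small pandas sketch of the section-by-device breakdown Harvey describes; the visit data is invented, and a real version would pull from your analytics export:

```python
# Compute volume, conversion and RPV per site section, segmented by
# device, to see where testing has the most impact. Requires pandas.
import pandas as pd

visits = pd.DataFrame({
    "section": ["home", "product", "cart", "product", "checkout", "home"],
    "device": ["mobile", "desktop", "desktop", "mobile", "desktop", "desktop"],
    "converted": [0, 1, 0, 0, 1, 0],
    "revenue": [0.0, 120.0, 0.0, 0.0, 80.0, 0.0],
})

summary = visits.groupby(["section", "device"]).agg(
    visitors=("converted", "size"),
    conversion=("converted", "mean"),
    rpv=("revenue", "mean"),
)
print(summary.sort_values("visitors", ascending=False))
```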
Bill Leake: Optimizing for the “Considered Purchase”: What Changes if You’re B2B or Expensive / Long Sales Cycle B2C?
- Who is the B2B buyer?
- No single decision maker.
- Longer decision cycle.
- Multiple touch points.
- Persona-based stuff is really interesting for B2B.
- Your content needs to talk to a lot of different personas. It needs to get approved by HR, finance, marketing, etc.
- Marketing/Sales optimization funnel. Impression, visitor, web form, post-web.
- The goal is to continue building the relationship. Don’t sell too much too fast.
- You have to pull your data in from multiple layers and include everything after the post-web conversion.
- You have to watch the end goal, not early micro-conversions.
- You can’t think web-only because the sales model may drive conversions elsewhere.
- This means you can actually get more sales with a lower conversion rate.
- Conversions can happen over the phone or chat, too.
- Argue about attribution.
Oli Gardner: Data-Driven Design
- Immediacy: can people understand your value in the first few seconds?
- Use a five second test to measure first impressions and find clarity problems.
- What’s a good conversion rate?
- Unbounce released a really cool report that you guys should download.
- The way marketing teams work together now is broken.
- 81% of designers have to start designing before they get the copy.
- 62% of designers receive no customer data at the start of the campaign.
- 53% of designers get feedback from non-designers.
- 98% of marketers give design feedback. 87% think they’re qualified.
- Discovery
- Research, opportunity, priority, etc.
- Design
- Design is like sex; there’s someone else involved and their happiness is as important as your own.
- Copywriting, psychology, etc.
- Sentiment: positive, neutral or negative.
- Semantic design: are you designing for the content inside this experience? E.g. a round door handle invites a turn; a flat lever handle invites a push down.
- Style, attributes, elements can be informed by data.
- Colour palette: the colours you choose.
- Typography choices: knowing about the data can change how you choose.
- Typographic readability: how do the words look on the page? Can you read them?
- Image style: photography vs. illustration, for example.
- Visual ID: measure of how well the context of your product or service can be identified when your hero image is shown.
- Doing data-driven design from the beginning saves you from HiPPOs at the end of the process.
- Delivery
- Now you deliver the results to the client.
- Use your core values to deliver them.
- Go through each core value and evaluate the work against it.
Guillaume Cabane: Personalized Experiences for the 97%
- You don’t understand your audience.
- You need to know more about each individual.
- Email-based enrichment opens better onboarding flows.
- Scoring.
- Leads with a very good customer fit account for 80% of conversions.
- Using email as the key is easy. There are many data providers, no time sensitivity.
- Discovering anonymous users.
- No data? IP lookup.
- Which companies are coming to your site? Then you can build funnels per industry.
- Using the data.
- Score B2B marketing campaigns in minutes by joining IP detection and company scoring. (See the sketch at the end of this section.)
- Email is good, but directly on the site is better. E.g. Only offer live chat to visitors who are scored high.
- Enabling dynamic web content.
- The website is the mother of all cognitive load. It converts poorly because the messaging is generic.
- Enrichment in milliseconds can enable dynamic web content.
- Enrichment and scoring need to happen in parallel, though.
- Behind the scenes.
- Going way beyond A/B testing.
- Machine learning variations.
- Dynamic data injection.
- Data gathering.
- Try it out. (G-CONVERSIONXL for a 50% discount.)
- Accept that you don’t know the customer.
- It’s ok though, the AI will do it for you. And better than you.
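Here’s a hypothetical sketch of the IP-to-company scoring join; a real version would call an enrichment provider rather than a hard-coded lookup table, and the scoring rules are invented:

```python
# Resolve anonymous traffic to companies, score the fit, and only
# then trigger expensive treatments like live chat.

IP_TO_COMPANY = {  # stand-in for a reverse-IP enrichment provider
    "203.0.113.7": {"name": "Acme Corp", "industry": "SaaS", "employees": 450},
    "198.51.100.2": {"name": "Tiny LLC", "industry": "Retail", "employees": 3},
}

def fit_score(company):
    score = 0
    if company["industry"] == "SaaS":
        score += 50
    if company["employees"] >= 100:
        score += 50
    return score

def should_offer_live_chat(ip):
    company = IP_TO_COMPANY.get(ip)
    return company is not None and fit_score(company) >= 80

print(should_offer_live_chat("203.0.113.7"))   # True: high-fit visitor
print(should_offer_live_chat("198.51.100.2"))  # False: keep the default UX
```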
Sean Ellis: Setting and Achieving the Right Growth Objectives
- Shift to a holistic approach to growth.
- Surface optimization is no longer enough.
- You have to test across the full journey.
- The journey often crosses multiple silos.
- Success requires clear goals and process.
- Impact of the right goals.
- Keeps the team focused.
- Celebrate victories.
- Makes growth sustainable.
- How do you achieve high impact goals?
- Pick the right goal (most leverage, solves problem, improves North Star Metric).
- Communicate specifics across the organization (context for goals, baseline and target, timeframe).
- Focus resources (limit short-term goals, assign a goal owner).
- Ideas -> Prioritize -> Test -> Analyze
- Create ideas as experiment documents (hypothesis, research, target, etc.)
- Use ICE (impact, confidence, ease) to nominate the ideas you want to run. (See the sketch at the end of this section.)
- Listen to everyone and then decide on the 3-4 tests to move forward with based on resources available.
- Analyze the results and report on the progress. Share with the team, tests lead to new ideas.
- Repeat the process.
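A minimal sketch of ICE prioritization, assuming the common average-of-three-scores form; the ideas and scores are hypothetical:

```python
# Score each idea 1-10 on Impact, Confidence and Ease, average the
# three, and nominate the top handful your resources allow.

ideas = [
    {"name": "Onboarding email drip", "impact": 8, "confidence": 6, "ease": 7},
    {"name": "Referral widget",       "impact": 9, "confidence": 4, "ease": 3},
    {"name": "Pricing page FAQ",      "impact": 5, "confidence": 8, "ease": 9},
]

for idea in ideas:
    idea["ice"] = (idea["impact"] + idea["confidence"] + idea["ease"]) / 3

for idea in sorted(ideas, key=lambda i: i["ice"], reverse=True)[:3]:
    print(f"{idea['ice']:.1f}  {idea['name']}")
```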
Lincoln Murphy: Marketing Success: Building a Customer-Centric Growth Engine
- As a customer, I want my life to suck less.
- Step 1: Define customer success-driven growth.
- Customer success is when customers achieve their desired outcome through their interactions with your company.
- As customers succeed and evolve (there’s no such thing as a static customer) their relationship with us should evolve and grow as well.
- Step 2: Create a foundation for customer success-driven growth.
- Ideal Customer Profile + Desired Outcome
- FOMO tells us not to get that specific, but getting specific is the right thing to do.
- Success potential is: technical fit, functional fit, competence fit, experience fit, cultural fit. You should be able to check all of these boxes. Go through the process of identifying bad fit customer characteristics.
- Churn happens because: (1) the customer goes away (dies, out of business) or (2) they didn’t achieve their desired outcome. Churn is 100% your fault.
- Customer Segment -> Required Outcome -> Method -> Appropriate Experience
- Step 3: Take massive action.
- Effortless Upselling Formula: Orchestration + Logical Intervention + Appropriate Offer
- As soon as a customer becomes a customer, you want to tee up the upsell opportunity. “You don’t need that right now, but you will.” It’s called building trust.
- It doesn’t matter what you want to sell, it matters what they want to buy.
- Quick Revenue Doubling Formula: Short-circuit Offer + Ascension Offer = Exponential Growth
- WOM Accelerator Method: Timing, Appropriateness, Success Vector.
- Exponential Expansion Framework: Infiltrate their Network, Timing, Prescriptive.
Wil Reynolds: Closing Keynote
- Great CRO, like great SEO, can be bad for the customer in the long term.
- Redeploy your skills in a way that solves problems for people.
- Start typing a search term and look at the suggested searches that come up… these are intentions. You can change your settings so that you see 10 suggestions vs. 4.
- Look at intention for words, terms and phrases before creating content / landing pages (organic or paid).
- Look for discrepancies between paid and natural search.
- Compare organic descriptions (set once and never changed) to paid descriptions (optimized all the time).
- Look for times where results show outliers.
- E.g. a B2B company going up against a page of ecommerce results.
- We’re in a bubble, we get disconnected from how the rest of the world uses the Internet.
- You have to watch them go through the search results to understand what people want, what they click on, etc.
- If you ask people, they’ll give you the blueprint to better content.
- Find the low quality content and swipe it by creating better content with the feedback.
- You’ll be shocked by what you hear from people, how quickly their opinions change, etc.
- No budget? Fiverr. Cheap, easy. But bad on mobile and low quality.
- Validately. Good on mobile, instructions, reporting. Cons? $199/mo, and you bring your own panel.
- Make your landing page predictive of the next steps.
- Google is good at this. Use their machine learning to influence your content strategy. E.g. the business / store results with hours, call button, directions, etc.
- Look at the bottom results in Google every time you search. Those are what Google predicts you want.
- “People also ask” can guide your content creation. When someone searches over and over again or clicks back to the SERP (on mobile), it displays.
- GetStat = Get PAA’s (people also ask) and related searches at scale.
- Also pay attention to what Google highlights / calls out per keyword.
- Don’t leave your landing page testing to the landing page. Start all the way back at the search results.