Superpower 5: Experimentation and testing


Why is this valuable?

Whether you have just launched your product or are exploring new ideas or features for a longstanding one, you are usually operating under maximum uncertainty. You have many questions about which elements of the solution are working and why. In these instances, it is important to divide your solution, and your acquisition and retention efforts, into what is known and what is unknown, so you can keep improving towards product-market fit. Rather than guess at the answers to these questions, we suggest you run experiments. Experiments are a powerful tool for startups: they encourage teams to pause and test assumptions before barreling ahead with a complete solution that might not work.

Experiments help startups articulate which hypotheses need to be proven, and approach them in a rigorous, cost-effective way. The team decides what questions need to be answered about how users experience the product, then designs a low-fidelity model (or two) to share with users for observation and feedback. For instance, you could test customers’ perceived value of your product based on various taglines, or their willingness to buy and/or use your product depending on how the offer is articulated. You could also test the right time and place to encourage users to refer others. Such tests allow you to make informed decisions so that you invest your time and effort where it makes sense to grow the business. Experimentation is not just a tool, but a mindset: all team members need to buy into the method.

 

How do you test ideas?

The first step is to identify your key hypotheses (our chapter on value propositions can help), and decide which ones to test. Next, decide which experiment method to deploy. A minimum viable product (MVP), for example, is a type of experiment that allows you to observe how potential users respond to your initial, early-stage offering. It is important to identify your riskiest assumptions (e.g., what users want, how the design should work, what messaging to use), and then find the easiest, fastest way to test them. Then use the results of the experiment to correct course and make further progress towards product-market fit.

The list below can help you decide what type of experiments to conduct, and how to create a framework to collect valuable results. Costs vary depending on the type of experiment you select.

 

When should you do it?

You can start testing and measuring your solution as soon as it’s launched, and provided you’re testing just one variable at a time (to ensure you can attribute any differences in results to your experiment), this process can occur on an ongoing basis. This helps ensure continual product improvement. 

We believe you should make experimentation part of your startup culture. Instead of relying on assumptions, use experiments to generate data (see our chapter on data to learn how to collect and analyze it), to guide your decision-making when it comes to understanding what to do, when and how. 

That said, adopting an experimental mindset and embedding it into the company’s processes won’t be easy. It requires buy-in from the entire team, and a recognition that your great ideas are just hypotheses that can be validated (or, more likely, invalidated). Experimentation demands rigor, and time to properly design experiments, run them, and extract and document learnings over time.

 

What kind of expertise is required?

Designing lean experiments requires no expertise beyond what is likely already on your team. Your CTO, product person, or anyone on your tech team with basic data analysis skills can get cracking. If no one on your team can do this yet, that signals an opportunity for someone to upskill.

Getting started: Designing lean experiments 

Experiments are most often used to test either marketing or product concepts. Marketing experiments help ensure that your messaging resonates with customers and that you’re using the right channels; they can also help quantify expected conversion rates against cost. Product experiments help you confirm that customers see value in your proposed additions or changes, and that they understand how to use your product.

Such experiments can be either qualitative or quantitative. For instance, a qualitative marketing test could involve showing customers your marketing materials and asking what appeals to them and why. For quantitative marketing tests, such as A/B testing, you could run two marketing campaigns targeted at comparable users to see which one converts customers more reliably, using statistical tests as a guide.
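As a minimal illustration of that quantitative approach, the sketch below (in Python, using the statsmodels library) compares two hypothetical campaigns with a two-proportion z-test; all conversion figures are invented for illustration.

```python
# A minimal sketch: comparing two campaigns with a two-proportion z-test.
# The conversion counts and visitor numbers below are hypothetical.
from statsmodels.stats.proportion import proportions_ztest

conversions = [120, 156]   # campaign A vs. campaign B conversions
visitors = [2400, 2380]    # visitors exposed to each campaign

z_stat, p_value = proportions_ztest(count=conversions, nobs=visitors)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")

# At a 5% significance level, treat the difference as real only if p < 0.05.
if p_value < 0.05:
    print("The campaigns' conversion rates differ significantly.")
else:
    print("No significant difference detected.")
```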

Here are some common steps you can take to ensure your experiments are well-designed:

  1. Describe the hypothesis that you want to test. Indicate how critical it is for your business model.
  2. Define the test you are going to run to verify whether your hypothesis is valid or needs to be revised. Indicate how costly and reliable this test would be.
  3. Define what data you are going to measure. Indicate how long it would take.
  4. Finally, define a target threshold that will indicate whether your hypothesis is validated or invalidated (a simple way to record these four elements is sketched below).
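As one way to make these four steps concrete, the sketch below records them in a simple Python structure the whole team can review before a test starts. The field names and example values are our own, not a prescribed format.

```python
# A minimal sketch of an experiment "card" capturing the four steps above.
# All field names and example values are illustrative.
from dataclasses import dataclass

@dataclass
class ExperimentCard:
    hypothesis: str      # Step 1: what you believe, and why it matters
    criticality: str     # Step 1: how critical it is to the business model
    test: str            # Step 2: how you will verify it (cost, reliability)
    metric: str          # Step 3: what data you will measure
    duration_days: int   # Step 3: how long the test will take
    threshold: str       # Step 4: the line that validates or invalidates

card = ExperimentCard(
    hypothesis="Freelancers will pay for automated invoicing",
    criticality="High: this is our core revenue assumption",
    test="Landing page with a 'Buy now' button (cheap, moderately reliable)",
    metric="Click-through rate on 'Buy now'",
    duration_days=14,
    threshold="Validated if CTR >= 5% across at least 500 visitors",
)
print(card)
```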


A quantitative experiment in practice: A/B testing

Experiments can take various forms, but the most common type of experimentation is A/B testing, which observes customer behaviour in response to different features, messages or stimuli. A/B tests are randomized experiments with two variants (A & B), representing the control and treatment groups. Ideally, you would run an A/B test for each improvement you want to implement.
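A practical prerequisite for the steps below is assigning each user to a variant randomly but stably, so a returning user always sees the same version. The Python sketch below shows one common approach, hashing the user ID; the experiment name and 50/50 split are illustrative.

```python
# A minimal sketch of stable random assignment to control/treatment.
# Hashing the user ID keeps a user's variant consistent across sessions.
import hashlib

def assign_variant(user_id: str, experiment: str = "homepage-redesign") -> str:
    """Return 'control' or 'treatment' for a given user (50/50 split)."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "treatment" if int(digest, 16) % 2 == 0 else "control"

print(assign_variant("user-1234"))  # always the same answer for this user
```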

  1. Identify the key behaviour you want to see from your users: Choose a metric you can use to track that behaviour. Use this metric (e.g., signups, referrals, usage) to measure the effectiveness of your test. See below for more information on metrics you can use. Use online tools to calculate the statistical confidence level you’ll need for each test.
  2. Identify the barriers that deter users from exhibiting your chosen behaviour: This could be any sort of friction that makes it difficult for users to sign up and engage with your product, that makes the key behaviour not sufficiently compelling, or that hides your product’s potential benefits.
  3. Establish a control and treatment group for comparison: When A/B testing, give a sub-group (treatment group) of customers exposure to an experiment or change/adjustment in your product. Then measure their behaviour against that of the rest of your customers (control group).
  4. Change just 1 variable at a time between your treatment and control groups: For example, you could present two versions of your product homepage and measure which one results in higher sales conversions. Keep all other aspects of the product exactly the same while the two homepages are up so as to ensure that any difference in sales can be attributed to the homepage design.
  5. Control for other factors that could affect your results: Make sure your treatment group closely resembles your control group. This ensures it is the variable you changed (and not the preferences of the treatment group of customers) that explains differences in outcomes. Pay attention to how your experiments could be affected by factors like climate, time of day or seasonality, the gender of participants, social class, and so on.
  6. Timebox your experiments: You may feel the desire to keep running an experiment “just a little while longer” in the hope of better results. The problem is that, left unchecked, weeks can easily turn into months, unnecessarily risking your precious time and resources. Instead, set a time limit at which point you will evaluate the outcomes of your experiment (the sketch after this list shows one way to estimate a sensible duration up front).
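As referenced in the last step, the sketch below (again Python with statsmodels) estimates up front how many visitors, and therefore roughly how many days, an A/B test needs before its results can be trusted. The baseline rate, expected lift, and traffic figures are hypothetical.

```python
# A minimal sketch of sizing and timeboxing an A/B test before it starts.
# Baseline rate, target rate, and daily traffic are hypothetical.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline_rate = 0.04    # current signup rate (4%)
target_rate = 0.05      # smallest improvement worth detecting (5%)
effect_size = proportion_effectsize(target_rate, baseline_rate)

# Visitors needed per variant for 80% power at a 5% significance level.
n_per_variant = NormalIndPower().solve_power(
    effect_size=effect_size, alpha=0.05, power=0.8, ratio=1.0
)

daily_visitors_per_variant = 300
days_needed = n_per_variant / daily_visitors_per_variant
print(f"~{n_per_variant:.0f} visitors per variant, ~{days_needed:.0f} days")
```

If the estimated duration is far longer than you can afford, test for a larger effect or on a higher-traffic page rather than letting the experiment drag on.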

 

When deciding how to execute your experiment, your approach will depend on factors like the level of detail you need, the resources you have available, and how far along you are in your product development process. Approaches vary based on how closely your test represents the real product (known as fidelity), and how much the customer can interact with the MVP relative to a live product (interactivity). A few product experiment approaches, in order from low to high fidelity/interactivity, include:

  • Sketches: Sketches have the lowest fidelity and lowest interactivity, but are extremely low-cost and easy to rapidly iterate. They can be a useful way to test marketing concepts or user experience features. Pen & paper is a helpful tool for sketches. 
  • Wireframes: A wireframe is a visual guide that illustrates the skeletal framework of a product, typically a website or mobile app. Wireframes offer improved fidelity over sketches as you can give a sense of the product components and how they might be arranged. They can help startups make decisions about how to organize a home page and iterate quickly without spending time on a full design.
  • Mockups: Mockups are the next level up in fidelity, and look much more like the final product than a wireframe. Mockups often include visual design details, images, etc., instead of placeholders as in the wireframe. For example, they can help designers decide which icons most resonate with users. InVision and Figma are helpful tools for mockups.
  • Interactive prototypes: Prototypes are not quite fully functional, but they provide a level of interaction beyond a clickable mockup. They give users a fairly complete experience of the product so that the team can observe it and receive feedback.
  • Concierge MVPs: These allow you to test your product or service live using manual workarounds or hacks, instead of a full-blown backend. Using a “concierge” MVP involves doing everything for early customers by hand, not automated by tech, and works best for services that require a lot of interaction and input from customers. Zapier is a helpful tool to enable some automations.
  • Live product: A live product has the highest fidelity possible, so it can help you generate feedback on things like how the product looks and behaves in different contexts that you would not get from lower fidelity options.

 


A qualitative experiment in practice: Interviews for product MVPs

Once you have an MVP, take a structured approach to testing it with users. We suggest the following:

  1. Develop a test script for the entire conversation, including the steps you want the user to take. Test it on your team members first. An MVP test will likely take 1-2 hours per person.
  2. Start by spending a few minutes getting to know the user in question, and setting expectations for the experience. Make sure they know you want them to give honest feedback, even if it is negative. Ask how they feel about what they are using now, how it works, and their current frustrations.
  3. Ask questions and observe, but don’t lead. If a customer does something curious, ask them why they took that action: “I see you did this, can you tell me why?”
  4. Ask open questions, which begin with “why”, “how”, and “what”, rather than questions with yes/no responses. Write your questions in advance so that you are not distracted.
  5. If the user has challenges understanding or using the product, your job is to understand the issue – not to help them – so you can keep the test as realistic as possible. After the test wraps up, you can answer questions or respond to the challenges they had, but don’t interrupt to problem solve or guide (to the extent possible).
  6. Don’t forget to ask if they would like to be notified when the product comes out!
  7. Finally, as a thank you – consider giving them a token gift for their time.

Nigeria’s Cowrywise uses experiments to improve referral rates

Simple experiments helped the Cowrywise team understand what marketing strategies could be leveraged to improve referral rates. The experiments gave them clear, reliable answers within a few short weeks, and indicated what strategies to pursue in the longer term.


Resources

Designing strong experiments

Well-designed experiments strengthen the evidence you gather, which will increase your confidence in making decisions.

Test and Learning Cards by Strategyzer.

A Beginner’s Guide to A/B Testing

In this guide, Just Eat Senior Product Designer Kein Stone discusses what A/B testing is, why you should do it, and its limitations.

Optimizely

Optimizely is a digital experience platform, empowering teams to deliver optimized experiences across all digital touchpoints. 

Visual Website Optimizer  

VWO is the market-leading A/B testing tool that fast-growing companies use for experimentation & conversion rate optimization.

Convert

Convert is another A/B testing tool. It enables you to convert more visitors, plug revenue leaks, and save on testing tool costs with Convert Experiences.

Google Optimize

Google Optimize is a free website optimization tool that helps online marketers and webmasters increase visitor conversion rates and overall visitor satisfaction by continually testing different combinations of website content.

Amazon A/B Testing Service

Amazon A/B Testing service is an effective tool to increase user engagement and monetization. It allows you to set up two in-app experiences.

Modesty

Modesty is a simple and scalable split testing and event tracking framework.

A/B Test Calculator

This simple calculator tells you each variation’s conversion rate, to help you determine whether A performed better than B or vice versa.

Calculate how long you should run an A/B test

Put in a few metrics and this tool tells you how many days you should run your experiment.

A Massive Social Experiment On You Is Under Way, And You Will Love It

This article explains the benefits of A/B testing, with real-world examples – it should hit home.