If you’re not a big fan of lost deal reviews, or if you’ve heard mixed reviews about them, I’m not surprised.
For many, the sample sizes are too small, and the data quality too poor to make a convincing case for substantive changes. Many of us have experience with lost deal reviews that degenerated into finger-pointing and arguments, not commitments to act.
So, how do you overcome these issues and turn your lost deal reviews into a powerful tool for positive change in your org?
The answer lies in selecting a method for collecting and analyzing win/loss data that meets your goals and can be sustained long term. That way, you stay current with, and one step ahead of, ever-evolving buyers and competitors.
Choosing Your Scope
Whether you call them lost deal reviews, loss reviews, or win/loss analysis, these programs are ultimately the same thing.
The data source and analysis may vary, but the goal is always the same — to win more deals and increase revenue by improving overall rep execution, your sales process, or by fixing gaps in enablement and product.
But just because they share the same goal doesn’t mean all lost deal review programs are created equal. The scope of the changes you want to make with your deal review program makes all the difference in choosing the right approach to running one.
A Narrow Scope
If your intended scope of change is narrow — to refine or keep a program or process up-to-date — a low-cost, lower-fidelity program is likely fine.
For example, sales may use a team deal review to spot positive or negative patterns that can be used to update and improve their existing sales motion.
The goal here is to make small, incremental improvements.
The best data collection methods for these narrow scope scenarios are ones that prioritize efficiency over fidelity. I recommend:
- Buyer surveys run in-house
- Sales surveys run in-house
- CRM Closed/Lost reasons
A Broad Scope
At the other end of the spectrum are programs with a broad scope of change. Here, the goal is to fix issues that affect the company’s revenue as a whole, issues that require big cross-functional changes, or issues that have defied your previous attempts to fix them.
For example, “early-stage losses” is a topic I speak with people about a lot. And fixing this can require changes to everything from product and pricing to your entire sales process.
Substantial, cross-functional change like this requires insights that are totally trusted. No doubts. No arguments. Just action.
So in these cases, you need to spend more to get the highest accuracy.
Broad change win/loss programs should focus on data collection that prioritizes accuracy over efficiency. I recommend:
- Outsourced buyer interviews with a clustered model
- Outsourced buyer surveys
Choosing Your Data Collection Method
Most writing about win/loss analysis takes it for granted that buyer interviews are the best method. But the truth is all data collection methods have pros and cons. And while I agree that buyer interviews can yield the most accurate, highest fidelity data, they’re also the most expensive approach.
And as we just looked at above, depending on the scope of change you’re looking for, the need for accuracy will ebb and flow during the lifecycle of a win/loss program.
It’s common for a win/loss program to be initiated with expectations of broad change because it’s been triggered by a painful issue.
This initial period of broad change is usually followed by a longer period of narrow change when the program is used to track the impact of the initial changes and identify other areas of improvement. From there, the cycle repeats.
But with so many methods for collecting and analyzing win/loss data, how do you choose the one that’s right for you?
I’ve summarized the pros and cons of six of them in the sections that follow.
CRM Closed/Lost reasons
I’d be surprised to find a CRM implementation that didn’t require reps to select a Closed/Lost reason before closing an opportunity.
This data is a bit of a mixed bag, though. It’s tracking all your deals, so sample size isn’t an issue. But the data is less reliable because it comes from the rep, who rarely knows the real reason why a deal fell through. Plus, they have a stake in what’s said.
Reps don’t want to take the blame for a loss, so price and features usually take the bullet.
Likewise, on a win, the rep’s answer is usually a guess, though I have heard of cases in which reps are required to call the new customer and ask, or in which Customer Success asks the question.
Closed/Lost reason notes provide context, so they can be helpful, but will probably require a manual quality assurance review of some sort to keep quality high.
Cluster analysis of this text with a product like Quid can reveal the key factors causing losses against a particular competitor. It can also help highlight the main reasons a deal is won.
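If you don’t have a text-analytics product, the core idea can be approximated in-house. The sketch below is a minimal stand-in, not how Quid works: it tallies the most common terms in loss-reason notes per competitor, using made-up sample records.

```python
from collections import Counter, defaultdict

# Hypothetical Closed/Lost records: (competitor, free-text reason note).
lost_deals = [
    ("AcmeCo", "lost on price and missing SSO integration"),
    ("AcmeCo", "price too high for the security team"),
    ("AcmeCo", "no SSO and weak reporting"),
    ("BetaSoft", "slow implementation timeline"),
    ("BetaSoft", "implementation services too slow"),
]

STOPWORDS = {"and", "the", "for", "too", "on", "no"}

def top_loss_terms(deals, n=3):
    """Return the n most common terms in loss notes, per competitor."""
    terms_by_competitor = defaultdict(Counter)
    for competitor, note in deals:
        words = [w for w in note.lower().split() if w not in STOPWORDS]
        terms_by_competitor[competitor].update(words)
    return {c: [t for t, _ in counts.most_common(n)]
            for c, counts in terms_by_competitor.items()}

print(top_loss_terms(lost_deals))
# "price" and "sso" surface as the top factors against AcmeCo
```

Even a crude frequency count like this can point you at the one or two themes worth a deeper look before you invest in interviews.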
The up-front effort to set up closed reasons in your CRM is modest, so cost is relatively low. Data hygiene and analysis will require more ongoing effort, though.
It’s still uncommon, but some companies use their CRM to gather more than just closed reasons. For instance, if your sales team has been trained to use a qualification process like MEDDIC, they are already gathering other key data points, such as use case and decision criteria.
But capturing any of this expanded data accurately will require substantial changes to CRM configuration, rep training, and ongoing enforcement.
The leader of the CI program at a large enterprise technology vendor described to me their four-year journey to using CRM data, all sourced by reps. He told me, “It was a very painful process to get there,” but they’re now having real success. They close deals 14-20 days sooner, and their deals are 25-50% larger.
Sales team survey
To kickstart a new win/loss program, surveying or interviewing the sales team is often a quick and inexpensive way to capture key data points about recently closed deals.
For each opportunity closed in the past one or two quarters, sales can provide some or all of these essential data points — the buyer’s biggest pain point, decision criteria, consideration set, winning vendor, and reason lost.
Any of the many freestanding survey tools can be used to kickstart a win/loss program like this.
But for a continuous win/loss program, the survey responses should be appended to CRM records to support reporting and analysis of win/loss patterns over time.
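One way to picture that append step: match each survey response to its opportunity record by ID, so the win/loss fields live alongside the deal data you report on. The record shapes and field names below are illustrative, not any CRM’s actual schema.

```python
# Hypothetical CRM opportunity records, keyed by opportunity ID.
opportunities = {
    "006A1": {"stage": "Closed Lost", "amount": 50000},
    "006B2": {"stage": "Closed Won", "amount": 120000},
}

# Responses captured by a standalone survey tool.
survey_responses = [
    {"opp_id": "006A1", "reason_lost": "pricing", "winning_vendor": "AcmeCo"},
    {"opp_id": "006B2", "reason_won": "implementation support"},
]

def append_to_crm(opps, responses):
    """Merge survey fields onto their matching opportunity records.

    Returns the IDs of responses that matched no opportunity,
    so they can be flagged for manual review.
    """
    unmatched = []
    for resp in responses:
        opp = opps.get(resp["opp_id"])
        if opp is None:
            unmatched.append(resp["opp_id"])
            continue
        opp.update({k: v for k, v in resp.items() if k != "opp_id"})
    return unmatched

append_to_crm(opportunities, survey_responses)
```

The payoff is that a single report can now slice closed deals by survey-sourced reasons, instead of keeping survey results in a separate silo.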
Salesforce itself has a survey capability. And GetFeedback is an independent survey tool that’s distinguished by its Salesforce integration. It can be configured to automatically send a close-won or close-lost survey to the contact when an opportunity is closed.
As with data collected through the CRM, a survey that’s backed by the active support of Sales leadership will achieve higher response rates than a buyer survey. But in both cases, the data is less reliable because it’s secondhand. The rep will rarely have an accurate, detailed explanation for why a deal was lost.
Interviewing the sales team personally instead of using a survey is another option. Interviews yield more accurate data, since you can dig into answers to get more clarity, but this comes at a higher cost.
Buyer surveys
Buyer surveys are an efficient way to gather essential data about deals. For this reason, they make a good complement to buyer interviews in an ongoing win/loss analysis program.
Buyer interviews generate a deeper understanding of issues surfaced by the surveys or the CRM. But interviews require more resources than surveys.
Surveys are a good fit for those intervening periods between deep-dive interviews, when the goal is to stay up-to-date and get an early warning of new issues.
They can also reduce the expense of an ongoing program.
Completion rate and response rate are the biggest challenges with buyer surveys. When they’re low, accuracy is low.
Completion rate: This is the number of surveys submitted divided by the number started. Completion rate decreases with survey length, so you’ll need to put effort into topic selection and survey design. SurveyGizmo recommends keeping survey completion time under 10 minutes.
Response rate: This is the number of surveys submitted divided by the number of buyers contacted about the survey. Responses from buyers in Closed/Lost opportunities will be low. Don’t be surprised if it’s 10% or even lower. Multiple follow-ups can help. So can a creative incentive.
An easy way to increase response rate is to send the survey from the rep who owns the account. The rep should already have a relationship with the buyer, especially in complex sales with long deal cycles, and you can leverage that to increase participation.
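The two rates above come straight from three counts: buyers contacted, surveys started, and surveys submitted. A quick sketch, with made-up numbers for one quarter:

```python
def completion_rate(submitted, started):
    """Surveys submitted divided by surveys started."""
    return submitted / started if started else 0.0

def response_rate(submitted, contacted):
    """Surveys submitted divided by buyers contacted about the survey."""
    return submitted / contacted if contacted else 0.0

# Hypothetical quarter: 200 lost-deal buyers contacted,
# 30 started the survey, 18 finished it.
print(f"Completion rate: {completion_rate(18, 30):.0%}")   # 60%
print(f"Response rate:   {response_rate(18, 200):.0%}")    # 9%
```

Tracking both numbers separately matters: a low completion rate points at survey design, while a low response rate points at outreach and incentives.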
Buyer interviews
Buyer interviews are the gold standard for understanding why deals are lost.
Buyers will provide fuller, more candid answers in a live interview than they would or could in a survey.
Live interviews are dynamic. And while a list of questions should be prepared, the interview often won’t follow the script. The interviewer can rephrase or probe at any point to get more detail.
This is important because you can’t fix an issue if you don’t understand why it’s an issue. While a trouble spot can be identified with a survey or CRM data, that data won’t explain why. To avoid resorting to guesses, use buyer interviews to get to why it’s a problem and how it compares to competing alternatives.
Buyer interviews work best when paired with the other data sources described here. For example, reports of closed opportunities and other analysis of CRM data could identify a problem with early-stage losses or losses to a specific competitor. Buyer interviews would then be used to generate deeper insight.
There are two main ways to run a buyer interview — internally or outsourced. Let’s look at both.
Buyer interviews by an internal source
In the quest to learn why deals are lost, Product Marketers and Product Managers can be useful allies. These roles require making decisions that reduce losses, and they have experience interviewing customers.
But I’ve seen many cases where, only 2-3 interviews in, the effort stalls.
Preparing for and conducting interviews is time-consuming, as is the analysis that follows. Compared to the structured data produced by a ten-minute survey, the free text generated by a twenty-five-minute live interview will require much more analysis time.
Product managers also run into resistance from buyers in lost deals, who are often unwilling to discuss their decision with employees of the company — and when they do, they rarely talk openly.
If you do decide to set up your own win/loss program based on buyer interviews, this seven-step guide will be a big help.
Buyer interviews by a 3rd party
Google returns about 178,000,000 results on a search for “win/loss buyer interviews.” The vendors providing this service vary by size, industry specialization, methodologies used, and pricing/packaging.
Buyer interviews are often bundled together in a clustered packaging model or spread out over several years or quarters in a subscription model.
Vendors using a subscription model price their offering by interview volume. Five to ten interviews per quarter is common. One exec I spoke with referred to this as a “heartbeat” model.
Unfortunately, this approach has a Goldilocks problem: analysis and reports are based either on too few deals (just 5 or 10 from the past quarter) or on a larger group of 20-40 deals that are 3 or 4 quarters old.
Reduced accuracy makes the subscription model better for a win/loss program with a narrow scope of change.
If you’re expecting to make broad changes, the clustered model is a better fit.
Vendors with a clustered model often package a set of 20 interviews with deliverables like battlecards, training, workshops, or action plans. In this model, the interviews are a means to an end, not the primary deliverable. So the goal is to generate insights and begin operationalizing them as soon as possible.
This quality of service is harder to scale. That’s why the clustered model is typically provided by boutique firms.
Building The Best Win/Loss Program For You
Changing needs over the lifecycle of a win/loss program means there is no perfect data collection method for both times of broad change and narrow change. This makes a mixed-method approach the best way to build a win/loss program that optimizes both accuracy and cost-efficiency.
Use a lower fidelity, lower-cost method as the program’s foundation. Supplement that with a higher accuracy, higher cost method during times of broad change.
The green dots in the chart below highlight two mixed-method combinations that balance the accuracy vs. efficiency trade-off better than any of the single-method alternatives: clustered interviews plus buyer surveys, and clustered interviews plus CRM lost reasons.
A program like this, alternating between clustered interviews and buyer surveys (or CRM lost reasons), will optimize the accuracy vs. efficiency trade-off over the program’s life cycle.
Win/loss analysis has never been easy. Even the low-cost methods take a substantial amount of time and effort. But if you approach it correctly, and start with the right data, it can guide you through the harshest storms and keep your org at the top of its game.