Are You Tracking the Right Metrics?
How to choose the right metric? 8 types of metrics every PM needs to understand. 2 case studies. 5 actionable steps to take as a PM.
👋 Hey, Paweł here. Welcome to the free edition of The Product Compass. Every week I share actionable tips and advice for PMs.
If you’re not a premium subscriber, here’s what you’re missing:
Subscriber-only resources and templates (PPTX, XLSX, Notion)
Attend Continuous Product Discovery Masterclass (May 12) + get certified
Join our closed Slack community and ask me anything anytime. Get my individual advice (1:1) and boost your PM career
Access my full archive
Our guest today is Ben Yoskovitz, entrepreneur, investor & author. Currently, he’s Founding Partner at Highline Beta, a hybrid venture studio and venture capital firm. Previously he was VP of Product at GoInstant (acq. Salesforce) and VarageSale.
Ben is the co-author of Lean Analytics, a book that helped make Lean Startup methodology more rigorous and analytical. It’s, by far, my favorite book on Product Analytics.
Currently, Ben is writing a newsletter covering product management, startups, and more (https://focusedchaos.co). For much more from Ben, subscribe to his newsletter and follow him on Twitter and LinkedIn.
Every product manager is taught early on to focus on metrics. Better decisions are made with more data. Clearer evidence of what’s working or not can be presented to bosses and senior management. Engineering and design teams aren’t expected to rely on “the product manager’s gut” for what should be built. Everyone feels better with data. Data is good.
But is that always the case?
What happens when we’re tracking the wrong thing? What happens when we misinterpret the data or rely on too little data to make the right decisions?
Using data to make decisions sounds easy, but it’s not, and there’s a considerable nuance you need to account for. And it starts with understanding what makes a good metric.
Not all metrics are created equal.
1. The Four Components of a Good Metric
In Lean Analytics, co-author Alistair Croll and I identified four key criteria for a good metric.
A good metric is understandable: These days, you can track so much data. You instrument your entire application to monitor everything people are doing. You can get access to competitive data (in some cases). You can segment users or customers into different groups based on behaviour. And so on. The instinct is to track as much as possible, which is OK, but it does tend to complicate things.
So the first thing you need to do is ensure the metrics you’re tracking are easy to understand. I’ve always thought of analytics as a “common language” that everyone should be able to speak because it’ll help everyone within your organization know what’s going on and collectively make better choices. So do your best to simplify the metrics and not overwhelm users.
A good metric is comparative: Metrics often represent a single point in time. They give you a snapshot of what’s happening at the moment. But that’s never the whole story. For example, if I tell you that I have 10,000 active users in my application, it’s difficult to know if that’s good or bad. If I tell you that last month I had 1,000 active users, comparing last month to this month, it looks quite good! I’ve seen a 10x increase in active users.
A good metric is typically comparative over a period of time. This is where we start to dig into cohort analysis and why it’s so important to measure progress over time as you make changes.
A good metric is a ratio or rate: Comparing numbers over time is helpful, but it may still not tell the whole story. In the example above, let’s say that last month I had 1,000 active users out of 5,000 signups, so I had 20% become active users. This month I have 10,000 active users (which is 10x last month), but I actually had 200,000 signups, which means I only turned 5% of my users into active users. (I know these numbers are simple, but hopefully, you get the point.)
Ratios or rates tend to be more revealing than whole numbers. In my simple example above, I somehow managed to acquire a lot more users, but very few of them became active. While I’m happy with a 10x increase in active users, something is not quite working regarding the users I’m acquiring.
When evaluating the metrics you’re focused on, they should be ratios or rates, which are inherently comparative.
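The signup-to-active comparison above can be sketched in a few lines of code. This is a minimal illustration using the hypothetical numbers from the example, not a real analytics pipeline:

```python
# A minimal sketch of the activation-rate comparison above.
# The numbers are the hypothetical ones from the example.

def activation_rate(active_users: int, signups: int) -> float:
    """Active users as a fraction of signups."""
    return active_users / signups

last_month = activation_rate(active_users=1_000, signups=5_000)
this_month = activation_rate(active_users=10_000, signups=200_000)

print(f"Last month: {last_month:.0%}")  # 20%
print(f"This month: {this_month:.0%}")  # 5%

# Active users grew 10x, but the activation *rate* fell from 20% to 5%.
# The ratio reveals a problem the whole number hides.
```

Whole numbers (10,000 active users) look great in isolation; the rate is what tells you acquisition quality dropped.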
A good metric is behaviour changing: This is the most important aspect of a good metric. It leads to what we described in Lean Analytics as the “Golden Rule of Metrics”: if a metric won’t change how you behave, it’s a bad metric.
It’s easy to track many metrics, but it’s much harder to find the right ones to make decisions from. Often we stare at the data we’ve collected, and we don’t know what to do.
The key to being a successful product manager is being able to narrow your focus on the few key metrics that actually matter.
And you know they matter because no matter whether they go up, down, or stay the same, you’re going to take action.
1.1 Case Study: Moz cuts down on metrics to track
Moz is a SaaS company focused on SEO. In May 2012, the company raised $18M and was scaling quickly. Progress was good, but it was getting harder to figure out what was really going on, identify problems, and make changes that actually moved the needle.
To simplify, the company focused on a single metric: Net Adds.
Net Adds represents the number of new customers minus the number of customers who left Moz each day. This became the company’s primary focus, and they got everyone at the company aligned around this single metric.
By itself, Net Adds wasn’t actionable, but it allowed the company to dig into different aspects of the business very quickly to see what was going well and where there were trouble spots.
For example, if Net Adds went up, they would want to figure out why, because something was going well. On the other hand, if Net Adds went down, it meant there was a problem somewhere within the company they had to find immediately.
Focusing on a simple metric that everyone understood (which could be compared over time) allowed the company to ask better questions, dig in faster and ultimately change their behaviour when necessary.
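The Net Adds definition above is simple enough to sketch directly. This is an illustrative example with made-up daily counts, assuming you already track new and churned customers per day:

```python
# A minimal sketch of a daily Net Adds calculation.
# The daily counts below are hypothetical, for illustration only.

new_customers     = [40, 35, 50, 20, 45]  # new customers per day
churned_customers = [10, 12, 55, 18, 11]  # customers lost per day

net_adds = [new - churned
            for new, churned in zip(new_customers, churned_customers)]
print(net_adds)  # [30, 23, -5, 2, 34]

# Net Adds itself isn't actionable, but a negative day is the trigger
# to dig into acquisition and churn separately.
for day, value in enumerate(net_adds, start=1):
    if value < 0:
        print(f"Day {day}: Net Adds negative -- investigate")
```

The metric works exactly as described in the case study: a single understandable number everyone watches, which prompts deeper digging when it moves.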
2. How can you take action as a product manager?
Now that you know what makes a good metric, go back to the data you track and run a quick assessment.
Remember: It’s not really about whether you capture a ton of data or not (by all means, collect lots of data.) It’s all about what you’re focused on right now.
You can quickly evaluate the key metrics you’re looking at by asking these questions:
Do we all understand the key metrics we’re tracking and why? (Again, this is about creating a common language across the company)
Are the metrics we’re focused on comparative and in the form of a ratio or rate? If not, how can we adapt them?
Are the metrics we’re focused on helping us make decisions? If a number goes up, down, or stays the same, what am I going to do about it?
3. Understanding the Different Types of Metrics
In Lean Analytics, we identified and defined different types of metrics to help product managers (and others!) better understand what makes a good metric. Those types include:
Vanity vs. Actionable Metrics
Qualitative vs. Quantitative Metrics
Exploratory vs. Reporting Metrics
Lagging vs. Leading Metrics
We’re going to cover each of these briefly.
3.1 Vanity vs. Actionable Metrics
You are probably all familiar with vanity metrics, but unfortunately, they still sneak into things. Honestly, they’re hard to ignore because they make us feel good. They’re the numbers that go “up and to the right” and give us the impression that things are going well. You can learn more in a newsletter post I wrote, “Vanity Metrics: The Numbers We Hate to Love”.
I’m not going to tell you, as a product manager, not to track vanity metrics. I know you’re going to. And in some cases, your leadership team may want you to track and report on these numbers. That’s not great, but it’s common. Your job, though, even if you track vanity metrics, is to avoid making decisions off those numbers because, ultimately, they are not actionable. It’s very difficult, and quite dangerous, to take action off of vanity metrics.
Actionable metrics are the only ones that change your behaviour and let you truly understand what’s going on.
3.2 Qualitative vs. Quantitative Metrics
Both qualitative and quantitative metrics are important.
Qualitative metrics are unstructured and anecdotal. The information you collect qualitatively is often very revealing but can be hard to aggregate.
Quantitative metrics are the numbers and statistics you track. They represent hard facts but often lead to fewer insights.
Quantitative data tells you WHAT is happening. Qualitative data tells you WHY. You cannot be a great product manager without being great at collecting both.
Collecting qualitative data means being very good at interviewing users & customers to understand their deeper needs and pain points. Even if you have a research team, you, as a product manager, need to be on the front lines talking to customers.
And in fact, I would encourage you to bring everyone along (i.e., designers, developers, quality assurance, etc.) Don’t interview a user with more than 2 or 3 people on your side at a time, but everyone involved in building products should have direct access to users and customers.
One big mistake I see is that product teams stop collecting qualitative data almost as soon as they can collect a lot of quantitative data, as if the quantitative data replaces the qualitative data. It doesn’t. You need both.
Never stop talking to customers. And don’t rely just on your customer success team, either.
3.3 Exploratory vs. Reporting Metrics
Both of these are valuable, but you start with reporting metrics (especially when you don’t have a lot of data.) As you collect more data, you can begin to “explore the data” to uncover things.
Reporting Metrics: These are largely predictable. They keep you abreast of normal, day-to-day operations. You manage these by exception; i.e., if churn suddenly skyrockets from 5% to 20%, you know there’s a problem, alarms should be going off, and it’s all hands on deck.
Exploratory Metrics: These are speculative. The goal is to find unexpected or interesting insights. And over time, this could be a source of unfair advantage.
3.3.1 Case Study: Circle of Friends uses its data to uncover the right target customer
Circle of Friends was a simple idea: a Facebook application that allowed you to organize your friends into circles for targeted content sharing. The company was started in 2007, shortly after Facebook launched its developer platform. The timing was perfect: Facebook became an open, viral place to acquire users quickly and build a startup. There had never been a platform with so many users that was so open (Facebook had about 50 million users at the time.)
By mid-2008, Circle of Friends had 10 million users. By all accounts, this was a huge success. But there was a problem: very few people were using the product. According to the founder, less than 20% of circles had any activity whatsoever after their initial creation. So they knew it would be impossible to build a solid company, despite the millions of users and the hypergrowth they’d seen.
One of the advantages they had was a lot of data. So they started digging in. And they uncovered that a specific customer segment, moms, was hyperactive. By every imaginable metric, moms were using Circle of Friends like crazy.
So the company pivoted, going from Circle of Friends to Circle of Moms. Initially, numbers dropped as a result of the new focus, but by 2009, they had 4.5 million users, most of whom were active.
That’s the power of data and being able to “explore the data” and ask it good questions. In this case, the team found an active customer segment they could build a real business around.
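The kind of exploratory analysis Circle of Friends ran can be sketched as a simple segment-level activity comparison. The segment names and records below are made up for illustration; the point is the technique, grouping users by attribute and comparing activity rates:

```python
# A hedged sketch of exploratory segment analysis: group users by a
# segment attribute and compare activity rates. All data is invented.
from collections import defaultdict

users = [
    {"segment": "moms", "active": True},
    {"segment": "moms", "active": True},
    {"segment": "moms", "active": False},
    {"segment": "students", "active": False},
    {"segment": "students", "active": False},
    {"segment": "other", "active": True},
    {"segment": "other", "active": False},
    {"segment": "other", "active": False},
]

totals = defaultdict(int)
active = defaultdict(int)
for user in users:
    totals[user["segment"]] += 1
    active[user["segment"]] += user["active"]  # bool counts as 0/1

for segment in totals:
    rate = active[segment] / totals[segment]
    print(f"{segment}: {rate:.0%} active")
```

A segment that stands out with a much higher activity rate, as moms did for Circle of Friends, is a candidate to focus (or pivot) the product around.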
I’m a big believer in the power of identifying your best users and focusing more on them, especially in the early days. You can learn more here: 4 Steps to Building Super Sticky Products Leveraging Your Best Users
3.4 Lagging vs. Leading Metrics
Both lagging and leading metrics are important and useful, but ultimately your goal is to find leading ones.
Lagging metrics: These are historical numbers that show you how you’re doing; in effect, they report the news.
Leading metrics: These are numbers today that show you what might (or will likely) happen tomorrow; i.e., they make the news.
Let’s use an example: Customer churn
For this example, let’s define churn as the number of customers that abandon my service each month. So I calculate churn at the end of each month.
This number is helpful, but if I want to act on it, I have to implement changes now and wait at least another month (and likely 2 or 3) to see if they’ve had any impact. That’s a slow learning cycle, which is a problem when you’re trying to move quickly.
How might I identify a predictor of churn? Customer complaints.
Customer complaints can be measured daily (heck, you could measure them in real-time if you wanted, but that’s probably overkill.) If I start to see customer complaints going up over a few days, that’s a good sign something is wrong. I don’t know what’s wrong, just from the customer complaints number, but it allows me to dig in and figure things out.
Maybe we introduced new bugs into the product? Maybe our customer success team is too small, and their response times have slowed? Maybe we had downtime?
One thing I know is likely true: If customer complaints keep going up, churn is also going up. So customer complaints become a leading indicator of churn. And I can react to customer complaint issues very quickly, diagnose the problem, implement a fix (hopefully!), and see the results almost immediately.
When you first start out, you should focus on lagging indicators. Report what’s going on, try your best to diagnose things, and see how your work affects change. But ideally, over time, you’re able to identify the leading metrics or indicators that are going to have an impact on your product and business going forward and react much faster.
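The churn example above can be sketched to show the contrast. The churn formula matches the definition in the text; the complaint-trend check and its thresholds are illustrative assumptions, not a prescribed method:

```python
# Lagging vs. leading: monthly churn rate vs. a rising complaint trend.
# The trend window and sample data are illustrative assumptions.

def monthly_churn_rate(customers_lost: int, customers_at_start: int) -> float:
    """Lagging: only computable after the month is over."""
    return customers_lost / customers_at_start

def complaints_rising(daily_complaints: list[int], window: int = 3) -> bool:
    """Leading: flag when complaints rose on each of the last `window` days."""
    recent = daily_complaints[-(window + 1):]
    return all(later > earlier for earlier, later in zip(recent, recent[1:]))

print(monthly_churn_rate(50, 1_000))           # 0.05 -> known a month later
print(complaints_rising([4, 5, 3, 6, 9, 14]))  # True -> dig in today
```

The lagging number tells you what already happened; the leading flag fires within days, so you can diagnose and fix before the churn number moves.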
4. The 5 steps to take now as a product manager
As a product manager, your job is to understand what is going on with the product (and, by extension, the business), identify “hot spots” (areas where there are issues), and figure out how to solve them. You are a decision maker and a coordinator/facilitator between all the other people (including design, engineering, QA, marketing, sales, management, etc.)
A big part of your job is figuring out what metrics to focus on and ensuring you’re maximizing the value of analytics. With the topics covered today, I would recommend you do the following:
Revisit all the data you’re tracking and see if the metrics meet the criteria of a “good metric” as defined above. Focus most of your attention on the key metrics you’re focused on (i.e., you may be collecting a bunch of data that doesn’t fit the “good metric” criteria above, and that’s OK.)
Update your dashboard(s) (if necessary). Most companies have 1 key dashboard that tracks the most important metrics for the company. Sometimes there are a couple of dashboards and some reports. Review these and make sure that all of the metrics are good ones. If they’re not, it might warrant a conversation with others at the company to see if you can get a better alignment on more useful metrics.
Identify any vanity metrics. I won’t tell you to stop tracking vanity metrics, but how important are they in your organization? Does management expect you to report on these numbers? That might warrant a conversation. At a minimum, identify the vanity metrics so you can be careful about how you use them.
Identify leading and lagging indicators. Both leading and lagging indicators are worthwhile, but it’s helpful to know which is which because it changes how you use these numbers. If you find yourself making most of your decisions from lagging indicators (which is quite common), you may need to have a brainstorming session with your team on how you could find the right leading metrics.
Identify the biggest challenges you’re facing and see if you have data that can help. Product managers are problem solvers. If you know what problem you’re looking to solve, you may be able to figure out if you’re collecting data (exploratory metrics) that can help. If you’re not, it might be too early (i.e., you just don’t have enough data), or it might be a sign that you’re not tracking enough things or the right things to eventually be able to use your data successfully to uncover insights. Don’t try to boil the ocean; find one problem that matters and dig deep into it.
Data is not a panacea. It’s important. Without it, you will be flying blind, but you can easily be led astray by bad data. Or you might give up all decision-making to it, which isn’t the answer. Your instincts are important, too (so are those of the team you work with).
It’s impossible to make every decision exclusively from data. In the absence of data, you need to do something. Otherwise, you’ll be stalled or caught in analysis paralysis.
So have a hypothesis, or take a guess. Run some type of experiment off your gut. But then measure the results.
Thanks for reading The Product Compass!
If you find this newsletter valuable, share it with a friend, and consider subscribing if you haven’t already.
Let’s learn and grow together 🚀
Take care, Paweł
Thank you for allowing me to write this guest post, Paweł.
If any readers have questions -- ask away! I'm happy to jump in and answer.