In the same way that under-investment in technical rigor can have long-lasting negative effects on a codebase when left unchecked, under-investment in metrics can have long-lasting negative effects on your product organization.
At Quint Growth we work with companies every day to improve how they measure, understand, and act on data. We review many metrics setups and talk to companies about how they approach the problem of understanding and optimizing growth. We find that the key questions companies need answered in order to make important product and business decisions are often either poorly understood or require a significant amount of work to answer. This doesn’t vary by company stage; we’ve seen it just as much in small companies as in large, successful ones. I’ve never spoken to a single company that has complained about over-investing in metrics — it’s chronically under-invested in by all post-product-market-fit companies.
There are a number of effects that under-investment in metrics causes:
- Ad hoc analyses are extremely slow or impossible.
When PMs and executives ask questions that need to be answered by data, under-investment in metrics cripples your team. Does your data team need to spend days in SQL aggregating the right data to answer business questions? Do your engineers get called on often to export data and manually process it from your 3rd party metrics provider? Does your data science team often run into inconclusive results? These are all expensive costs imposed by under-investment in metrics.
- Analyses are wrong.
A poorly implemented metrics solution is often worse than no metrics solution at all. Metrics that are inconsistent or have errors lead either to decision making based on false data that people believe is true, or to distrust of data that is actually accurate. When I worked at Swipely, our CEO/Founder Angus Davis would always use a flying-instruments analogy for metrics: if you’re flying and an instrument is wrong, you cover it up so it doesn’t affect your decision making in the air. Similarly, if your metrics are inconsistent in little ways, cover them up, or “land” and fix them.
- Product managers and executives don’t ask questions they would otherwise ask.
This is the silent, unseen killer. Metrics follow the law of supply and demand just like anything else: if you lower the cost to consume them, consumption goes up. Poorly implemented metrics are “costly” to use and see lower consumption as a result. When PMs and management can’t measure things effectively, they are reduced to guessing. An efficient product organization does not base key decisions on uninformed guessing.
In order to diagnose whether you are under-invested in metrics, let’s briefly cover the key features of a “good” metrics system:
- User behavior is tracked on a per-session basis and what exactly led to each session can easily be identified.
You should be able to answer, on a per-user basis, questions like: Did a user come from email? Did they come in organically? via Google? via a Facebook ad? If they came via a Facebook ad what campaign was it? How many times did the user visit before they signed up? before they purchased? You should be able to answer these questions on both mobile and web if your product spans both devices.
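As a concrete sketch of what per-session attribution involves, the function below classifies how a session started from its landing URL and referrer. The function name and the channel labels are illustrative, not from any particular analytics library:

```python
# Minimal per-session attribution sketch: classify a session as tagged
# (ads, email campaigns), referral (e.g. organic search), or direct.
from urllib.parse import urlparse, parse_qs

def session_source(landing_url, referrer=""):
    """Derive where a session came from, down to the campaign if tagged."""
    params = {k: v[0] for k, v in parse_qs(urlparse(landing_url).query).items()}
    if "utm_source" in params:
        # Tagged traffic: a Facebook ad would arrive with utm_* parameters.
        return {
            "channel": params.get("utm_medium", "unknown"),
            "source": params["utm_source"],
            "campaign": params.get("utm_campaign"),
        }
    if referrer:
        # Untagged but referred traffic, e.g. an organic Google search.
        return {"channel": "referral", "source": urlparse(referrer).netloc, "campaign": None}
    return {"channel": "direct", "source": None, "campaign": None}
```

Stored on every session, a record like this lets you answer “what campaign was it?” per user without an export step.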
- All key events that a user can do on each page in your site/app are tracked and all user-level meta-properties on those events are properly attributed.
You should be recording as much user-level metadata as possible given the current context. Some examples include: login_type (Facebook or Email), age, gender, original_referrer, paid_user (originally came from paid traffic), city, country, phone_type, carrier, wifi_on, etc. Track anything that might cause user behavior to differ significantly, whenever possible.
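One simple pattern for attaching those meta-properties is to merge a user-level property set into every event at build time, so any event can later be segmented by them. The property values and function name below are made up for illustration:

```python
# User-level meta-properties recorded once, stamped onto every event.
# Values here are examples only.
USER_PROPERTIES = {
    "login_type": "facebook",
    "paid_user": True,
    "country": "US",
    "phone_type": "iphone",
}

def build_event(name, properties=None):
    """Merge user-level context into an event payload before sending."""
    event = dict(USER_PROPERTIES)   # user-level context first
    event.update(properties or {})  # event-specific fields win on conflict
    event["event"] = name
    return event
```

With this in place, a question like “do paid users purchase more?” becomes a filter on `paid_user` rather than a data-engineering project.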
- Pre-login behavior is tied to post-login behavior and behavior is tied between devices as much as possible.
You need to have a solid understanding of what is going on both before the user signs in and after. Unfortunately most platforms (e.g. Mixpanel, Amplitude) only support single-ID aliasing, which makes multi-computer or web-and-mobile behavior difficult to understand, but you should at least understand what the user is doing both pre- and post-login on a single device.
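The single-ID aliasing these platforms do support works roughly like the toy identity map below: pre-login events are keyed by an anonymous device ID, which is linked to the user ID at signup. This is an illustrative sketch, not any platform’s actual implementation:

```python
# Toy single-ID aliasing: anonymous pre-login IDs resolve to the user ID
# once the two are linked at signup.
class IdentityMap:
    def __init__(self):
        self.aliases = {}  # anonymous_id -> user_id

    def alias(self, anonymous_id, user_id):
        """Called once at signup to tie pre-login history to the new user."""
        self.aliases[anonymous_id] = user_id

    def resolve(self, event_id):
        # Follow the alias if one exists; otherwise the ID stands alone.
        return self.aliases.get(event_id, event_id)
```

The limitation follows directly from the shape of the map: one anonymous ID resolves to one user ID, so a second device’s anonymous history has nowhere to attach.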
- Events are coherently named, and you have a list with a textual description of what each event represents.
Even well instrumented metrics systems can be a pain to use if events are poorly named. You should have names that are over-descriptive rather than under-descriptive. If an average user of your product looked at your metrics event names they should be able to understand what they mean. There should always be a reference document with all metrics events and a detailed description of what each event name represents.
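One lightweight way to enforce both rules at once is a single event registry that doubles as the reference document, with unknown names rejected at track time. The event names and descriptions below are invented examples:

```python
# The registry is the reference document: every event name, with a
# plain-English description of what it represents.
EVENTS = {
    "Signup Form Viewed": "User landed on the signup page.",
    "Signup Completed": "User finished creating an account.",
    "Purchase Completed": "User paid; fires once per successful charge.",
}

def track(name, properties=None):
    """Build a tracked event, refusing names missing from the registry."""
    if name not in EVENTS:
        # Fail loudly instead of silently polluting the data set.
        raise ValueError(f"Unregistered event: {name!r}")
    return {"event": name, "properties": properties or {}}
```

Because every new event has to land in `EVENTS` to ship, the reference document can never drift out of date.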
- Metrics development keeps up with product development.
Features should not get released without instrumentation. Lack of instrumentation should make a feature un-shippable. Features without metrics are broken.
- All metrics data is recorded into a raw data-store (e.g. Amazon Redshift) from which you can easily run ad-hoc analyses or verify 3rd party analytics reporting.
All 3rd party analytics systems are insufficient and too inflexible for your needs. At some point you will want to answer ad hoc metrics questions and to do so you will need to access all your data. Amazon Redshift is a cheap and fast solution for this. I can’t recommend it enough.
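To make the ad-hoc case concrete, here is the kind of question a raw event store answers in one query. SQLite stands in for Redshift below so the example is self-contained; the schema and query shape are illustrative but carry over:

```python
# Ad-hoc analysis against a raw event store (sqlite3 standing in for
# Redshift): of the users who signed up, how many later purchased?
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id TEXT, event TEXT, ts TEXT)")
conn.executemany("INSERT INTO events VALUES (?, ?, ?)", [
    ("u1", "Signup Completed", "2015-01-01"),
    ("u1", "Purchase Completed", "2015-01-03"),
    ("u2", "Signup Completed", "2015-01-02"),
])

converted = conn.execute("""
    SELECT COUNT(DISTINCT s.user_id)
    FROM events s
    JOIN events p ON p.user_id = s.user_id
    WHERE s.event = 'Signup Completed'
      AND p.event = 'Purchase Completed'
""").fetchone()[0]
```

A join like this is a one-liner against your own warehouse; against a 3rd party dashboard it often means exports and manual spreadsheet work.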
Don’t fall prey to the hidden killer of metrics debt. If you are pre-product-market-fit go talk to customers. If you are post-product-market-fit invest early and often in metrics. If you re-prioritize product feature initiatives over metrics initiatives for too long it will come back and bite you in the ass.
Discussion on Hacker News here.