
How to Evolve Your Startup's Data Strategy and Identify Critical Metrics

Avoid these common reporting mistakes that can affect your most important metrics.

By Michael Perez, M13 Team
May 26, 2022 | 6 min

Photo: Cherrydeck/Unsplash

DTC companies generate a wealth of raw transactional data that needs to be refined into metrics and dimensions that founders and operators can interpret on a dashboard.

If you’re the founder of an e-commerce startup, there’s a pretty good chance you’re using a platform like Shopify, BigCommerce, or WooCommerce, along with one of the dozens of analytics extensions like RetentionX, Sensai Metrics, or ProfitWell that provide off-the-shelf reporting.

[Image: Examples of off-the-shelf analytics dashboards]

At a high level, these tools are excellent for helping you understand what’s happening in your business. In our experience as founders and operators, you’ll inevitably find yourself asking questions that your off-the-shelf extensions simply can’t answer. Here are common problems that you or your data team may encounter with off-the-shelf dashboards:

Charts are typically based on a few standard dimensions, so they don’t provide enough flexibility to examine a given segment from every angle needed to fully understand it.

Dashboards have calculation errors that are impossible to fix. It’s not uncommon for off-the-shelf dashboards to report the pre-discounted retail amount for orders where promo codes were applied at checkout. In the worst cases, this can lead unsuspecting founders to drastically overestimate their customer lifetime value (LTV) and overspend on marketing campaigns with low returns.
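As a toy illustration of that failure mode (the order data and field names here are made up, not from any specific tool), here’s how reporting pre-discount retail amounts overstates revenue and, by extension, LTV:

```python
# Hypothetical orders: retail totals vs. what the customer actually paid
# after promo codes were applied at checkout.
orders = [
    {"customer": "c1", "retail_total": 100.0, "discount": 30.0},
    {"customer": "c1", "retail_total": 80.0,  "discount": 0.0},
    {"customer": "c2", "retail_total": 50.0,  "discount": 25.0},
]

# What a naive dashboard reports (pre-discount retail amounts).
naive_revenue = sum(o["retail_total"] for o in orders)

# What the business actually collected.
actual_revenue = sum(o["retail_total"] - o["discount"] for o in orders)

print(naive_revenue)   # 230.0
print(actual_revenue)  # 175.0, so the naive figure overstates revenue by ~31%
```

Any LTV built on the naive figure inherits that same overstatement, which is exactly what pushes marketing spend past its true break-even point.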

Even in the best of cases, when founders are fully aware of the shortcomings of their data, they can find it difficult to take decisive action with confidence.

Pro Tip

We’re generally big fans of plug-and-play business intelligence tools, but they won’t scale with your business. Don’t get stuck relying on them past the point at which you’ve outgrown them.

How to evolve your startup’s data strategy

Building a data stack costs far fewer resources than it did a decade ago. As a result, many businesses are doing so earlier and earlier, and harnessing the compounding value of these insights earlier in their journey—but it’s no trivial task. For early-stage founders, the opportunity cost for any big project is immense. Many early-stage companies find themselves in an uncomfortable situation—they feel paralyzed by a lack of high-fidelity data. They need better business intelligence (BI) to become data-driven, but they don’t have the resources that they need to manage and execute the project.

This leaves founders with a few options:

Hire a seasoned data leader in a competitive market for talent

Hire a junior data professional and supplement them with experienced consultants

Hire and manage experienced consultants directly

All of these options have merits and drawbacks. Any of these options can be executed well or poorly. Many companies delay building a data warehouse because of the cost of getting it right, or the fear of messing it up. Both are valid concerns!

At M13, we have data-focused operators who can help companies reason through these difficult decisions, based on the specifics of their business. We’ve also developed frameworks and resources to help early-stage DTC companies do more with less, avoid expensive missteps, and climb up the growth curve faster.

Start by identifying your critical metrics

Our Retail Modeling Checklist is a simple but effective resource to help stakeholders align on the definitions of critical enterprise metrics, such as net revenue and gross margin. The checklist should guide the early discussions and discovery into edge cases that materially affect the critical metrics.

Retail Modeling Checklist

M13 uses the Retail Modeling Checklist to find and edit the sections of the SQL template that need to be adapted to each company’s specific definitions.

Even if you’re starting from scratch without any SQL templates, this checklist can help ensure that your SQL developer is aligned with the stakeholders who will be consuming the data.

Defining enterprise metrics is a critical step that’s often overlooked because the definitions seem obvious. Most of your employees are familiar with these enterprise metrics at a surface level, but that doesn’t mean they fully understand them. The details matter! If you asked your employees these questions, would they all give the same answer?

“Is the price of shipping included in gross revenue?”

“When do gift card sales count toward revenue—at the time of sale or redemption?”

In many organizations, employees don’t give consistent answers because:

The metrics have never been explicitly defined

There was no concerted effort to educate employees on their definitions

Data literacy is often conflated with analytical aptitude, but they aren’t the same thing. Even analytically savvy employees are likely to have data-literacy blind spots that they aren’t aware of. In many cases, your data-driven employees will be most affected by data illiteracy, because they’ll be the ones consuming data, generating insights, and making decisions—all without realizing what they don’t know.

Why data details matter

These blind spots in data literacy cause errors in interpretation. They’re small at first, so it’s tempting to sweep them under the rug. A known error might be ignored because it only causes a couple of percentage points of error at an aggregate level. This reasoning overlooks the fact that the error is rarely distributed evenly. Any error is bound to affect some customers, products, or geographies more than others. It’s common to have small errors in both directions that can wash out on average but amplify each other at a more granular level.
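A minimal sketch with made-up numbers shows the point: two errors that cancel perfectly in aggregate can still flip a segment-level comparison.

```python
# Toy example: reported vs. true revenue for two product segments.
# A +5 error in segment A and a -5 error in segment B cancel in aggregate
# but reverse the apparent ranking of the segments.
true_revenue = {"A": 100.0, "B": 104.0}
reported     = {"A": 105.0, "B": 99.0}   # small errors in opposite directions

# Aggregate looks fine: the totals match exactly (both 204.0).
assert sum(true_revenue.values()) == sum(reported.values())

# But the relative comparison is now wrong: B truly outsells A,
# while the report says the opposite.
true_winner = max(true_revenue, key=true_revenue.get)
reported_winner = max(reported, key=reported.get)
print(true_winner, reported_winner)  # B A
```

A top-line revenue check would pass here, which is why aggregate-level validation alone can lull a team into trusting segment-level numbers it shouldn’t.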

Most important operational decisions are made on a relative basis—not an aggregate level. When deciding which products to prune, or which marketing strategies to double down on, you’re generally making comparisons between dozens or hundreds of observations, not thousands.

Questions that deal in relative comparisons are subject to greater error:

“How well is product A selling relative to B?”

“How much higher was LTV for customer segment C vs. segment D last month?”

Be aware of these subtle risks:

The more finely you slice your data, the greater the error grows relative to the signal.

The more comparisons you make, the more likely it is that the biggest differences are being amplified by noise rather than a true signal.
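A small, illustrative simulation (the products, rates, and traffic numbers are hypothetical) makes the second risk concrete: even when every product has the same true conversion rate, the biggest observed differences are pure noise.

```python
import random

# Toy simulation: 200 products with *identical* true conversion rates.
# Pure sampling noise still makes some products look much "better" than
# others, and the more products you compare, the wider the spread.
random.seed(7)
true_rate = 0.05
visitors_per_product = 500

observed = []
for _ in range(200):
    conversions = sum(random.random() < true_rate
                      for _ in range(visitors_per_product))
    observed.append(conversions / visitors_per_product)

# Every true rate is 0.05, yet the observed best and worst differ noticeably.
spread = max(observed) - min(observed)
print(f"best={max(observed):.3f} worst={min(observed):.3f} spread={spread:.3f}")
assert spread > 0  # the "biggest difference" here is entirely noise
```

Ranking these 200 products and pruning the bottom performers would be acting on noise, not signal.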

Noise is only part of the problem. There are also cases in which the error creates sustained bias. Organizations that gloss over the details of their enterprise metrics risk making egregiously bad decisions without even realizing it.

Imagine an e-commerce company that failed to consider gift card purchases in its definition of gross revenue. If gift card purchases are treated the same as other purchases, they’ll typically be double-counted toward revenue. Off-the-shelf reporting tools—Facebook and Google Ads included—typically count gift card purchases as revenue. Then, when the gift cards are redeemed as a payment method, they count toward revenue again, resulting in inflated LTVs and unrealistically attractive cost-per-action (CPA) figures.

Even companies that have correctly anticipated this issue can fall victim to more subtle issues. Many companies don’t recognize revenue for gift card purchases until the gift cards are redeemed. If a marketer uses gross revenue to measure the results of a holiday gifting campaign that yielded a large uptick in gift card purchases, they may write the campaign off as a failure prematurely. The same marketer may have an inflated opinion of the lower-funnel paid marketing campaigns that ran in January, when the gift card recipients spent their balances.
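The two recognition policies can be sketched like this (the events and the helper function are illustrative, not from any accounting standard):

```python
# Hypothetical events: a gift card sold in December, redeemed in January.
# Two recognition policies give different monthly revenue pictures.
events = [
    {"month": "Dec", "type": "gift_card_sale",       "amount": 50.0},
    {"month": "Dec", "type": "product_sale",         "amount": 100.0},
    {"month": "Jan", "type": "gift_card_redemption", "amount": 50.0},
]

def revenue_by_month(events, recognize_at):
    """recognize_at='sale' counts gift cards when sold;
    'redemption' counts them when redeemed."""
    counted = {"sale":       {"product_sale", "gift_card_sale"},
               "redemption": {"product_sale", "gift_card_redemption"}}[recognize_at]
    out = {}
    for e in events:
        if e["type"] in counted:
            out[e["month"]] = out.get(e["month"], 0.0) + e["amount"]
    return out

print(revenue_by_month(events, "sale"))        # {'Dec': 150.0}
print(revenue_by_month(events, "redemption"))  # {'Dec': 100.0, 'Jan': 50.0}
```

Both policies total 150.0, which is correct; counting *both* the sale and the redemption is the double-counting trap, yielding 200.0. The marketer in the example above is using the redemption policy, so the December gifting campaign looks weak and January looks strong.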

Not all issues are so subtle. A lack of data governance and data literacy can cause avoidable headaches among your organization’s leaders.

Companies that skip the definition and communication steps often have multiple different versions of the “same” metric floating around on different dashboards.

You don’t want to end up in a situation where your finance dashboards and your e-commerce dashboards show inconsistent week-over-week revenue growth and your senior leaders don’t know why! These misunderstandings cause friction in the form of wasted time and decreased trust. They also make it more difficult for employees across teams to collaborate effectively.

If this sounds like an eerily familiar scenario, you’re not alone. Seasoned data leaders should have strong opinions on the best ways to mitigate these issues, but they’ll need buy-in from founders and other leaders to invest in the organizational overhead required to create a data-driven company.

Two common data mistakes to avoid

We’ve seen many companies embark on their first big data project, only to skip some critically important steps and immediately start creating tech debt. Oftentimes, these projects start with an innocuous request like “replicate this RetentionX dashboard in Looker.”

Many novice engineers or contractors make the mistake of focusing on short-term deliverables at the expense of a scalable architecture.

We’ve seen a few versions of this mistake. Generally, the issues begin when:

1. The metrics or dimensions are created too coarsely

2. The metrics are created too far downstream

1. What happens when metrics are created at the wrong level of granularity (i.e. grain)

The grain matters because it’s always possible to aggregate a metric to a higher level (i.e. coarser grain) downstream, but you can’t split it up into a lower level (i.e. finer grain) than it was originally created. If you create gross revenue at an order grain, it’s very easy to aggregate it to a customer grain and measure average gross revenue LTV by cohort, but it’ll be impossible to measure what percentage of gross revenue is attributable to any given product. That’s because products exist at the order line grain, which is a finer grain than order.

Don’t end up in a situation where you can’t split up your revenue by product types!
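A quick sketch of why the grain matters, using made-up order-line data: a metric created at the finest grain rolls up to any coarser grain, but never the other way around.

```python
# Hypothetical order lines (the finest grain): one row per product per order.
order_lines = [
    {"order_id": 1, "customer": "c1", "product": "shirt", "gross_revenue": 40.0},
    {"order_id": 1, "customer": "c1", "product": "hat",   "gross_revenue": 20.0},
    {"order_id": 2, "customer": "c2", "product": "shirt", "gross_revenue": 40.0},
]

def rollup(rows, key):
    """Aggregate a line-grain metric up to any coarser grain."""
    out = {}
    for r in rows:
        out[r[key]] = out.get(r[key], 0.0) + r["gross_revenue"]
    return out

# From the line grain you can roll up to order, customer, or product grain:
print(rollup(order_lines, "order_id"))  # {1: 60.0, 2: 40.0}
print(rollup(order_lines, "product"))   # {'shirt': 80.0, 'hat': 20.0}

# Had gross revenue been created at the order grain ({1: 60.0, 2: 40.0}),
# the shirt-vs-hat split above would be unrecoverable.
```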

Many companies make this mistake once, then “fix” the issue by copying and pasting their gross revenue calculation in multiple places, repeating it throughout their codebase. This is an anti-pattern that’s guaranteed to cause bugs down the road because metric definitions are never set in stone. They’re constantly being reevaluated and updated based on changes to the business.

Imagine that your company starts taking backorders for products that are out of stock, and you need to update your gross revenue definition with new logic. An engineer will struggle to make this update if their architecture has gross revenue calculations copied and pasted multiple times throughout their codebase and their BI tool. They’ll also find it difficult to test each occurrence for accuracy.
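The usual remedy is to define the metric once and reference that single definition everywhere. Here’s an illustrative sketch (the backorder rule and field names are hypothetical):

```python
# Single source of truth: one function defines gross revenue at the
# order-line grain, so a business change (e.g. backorders) is a one-place edit.
def gross_revenue(line):
    """Hypothetical rule: exclude backordered lines until they ship."""
    if line.get("status") == "backordered":
        return 0.0
    return line["price"] * line["quantity"]

lines = [
    {"price": 25.0, "quantity": 2, "status": "shipped"},
    {"price": 10.0, "quantity": 1, "status": "backordered"},
]

# Every report calls the same function instead of re-deriving the formula.
total = sum(gross_revenue(l) for l in lines)
print(total)  # 50.0
```

With copy-pasted calculations, the backorder rule would have to be found, edited, and re-tested in every copy; here it lives in exactly one place.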

2. What happens when an engineer creates metrics too far downstream

BI tools like Looker make it easy to reference your raw data directly and start creating metrics and dimensions in their proprietary web user interfaces (UIs) and languages. But just because you can use LookML, Looker’s proprietary language, to create your enterprise business metrics doesn’t mean you should.

[Diagram: side-by-side comparison of databases and data destinations. Source: M13]

When you create critical metrics in Looker and make that your source of truth, it’s hard to get the truth out. Data warehouses support a robust set of integrations, but BI tools don’t. Teams that make this mistake typically create data silos or brittle integrations. Avoid this by creating your enterprise metrics in your data warehouse and sending them to your BI tools and other applications.

In general, you’ll want enterprise metrics to be defined as far upstream as possible, so they can be referenced by any software application, vendor, or internal use case.
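As a sketch of this pattern (using SQLite as a stand-in for a real warehouse, with made-up tables and columns), the metric is defined once as a warehouse view, and every downstream consumer queries that view:

```python
import sqlite3

# Define the enterprise metric once, as a view in the "warehouse", so every
# downstream tool queries the same definition instead of re-implementing it
# in a BI tool's proprietary language.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE order_lines (order_id INT, price REAL, quantity INT, discount REAL);
    INSERT INTO order_lines VALUES (1, 40.0, 2, 10.0), (2, 25.0, 1, 0.0);

    -- Single upstream definition of gross revenue.
    CREATE VIEW gross_revenue_by_order AS
    SELECT order_id, SUM(price * quantity - discount) AS gross_revenue
    FROM order_lines
    GROUP BY order_id;
""")

# Any consumer (BI tool, vendor export, internal app) reads the same view.
rows = con.execute(
    "SELECT order_id, gross_revenue FROM gross_revenue_by_order ORDER BY order_id"
).fetchall()
print(rows)  # [(1, 70.0), (2, 25.0)]
```

When the definition changes, only the view changes; every dashboard and integration downstream picks up the new logic automatically.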

You also don’t want to be stuck in a situation where your enterprise metrics are defined in someone else’s proprietary coding language, leaving you very little leverage when negotiating your next contract.

Takeaways & next steps

If we can leave you with one takeaway, it’s that many common issues that lead to tech debt are avoidable if you have the right resources and practices. Non-technical early-stage founders can’t be expected to see every potential issue ahead of time; instead, they should seek advice from experienced practitioners. Advisors can be mentors, employees, former colleagues, or even investors. At M13, our Propulsion team of vertically focused operators is committed to advising our founders in many key domains, including data.

If you’re a data professional looking for a new challenge at an early-stage startup with the support of M13’s venture engine, apply below.


