“Do the metrics you’re using today actually address the challenges you just identified?”

Moments earlier, a group of senior innovation leaders had listed reasons why measuring innovation is problematic. But when we asked whether their current metrics and dashboards addressed those problems, the answer was simple: not really.

And that’s the interesting part. Innovation metrics have been hotly debated for decades. I entered the field in 2010, and it was already a standing-room-only topic at conferences. It still is. Which raises a fair question: if we’ve been talking about this for so long, why does it remain unresolved?

SmartOrg workshop attendees discuss innovation metrics that matter. (Frost & Sullivan conference, SmartOrg breakout workshop, Sunnyvale, February 2026)

That was the starting point for our recent workshop at the Frost & Sullivan Innovation Workshop and Tour in Sunnyvale, California. The question we posed to participants was this: Why is measuring innovation problematic? Their responses generally fell into five key themes:

  1. Misaligned time horizons: The business focuses on short-term financial cycles, but innovation—especially breakthroughs—can take years to achieve.  
  2. Vanity metrics: Most innovation dashboards are designed as roll-ups of various activities that are easy to count and where more is better. These so-called “vanity” metrics contribute to innovation theater, where there is an appearance that good things are happening but no demonstrated connection to innovation success.  
  3. Metric gaming: Goodhart’s Law states that “When a measure becomes a target, it ceases to be a good measure.” That is, metrics don’t just measure behavior, they shape behavior—which can lead to unintended consequences. 
  4. Attribution complexity: Rarely does a single person or team deliver an innovation in an easily defined timeframe. Involvement of multiple stakeholders, cross-functional contributions, external partners, and the evolution of an idea into an innovation all contribute to measurement complexity. 
  5. Execution bias: Too often the focus is on throughput and output, which promotes execution of tasks over exploration of opportunity—a critical step when it comes to highly uncertain innovation opportunities.  

Measuring innovation remains a persistent issue, not because it’s a math problem, but because it’s an organizational one. It’s a leadership system challenge—one that no individual innovator can fix alone.

But that doesn’t mean we shouldn’t try. After exploring existing metrics and confirming that they don’t fully address these challenges, we decided to take a step toward improvement. Participants separated into four groups with the following task: pick a problem area and give it a name; define why it’s problematic; and suggest a few metrics to try that might make a difference. In essence, they set out to identify metrics that truly matter.

Each group shared a short read-out, captured below. Here’s a brief synopsis before you watch.

Elaine’s group tackled the perennial challenge of ROI. They argued the real issue isn’t a lack of metrics; it’s a lack of shared language, attribution clarity, and baseline discipline before pilots even begin.

Jinesh’s group focused on the acceleration phase, distinguishing between outputs and outcomes, and challenging the idea that impact equals revenue. They wrestled with whether a “holistic metric” is even practical in a non-linear world.

Shilpa’s group pushed the conversation upstream, arguing that before we measure innovation, we need to ensure we’re solving the right-sized problem. They also challenged the assumption that B2B and B2C efforts should be evaluated using the same yardstick.

Dianne’s group asked a blunt question: If innovation doesn’t move the needle for shareholders, does it matter? They explored the tension between R&D confidence and financial reporting—and why a $300M opportunity can be transformational for one company and invisible to another. 

It will surprise no one to learn that we did not solve 20 years of uncertainty around measuring innovation in a one-hour workshop. From our perspective, though, the conclusion is clear: how well you are “doing” innovation isn’t defined solely by the outputs or outcomes of your innovation program. And it’s certainly not measured by counting up all the activities people have completed.

Any set of metrics that matter must include the ambiguous parts of the innovation process, too. That would include opportunity exploration, hypothesis testing, business case modeling, experimenting, risk reduction, upside expansion—all the learning that takes place during the messy middle of the innovation process.

By measuring what matters, you will actually understand what’s working in your innovation program and what needs improvement. Isn’t that the point of an innovation dashboard?