Everyone's Measuring Developer Productivity. Almost No One Knows What It Means
Someone once asked me: “Our builds take 45 minutes. How do we make them faster?”
I asked, “Why do they take 45 minutes?”
They looked at me like I’d grown a second head.
This is the problem with developer productivity metrics today. Everyone’s measuring. Almost no one’s understanding.
The Dashboard Problem
Walk into any tech company in 2025 and you’ll find the same thing: DORA metrics on dashboards. SPACE frameworks in quarterly reviews. Cycle time tracking in sprint retros. Engineering leaders are drowning in data.
But here’s what I’ve seen after 15+ years in this space: teams are treating these metrics like scorecards instead of diagnostics. They’re cargo-culting metrics because “that’s what Google does” or “that’s what the McKinsey report said,” without understanding why those metrics matter for their specific context.
The villain here isn’t the metrics themselves. DORA is great. SPACE is thoughtful. The problem is using them as endpoints instead of starting points.
Metrics tell you WHERE to look, not WHAT to do.
Measure → Understand → Correlate
Here’s the framework I use when working with teams on developer productivity:
1. MEASURE: Establish your baseline
You need to know where you are. Pick metrics that matter to your business. Track them consistently. This is the easy part—most teams stop here.
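A baseline doesn’t need fancy tooling to start. Here’s a minimal sketch of what “establish your baseline” can mean in practice—the records and numbers below are made up for illustration, and in a real setup they’d come from your VCS and deploy pipeline:

```python
from datetime import datetime, timedelta
from statistics import median

# Hypothetical change records: (work started, change deployed).
# Real data would come from your VCS and deployment tooling.
changes = [
    (datetime(2025, 3, 1, 9, 0), datetime(2025, 3, 2, 14, 0)),
    (datetime(2025, 3, 3, 10, 0), datetime(2025, 3, 3, 16, 30)),
    (datetime(2025, 3, 4, 8, 0), datetime(2025, 3, 6, 11, 0)),
]

def baseline_cycle_time_hours(records):
    """Median hours from work started to deployed -- a baseline, not a verdict."""
    return median((done - started) / timedelta(hours=1) for started, done in records)

print(f"median cycle time: {baseline_cycle_time_hours(changes):.1f}h")
```

The median (rather than the mean) is a deliberate choice here: one monster change shouldn’t define your baseline. But remember—this number tells you where you are, not whether that’s good.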
2. UNDERSTAND: Ask why it’s that way
This is where the real work begins. Your deployment takes 2 hours. Okay, why?
Are you doing gradual rollouts to 1% of traffic at a time?
Do you have 100 million users where an outage would be catastrophic?
Are you running extensive integration tests against external services?
Is there a manual approval gate because of regulatory requirements?
Maybe 2 hours isn’t a problem. Maybe 2 hours is exactly what smart looks like for your situation.
I’ve seen teams obsess over “slow” CI pipelines that take 30 minutes, only to discover those 30 minutes include comprehensive test coverage that prevented multiple production incidents. Making it faster by cutting tests would be optimization in the wrong direction.
3. CORRELATE: Connect to outcomes and human reality
The final step: does this metric actually connect to something you care about?
Fast deployments are great—unless they’re fast because you removed the safety checks. High commit frequency looks impressive—until you realize it’s because someone’s force-pushing to fix their own mistakes.
And here’s the part most frameworks miss: we aren’t computers. You can’t throw more megahertz at people and expect linear improvements. Life happens. Context switching has real costs. Developer happiness matters because happiness often leads to productivity.
The Human Element: What GitHub Gets Right
GitHub is one of the companies doing this well. They don’t just measure velocity or throughput—they actively track and care about developer happiness. Why? Because they understand that happy developers are more productive developers. Most of the time.
This is your differentiator as an engineering leader: understanding that the human element makes measuring developer productivity incredibly difficult. We’re not machines. You can’t just add more resources and expect proportional output.
But there are things you can measure and understand. And one of the most powerful metrics? Just asking people.
How do you feel about your ability to get work done?
What’s blocking you?
What would make your day better?
Sometimes the best metric is a conversation.
What Understanding Actually Looks Like
When you look at your metrics, ask yourself these questions:
Why is this metric what it is?
Not “is it good or bad” but “what factors contribute to this number?”
What would “better” actually mean for our business?
Faster isn’t always better. Sometimes slower and safer wins.
What are we optimizing for?
Speed? Safety? Consistency? Developer experience? You can’t optimize for everything at once.
Does this metric connect to actual outcomes we care about?
Are you measuring things that matter, or things that are easy to measure?
Have we asked the humans involved?
Your developers know where the friction is. Have you asked them?
The Real Work Begins
Here’s the truth: measuring is easy. Understanding is hard.
That’s why this field is growing. That’s why companies like Stripe, Microsoft, and GitHub invest heavily in developer productivity teams. They know that the numbers alone don’t tell the story—understanding the meaning behind them is where the real value lives.
Don’t just look at your metrics and wonder if they’re “good enough.” Look at them and ask “what are these telling me about how work gets done here?”
The difference between those two questions is the difference between vanity metrics and actual insight.
Let’s Figure This Out Together
This is a growing field, and honestly, we’re all still figuring it out. The frameworks help, but they’re not prescriptive solutions—they’re tools for asking better questions.
If you’re wrestling with this at your company—if you’re tracking metrics but not sure you’re learning anything useful—drop me a note. I’d love to compare notes. I’ve been thinking about this stuff for 15+ years, and I’m always learning from what others are seeing.
Because at the end of the day, the goal isn’t to stack rank developers or hit arbitrary targets. The goal is to identify papercuts, reduce friction, and help people do their best work.
And you can’t do that with numbers alone. You need understanding.
---
Matt Gardner
Writing about developer experience, platform engineering, and building better teams.

