The Platform Scorecard: Measuring IDP Value Beyond DORA Metrics

Introduction

You’ve built an Internal Developer Platform. Golden paths are paved, self-service portals are live, and developers can spin up environments in minutes instead of days. But when leadership asks "what’s the ROI?", you find yourself scrambling for numbers that don’t quite capture the value you’ve created.

DORA metrics—deployment frequency, lead time, change failure rate, mean time to recovery—have become the default answer. But in 2026, they’re increasingly insufficient. AI-assisted development can inflate deployment frequency while masking review bottlenecks. Lead time improvements might come at the cost of technical debt. And none of these metrics capture what platform teams actually deliver: developer productivity and organizational capability.

This article introduces the Platform Scorecard—a framework for measuring IDP value that combines traditional delivery metrics with developer experience indicators, adoption signals, and business impact measures. It’s designed for platform teams who need to justify investment, prioritize roadmaps, and demonstrate value beyond "we deployed more stuff."

Why DORA Metrics Fall Short

DORA metrics revolutionized how we think about software delivery performance. The research is solid, the correlations are real, and every platform team should track them. But they were designed to measure delivery capability, not platform value.

The AI Inflation Problem

With AI coding assistants generating more code faster, deployment frequency naturally increases. But this doesn’t mean developers are more productive—it might mean they’re spending more time reviewing AI-generated PRs, debugging subtle issues, or managing technical debt that accumulates faster than before.

A platform team that enables 10x more deployments hasn’t necessarily delivered 10x more value. They might have just enabled 10x more churn.

The Attribution Problem

When lead time improves, who gets credit? The platform team who built the CI/CD pipelines? The SRE team who optimized the deployment process? The developers who adopted better practices? The AI tools that generate boilerplate faster?

DORA metrics measure outcomes at the organizational level. Platform teams need metrics that measure their specific contribution to those outcomes.

The Experience Gap

A platform can have excellent DORA metrics while developers hate using it. Friction might be hidden in workarounds, shadow IT, or teams simply avoiding the platform altogether. DORA doesn’t capture whether developers want to use your platform—only whether code eventually ships.

The Platform Scorecard Framework

The Platform Scorecard measures platform value across four dimensions:

┌─────────────────────────────────────────────────────────────┐
│                   PLATFORM SCORECARD                        │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐        │
│  │   MONK      │  │   DX Core   │  │  Adoption   │        │
│  │ Indicators  │  │     4       │  │   Metrics   │        │
│  └──────┬──────┘  └──────┬──────┘  └──────┬──────┘        │
│         │                │                │                │
│         └────────────────┼────────────────┘                │
│                          ▼                                 │
│                 ┌─────────────┐                            │
│                 │  Business   │                            │
│                 │   Impact    │                            │
│                 └─────────────┘                            │
└─────────────────────────────────────────────────────────────┘
  1. MONK Indicators: Platform-specific capability metrics
  2. DX Core 4: Developer experience measurements
  3. Adoption Metrics: Platform usage and engagement signals
  4. Business Impact: Translation to organizational value

MONK Indicators: Measuring Platform Capability

MONK stands for four platform-specific indicators that measure what your IDP actually enables:

M — Mean Time to Productivity

How long does it take a new developer to ship their first meaningful change?

This isn’t just "time to first commit"—it’s time to first production deployment that delivers user value. It captures the entire onboarding experience: environment setup, access provisioning, documentation quality, and golden path effectiveness.

  Level    MTTP         What It Indicates
  Elite    < 1 day      Fully automated onboarding, excellent docs
  High     1-3 days     Good automation, minor manual steps
  Medium   1-2 weeks    Significant manual setup, tribal knowledge
  Low      > 2 weeks    Broken onboarding, high friction

How to measure: Track the timestamp of a developer’s first day against their first production deployment. Survey new hires about blockers. Instrument your onboarding automation to identify where time is spent.
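As a rough sketch, MTTP can be computed from two timestamps per developer. The event shape below (a list of start-date/first-deploy pairs) is an illustrative assumption, not any particular tool’s export format:

```python
from datetime import date
from statistics import median

def mean_time_to_productivity(onboarding_events):
    """Median days from a developer's first day to their first
    production deployment. Input: list of (start_date, first_deploy_date)
    pairs. Median resists the skew of a single stuck new hire."""
    durations = [(deploy - start).days
                 for start, deploy in onboarding_events
                 if deploy is not None]  # skip devs who haven't shipped yet
    return median(durations) if durations else None

events = [
    (date(2026, 1, 5), date(2026, 1, 6)),    # shipped after 1 day
    (date(2026, 1, 12), date(2026, 1, 15)),  # shipped after 3 days
    (date(2026, 2, 2), None),                # not yet shipped
]
print(mean_time_to_productivity(events))  # → 2.0
```

Using the median rather than the mean keeps one outlier (a new hire blocked on access provisioning for a month) from hiding an otherwise healthy onboarding path.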

O — Observability Coverage

What percentage of services have adequate observability?

"Adequate" means: structured logging, distributed tracing, key metrics dashboards, and alerting. If developers can’t debug their services without SSH-ing into production, your platform isn’t delivering on its observability promise.

  Level    Coverage    What It Indicates
  Elite    > 95%       Observability is default, opt-out not opt-in
  High     80-95%      Most services instrumented, some gaps
  Medium   50-80%      Inconsistent adoption, manual setup
  Low      < 50%       Observability is an afterthought

How to measure: Scan your service catalog for observability signals. Check for active traces, log streams, and dashboard usage. Automate detection of services without adequate instrumentation.
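A catalog scan for the four "adequate" signals can be sketched in a few lines. The catalog entries here are hypothetical dicts, not Backstage’s or any vendor’s actual schema:

```python
# The four signals the article defines as "adequate" observability.
REQUIRED_SIGNALS = {"structured_logs", "traces", "dashboards", "alerting"}

def observability_coverage(catalog):
    """Percentage of services carrying all four required signals.
    `catalog` is a hypothetical list of dicts, one per service."""
    if not catalog:
        return 0.0
    adequate = sum(1 for svc in catalog
                   if REQUIRED_SIGNALS <= set(svc.get("signals", [])))
    return round(100 * adequate / len(catalog), 1)

catalog = [
    {"name": "payment-api",
     "signals": ["structured_logs", "traces", "dashboards", "alerting"]},
    {"name": "legacy-batch", "signals": ["structured_logs"]},
]
print(observability_coverage(catalog))  # → 50.0
```

In practice the `signals` field would be populated by automated checks (active trace streams, dashboard queries) rather than self-reported metadata, which tends to drift.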

N — Number of Services on Golden Paths

How many services use your platform’s recommended patterns?

Golden paths only deliver value if teams actually walk them. This metric tracks adoption of your templates, scaffolding, and recommended architectures versus custom or legacy approaches.

  Level    Adoption    What It Indicates
  Elite    > 80%       Golden paths are genuinely useful
  High     60-80%      Good adoption, some justified exceptions
  Medium   30-60%      Mixed adoption, paths may need improvement
  Low      < 30%       Teams prefer alternatives, paths aren't valuable

How to measure: Tag services by creation method (template vs. custom). Track which CI/CD patterns are in use. Survey teams about why they didn’t use golden paths.

K — Knowledge Accessibility

Can developers find answers without asking humans?

This measures documentation quality, search effectiveness, and self-service capability. Every question that requires Slack escalation is a failure of your platform’s knowledge layer.

  Level    Self-Service Rate    What It Indicates
  Elite    > 90%                Excellent docs, effective search, AI-assisted
  High     70-90%               Good docs, some gaps in edge cases
  Medium   50-70%               Inconsistent docs, frequent escalations
  Low      < 50%                Tribal knowledge dominates

How to measure: Track support ticket volume per developer. Survey developers about where they find answers. Analyze search query success rates in your portal.
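The four threshold tables above fold naturally into a single classifier. A sketch, with thresholds copied from the tables; treating band edges as inclusive is an assumption the article leaves open:

```python
# Threshold tables from the MONK sections above. MTTP (days) is
# lower-is-better; the three percentage indicators are higher-is-better.
MONK_BANDS = {
    "mttp_days":         [(1, "Elite"), (3, "High"), (14, "Medium")],
    "observability_pct": [(95, "Elite"), (80, "High"), (50, "Medium")],
    "golden_path_pct":   [(80, "Elite"), (60, "High"), (30, "Medium")],
    "self_service_pct":  [(90, "Elite"), (70, "High"), (50, "Medium")],
}

def monk_level(indicator, value):
    """Map a raw indicator value to Elite/High/Medium/Low."""
    bands = MONK_BANDS[indicator]
    if indicator == "mttp_days":         # lower is better
        for limit, level in bands:
            if value <= limit:
                return level
        return "Low"
    for limit, level in bands:           # higher is better
        if value >= limit:
            return level
    return "Low"

print(monk_level("mttp_days", 2))           # → High
print(monk_level("observability_pct", 85))  # → High
print(monk_level("golden_path_pct", 25))    # → Low
```

A classifier like this makes the scorecard mechanical: feed in this quarter’s four raw numbers and get comparable levels out, quarter after quarter.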

DX Core 4: Measuring Developer Experience

The DX Core 4 framework, developed by DX (formerly GetDX), measures developer experience through four key dimensions:

Speed

How fast can developers complete common tasks?

  • Time to create a new service
  • Time to add a new dependency
  • Time to deploy a change
  • Time to rollback a bad deployment
  • CI/CD pipeline duration

Effectiveness

Can developers accomplish what they’re trying to do?

  • Task completion rate for common workflows
  • Error rates in self-service operations
  • Percentage of tasks requiring manual intervention
  • First-try success rate for deployments

Quality

Does the platform help developers build better software?

  • Security vulnerability detection rate
  • Policy compliance scores
  • Test coverage trends
  • Production incident rates by platform-generated vs. custom services

Impact

Do developers feel they’re making meaningful contributions?

  • Percentage of time on feature work vs. toil
  • Developer satisfaction scores (quarterly surveys)
  • Net Promoter Score for the platform
  • Voluntary platform adoption rate

Adoption Metrics: Measuring Platform Usage

Adoption metrics tell you whether developers are actually using your platform—and how deeply.

Breadth Metrics

  • Active users: Monthly active developers using the platform
  • Team coverage: Percentage of teams with at least one active user
  • Service coverage: Percentage of production services managed by the platform

Depth Metrics

  • Feature adoption: Which platform capabilities are actually used?
  • Engagement frequency: How often do developers interact with the platform?
  • Workflow completion: Do users complete multi-step workflows or drop off?

Retention Metrics

  • Churn rate: Teams that stop using the platform
  • Return rate: Users who come back after initial use
  • Expansion: Teams adopting additional platform features

Shadow IT Indicators

  • Workaround detection: Teams building alternatives to platform features
  • Escape hatch usage: How often do teams need to bypass the platform?
  • Manual process survival: Legacy processes that should be automated
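Two of the simpler adoption signals, monthly active users and team churn, can be sketched as follows. The event tuples and team sets are invented stand-ins for your portal’s telemetry:

```python
def monthly_active_users(events, month):
    """Distinct developers with at least one platform event in a month.
    `events` is a hypothetical list of (user_id, 'YYYY-MM') pairs."""
    return len({user for user, m in events if m == month})

def team_churn_rate(active_last_month, active_this_month):
    """Percentage of last month's active teams with no activity this month."""
    if not active_last_month:
        return 0.0
    churned = active_last_month - active_this_month
    return round(100 * len(churned) / len(active_last_month), 1)

events = [("ana", "2026-03"), ("ben", "2026-03"), ("ana", "2026-04")]
print(monthly_active_users(events, "2026-03"))  # → 2
print(team_churn_rate({"payments", "search", "infra"},
                      {"payments", "infra"}))   # → 33.3
```

Churn is a leading indicator: a team that quietly stops using the portal this month often shows up as shadow IT next quarter.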

Business Impact: Translating to Value

Ultimately, platform investment needs to translate to business outcomes. The Platform Scorecard connects capability metrics to value through:

Cost Metrics

  • Infrastructure cost per service: Does the platform optimize resource usage?
  • Time savings: Developer hours saved by automation (valued at loaded cost)
  • Incident cost reduction: MTTR improvements × average incident cost
  • Onboarding cost: MTTP improvement × new hire cost per day
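The two multiplication formulas above are straightforward to make concrete. All figures in the example calls are placeholder numbers, not benchmarks:

```python
def incident_cost_reduction(mttr_before_h, mttr_after_h,
                            incidents_per_quarter, cost_per_hour):
    """MTTR improvement (hours) x incident volume x hourly incident cost."""
    return (mttr_before_h - mttr_after_h) * incidents_per_quarter * cost_per_hour

def onboarding_savings(mttp_before_d, mttp_after_d, hires, cost_per_day):
    """MTTP improvement (days) x number of new hires x loaded cost per day."""
    return (mttp_before_d - mttp_after_d) * hires * cost_per_day

# Placeholder inputs: MTTR cut from 4h to 1.5h across 12 incidents per
# quarter at $5,000/hour; MTTP cut from 10 days to 2 for 8 new hires
# at an $800/day loaded cost.
print(incident_cost_reduction(4.0, 1.5, 12, 5_000))  # → 150000.0
print(onboarding_savings(10, 2, 8, 800))             # → 51200
```

Numbers like these are deliberately conservative single-line estimates; their value in a leadership deck is that every factor is visible and debatable, unlike a vague "the platform saves money."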

Risk Metrics

  • Security posture: Vulnerability exposure window, compliance violations
  • Operational risk: Single points of failure, bus factor for critical systems
  • Regulatory risk: Audit findings, compliance gaps

Capability Metrics

  • Time to market: How fast can the organization ship new products?
  • Experimentation velocity: A/B tests launched, feature flags toggled
  • Scale readiness: Can the organization 10x without 10x headcount?

Implementing the Platform Scorecard

Start Simple

Don’t try to measure everything at once. Pick one metric from each category:

  1. MONK: Mean Time to Productivity (easiest to measure)
  2. DX Core 4: Developer satisfaction survey (quarterly)
  3. Adoption: Monthly active users
  4. Business Impact: Developer hours saved

Automate Collection

Manual metrics decay quickly. Invest in:

  • Event tracking in your developer portal
  • CI/CD pipeline instrumentation
  • Automated surveys triggered by workflow completion
  • Service catalog scanning for compliance

Review Cadence

  • Weekly: Adoption metrics (leading indicators)
  • Monthly: MONK indicators, DX speed/effectiveness
  • Quarterly: Full scorecard review, business impact calculation

Benchmark and Trend

Absolute numbers matter less than trends. A 70% golden path adoption rate might be excellent for your organization or terrible—context determines meaning. Track improvement over time and benchmark against similar organizations when possible.

Presenting to Leadership

When presenting Platform Scorecard results to leadership, focus on:

  1. Business impact first: Lead with cost savings and risk reduction
  2. Trends over absolutes: Show improvement trajectories
  3. Developer voice: Include satisfaction quotes and NPS
  4. Comparative context: Industry benchmarks where available
  5. Investment connection: Link metrics to roadmap priorities

Conclusion

DORA metrics remain valuable, but they’re not enough to measure platform value. The Platform Scorecard provides a comprehensive framework that captures what platform teams actually deliver: developer capability, experience improvement, and organizational value.

The key insight is that platforms are products, and products need product metrics. Deployment frequency tells you code is shipping. The Platform Scorecard tells you whether developers are thriving, the organization is more capable, and your investment is paying off.

Start measuring what matters. Your platform’s value is real—now you can prove it.

Measuring Developer Productivity in the AI Era: Beyond Velocity Metrics

Introduction

The promise of AI-assisted development is irresistible: 10x productivity gains, code written at the speed of thought, junior developers performing like seniors. But as organizations deploy GitHub Copilot, Claude Code, and other AI coding assistants, a critical question emerges: How do we actually measure the impact?

Traditional velocity metrics — story points completed, lines of code, pull requests merged — are increasingly inadequate. They measure output, not outcomes. Worse, they can be gamed, especially when AI can generate thousands of lines of code in seconds. This article explores modern frameworks for measuring developer productivity in the AI era, separating hype from reality and providing practical guidance for engineering leaders.

The Problem with Traditional Velocity Metrics

For decades, engineering teams have relied on metrics like:

  • Lines of Code (LOC): More code doesn’t mean better software. AI makes this metric meaningless — you can generate 10,000 lines in minutes.
  • Story Points / Velocity: Measures estimation consistency, not actual value delivered. Teams optimize for completing stories, not solving problems.
  • Pull Requests Merged: Encourages many small PRs over thoughtful changes. Doesn’t capture review quality or long-term impact.
  • Commits per Day: Trivially gameable. Says nothing about the value of those commits.

These metrics share a fundamental flaw: they measure activity, not productivity. In the AI era, activity is cheap. An AI can produce endless activity. What matters is whether that activity translates to business outcomes.

The SPACE Framework: A Holistic View

The SPACE framework, developed by researchers at GitHub, Microsoft, and the University of Victoria, offers a more nuanced approach. SPACE stands for:

  • Satisfaction and well-being
  • Performance
  • Activity
  • Communication and collaboration
  • Efficiency and flow

The key insight: productivity is multidimensional. No single metric captures it. Instead, you need a balanced set of metrics across all five dimensions, combining quantitative data with qualitative insights.

Applying SPACE to AI-Assisted Teams

When developers use AI coding assistants, SPACE metrics take on new meaning:

  • Satisfaction: Do developers feel AI tools help them? Or do they create frustration through incorrect suggestions and context-switching?
  • Performance: Are we shipping features that matter? Is customer satisfaction improving? Are we reducing incidents?
  • Activity: Still relevant, but must be interpreted carefully. High activity with AI might indicate productive use — or it might indicate the developer is blindly accepting suggestions.
  • Communication: Does AI change how teams collaborate? Are code reviews more or less effective? Is knowledge sharing happening?
  • Efficiency: Are developers spending less time on boilerplate? Is time-to-first-commit improving for new team members?

DORA Metrics: Outcomes Over Output

The DORA (DevOps Research and Assessment) metrics focus on delivery performance:

  • Deployment Frequency: How often do you deploy to production?
  • Lead Time for Changes: How long from commit to production?
  • Change Failure Rate: What percentage of deployments cause failures?
  • Mean Time to Recovery (MTTR): How quickly do you recover from failures?

DORA metrics are outcome-oriented: they measure the effectiveness of your entire delivery pipeline, not individual developer activity. In the AI era, they remain highly relevant — perhaps more so. AI should theoretically improve all four metrics. If it doesn’t, something is wrong.

AI-Specific DORA Extensions

Consider tracking additional metrics when AI is involved:

  • AI Suggestion Acceptance Rate: What percentage of AI suggestions are accepted? Too high might indicate rubber-stamping; too low suggests the tool isn’t helping.
  • AI-Assisted Change Failure Rate: Do changes written with AI assistance fail more or less often?
  • Time Saved per Task Type: For which tasks does AI provide the most leverage? Boilerplate? Tests? Documentation?
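Acceptance-rate tracking can be sketched against whatever telemetry your assistant exports; the dict shape and the "plausible band" cutoffs below are assumptions, not vendor defaults:

```python
def acceptance_rate(suggestions):
    """Share of AI suggestions accepted, as a percentage.
    `suggestions` is a list of dicts with an 'accepted' bool, a
    stand-in for your assistant's actual telemetry format."""
    if not suggestions:
        return None
    accepted = sum(1 for s in suggestions if s["accepted"])
    return round(100 * accepted / len(suggestions), 1)

def flag_rate(rate, low=20.0, high=90.0):
    """Flag rates outside a plausible band. The band edges are
    arbitrary starting points, to be tuned per team."""
    if rate is None:
        return "no data"
    if rate > high:
        return "suspiciously high: check review depth"
    if rate < low:
        return "low: tool may not fit the workflow"
    return "within expected band"

rate = acceptance_rate([{"accepted": True}] * 19 + [{"accepted": False}])
print(rate, flag_rate(rate))  # → 95.0 suspiciously high: check review depth
```

The point of the band check is the article’s warning in reverse: the metric only becomes useful once you treat both extremes as signals to investigate, not scores to maximize.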

The "10x" Reality Check

Marketing claims of „10x productivity“ with AI are pervasive. The reality is more nuanced:

  • Studies show 10-30% improvements in specific tasks like writing boilerplate code, generating tests, or explaining unfamiliar codebases.
  • Complex problem-solving sees minimal AI uplift. Architecture decisions, debugging subtle issues, and understanding business requirements still depend on human expertise.
  • Junior developers may see larger gains — AI helps them write syntactically correct code faster. But they still need to learn why code works, or they’ll introduce subtle bugs.
  • 10x claims often compare against unrealistic baselines (e.g., writing everything from scratch vs. using any tooling at all).

A realistic expectation: AI provides meaningful productivity gains for certain tasks, modest gains overall, and requires investment in learning and integration to realize benefits.

Practical Metrics for AI-Era Teams

Based on SPACE, DORA, and real-world experience, here are concrete metrics to track:

Quantitative Metrics

  • Main Branch Success Rate (% of commits that pass CI on main): should improve with AI; if not, AI may be introducing bugs
  • MTTR (time to recover from incidents): AI-assisted debugging should reduce this
  • Time to First Commit for new developers (onboarding effectiveness): AI should accelerate ramp-up
  • Code Review Turnaround (time from PR open to merge): AI-generated code may need more careful review
  • Test Coverage Delta (change in test coverage over time): AI can generate tests; is coverage actually improving?

Qualitative Metrics

  • Developer Experience Surveys: Regular pulse checks on tool satisfaction, flow state, friction points.
  • AI Tool Usefulness Ratings: For each major task type, how helpful is AI? (Scale 1-5)
  • Knowledge Retention: Are developers learning, or becoming dependent on AI? Periodic assessments can reveal this.

Tooling: Waydev, LinearB, and Beyond

Several platforms now offer AI-era productivity analytics:

  • Waydev: Integrates with Git, Jira, and CI/CD to provide DORA metrics and developer analytics. Offers AI-specific insights.
  • LinearB: Focuses on workflow metrics, identifying bottlenecks in the development process. Good for measuring cycle time and review efficiency.
  • Pluralsight Flow (formerly GitPrime): Deep git analytics with focus on team patterns and individual contribution.
  • Jellyfish: Connects engineering metrics to business outcomes, helping justify AI tool investments.

When evaluating tools, ensure they can:

  1. Distinguish between AI-assisted and non-AI-assisted work (if your tools support this tagging)
  2. Provide qualitative feedback mechanisms alongside quantitative data
  3. Avoid creating perverse incentives (e.g., rewarding lines of code)

Avoiding Measurement Pitfalls

  • Don’t use metrics punitively. Metrics are for learning, not for ranking developers. The moment metrics become tied to performance reviews, they get gamed.
  • Don’t measure too many things. Pick 5-7 key metrics across SPACE dimensions. More than that creates noise.
  • Do measure trends, not absolutes. A team’s MTTR improving over time is more meaningful than comparing MTTR across different teams.
  • Do include qualitative data. Numbers without context are dangerous. Regular conversations with developers provide essential context.
  • Do revisit metrics regularly. As AI tools evolve, so should your measurement approach.

Conclusion

Measuring developer productivity in the AI era requires abandoning simplistic velocity metrics in favor of holistic frameworks like SPACE and outcome-oriented measures like DORA. The „10x productivity“ hype should be tempered with realistic expectations: AI provides meaningful but not transformative gains, and those gains vary significantly by task type and developer experience.

The organizations that will thrive are those that invest in thoughtful measurement — combining quantitative data with qualitative insights, tracking outcomes rather than output, and continuously refining their approach as AI tools mature.

Start by auditing your current metrics. Are they measuring activity or productivity? Then layer in SPACE dimensions and DORA outcomes. Finally, talk to your developers — their lived experience with AI tools is the most valuable data point of all.

Internal Developer Portals: Backstage, Port.io, and the Path to Self-Service Platforms

Platform Engineering: The 2026 Megatrend

The days when developers had to file a ticket and wait days for infrastructure are over. Internal Developer Portals (IDPs) are the heart of modern Platform Engineering teams — enabling self-service while maintaining governance.

Comparing the Contenders

Backstage (Spotify)

The open-source heavyweight from Spotify has established itself as the de facto standard:

  • Software Catalog — Central overview of all services, APIs, and resources
  • Tech Docs — Documentation directly in the portal
  • Templates — Golden paths for new services
  • Plugins — Extensible through a large community

Strength: Flexibility and community. Weakness: High setup and maintenance effort.

Port.io

The SaaS alternative for teams that want to be productive quickly:

  • No-Code Builder — Portal without development effort
  • Self-Service Actions — Day-2 operations automated
  • Scorecards — Production readiness at a glance
  • RBAC — Enterprise-ready access control

Strength: Time-to-value. Weakness: Less flexibility than open source.

Cortex

The focus is on service ownership and reliability:

  • Service Scorecards — Enforce quality standards
  • Ownership — Clear responsibilities
  • Integrations — Deep connection to monitoring tools

Strength: Reliability engineering. Weakness: Less developer experience focus.

Software Catalogs: The Foundation

An IDP lives or dies by its catalog. The core questions:

  • What do we have? — Services, APIs, databases, infrastructure
  • Who owns it? — Service ownership must be clear
  • What depends on what? — Dependency mapping for impact analysis
  • How healthy is it? — Scorecards for quality standards

Production Readiness Scorecards

Instead of saying "you should really have that," scorecards make standards measurable:

Service: payment-api
━━━━━━━━━━━━━━━━━━━━
✅ Documentation    [100%]
✅ Monitoring       [100%]
⚠️  On-Call Rotation [ 80%]
❌ Disaster Recovery [ 20%]
━━━━━━━━━━━━━━━━━━━━
Overall: 75% - Bronze

Teams see at a glance where action is needed — without anyone pointing fingers.
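The overall rating above can be reproduced as an unweighted mean of the category scores. The medal cutoffs here are assumptions chosen to match the example; the article doesn’t define them:

```python
def scorecard(scores, tiers=((95, "Gold"), (85, "Silver"), (60, "Bronze"))):
    """Unweighted mean of category scores, mapped to a medal tier.
    Tier cutoffs are illustrative assumptions; tune them to your
    own production-readiness standards."""
    overall = sum(scores.values()) / len(scores)
    for cutoff, tier in tiers:
        if overall >= cutoff:
            return overall, tier
    return overall, "None"

payment_api = {
    "documentation": 100,
    "monitoring": 100,
    "on_call": 80,
    "disaster_recovery": 20,
}
print(scorecard(payment_api))  # → (75.0, 'Bronze')
```

An unweighted mean is the simplest choice; many teams weight categories instead (disaster recovery counting double, say), which only changes the `overall` line.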

Integration Is Everything

An IDP is only as good as its integrations:

  • CI/CD — GitHub Actions, GitLab CI, ArgoCD
  • Monitoring — Datadog, Prometheus, Grafana
  • IaC — Terraform, Crossplane, Pulumi
  • Ticketing — Jira, Linear, ServiceNow
  • Cloud — AWS, GCP, Azure native services

The Cultural Shift

The biggest challenge isn’t technical — it’s the shift from gatekeeping to enablement:

  Old (Gatekeeping)        New (Enablement)
  "Write a ticket"         "Use the portal"
  "We'll review it"        "Policies are automated"
  "Takes 2 weeks"          "Ready in 5 minutes"
  "Only we can do that"    "You can, and we'll help"

Getting Started

The pragmatic path to an IDP:

  1. Start small — A software catalog alone is valuable
  2. Pick your battles — Don’t automate everything at once
  3. Measure adoption — Track portal usage
  4. Iterate — Take developer feedback seriously

Platform Engineering isn’t a product you buy — it’s a capability you build. IDPs are the visible interface to that capability.