Sam Cotterall
Director of Client Enablement
Most teams don’t discover PPA underperformance when it starts. They discover it after financial, reporting, or contractual consequences appear, when someone asks why the numbers don’t line up, why the forecast moved, or why expected value didn’t show up.
That lag isn’t because people aren’t paying attention. It’s because power purchase agreements don’t behave like traditional owned assets, and most organizations still treat them like a contracting exercise followed by periodic oversight.

Underperformance in a PPA can accumulate quietly:
The missing piece for early detection is almost always the same: a credible baseline for potential generation … a way to separate “the plant couldn’t have produced more” from “the plant left production on the table,” before value is lost.
This page defines PPA performance management as an operating discipline built around that baseline, so underperformance shows up as an early signal, not a downstream surprise. Put simply, PPA performance management is how teams detect underperformance early … before settlement, reporting, or stakeholder escalation forces the conversation.
PPA performance management is a continuous discipline for identifying, explaining, and contextualizing performance outcomes across a PPA’s life, not a post-hoc reporting function and not a finance-close exercise. The point of PPA performance management is not to produce a better monthly package. It’s to establish an early baseline for performance accountability.
At its core, it answers a simple question with operational rigor:
Did the project deliver what it was capable of delivering, given the resource and the constraints of the agreement, before you factor in timing, settlement mechanics, and price noise?
That requires three things that many teams don’t formally maintain:
PPA performance management spans:
It is distinct from:
Organizations sign power purchase agreements and then, understandably, treat them as contracts to administer. The contract is real. The invoices are real. The reporting deadlines are real. So oversight tends to gravitate toward what is concrete and periodic.
But PPA asset management starts from a different premise:
Even if you don’t own the facility, you are exposed to its performance. You should manage that exposure like an asset.
Oversight vs. management (conceptually)
Traditional oversight commonly centers on:
In practice, PPA performance management is the operating layer that turns “oversight” into true asset accountability.
PPA asset management centers on:
This isn’t about telling teams they’re “doing it wrong.” It’s acknowledging that most corporate programs were built to buy PPAs and report on them, not to operate them as managed performance exposures.
The shift matters because power purchase agreements create a long-running interface between:
If you only look at the interface monthly, you will find problems late.
Underperformance is usually detected downstream for structural reasons, not because someone missed a chart.
Even in well-run programs, there’s a natural delay between when:
By the time monthly, quarterly, or annual reporting closes, you’re not detecting underperformance. You’re explaining it. This is exactly where PPA performance management breaks down: the organization only sees the problem once it has already become a downstream variance.
Energy teams often track operational narratives. Finance teams track explainability and forecast confidence. Sustainability teams track claims and compliance. Executives want a signal they can trust.
Those perspectives are all valid, but they rarely share a single baseline. So “performance” becomes a debate over whose view is “right,” rather than a shared diagnosis of what’s happening.
At signing, assumptions get locked into models and presentations. Then reality moves: resource regimes change year to year, curtailment patterns evolve, the plant’s availability shifts, market shapes change, transmission constraints appear.
If the baseline isn’t actively maintained, drift looks like “bad luck” until it becomes too large to ignore.
One practical consequence is confidence erosion. In many energy programs, the most damaging outcome isn’t a miss. It’s a miss that nobody can explain. As one industry leader put it: opacity is punished more than misses, especially when stakeholders feel surprised.
Two common traps:
Without an operational baseline, you end up attributing everything to markets or everything to the project based on whichever stakeholder is speaking.
Most teams can tell you what happened. Fewer can tell you what should have been possible.
Potential generation is the expected output given observed conditions and constraints, such as resource availability, plant characteristics, and the realities of curtailment and outages. It is not the same thing as a:
Potential generation answers: Under the conditions that actually occurred, what should the project reasonably have generated?
That distinction matters because:
If you only compare actual generation to a static plan, you can’t reliably separate:
Why variance alone is insufficient
A negative variance to plan can be:
Potential generation gives you a baseline that makes the variance interpretable early, before finance close forces the narrative into a retroactive explanation. Without that baseline, PPA performance management devolves into after-the-fact attribution instead of early detection.
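To make that concrete, here is a deliberately simplified sketch in Python. The potential model (plan scaled by an observed resource index and availability) and every number in it are illustrative assumptions, not a recommended methodology; the point is only that, with a potential baseline, variance to plan splits into a conditions component and a performance gap.

```python
# Illustrative only: a toy decomposition of actual-vs-plan variance using a
# potential baseline. The potential model (plan scaled by an observed resource
# index and availability) is a deliberate simplification, not a methodology.

def potential_generation(plan_mwh, resource_index, availability):
    """Expected output under the conditions that actually occurred (toy model)."""
    return plan_mwh * resource_index * availability

def decompose_variance(actual_mwh, plan_mwh, resource_index, availability):
    potential = potential_generation(plan_mwh, resource_index, availability)
    return {
        "variance_to_plan": actual_mwh - plan_mwh,    # what monthly reporting sees
        "conditions_effect": potential - plan_mwh,    # resource and availability vs. plan
        "performance_gap": actual_mwh - potential,    # what the project left on the table
    }

# Hypothetical month: plan of 10,000 MWh, resource 8% below the planning
# assumption, 97% availability, 8,700 MWh delivered.
print(decompose_variance(actual_mwh=8_700, plan_mwh=10_000,
                         resource_index=0.92, availability=0.97))
# {'variance_to_plan': -1300, 'conditions_effect': -1076.0, 'performance_gap': -224.0}
```

In this toy month, conditions explain roughly 1,076 MWh of the 1,300 MWh shortfall, leaving about 224 MWh the project could, in principle, have delivered. That split is exactly what a comparison against a static plan cannot provide.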
A disciplined definition of performance has to withstand scrutiny from energy, finance, sustainability, and executives. That means performance cannot be reduced to a single metric.
In practice, PPA performance includes:
This is the operational core: did the project deliver what it was capable of delivering?
Even with strong generation, outcomes can diverge due to:
But financial outcomes on their own are not a reliable early detector of operational underperformance, because markets can overwhelm the signal.
Finance and leadership don’t need perfect precision. They need a forecast they can trust to behave predictably, especially when it changes.
That requires:
Explainability is what prevents “fire drills.” When stakeholders can’t reconcile why outcomes moved, they assume governance is weak even if the program is sound.
A common real-world version of this is the risk of a “first-year surprise.” When the first settlements start arriving, the disruption isn’t only the number—it’s realizing there wasn’t an early feedback loop to recalibrate expectations.
Strong PPA performance management reduces the cost of explanation—because drivers are clear before the close, not reconstructed after it.
Executives don’t want noise. They want a stable signal:
Ultimately, performance management serves this purpose.
The teams that detect underperformance early aren’t doing something exotic. They’re doing something disciplined: they treat performance detection as a governance function, not a reporting byproduct.
Common characteristics:
Plans and budgets are necessary. They are not sufficient as detection baselines.
Potential-based measurement helps teams surface issues while there’s still time to:
Settlement is too late for detection. It’s confirmation.
Leading teams establish early indicators that are closer to the physical reality of the asset, so the first meaningful signal does not arrive inside an invoice or an annual availability report.
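As a hedged illustration of what such an early indicator can look like, the sketch below compares delivered energy to the potential baseline on a rolling basis and raises a flag when the gap persists. The 5% threshold and three-period window are placeholders, not recommendations.

```python
# Illustrative early-detection check: compare delivered energy to the potential
# baseline on a rolling basis and flag sustained shortfalls before settlement.
# The 5% threshold and 3-period window are placeholder assumptions.
from collections import deque

def shortfall_alerts(actual_mwh, potential_mwh, threshold=0.05, window=3):
    """Yield (period_index, rolling_shortfall) whenever the rolling gap exceeds the threshold."""
    recent = deque(maxlen=window)
    for i, pair in enumerate(zip(actual_mwh, potential_mwh)):
        recent.append(pair)
        if len(recent) < window:
            continue
        delivered = sum(a for a, _ in recent)
        possible = sum(p for _, p in recent)
        shortfall = 1 - delivered / possible
        if shortfall > threshold:
            yield i, round(shortfall, 3)

# Hypothetical months: a persistent gap opens in period 3 and is flagged in
# period 5, well before any settlement or annual report would surface it.
actual = [980, 1010, 995, 930, 925, 920]
potential = [1000, 1005, 1000, 1000, 995, 990]
print(list(shortfall_alerts(actual, potential)))   # [(5, 0.07)]
```

The specific check matters less than its position in the calendar: it runs on operational data as it arrives, not on settlement artifacts after the fact.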
This is where many programs break down. Each team has data, but not a shared frame.
Early detection requires a common language:
Static models create brittle expectations. Living assumptions create resilience.
When the world changes (resource patterns, curtailment patterns, transmission constraints, market shapes), leading teams don’t pretend the original picture still holds. They re-baseline transparently, with interim metrics that preserve confidence rather than erode it.
None of this requires a step-by-step playbook to be true. It requires alignment on what the discipline is trying to accomplish: detect drift early, explain it credibly, and manage the lifecycle exposure as conditions evolve.
Annual reporting is not the enemy. In many organizations, it’s the default cadence for PPA performance (availability summaries, true-ups, or developer reporting).
The problem is timing: if the first “performance” conversation happens annually, underperformance can accumulate for months before anyone sees a coherent signal.
Moving to a monthly close and monthly performance reporting is progress. But the issue is definitional:
Reporting is designed to consolidate outcomes. Detection is designed to surface deviation early.
Monthly reporting tends to be:
Detection needs to be:
So when teams rely on reporting cycles—annual or monthly—to “catch issues,” they’re implicitly accepting that:
This is why many organizations experience a familiar pattern:
If this is resonating, it’s worth exploring the breakdowns behind reporting lag and reconciliation timing, and how teams separate detection from close without creating extra burden.
When performance management is working, the benefits show up differently depending on who is accountable, but the root is the same: fewer surprises, clearer causality, and more credible confidence.
In other words: the goal isn’t perfect accuracy. It’s governable confidence.
Underperformance doesn’t have a single “moment.” It has a trajectory. Over long contract terms, PPA performance management is less about a single miss and more about spotting drift before it becomes the new normal.
To manage performance, you have to recognize that a PPA changes character across its lifecycle:
There is often a long quiet period between signing and operations. During that time:
Without a deliberate feedback loop during this period, the first meaningful signal often arrives at settlement, when it’s hardest to absorb and explain.
Early outcomes shape internal confidence for years. If the first year is dominated by surprises and opaque variance, stakeholders learn that the PPA is “unpredictable,” even if the drivers were diagnosable.
This is also where initial availability, curtailment, and operational maturity issues can create performance gaps that are easy to miss without a potential baseline. When performance signals aren’t defensible, it’s harder to get follow-on sustainability actions approved because stakeholders don’t trust the baseline.
Over time, small degradations accumulate:
If performance management is limited to point-in-time variance, drift looks like “normal volatility” until it becomes large enough to force a reset.
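A simple way to see the difference, again with invented numbers and the same illustrative potential baseline as the earlier sketches, is to look at cumulative deviation rather than each month in isolation.

```python
# Illustrative drift view: individual monthly gaps look like normal volatility,
# but the cumulative gap between actual and potential generation trends steadily.
# All numbers are hypothetical.
from itertools import accumulate

monthly_actual    = [1020, 985, 1005, 960, 975, 950, 965, 940]    # MWh, hypothetical
monthly_potential = [1000, 1000, 1010, 990, 1000, 985, 1000, 985]

monthly_gap = [a - p for a, p in zip(monthly_actual, monthly_potential)]
cumulative_gap = list(accumulate(monthly_gap))

print(monthly_gap)      # [20, -15, -5, -30, -25, -35, -35, -45]  -- reads like volatility
print(cumulative_gap)   # [20, 5, 0, -30, -55, -90, -125, -170]   -- reads like drift
```

Any one of those months can be waved away. The cumulative line cannot.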
Lifecycle-aware performance management keeps the program anchored to:
This is also where portfolio governance matters: the question shifts from “did one project miss?” to “are we seeing systematic underperformance patterns across the portfolio, and are they controllable?”
Most organizations don’t need more data. They need clarity on where their performance governance breaks.
For many organizations, “PPA performance” is still an annual exercise … often a report delivered by the developer, rather than something the buyer produces and owns. That cadence can be useful for retrospectives, but it won’t surface underperformance early.
A simple test of PPA performance management maturity is whether you can answer these questions without waiting for settlement or month-end reconciliation.
A useful readiness view usually comes down to questions like:
If you can’t answer these with confidence, it usually means the organization is relying on downstream artifacts (settlement, invoices, monthly reporting) to do a job they were never designed to do: early detection.
The practical question to leave on the table is straightforward:
How confident are you that your organization would detect underperformance early, before it shows up as a financial surprise or a credibility problem?