Every asset-intensive organisation operates under the same fundamental tension. Physical assets – platforms, pipelines, turbines, processing plants, rail networks, water treatment facilities – are the productive core of the enterprise. They generate revenue, carry safety obligations, and underpin regulatory compliance. Yet the discipline responsible for managing those assets across their lifecycle has, until now, lacked a single, internationally comparable measure of how well it is done.
Financial performance has the income statement. Safety performance has total recordable incident rates. Environmental compliance has emissions reporting frameworks validated by international protocol. Asset performance management – the discipline that governs how physical assets are maintained, inspected, invested in, and ultimately retired – has nothing equivalent.
This is not a trivial omission. The global replacement value of industrial physical assets runs into the tens of trillions of dollars. The annual maintenance expenditure of a single major oil and gas operator can exceed $500 million. Mining companies routinely allocate 30–50% of their operating costs to asset maintenance and reliability. Power utilities operate assets with design lives measured in decades, where a single turbine failure can cost millions in lost generation and emergency repair. In every case, the effectiveness of asset management directly determines whether those assets deliver value or destroy it.
And yet, when a VP of Asset Management is asked to demonstrate that their organisation manages its assets well – relative to peers, relative to standards, relative to what good practice actually looks like – there is no independent, standardised answer available to them.
This article examines why that gap exists, what it costs, and how GARPI™ – the Global Asset Reliability & Performance Index – is designed to close it.
The benchmarking gap is structural, not accidental
Benchmarking in asset management is not new. What is new is the recognition that existing approaches have structural limitations that prevent them from functioning as a genuine industry index. A review of the principal methods reveals four categories, each with distinct strengths and constraints.
Standards-based maturity assessments – such as those aligned to ISO 55001 and the earlier PAS 55 – measure management system conformance. They are robust and well-established; their limitation is that they measure policy, process, and documentation rather than whether the assets are actually performing. An organisation can score highly on governance maturity while its assets underperform.
Metrics-based benchmarking – through bodies such as SMRP and EFNMS – compares specific KPIs across organisations. These surveys produce useful data, but they measure operational outputs in isolation. A reactive maintenance ratio of 25% tells you what is happening; it does not tell you whether the organisation has the governance, strategy, and capability to sustain or improve that position.
Proprietary sector surveys – such as the Solomon benchmarking programme for refining – offer deep, sector-specific comparison. Their limitation is precisely their specificity: they do not enable cross-sector comparison, and participation is typically restricted to a single industry.
Consultancy maturity models provide tailored assessment but lack the independence and standardisation required for a genuine industry benchmark. Results are not comparable across organisations assessed by different firms.
The pattern is clear: each approach measures a part of the picture. None integrates operational outcomes with management system maturity. None spans sectors. None produces a single, comparable score that an asset management professional in mining can hold alongside one from power generation, water, or manufacturing and from which meaningful conclusions can be drawn.
The cost of not measuring
The absence of a global benchmark is not a theoretical problem. It has practical, measurable consequences that play out in boardrooms, budget negotiations, regulatory submissions, and operational decisions every day.
Investment decisions made without context. A mining operator reports 94% critical asset availability. Is this first quartile, median, or below average for mining operators of comparable scale in the same region? Without a benchmark, the number is descriptively useful but strategically inert. It cannot inform capital allocation, board reporting, or transformation programme prioritisation. When a VP of Asset Management requests funding for a reliability improvement programme, the argument is materially stronger when supported by evidence that the organisation sits in the third quartile relative to comparable peers – not by an internal metric that has no external reference point.
Improvement programmes without baselines. Organisations invest in asset management improvement – ISO 55001 implementation, RCM programmes, digital transformation – without an independent baseline against which to measure progress. The improvement is measured against the organisation's own history, which is better than nothing, but insufficient. A 10% improvement in maintenance planning compliance is meaningful only if you know where the starting point sits relative to the industry.
Regulators without visibility. Safety-case regulators, utility regulators, and environmental agencies make decisions that depend on the quality of asset management within regulated entities. Yet they have no standardised instrument to assess, compare, or monitor that quality across their regulated population.
A profession without evidence. The asset management profession has grown significantly since the publication of PAS 55 in 2004 and ISO 55001 in 2014. Dedicated roles exist today that did not exist a decade ago. Yet the profession lacks the empirical data to answer its own fundamental questions: What is the average maturity of asset-intensive organisations globally? Which dimensions are strongest and weakest across sectors? What distinguishes top-quartile performers from the median? These are not academic questions. They are the evidence base that a mature profession requires to set priorities, allocate resources, and demonstrate value.
GARPI™: the Optimal® response
The Global Asset Reliability & Performance Index – GARPI™ – is Optimal®'s direct response to this measurement gap. It is the first independent benchmark designed from the ground up to satisfy six requirements that any credible global asset management benchmark must meet: measure outcomes and capability together; align to international standards; enable meaningful peer comparison; produce actionable diagnostic output; be practical to complete; and generate longitudinal value over time.
GARPI™ assesses asset performance management across eight weighted dimensions – spanning operational outcomes, governance, maintenance strategy, digital capability, investment decision-making, workforce culture, supply chain readiness, and risk management. It produces a composite 0–100 score with classification into five maturity tiers. Every dimension is explicitly mapped to ISO 55001 clause requirements and GFMAM Asset Management Landscape subject areas.
The instrument is completed online in approximately ten minutes by a single senior practitioner – a maintenance manager, reliability lead, VP of Asset Management, or equivalent. Respondents answer structured questions across all eight dimensions, plus an organisational profile section that enables peer cohort construction by sector, sub-sector, geography, revenue band, asset type, regulatory environment, and ISO 55001 certification status.
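To make the scoring mechanics concrete, here is a minimal sketch of how a weighted composite score across eight dimensions can be computed. The dimension names follow the article; the equal weighting and the example dimension scores are hypothetical illustrations, not GARPI™'s actual weights or data.

```python
# Illustrative weighted composite score across eight dimensions.
# Dimension names follow the article; the weights and the 0-100
# dimension scores below are hypothetical, not GARPI's actual values.

DIMENSIONS = [
    "Asset Performance Outcomes", "Governance", "Maintenance Strategy",
    "Digital Capability", "Investment Decision-Making", "Workforce Culture",
    "Supply Chain Readiness", "Risk Management",
]

def composite_score(dim_scores: dict, weights: dict) -> float:
    """Weighted average of 0-100 dimension scores, itself on a 0-100 scale."""
    total_weight = sum(weights.values())
    return sum(dim_scores[d] * weights[d] for d in dim_scores) / total_weight

# Hypothetical equal weighting and example dimension scores:
weights = {d: 1.0 for d in DIMENSIONS}
scores = dict(zip(DIMENSIONS, [72, 55, 61, 48, 66, 58, 50, 63]))
print(round(composite_score(scores, weights), 1))  # prints 59.1
```

In practice the weights would differ by dimension (the article notes the dimensions are weighted), but the aggregation principle is the same: dimension-level scores roll up into a single comparable 0–100 figure.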
The eight dimensions of asset performance management
Together, the eight GARPI™ dimensions cover the full scope of what it means to manage physical assets effectively – from the operational results those assets deliver today to the strategic resilience that determines whether that performance is sustainable over the long term.
The first dimension – Asset Performance Outcomes – anchors the index in hard operational reality: critical asset availability, reactive maintenance ratio, unplanned downtime cost, and maintenance investment levels. The remaining seven dimensions assess the organisational capability behind those outcomes. This dual measurement is what distinguishes GARPI™ from existing approaches: an organisation delivering 97% availability through round-the-clock reactive effort is fundamentally different from one delivering 97% availability through systematic preventive and predictive programmes. Only an instrument that measures both the result and the mechanism behind it can distinguish between the two.
From score to action: five maturity tiers
A benchmark that produces a number without context is a curiosity. A benchmark that produces a number with peer comparison, diagnostic insight, and prioritised recommendations is a management tool. GARPI™ is designed to be the latter.
Every GARPI™ composite score is classified into one of five maturity tiers. The tier names are self-explanatory and free of jargon – a director who reads “Reactive” understands the implication immediately.
| Tier | What it means |
|---|---|
| Firefighting | Asset management is largely reactive. No formal management system exists. Performance depends on individual effort rather than organisational capability. |
| Reactive | Basic structures exist but are inconsistently applied. Maintenance is predominantly reactive with some planned elements. Governance is informal. |
| Emerging | A defined asset management approach is in place. Policies, strategies, and plans exist and are applied in most areas. ISO 55001 requirements are partially met. |
| Advanced | Asset performance management is systematically implemented. ISO 55001 requirements are substantially met. Data drives decisions. Governance is robust. |
| Asset Performance Leader | The organisation sets standards others aspire to. Asset management is fully integrated into strategic planning. Continuous improvement is embedded and evidence-based. |
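The tier classification itself is a simple banding of the composite score. The sketch below uses the five tier names from the table above; the score thresholds are hypothetical, since the article does not state the actual band boundaries.

```python
# Maps a 0-100 composite score to one of the five maturity tiers.
# Tier names come from the article; the band thresholds are assumed
# for illustration only -- the real boundaries are not stated here.

TIER_BANDS = [
    (80, "Asset Performance Leader"),
    (60, "Advanced"),
    (40, "Emerging"),
    (20, "Reactive"),
    (0,  "Firefighting"),
]

def maturity_tier(score: float) -> str:
    """Return the tier whose (assumed) lower bound the score meets."""
    for floor, name in TIER_BANDS:
        if score >= floor:
            return name
    raise ValueError("score must be between 0 and 100")

print(maturity_tier(59.1))  # with these assumed bands: Emerging
```

Whatever the real boundaries, the design intent is the same: a continuous score for precision, plus a small set of named tiers for communication with non-specialist audiences.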
The output is not merely a single number and a tier label. Each participant receives a dimension-level breakdown showing where they score relative to their selected peer cohort, with specific gaps identified and mapped to ISO 55001 clauses. The breakdown can serve as a proxy gap analysis – an organisation with weaknesses identified in the governance dimension can map those weaknesses directly to ISO 55001 Clauses 4–6, creating a targeted improvement programme that simultaneously raises its GARPI™ score and moves it toward formal compliance or certification.
The certification question
ISO 55001 certification has grown steadily since 2014, but adoption remains modest relative to ISO 9001 and ISO 14001. The reasons are well-documented: the standard is perceived as abstract, the path from awareness to certification is unclear, and the return on investment is difficult to quantify.
GARPI™ addresses all three barriers. It translates the abstract clauses of ISO 55001 into concrete, measurable statements that practitioners recognise from their daily work. The dimension scores create a visual roadmap that makes the path to compliance visible. And the benchmark comparison provides the quantitative evidence that justifies investment: when an organisation can see that it scores materially below its peers in specific dimensions, the case for action is no longer theoretical.
Preliminary analysis indicates a significant average score gap between ISO 55001 certified and non-certified organisations, confirming that certification correlates with higher maturity – but the two distributions overlap widely. Some non-certified organisations demonstrate strong maturity, while some certified organisations score below the median. GARPI™ makes this variation visible for the first time, providing empirical evidence that certification is a useful signal but not a sufficient proxy for operational capability.
The compounding value of longitudinal data
The arguments above apply from the first annual cycle. But the most significant value of GARPI™ compounds over time. A single cycle produces a benchmark. Multiple cycles produce a dataset. Three or more cycles produce trend data with predictive value.
For individual organisations, longitudinal participation creates a performance trajectory. An organisation that scores in the Reactive tier in Year 1 and the Emerging tier in Year 2 has not just improved by a number of points – it has changed its tier classification, shifted its position relative to a moving peer baseline, and produced evidence that its improvement programme is delivering measurable results. This is the kind of evidence that sustains executive sponsorship and protects asset management budgets through economic downturns.
For the industry, the accumulating dataset will reveal patterns that are currently invisible. Are some sectors improving faster than others? Which dimensions prove most resistant to improvement? Is the gap between top-quartile and median organisations narrowing or widening? Do organisations that achieve ISO 55001 certification sustain their maturity gains, or do scores plateau post-certification? These questions cannot be answered by any instrument that runs once. They require a longitudinal index – and that is precisely what GARPI™ is designed to become.
Who GARPI™ is for
Asset owners and operators gain context. Most asset-intensive organisations track internal performance metrics; what they lack is an external reference point against which to interpret those metrics. GARPI™ provides that reference point – not as a global average, but as a structured comparison against peers with similar characteristics. The VP of Operations who can present a GARPI™ report showing specific dimension gaps relative to top-quartile peers is making a fundamentally different argument from one who can only say “we think we need to improve.” The first is evidence. The second is assertion.
The asset management profession gains its first empirical dataset on global practice. The IAM, GFMAM, and national asset management bodies can use GARPI™ data to identify where the profession is advancing and where it remains immature – informing standards development, training priorities, and research agendas with evidence rather than assumption.
Regulators and investors gain a structured framework for assessing asset management risk. An Advanced-tier organisation presents a different risk profile from a Reactive-tier organisation, and that distinction now has a common language and a quantitative basis.
The industry benchmarks financial performance with precision. It is time to benchmark asset management capability with the same rigour.
Ten minutes. Eight dimensions. One score that puts your asset management maturity in context — against peers, against standards, against what good looks like. Start your GARPI™ assessment →