The Critical Gap in Customer Experience Management Most Companies Miss

Customer experience management dominates strategic conversations across banking, utilities, telecoms, and retail. Companies invest heavily in CRM systems, digital channels, and customer journey mapping. Yet a fundamental gap persists: the lack of integrated visibility into how company-wide change initiatives shape customer perceptions.

This guide reveals why traditional approaches fall short, quantifies the risks of disconnected change efforts, and provides a practical roadmap for creating a true single view of the customer through change impact integration.

What Prevents Companies from Achieving a Single View of the Customer?

Recent research confirms persistent challenges in customer experience management. A 2024 Forrester study found 48% of enterprises still struggle with unified customer data across channels and departments. Similarly, Gartner reports 52% cite building cohesive new experiences as their top barrier.

The core issue lies beyond siloed CRM data. Companies lack visibility into the cumulative impact of concurrent initiatives—product changes, pricing adjustments, IT rollouts, regulatory communications—that collectively define customer reality.

Why Traditional CRM Approaches Fall Short

CRM systems excel at marketing automation, sales tracking, and contact centre efficiency. However, they capture only transactional interactions, missing the broader context of organisational change.

Traditional CRM Focus and Limitations

  • Marketing campaign data
  • Sales conversion metrics
  • Service interaction logs
  • Customer segmentation profiles

These systems overlook how product updates, pricing shifts, or compliance communications alter customer perceptions between tracked touchpoints.

The Missing Piece: Change Impact Tracking

The critical gap involves mapping all customer-impacting initiatives into a unified view. This includes marketing campaigns plus operational changes affecting service delivery.

Change Initiatives Shaping Customer Experience

  • Product lifecycle changes (end-of-life, new features)
  • Pricing and billing adjustments
  • IT system rollouts impacting service access
  • Regulatory compliance communications
  • Employee training initiatives influencing service quality
  • Partner or supplier changes affecting delivery

Without this integrated picture, companies cannot anticipate cumulative customer confusion or frustration.

Traditional CRM vs Change Impact Data vs Integrated CX View

Data Source | Focus | Customer Insight | Strategic Value
CRM Systems | Marketing, sales, service transactions | Individual touchpoints | Tactical optimisation
Change Impact Data | Company initiatives affecting customers | Planned experience shifts | Risk anticipation
Integrated View | Combined datasets | Holistic customer reality | Strategic CX orchestration

This table illustrates why isolated CRM investments yield incomplete results.

Risks of Disconnected Change Initiatives

Without integrated change visibility, companies create conflicting customer signals that erode trust and satisfaction. Real-world examples illustrate the consequences.

Common Customer Confusion Scenarios

  • One department ends a credit card product while sales teams push aggressive uptake targets
  • IT rollout disrupts online banking while marketing promotes digital-first convenience
  • Pricing changes coincide with loyalty program promotions, confusing value messaging
  • Regulatory communications clash with personalised marketing campaigns

These disconnects compound across multiple initiatives, overwhelming customers.

Financial Impact of Poor CX Coordination

The stakes are substantial. Recent studies quantify the cost:

  • Forrester 2024: Companies lose $1,200+ per negative customer experience
  • Gartner 2025: 42% of telecom households report negative experiences from conflicting communications
  • McKinsey: Utilities face 28% churn risk from uncoordinated service disruptions

Cumulative impact across customer bases represents millions in lost revenue annually.

[Figure: Customer experience of change impacts]

The Solution: Integrated Customer Change Impact Management

Create a unified view combining CRM data with change impact analytics for holistic CX orchestration.

Core Components of Integrated CX Visibility

  1. Centralised Change Repository: Track all customer-impacting initiatives across departments
  2. Customer Segmentation Mapping: Align change impacts with specific personas and journeys
  3. Timing & Volume Analysis: Visualise change saturation by customer segment over time
  4. Impact Correlation Engine: Link initiatives to expected CX outcomes and risks
  5. Strategy Alignment Dashboard: Compare planned changes against customer experience goals

5 Strategic Benefits

  • Anticipate cumulative customer confusion before rollout
  • Optimise change sequencing to minimise disruption peaks
  • Align departmental initiatives with unified CX strategy
  • Quantify ROI from coordinated vs siloed change efforts
  • Enable proactive service recovery planning

Customer Change Impact Matrix Example

Customer Segment | Product Change | Pricing Shift | IT Rollout | Regulatory Comm. | Total Impact Score
Premium Banking | Medium | High | Low | Medium | High
Mass Market | Low | High | High | Low | High
Digital Native | High | Low | High | Low | High

This matrix reveals saturation risks by segment.
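
For readers who want the scoring mechanics made explicit, here is a minimal Python sketch of how per-initiative ratings could be rolled up into a total impact score per segment. The Low/Medium/High weights and the saturation threshold are illustrative assumptions, not values taken from the matrix itself.

```python
# Sketch: roll per-initiative impact ratings up to a total score per segment.
# The Low/Medium/High weights and the saturation threshold are illustrative
# assumptions, not values defined by the matrix above.
RATING_WEIGHTS = {"Low": 1, "Medium": 2, "High": 3}
SATURATION_THRESHOLD = 8  # hypothetical cut-off for a "High" total impact

segments = {
    "Premium Banking": {"Product Change": "Medium", "Pricing Shift": "High",
                        "IT Rollout": "Low", "Regulatory Comm.": "Medium"},
    "Mass Market":     {"Product Change": "Low", "Pricing Shift": "High",
                        "IT Rollout": "High", "Regulatory Comm.": "Low"},
    "Digital Native":  {"Product Change": "High", "Pricing Shift": "Low",
                        "IT Rollout": "High", "Regulatory Comm.": "Low"},
}

for segment, impacts in segments.items():
    total = sum(RATING_WEIGHTS[rating] for rating in impacts.values())
    label = "High" if total >= SATURATION_THRESHOLD else "Moderate"
    print(f"{segment}: total impact score {total} ({label})")
```

Once the totals are numeric, segments can be ranked and tracked over time rather than compared by eye.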

Implementation Roadmap for Integrated CX Change Management

Phase 1: Foundation (0-3 Months)

  • Inventory all customer-impacting initiatives across departments
  • Map initiatives to customer segments and journey touchpoints
  • Establish cross-functional CX governance council
  • Build baseline change impact repository

Phase 2: Integration (3-6 Months)

  • Connect change data with existing CRM/customer systems
  • Deploy change saturation dashboards by segment
  • Implement automated conflict detection alerts
  • Launch pilot optimisation for high-risk periods

Phase 3: Optimisation (6-12 Months)

  • Embed CX alignment reviews in initiative approval processes
  • Scale predictive impact modelling across portfolio
  • Establish continuous improvement feedback loops
  • Benchmark against industry CX leaders

Governance and Success Factors

Essential Governance Elements

  • Executive sponsorship with direct profit/loss accountability
  • Cross-departmental representation in change review forums
  • Standardised change impact assessment templates
  • Monthly portfolio saturation reporting to leadership

Critical Success Metrics

  • Reduction in customer confusion complaints (25% target)
  • Improved Net Promoter Score during change periods
  • 30% faster issue resolution through proactive planning
  • Higher departmental collaboration scores

Frequently Asked Questions (FAQ)

What is the biggest gap in customer experience management?
Lack of integrated visibility into how company-wide change initiatives collectively shape customer perceptions and experiences.

Why do CRM systems alone fail to deliver unified CX?
CRM captures transactions but misses operational changes like product updates, pricing shifts, and IT rollouts that define customer reality.

How much do poor CX experiences cost companies?
Recent studies show $1,200+ lost per negative experience, with millions annually across customer bases in banking and utilities.

What does integrated CX change management look like?
Centralised change repositories, customer segmentation mapping, saturation dashboards, and strategy alignment analytics working together.

How do you identify customer change saturation risks?
Use impact matrices showing concurrent initiatives by segment, highlighting high-risk periods needing sequencing adjustments.

What is the first step toward CX change integration?
Conduct an inventory of all customer-impacting initiatives across departments to establish baseline visibility.

Change Management Measures: An Enterprise, Business and Project Framework

Change management measurement remains one of the most underdeveloped capabilities in the field. Many organisations track change activities diligently — who attended what, which communications went out, whether training was completed — but struggle to demonstrate the connection between those activities and the business outcomes the change was designed to produce. The result is a discipline that is frequently undervalued by executives, precisely because it cannot show its own impact in the language that executives care about.

The fundamental problem is that most change measurement frameworks operate at a single level — typically the project level — and focus on activities rather than outcomes. A more useful framework operates across three distinct levels: enterprise, business unit, and project. Each level asks different questions, uses different data, and serves different decision-makers. Together, they provide the complete picture that neither programme-level nor portfolio-level measurement alone can deliver.

Why most change measurement falls short

The most common change measurement approach is to track the activities of a specific change programme: how many people were trained, how many communications were sent, what the survey results showed at go-live. This is not without value. Programme-level activity data provides accountability for change delivery and allows teams to identify when specific components — training, communication, stakeholder engagement — are underperforming relative to plan.

But activity measurement has a fundamental limitation: it measures what the change programme did, not whether what it did worked. A programme can achieve 95 percent training completion and still fail to produce the behaviour change the business needs. Prosci’s research on change management ROI consistently finds that programmes with excellent activity metrics but poor adoption outcomes are common — and that the gap between activity and adoption is the primary measurement failure in the field.

The second limitation is that programme-level measurement is blind to the portfolio effect. A team absorbing three major changes simultaneously may show adequate readiness on each programme’s assessment, while its actual adaptive capacity is severely depleted. No programme-level measurement system can detect this, because each programme sees only its own impact on the team. Portfolio-level measurement — the enterprise and business unit levels of the framework — is required to make the cumulative picture visible.

The three levels of change management measurement

A comprehensive change measurement framework operates simultaneously across three levels. Each level has its own measurement purpose, its own data requirements, and its own primary audience. Building measurement capability at all three levels is what distinguishes organisations that can genuinely manage their change portfolio from those that can only report on individual programme activities.

Enterprise level

Enterprise-level change measurement answers the question: how well is our organisation managing change as a strategic capability? It is concerned with the aggregate picture — the total change load being absorbed across the organisation, the distribution of that load across different parts of the business, and the organisation’s overall change capacity and maturity. The primary audience for enterprise-level metrics is the executive team and board, for whom change management is a risk and capability question rather than a delivery question.

Key enterprise-level measures include the total volume of change programmes in flight across the portfolio, the concentration of change load in specific divisions or role groups, trend data on change saturation and fatigue indicators (attrition rates during high-change periods, engagement score movements, absenteeism), and overall adoption rates across major transformation programmes. Enterprise-level measurement also includes benchmarking: how does the organisation’s change capacity compare to research-derived standards or to prior periods?

The enterprise-level view is what enables the most consequential change governance decisions: whether to defer a programme because specific teams are already at or beyond their absorption capacity, whether to invest in additional change resources because the portfolio is systematically under-resourced, or whether specific divisions require targeted capability development to handle the rate of change expected of them.

Business unit level

Business unit-level measurement answers the question: how well is change landing in this part of the organisation? It operates at the level of a division, department, or significant team, and is primarily concerned with the change experience of a defined employee group across all the changes affecting them simultaneously — not just the changes coming from a single programme.

Business unit-level measures include the aggregate change impact score for the group — a composite measure of how many changes are affecting the group, how significant those changes are, and how they are distributed across the year. They include readiness assessments that capture the group’s preparedness for their current change load, not just for individual programmes. They include adoption indicators aggregated across the changes the group has been through in the past 12 months, providing a baseline against which new changes can be assessed. And they include the qualitative picture: what are managers and employees in this group experiencing, and what does that tell us about the group’s current adaptive capacity?
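
As a rough illustration of how such a composite might be assembled, the sketch below combines volume, significance, and a concurrency-based distribution measure for one business unit. The severity scale, weightings, example change records, and the year used are assumptions made for the sketch, not a prescribed formula.

```python
from datetime import date

# Sketch: a composite change impact score for one business unit, combining
# volume (how many changes), significance (how severe), and distribution
# (how much the changes overlap across the year). All weights are assumed.
changes = [
    # (name, severity 1-5, start, end) -- illustrative records only
    ("CRM migration",       4, date(2024, 2, 1), date(2024, 6, 30)),
    ("Pricing restructure", 3, date(2024, 4, 1), date(2024, 8, 31)),
    ("New operating model", 5, date(2024, 5, 1), date(2024, 12, 31)),
]

volume = len(changes)
significance = sum(severity for _, severity, _, _ in changes)

# Distribution: count how many changes are live on the first day of each month,
# and use the peak month of overlap as the concurrency penalty.
monthly_load = [
    sum(1 for _, _, start, end in changes if start <= date(2024, m, 1) <= end)
    for m in range(1, 13)
]
peak_concurrency = max(monthly_load)

composite = volume * 1.0 + significance * 0.5 + peak_concurrency * 2.0
print(f"Volume={volume}, significance={significance}, "
      f"peak concurrency={peak_concurrency}, composite score={composite}")
```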

Business unit-level measurement is most valuable to the leaders who are accountable for the performance of those groups — general managers, division heads, and the people leaders who sit one or two levels below them. It gives them data they cannot obtain from programme-level reporting, because programme-level reporting does not aggregate across programmes and does not show the cumulative picture.

Project level

Project-level measurement is the most familiar tier and the most developed in most organisations. It answers the question: how well is this specific change programme delivering its intended outcomes? The primary audience is the programme sponsor, the change management team, and the project governance board.

Best-practice project-level change measurement tracks through three phases: plan, execute, and realise. In the planning phase, measurement focuses on impact assessment quality — how thoroughly the change’s impacts on specific roles and teams have been identified and documented. In the execution phase, it covers the full range of change activity metrics (stakeholder engagement, communication reach, training completion) alongside early readiness and comprehension indicators. In the realisation phase, it shifts to adoption and benefit metrics: are employees performing in the new way? Are the business benefits the change was designed to produce materialising?

Prosci’s ADKAR model provides a useful framework for structuring project-level measurement across the individual adoption journey: awareness, desire, knowledge, ability, and reinforcement. Measuring at each stage of the ADKAR sequence helps change teams identify where in the adoption journey the programme is losing traction, rather than receiving undifferentiated feedback that “the change isn’t landing.”
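
One simple way to operationalise this is to score each ADKAR stage, for example from pulse-survey questions, and flag the first large drop between consecutive stages. The scores and drop threshold below are hypothetical.

```python
# Sketch: locate where the adoption journey loses traction by comparing
# average scores (0-100) at each ADKAR stage. Scores and the threshold are
# hypothetical, e.g. sourced from pulse surveys or manager assessments.
ADKAR_STAGES = ["Awareness", "Desire", "Knowledge", "Ability", "Reinforcement"]

stage_scores = {
    "Awareness": 88, "Desire": 81, "Knowledge": 79,
    "Ability": 54, "Reinforcement": 41,
}
DROP_THRESHOLD = 15  # points lost between consecutive stages worth flagging

for earlier, later in zip(ADKAR_STAGES, ADKAR_STAGES[1:]):
    drop = stage_scores[earlier] - stage_scores[later]
    if drop >= DROP_THRESHOLD:
        print(f"Traction lost between {earlier} and {later} (drop of {drop} points)")
```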

Connecting the three levels: the measurement flow

The three measurement levels are not independent. They form a connected system in which data flows upward from project to business unit to enterprise, and governance decisions flow downward in the opposite direction. Understanding how this flow works is essential to building a measurement framework that actually influences decisions rather than simply producing reports.

The upward flow begins with structured impact assessment at the project level. Each programme systematically identifies which teams and role groups are affected, what types of impacts they are experiencing, and how significant those impacts are. This data is aggregated at the business unit level to produce a picture of the cumulative change load on each group. That business unit data is then aggregated at the enterprise level to produce the portfolio-wide picture that executives need to make strategic resource and sequencing decisions.

The downward flow of governance decisions takes the enterprise and business unit data and translates it into constraints and guidance for individual programmes. If the enterprise-level data shows that a specific division is at capacity, the governance decision might be to defer a planned programme affecting that division by one quarter. If the business unit data shows that a team’s adoption of a recently completed change is low, the governance decision might be to provide additional stabilisation support before launching the next wave of change on that team.
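
Taken together, the upward aggregation and the downward governance check can be sketched as a straightforward roll-up over shared impact records. The field names, severity scale, and capacity threshold below are illustrative assumptions rather than a standard schema.

```python
from collections import defaultdict

# Sketch: project-level impact records rolled up to business unit totals,
# then to an enterprise view with a governance flag. Fields and the
# capacity threshold are illustrative assumptions, not a standard schema.
impact_records = [
    # (programme, business_unit, impact_severity 1-5)
    ("Core banking upgrade", "Retail Banking", 4),
    ("Pricing restructure",  "Retail Banking", 3),
    ("New CRM rollout",      "Contact Centre", 5),
    ("Regulatory reporting", "Finance",        2),
]

CAPACITY_THRESHOLD = 6  # hypothetical absorption limit per business unit

# Business unit level: aggregate load per group across all programmes.
unit_load = defaultdict(int)
for _, unit, severity in impact_records:
    unit_load[unit] += severity

# Enterprise level: portfolio-wide total plus units needing a governance call.
portfolio_total = sum(unit_load.values())
over_capacity = [u for u, load in unit_load.items() if load >= CAPACITY_THRESHOLD]

print(f"Enterprise change load: {portfolio_total}")
print(f"Units at or beyond capacity (candidates for deferral): {over_capacity}")
```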

This connected measurement system is what platforms like The Change Compass are designed to support. By providing a shared data layer across all three measurement levels — with structured impact data collected at the project level and automatically aggregated to business unit and enterprise views — these platforms make the full measurement framework operationally viable rather than theoretically sound but practically unworkable.

Making measurement actionable

The purpose of change measurement is not to produce reports. It is to enable better decisions. A measurement framework that generates data that no one acts on has failed its purpose, regardless of how sophisticated the metrics are. Making measurement actionable requires three things: the right data at the right time, a clear governance process for acting on it, and decision-makers who have both the authority and the appetite to make difficult calls based on what the data shows.

The right data at the right time means measurement that is aligned to decision windows. Enterprise-level data needs to be available when portfolio investment decisions are being made — typically quarterly, in alignment with financial planning cycles. Business unit-level data needs to be available to division leaders when they are making decisions about programme timing and resourcing. Project-level data needs to be available to programme teams on a continuous basis, so that course corrections can be made during implementation rather than identified in a post-implementation review when it is too late to act.

The governance process for acting on measurement data is frequently the weakest link. Many organisations collect reasonable change data but have no clear process for what happens when the data shows a problem. McKinsey research on change programme failures consistently finds that the most common cause of poor change outcomes is not the quality of the change design but the quality of the in-flight decision-making when early signals indicate the programme is not landing as expected. A measurement framework without a governance process for acting on what it reveals is a reporting system, not a management tool.

Frequently asked questions

What are the three levels of change management measurement?

A comprehensive change management measurement framework operates at enterprise, business unit, and project levels. The enterprise level measures the organisation’s overall change portfolio, capacity, and management maturity. The business unit level measures the aggregate change load and adoption experience of specific employee groups across all concurrent changes affecting them. The project level measures the delivery and adoption outcomes of individual change programmes. Each level serves different decision-makers and requires different data.

Why is measuring training completion not enough?

Training completion is an activity measure — it tells you someone participated in a training programme, not whether they understood the content, can apply it, or have adopted the new process or behaviour. Outcome measures — adoption rates, error rates in new processes, productivity recovery — are what demonstrate whether a change programme has achieved its purpose. Organisations that rely primarily on activity measures consistently overestimate their change effectiveness and underestimate their adoption gaps.

How does change measurement support portfolio governance?

Portfolio-level change measurement makes visible the aggregate change load on specific employee groups across all concurrent programmes — information that is invisible to programme-level measurement systems. This data enables portfolio governance decisions about sequencing, pacing, and resourcing that cannot be made without it. When enterprise and business unit-level measurement shows that specific teams are at or beyond their absorption capacity, the governance body has the evidence it needs to defer or descope programmes affecting those teams rather than proceeding and generating resistance and attrition.

What does effective change measurement look like in practice?

Effective change measurement is structured, consistent, and connected to decision processes. It uses a shared taxonomy for impact types across all programmes, so that data can be aggregated across the portfolio. It is timed to decision windows rather than reporting cycles. It covers all three measurement levels — not just project-level activity. And it has a clear governance process for what happens when the data shows a problem, so that measurement informs action rather than just generating reports.


I discovered these 5 surprises in managing an agile digital project

As someone who normally oversees the change management side of large programs and portfolios, I now find myself in the shoes of a project manager. Here's the background. I now manage a digital software-as-a-service business (The Change Compass) aimed at those who are driving multiple changes in their organizations. In terms of managing change deliverables and stakeholders, I was perfectly comfortable, having done this with some of the largest organizations in the world. However, I was not trained as a project manager, particularly not in managing a digital product.

Having worked on very large digital projects over the years, I'm familiar with the different phases of the project lifecycle and lean/agile/scaled agile methodologies. However, managing a digital project hands-on has revealed some very surprising learnings for me. I will share these below.

1. The customer/user doesn't always know best

Over the years we have received a great deal of customer feedback about what worked and what didn't, and we have iteratively shaped the application in line with customer wishes. However, a customer or user suggestion is not always what is best for them. Some features we developed to let users build different reports have, after plenty of feedback and iteration, turned out to be rarely used. On the other hand, features designed from our observations of how users actually behave are used very frequently. During the design phase some users commented that they were not sure these features would work; after trialing them, they adopted them easily and have not raised any suggestions or comments since.

It is similar to when the first iPhone was released. Many people were negative about the missing keyboard, arguing that the lack of tactile buttons was a sure sign it would not work. Did Apple derive the iPhone purely from customer feedback? Did customers already know what they wanted and simply tell Apple? No. Yet the screen-only phone with few or no buttons is now the standard in mobile phone design.

Example: In our digital project management experience with The Change Compass, we initially prioritized implementing a feature based on numerous customer requests. This feature allowed users to customize their dashboard layout extensively. However, after analyzing user behavior data post-launch, we discovered that this feature was rarely used by our target audience. Surprisingly, users preferred a simpler default layout that we had originally designed based on our understanding of their workflow and preferences. As a result, we decided to refine the default layout further and focus on enhancing features that aligned more closely with user needs and behaviors within our change management software.

To read more about avoiding key gaps in managing customer experience, click here.

2. Setting clear expectations is critical


At The Change Compass, we have a very diverse and scattered team. We have our development team in India, a UX designer in Canada, a graphic designer in Europe, and analysts in Australia. Most of our team members are quite familiar with agile practices. They are familiar with each phase of the agile life cycle, Kanban boards, iterating releases, etc. For our Ultimate Guide to Agile for Change Managers, click here.

However, one big lesson I learned was the importance of setting clear and mutually agreed work deliverables. With such a diverse team composition comes a diverse understanding of the same concept. In agile, we try not to over-document and rely on discussions and ongoing engagement to achieve collaboration and clarity.

However, what I learned was that clear documentation is critical to ensure a crystal-clear understanding of the scope, what each deliverable looks like, what quality processes are in place to reach the outcome, the dependencies across different pieces of work, and what each person is and is not accountable for. All of this sounds like common sense, yet agile projects commonly err on the side of too little documentation, leading to frustration, confusion, and missed outcomes. In our experience, documentation is critical.

Example: At The Change Compass, we’ve learned the importance of setting clear and mutually agreed-upon work deliverables, especially with our diverse global team. Despite our team’s familiarity with agile practices, we realized that documentation is critical to ensure a crystal-clear understanding of project scope, deliverables, quality processes, dependencies, and individual accountabilities. By documenting these aspects thoroughly within our change management software, we’ve achieved better collaboration, clarity, and outcome achievement across our distributed team.

3. Boil everything down to its most basic meaning

In digital projects there is a lot of technical jargon around backend, front-end, and mid-layer design elements. As in any technology project, there is a natural inclination to become absorbed in debates about the best technical solution. Since I did not have a technology background, I forced myself to become familiar with the delivery jargon very quickly to compensate.

However, what I found was that with such a diverse team, even within the technical team there is often misunderstanding about what a technical term means. On top of this, we have non-technical team members such as analysts, UX designers, and graphic designers. We have experienced plenty of team miscommunication and frustration as a result of too much technical language.

To ensure the whole team is clear on what we are working on, how we are approaching it, and their roles along the way, we've tried hard to 'dumb down' technical jargon into basic language as much as possible. Yes, there is a basic set of digital language necessary for delivery that all members should understand, but beyond this we've tried to keep things very simple so everyone stays on the same page. The same applies in reverse: graphic design terms that the technical team may not understand can cause just as much confusion.

Example: In our digital project management endeavors with The Change Compass, we’ve encountered challenges due to technical jargon and miscommunications within our diverse team. To mitigate this, we’ve prioritized simplifying technical language into basic terms that everyone, including non-technical team members like Analysts, UX designers, and Graphic Designers, can understand. By keeping communication simple and clear, we ensure that everyone is on the same page regarding project objectives, approaches, and roles within our change management platform.

4. Team dynamics are still key … yes, even in a digital project


To get on the agile bandwagon, many project practitioners invest heavily in training to become more familiar with how agile projects are run. While this is valuable, what I've found is that no matter the methodology, agile or non-agile, digital or non-digital, the basics remain the same: effective team dynamics are key to a high-performing project team.

Most of the issues we have faced are around team communications, shared understanding, how different team members work with each other, and of course cross-cultural perceptions and behaviors. Any effort we have placed in discussing and resolving team dynamics and behaviors has always led to improved work performance.

Example: Despite the focus on agile methodologies and digital tools, effective team dynamics remain crucial within The Change Compass. We’ve observed that issues around team communications, shared understanding, and cross-cultural perceptions can significantly impact project performance. By investing effort in discussing and resolving team dynamics and behaviors, we’ve consistently improved work performance and collaboration within our change management software, resulting in better outcomes for our clients.

5. Releasing something that isn't perfect is a struggle


As a typical corporate guy who has worked in various large multinationals, it is ingrained in me that quality assurance and risk management are key to any piece of work. Quality work ticks all the boxes, has no flaws, and exposes the company to no risk. In the typical corporate world, flaws are to be avoided; thorough research, analysis, and testing are required to ensure quality is optimal.

The agile approach challenges this notion head-on. The assumption is that it is not possible to know exactly how the customer or user will react. Therefore, it makes sense to start with a minimum viable product and iterate continuously to improve, leveraging ongoing customer feedback. In this approach, what is released is expected not to be perfect; it cannot be. The aim is to have something usable first, then work to gradually perfect it.

Whilst in theory this makes sense, I've personally found it very difficult not to try to tick all the boxes before releasing something to the customer. There are potentially hundreds of features or designs that could be incorporated to make the overall experience better. We all know that creating a fantastic customer experience is important. Yet an agile approach refrains from trying to perfect the customer experience upfront, relying instead on continuous improvement.

Example: As individuals with a background in corporate change management, we initially struggled with the agile approach of releasing minimum viable products (MVPs) within The Change Compass software. While ingrained in the notion of quality assurance and risk management, we learned to embrace the agile principle of continuous improvement. Instead of aiming for perfection upfront, we focus on releasing usable features and iterating based on ongoing customer feedback. This approach allows us to deliver value incrementally and adapt our change management software to evolving user needs and preferences.

Ready to streamline your change management process and drive better outcomes with The Change Compass? Book a demo today to see how our software can help your organization succeed.

How to Lead Change and Drive Business Results Through Data

In most modern organisations, data drives decisions. Marketing teams track conversion rates to the decimal point. Finance teams model scenarios with precision. Operations leaders measure throughput, defect rates, and cycle times as a matter of course. Yet change management, a discipline that directly influences whether transformation programmes succeed or fail, has long operated on a different basis. Change leaders frequently rely on anecdote, stakeholder intuition, and high-level readiness surveys that tell them very little about what is actually happening on the ground. The result is a discipline that struggles to justify its value and, more critically, struggles to course-correct when things go wrong.

This gap is not simply a matter of preference or professional culture. It reflects a deeper structural challenge: change management has historically lacked the tools, frameworks, and shared standards required to turn complex human and organisational behaviour into reliable, actionable data. Where a project manager can point to schedule variance and earned value, a change leader has often had to rely on statements like “people seem engaged” or “resistance is lower than last quarter.” These observations may be accurate, but they do not give executives the confidence to invest further, adjust scope, or make time-sensitive decisions about programme delivery.

The good news is that this is changing. A growing number of change leaders are adopting data-driven approaches that connect change activity to measurable business outcomes. Platforms like The Change Compass are making it practical for organisations to collect, visualise, and act on change data in ways that were simply not possible a decade ago. This article explores why data maturity matters in change management, what good change data looks like in practice, and how change leaders can use it to earn executive confidence and drive results.

Why change management lags other disciplines in data maturity

Change management emerged largely from behavioural science, organisational psychology, and consulting practice rather than from the quantitative traditions of engineering or finance. The foundational models – Lewin’s unfreeze-change-refreeze, Kotter’s eight steps, the ADKAR model from Prosci – are deeply valuable, but they were designed as conceptual frameworks rather than measurement systems. This means that while they help practitioners think clearly about change, they do not inherently produce the kind of data that boards and executive committees use to evaluate business performance.

A 2023 Prosci benchmarking report found that organisations with excellent change management programmes are six times more likely to meet project objectives than those with poor change management. Despite this compelling evidence, many organisations still struggle to translate that finding into a data collection discipline within their own programmes. The challenge is partly methodological and partly cultural. Change practitioners are often stretched across multiple concurrent initiatives, leaving little capacity to design and maintain rigorous measurement systems. There is also a widespread belief that human behaviour is simply too complex to quantify in meaningful ways.

Gartner research on digital transformation has consistently highlighted that the human and organisational dimensions of change are the leading cause of programme failure, yet these dimensions receive the least structured measurement attention. When a technology implementation stalls, it is rarely because the software does not work – it is because adoption is lagging, training has not translated to behaviour change, or frontline managers are not reinforcing the new ways of working. Without data, these problems go undetected until they become crises. With data, they can be spotted early and addressed systematically.

What good change data actually looks like

Good change data is specific, timely, and connected to business outcomes. It goes beyond the typical “readiness survey” that asks employees whether they feel prepared for an upcoming change. While readiness surveys have their place, they represent only one dimension of what change leaders need to manage effectively. A robust change measurement system captures data across at least three categories: the volume and complexity of change hitting different parts of the organisation, the progress and effectiveness of change enablement activities, and early indicators of adoption and sustained behaviour change.

Change volume and complexity data helps leaders understand the cumulative burden being placed on different employee populations. A business unit that is simultaneously navigating a technology replacement, a restructure, and a new performance management framework is under far greater change pressure than one experiencing a single initiative. Without visibility into that cumulative load, leaders may unknowingly overload teams, driving disengagement, absenteeism, and productivity decline. The Change Compass platform was specifically designed to give organisations a consolidated view of their change portfolio, enabling leaders to see where change saturation is occurring and to sequence or reprioritise initiatives accordingly.

Enablement activity data tracks the completion and quality of change management deliverables such as stakeholder engagement sessions, training completions, communications sent, and manager briefings conducted. This data answers the question of whether the change programme is being executed as designed. Adoption indicators, by contrast, measure whether behaviour is actually shifting. These might include system login rates, process compliance metrics, quality scores, or customer satisfaction results that can be directly linked to the changes being implemented. Together, these three data streams give change leaders a genuinely comprehensive picture of how their programmes are progressing.

Using data to influence executive decision-making

One of the most important applications of change data is in executive communications. Senior leaders are accustomed to receiving data dashboards from finance, operations, and technology. When a change leader walks into a steering committee meeting with comparable data – showing adoption rates by business unit, change saturation scores by team, and leading indicators of programme risk – it fundamentally changes the conversation. Instead of providing subjective commentary, the change leader becomes a peer who is contributing to the evidence base the organisation uses to make decisions.

McKinsey research on large-scale transformation has found that programmes with strong senior sponsorship and clear performance data are substantially more likely to deliver their intended value. The data dimension matters because it gives sponsors something tangible to act on. When a change leader can show that adoption in one region is 40 per cent below target and identify the specific barriers driving that gap, a senior sponsor can intervene with authority and specificity. When the only information available is “adoption seems slower in the north,” the sponsor has no clear basis for action and is likely to default to pressure rather than problem-solving.

Building executive influence through data also requires change leaders to understand what executives actually care about. Board members and executive committees are typically focused on financial performance, risk exposure, customer outcomes, and employee engagement. Change data becomes far more compelling when it is framed in those terms. Instead of reporting that “80 per cent of managers have completed their briefings,” a data-driven change leader might show that business units with high manager engagement scores are tracking 25 per cent ahead of adoption targets, with a projected positive impact on the revenue run rate of the new system. That framing connects change activity to the things executives are accountable for delivering.
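
As a small illustration of that reframing, a report can express adoption as a gap against target by business unit so the sponsor sees exactly where to intervene. All figures and field names below are hypothetical.

```python
# Sketch: express adoption data as gaps against target by business unit,
# which gives sponsors something specific to act on. All numbers are
# hypothetical examples, not benchmarks.
adoption_by_unit = {
    # unit: (actual adoption %, target adoption %)
    "North region":   (38, 65),
    "South region":   (71, 65),
    "Contact Centre": (60, 70),
}

# Sorted so the unit furthest behind target comes first in the report.
for unit, (actual, target) in sorted(
        adoption_by_unit.items(), key=lambda kv: kv[1][0] - kv[1][1]):
    gap = actual - target
    status = "behind target" if gap < 0 else "ahead of target"
    print(f"{unit}: {actual}% adoption vs {target}% target ({abs(gap)} pts {status})")
```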

Connecting change metrics to business performance outcomes

The most sophisticated change measurement systems do not stop at tracking change activities – they create a line of sight between change management inputs and business performance outputs. This is sometimes called the change value chain, and building it requires deliberate design at the outset of a programme rather than an afterthought at the end. Change leaders who wait until a programme is complete to evaluate its impact will always struggle to demonstrate causality. Those who define their measurement framework at the start, identifying which business metrics should move as a result of successful adoption, are in a far stronger position.

Consider a customer experience transformation programme designed to reduce complaint volumes and improve Net Promoter Score. A well-designed measurement framework for this programme would track not only whether employees have completed the required training, but also whether their interactions with customers are changing in observable ways – perhaps through call quality monitoring, customer feedback scores, or first-call resolution rates. If training completion is high but customer metrics are not improving, the data points clearly to a gap between learning and on-the-job application. That insight allows the programme team to investigate and address the specific barrier, whether it is inadequate coaching from team leaders, a process that does not support the desired behaviour, or a cultural norm that is overriding the training content.
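
A gap of that kind can be surfaced automatically once both data streams sit side by side. The sketch below flags teams where training completion is high but the customer outcome has barely moved; the thresholds and figures are hypothetical.

```python
# Sketch: flag teams where training completion is high but the customer
# outcome the training was meant to move has not improved. Thresholds and
# figures are hypothetical.
teams = {
    # team: (training completion %, change in first-call resolution, pct points)
    "Team A": (96, 6.0),
    "Team B": (92, 0.5),
    "Team C": (58, 1.0),
}
COMPLETION_OK = 85   # training delivered broadly as designed
OUTCOME_MOVED = 3.0  # minimum improvement expected if behaviour has changed

for team, (completion, outcome_delta) in teams.items():
    if completion >= COMPLETION_OK and outcome_delta < OUTCOME_MOVED:
        print(f"{team}: learning-to-application gap "
              f"(completion {completion}%, outcome change {outcome_delta:+.1f} pts)")
```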

A Harvard Business Review analysis of large transformation programmes found that only 30 per cent of them succeed in meeting their original objectives, and the primary differentiator between successful and unsuccessful programmes is not the quality of the strategy but the quality of execution, including the people and change dimensions. Connecting change metrics to business outcomes is the mechanism by which change leaders can demonstrate that they are not just managing the process of change but actively driving the conditions for success.

Building a data-driven change team

Shifting to a data-driven approach requires more than adopting a new platform or adding a measurement step to existing processes. It requires building a team capability and a team culture that treats evidence as the foundation of professional practice. This is a meaningful cultural shift for many change functions, which have traditionally valued qualitative insight, relationship skills, and experiential wisdom over analytical rigour. The most effective change teams combine both – they do not abandon the human judgment and empathy that good change practice requires, but they augment it with data that improves the quality and confidence of their decisions.

Practically, this means investing in data literacy across the change team. Change practitioners do not need to become data scientists, but they do need to understand how to design measurement frameworks, interpret dashboards, identify patterns in data, and communicate data-driven insights to different audiences. Organisations can support this through targeted skill development, pairing change practitioners with data or analytics colleagues, and building data collection and review into the standard rhythms of programme governance. The Change Compass platform supports this transition by providing change teams with visualisation tools and reporting capabilities that do not require deep technical expertise to operate.

Leadership commitment is equally important. When the head of change or the chief people officer consistently asks for data in programme reviews and holds teams accountable to evidence-based conclusions, it sends a clear signal about what is valued. Conversely, when leaders accept anecdote and opinion as the basis for major programme decisions, they inadvertently undermine the case for building measurement capability. The shift to data-driven change management is ultimately a leadership choice as much as a technical one, and it tends to succeed when it is championed from the top and embedded in the operating model of the change function.

How The Change Compass enables data-driven change leadership

The Change Compass was built specifically to address the data gap that has long held change management back. The platform provides change leaders with a consolidated, visual view of their organisation’s change portfolio, making it possible to assess the volume, complexity, and distribution of change across different business units and employee groups. This portfolio view is one of the most immediately useful capabilities for organisations running multiple concurrent programmes, because it surfaces change saturation risks that are otherwise invisible until they start driving disengagement or resistance.

Beyond portfolio visibility, The Change Compass enables teams to track change readiness and adoption metrics at a programme level, linking activity data to the business outcomes that executive sponsors care about. The platform’s reporting and visualisation features are designed to be accessible to change practitioners who are not data specialists, making it practical to generate executive-ready dashboards without relying on separate analytics support. This reduces the time change leaders spend compiling reports and increases the time they spend acting on what the data reveals.

The platform also supports the benchmarking of change performance over time and across programmes, helping organisations build an institutional understanding of what good change looks like in their specific context. Over time, this benchmarking capability enables more accurate scoping and resourcing of future programmes, reducing both the over-investment that comes from guessing conservatively and the under-investment that comes from underestimating complexity. For organisations serious about building a mature, data-driven change capability, The Change Compass provides both the infrastructure and the discipline to make it happen.

Frequently asked questions

What is the most important change management metric to track?

There is no single most important metric, because the right measures depend on the nature and objectives of the programme. However, adoption rate – the proportion of the target population that has shifted to the new ways of working – is consistently one of the most valuable indicators because it directly reflects whether the change is achieving its intended effect. Adoption data is most useful when it is disaggregated by business unit, role group, or geography, so that low-adoption pockets can be identified and addressed rather than masked by an average figure.

How can change leaders make the case for data investment to sceptical executives?

The most effective approach is to frame data investment in terms of risk reduction and return on programme investment. Executives who have experienced transformation programmes that failed to deliver expected benefits – a common experience, given the research findings on transformation success rates – are typically receptive to an argument that better change measurement would have identified the adoption gap earlier and enabled corrective action. Concrete examples from other organisations, combined with a clear proposal for how a measurement framework would work in practice, tend to be more persuasive than abstract arguments about data maturity.

How does change saturation data help organisations manage their portfolios?

Change saturation data quantifies the cumulative change burden being experienced by different employee groups at any given point in time. When this data is mapped across a programme portfolio, it reveals which teams are approaching or exceeding their capacity to absorb change effectively. Leaders can use this information to sequence initiatives more thoughtfully, delay lower-priority changes when teams are already under significant pressure, or target additional change management support to the most saturated groups. Without this visibility, organisations frequently over-burden their highest-performing teams – those most likely to be involved in multiple change programmes simultaneously – which can drive the very disengagement they are trying to avoid.

Can small change teams realistically adopt a data-driven approach?

Yes, and the investment required is often lower than practitioners expect. A data-driven approach does not require a large analytics team or a complex technology infrastructure. It starts with defining two or three meaningful metrics for each programme, establishing a simple collection method, and reviewing the data consistently in governance forums. Platforms like The Change Compass are specifically designed to be accessible to small and mid-sized change functions, providing out-of-the-box visualisation and reporting that does not require technical expertise to configure or maintain. Starting small and building measurement discipline gradually is far more effective than waiting for a perfect system before beginning.
