Most change practitioners know the deliverables they are expected to produce: a change impact assessment, a stakeholder analysis, a communications plan, a training plan, a change plan. What is less commonly understood is that these deliverables are not a menu to pick from or a checklist to tick off in any convenient order. They form a logical sequence, where each piece of work depends directly on the quality of what came before it. Skipping steps or producing thin early deliverables does not just create gaps in documentation. It creates compounding problems downstream that are expensive and disruptive to fix.
The analogy is architectural. You would not commission detailed interior design before the structural engineering was complete. You would not specify the cabling layout before you knew where the walls were going. Change management deliverables work the same way. The change impact assessment informs the stakeholder analysis. The stakeholder analysis shapes the change strategy. The change strategy provides the logic for the communications plan and the learning design. And the change plan is the scheduling vehicle that holds all of it together. Disrupt this sequence and you end up with communications that miss the real concerns, training that covers the wrong things, and plans that bear little resemblance to what employees actually need.
This matters because poor quality early deliverables have a multiplier effect. As Prosci’s research on defining change impact makes clear, the data collected during impact assessment shapes everything downstream: without clarity on impact, you cannot accurately scope training needs, cannot properly segment stakeholders, and cannot build a realistic change management strategy. The further downstream you travel before discovering the gap, the more rework is required and the more likely the change effort is to fall behind schedule or lose credibility with the business.
The logical sequence of change deliverables is not a methodology preference. It reflects the underlying information dependencies between each piece of work. You cannot define who your high-priority stakeholders are until you know which groups are most affected by the change. You cannot design your communications approach until you understand the nature and depth of the impact on each group. You cannot design learning until you know what new knowledge, skills, or behaviours each role will require. Each deliverable consumes the outputs of those that precede it.
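The information dependencies described above behave like a small directed graph: each deliverable points at the deliverables whose outputs it consumes, and a valid production order is simply a topological sort of that graph. The sketch below is illustrative only – the deliverable names and dependency map are examples, not a prescribed taxonomy.

```python
from graphlib import TopologicalSorter  # Python 3.9+ standard library

# Illustrative dependency map: each deliverable lists the deliverables
# whose outputs it consumes. Names here are examples, not a standard.
DEPENDENCIES = {
    "impact_assessment": set(),
    "stakeholder_analysis": {"impact_assessment"},
    "change_strategy": {"impact_assessment", "stakeholder_analysis"},
    "communications_plan": {"change_strategy", "stakeholder_analysis"},
    "learning_design": {"change_strategy", "impact_assessment"},
    "change_plan": {"change_strategy", "communications_plan", "learning_design"},
}

def production_order(deps):
    """Return one valid order in which to produce the deliverables,
    respecting every dependency. Raises CycleError if the map is circular."""
    return list(TopologicalSorter(deps).static_order())

if __name__ == "__main__":
    for step, deliverable in enumerate(production_order(DEPENDENCIES), start=1):
        print(f"{step}. {deliverable}")
```

The point of the sketch is the property it enforces: the impact assessment always comes first because nothing feeds it, and the change plan always comes last because everything feeds it. Producing deliverables in any order that violates this graph means building on inputs that do not yet exist.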
This dependency structure is what distinguishes a well-sequenced change programme from one where deliverables are produced in parallel by different team members without coordination. In the latter scenario, the communications team writes messages based on assumptions about stakeholder concerns, the learning designer creates modules based on a high-level project brief, and the change plan is assembled at the end to reflect what has already been done rather than to coordinate what needs to happen. The result is deliverables that exist but do not cohere, and change management that is more a compliance exercise than a genuine support mechanism for people through transition.
Gartner’s research on organisational change management has found that organisations using a structured, sequenced change approach can increase the probability of change success by up to 22 percent and significantly reduce the implementation time and employee effort involved. That uplift is not incidental. It reflects the compounding benefit of each deliverable building cleanly on well-founded predecessors.
The foundation: change impact assessment
The change impact assessment is the first substantive deliverable in any change programme, and it is the one that most directly determines the quality of everything that follows. Its purpose is to translate the organisational-level change into an understanding of what will be different for specific groups of people, in their day-to-day roles, processes, tools, and behaviours. This translation from macro to micro is what gives later deliverables their specificity and credibility.
A high-quality change impact assessment goes beyond listing the features of the new system or the restructured process. It articulates, for each impacted role or group, what will change, what will stop, and what new activities or ways of working will be required. It assesses the depth and breadth of impact – distinguishing between roles that face significant behavioural change and those where the impact is largely procedural. It captures the timing of when impacts will be felt and any dependencies between groups. And it identifies where the change conflicts with existing workloads, change fatigue, or capability gaps.
The most common failure mode in change impact assessments is superficiality. Teams under time pressure produce high-level impact summaries that describe the change rather than its effects on people. These documents look like deliverables but do not contain the information needed to drive a stakeholder analysis or a communications plan. The downstream consequence is that those later deliverables are built on assumptions rather than evidence, and the assumptions are frequently wrong. Investing time in depth at the impact assessment stage is among the highest-return activities in change management.
Stakeholder analysis and segmentation
With a solid change impact assessment in hand, the stakeholder analysis becomes a far more precise exercise. Rather than producing a generic power-interest grid that maps all stakeholders against broad categories, a well-founded stakeholder analysis uses the impact data to differentiate groups by the nature and severity of the change they are experiencing. Two groups may both appear as “high impact” at the project level but require entirely different engagement approaches because the nature of their experience is different.
Effective stakeholder segmentation identifies not just who is affected, but how each group is likely to respond, what their primary concerns are, who the influential voices within each group are, and what organisational history might shape their readiness for this particular change. The Prosci ADKAR model is useful here because it provides a framework for thinking about where different stakeholder groups are likely to be in their awareness, desire, knowledge, ability, and reinforcement journey at any given point. Groups with low desire for the change need a different engagement approach than groups with high desire but low knowledge.
A stakeholder analysis that lacks the specificity provided by a good impact assessment tends to treat all stakeholders as broadly similar, applying the same engagement strategies across groups with fundamentally different experiences. This wastes effort on the wrong interventions and misses the resistance or confusion that is building in the groups that most need targeted support. Getting stakeholder segmentation right is what enables the change strategy to be genuinely tailored rather than generic.
From impact to engagement: the change strategy
The change strategy is the pivotal deliverable in the sequence. It sits between the diagnostic work – impact assessment and stakeholder analysis – and the operational planning work of communications, learning design, and scheduling. Its role is to define the overall approach the change programme will take to support each stakeholder group through the transition, and to make the key design decisions that will guide all subsequent activity.
A well-constructed change strategy makes explicit choices about sequencing, for example, which stakeholder groups will be engaged first and why. It defines the tone and positioning of the change – particularly when the change is sensitive or involves restructuring, role reductions, or significant shifts in how work is done. It identifies the engagement mechanisms that will be used for each segment, whether that is town halls, manager-led conversations, direct emails, working groups, or peer champions. And it sets out the high-level milestones for when different groups need to reach different points in their change journey.
The Prosci 3-Phase Process specifically identifies the change management strategy as the key deliverable from Phase 1, noting that it directly informs all activities in the Manage Change phase. This is the correct positioning: the strategy is not a standalone document but the blueprint from which the operational change management plans are derived. A thin or generic strategy – one that says “we will communicate regularly and train all impacted staff” without the specific logic that comes from impact and stakeholder data – produces equally thin and generic plans downstream.

Communications and engagement planning
Communications planning is where many change practitioners feel most confident, and where the consequences of poor upstream deliverables are most visible. When communications plans are produced without a well-grounded stakeholder analysis or a clear change strategy, they default to information dissemination: announcements, newsletters, and project updates that describe what is happening but do not address the specific concerns, uncertainties, or resistance of the people receiving them.
Effective communications planning uses the stakeholder segmentation to build distinct message architectures for different audiences, reflecting the different things each group most needs to understand and feel about the change at each stage. It uses the change strategy to determine the right channels, frequency, and tone for each audience. And it sequences messages deliberately, building awareness before creating desire, and providing detailed knowledge only once the foundational “why this matters and why now” has been established.
Research published in the Harvard Business Review on stakeholder buy-in reinforces the importance of segmented, tailored communication: effective approaches require a clear understanding of the needs, motivators, and concerns of each stakeholder segment, with communication designed to speak directly to those concerns rather than to the general project narrative. Communications that apply a one-size-fits-all approach to audiences with very different concerns and very different relationships to the change consistently underperform, even when the volume of communication is high.
Learning design and capability development
Learning design is the deliverable most directly dependent on a thorough change impact assessment. The impact assessment identifies, at the role level, what new knowledge, skills, and behaviours will be required for each group to perform effectively in the changed environment. Without this, learning designers are forced to build programmes based on the features of the new system or process rather than on the specific capability gaps of the people who will use it.
The distinction matters enormously in practice. A system walkthrough that explains all features of a new platform is not the same as a learning programme designed around the three or four tasks a particular role group will perform most frequently, the errors they are most likely to make, and the decision-making judgements the new process requires of them. The former is easier to produce but frequently fails to create the actual capability shift the organisation needs. The latter requires a clear picture of what each group’s day-to-day work will look like after go-live, and that picture comes from the impact assessment.
Capability development in change programmes also needs to account for the difference between knowledge and ability. The ADKAR model’s distinction between knowledge – knowing how to do something – and ability – being able to do it under real working conditions – is a useful reminder that learning design must include practice, feedback, and reinforcement, not just information transfer. This is especially important for changes that require behavioural shifts, where the learning design must create opportunities for people to apply new approaches in a supported environment before they are expected to perform independently.
The change plan as the culmination of all deliverables
The change plan – sometimes called the master change management plan or the integrated change plan – is the deliverable that brings all preceding work together into a coordinated, time-phased activity schedule. Its quality is entirely a function of the quality of the deliverables that feed into it. A change plan built on a thin impact assessment and a generic stakeholder analysis will be a scheduling exercise. A change plan built on rigorous upstream deliverables will be a genuine roadmap for managing people through a transition.
The integrated change plan should sequence activities in a way that reflects the logical dependencies between them. Sponsor briefings happen before the broader communication rollout, because sponsors need to be equipped to answer questions and model the right behaviours before employees receive the formal change announcement. Manager capability sessions precede the manager-led conversations with their teams. Training for go-live support roles happens before training for the broader user population. These sequencing decisions are not arbitrary; they reflect the ADKAR journey and the practical requirements of the change, and they can only be made correctly when the upstream deliverables have provided the necessary clarity.
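The sequencing rules above – sponsor briefings before the broad rollout, manager capability sessions before manager-led conversations, support-role training before end-user training – amount to precedence constraints that a drafted schedule can be checked against. The following sketch uses illustrative activity names and dates (not a recommended plan) to flag any activity that starts before its prerequisite has finished.

```python
from datetime import date

# Illustrative schedule: (start, end) per activity. Names and dates
# are hypothetical examples for demonstration only.
schedule = {
    "sponsor_briefing":      (date(2024, 3, 4),  date(2024, 3, 8)),
    "comms_rollout":         (date(2024, 3, 11), date(2024, 4, 26)),
    "manager_capability":    (date(2024, 3, 18), date(2024, 3, 22)),
    "team_conversations":    (date(2024, 3, 20), date(2024, 4, 5)),
    "support_role_training": (date(2024, 4, 1),  date(2024, 4, 12)),
    "end_user_training":     (date(2024, 4, 15), date(2024, 5, 3)),
}

# Precedence constraints from the sequencing logic: (prerequisite, dependent).
constraints = [
    ("sponsor_briefing", "comms_rollout"),
    ("manager_capability", "team_conversations"),
    ("support_role_training", "end_user_training"),
]

def sequencing_violations(schedule, constraints):
    """Return the (prerequisite, dependent) pairs where the dependent
    activity starts before its prerequisite has finished."""
    return [
        (pre, dep)
        for pre, dep in constraints
        if schedule[dep][0] < schedule[pre][1]
    ]

if __name__ == "__main__":
    # The sample data contains one deliberate violation: team
    # conversations begin before manager capability sessions end.
    print(sequencing_violations(schedule, constraints))
```

A check like this is trivial to run each time the plan is re-baselined, which is one way to keep the change plan functioning as a planning instrument with explicit logic rather than a passive tracking document.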
One of the most common mistakes organisations make is treating the change plan as a tracking document rather than a planning instrument. In this mode, the change plan reflects what has been done and scheduled without the underlying logic of why those activities are sequenced that way, who they are designed for, and what outcome they are intended to achieve at each stage. A rigorous change plan includes the rationale, not just the schedule, and it is updated as the programme evolves and new information about stakeholder readiness or emerging resistance comes to light.
How The Change Compass structures and connects change deliverables
One of the practical challenges in managing the logical sequence of change deliverables is that different team members often work on different deliverables at different times, using different tools and templates, without a shared view of how the pieces connect. The data from the impact assessment does not flow automatically into the stakeholder analysis. The stakeholder segmentation does not automatically structure the communications plan. Each connection requires deliberate effort and coordination.
The Change Compass is designed around this challenge. Rather than providing a set of standalone templates, it provides an integrated platform where change impact data, stakeholder information, communications activity, and change plans are connected within a single view. Change leads can see which stakeholder groups are most impacted, how their engagement activities are tracking against the planned sequence, and where there are gaps between planned and actual activity across the programme portfolio. This connected view makes it much harder for the logical dependencies between deliverables to be lost in the day-to-day pressures of programme delivery.
For organisations running multiple concurrent change programmes, this integration is particularly valuable. The cumulative impact on any given stakeholder group is visible across all programmes rather than assessed in isolation, which means the communications and engagement planning for each programme can account for the total load that group is experiencing. This is the kind of visibility that prevents the coordination failures that so often erode employee confidence and programme outcomes during periods of significant organisational change.
Frequently asked questions
What is a change deliverables structure and why does it matter?
A change deliverables structure is the defined set of outputs that a change management programme produces, arranged in the logical sequence in which they should be developed. It matters because each deliverable builds on the quality of those that precede it: the change impact assessment informs the stakeholder analysis, the stakeholder analysis shapes the change strategy, and so on through to the communications plan, learning design, and integrated change plan. Understanding this sequence helps practitioners prioritise investment in early deliverables and helps organisations avoid the compounding problems that result from shortcutting foundational work.
What happens if the change impact assessment is low quality?
A low-quality change impact assessment produces a cascade of problems downstream. Without specific, role-level impact data, the stakeholder analysis defaults to broad categorisation rather than meaningful segmentation. Communications plans address assumed concerns rather than actual ones. Learning design covers system features rather than targeted capability gaps. The result is a change programme with technically complete deliverables that fail to support people effectively through the transition, increasing resistance, reducing adoption, and often requiring costly rework after go-live. Prosci’s research consistently identifies impact assessment quality as a predictor of overall programme success.
How does stakeholder analysis feed into the communications plan?
The stakeholder analysis provides the segmentation logic that makes a communications plan genuinely targeted rather than generic. It identifies which groups are most impacted, what their primary concerns and motivators are, how likely they are to support or resist the change, and who the influential voices within each group are. The communications plan then uses this information to build distinct message architectures, select appropriate channels, and sequence communications so that each group receives the right messages at the right points in their change journey. A communications plan produced without this grounding tends to default to project announcements that describe the change rather than addressing the human experience of going through it.
Can change management deliverables be produced in parallel to save time?
Some parallelism is possible and often necessary under programme timelines, but it carries risk that must be managed carefully. The core dependency is informational: each deliverable consumes outputs from those before it, so producing them simultaneously requires that the team make assumptions about those inputs, and assumptions that turn out to be wrong create rework. A practical approach is to complete the change impact assessment before beginning the stakeholder analysis in earnest, and to have at least a draft stakeholder analysis in place before finalising the change strategy. Communications planning and learning design can often begin in parallel once the strategy is set, but both should be revisited and refined as the impact and stakeholder work matures. Shortcuts at the front end of the sequence consistently cost more time in downstream rework than they save at the start.
Few phrases in management thinking have proved as durable as Peter Drucker’s famous dictum: “culture eats strategy for breakfast.” First popularised in the early 2000s and widely attributed to Drucker, the saying captures a truth that executives and change practitioners continue to learn the hard way – that even the most carefully crafted strategic plan will be neutralised if the underlying organisational culture is left unaddressed. Strategy lives on paper; culture lives in people. And people, in the end, will always revert to what they know, what they are rewarded for, and what the leaders around them visibly model.
The gap between strategic intent and cultural reality is one of the most persistent challenges in organisational life. A leadership team can articulate a compelling vision, invest significantly in new technology or operating models, and communicate the change through every available channel – yet, months later, find that frontline behaviours have barely shifted. Employees nod in the right meetings, complete the mandatory training modules, and tick the compliance boxes, while quietly continuing to work exactly as they did before. This is surface compliance rather than genuine behavioural change, and it is far more common than most organisations care to admit.
For change management practitioners, understanding the relationship between culture and strategy is not merely an academic exercise – it is the foundation of effective practice. When culture is treated as a backdrop rather than a variable to be actively managed, change programmes routinely fall short of their intended outcomes. Research by McKinsey found that roughly 70 percent of change programmes fail to achieve their goals, and culture-related resistance is among the most frequently cited causes. Addressing this requires a different way of thinking about culture itself, one that moves away from vague notions of “the way we do things around here” and towards a concrete, behavioural framework that leaders and change teams can actually work with.
What “culture eats strategy for breakfast” really means
To use this principle effectively, it is worth understanding what it does and does not mean. The dictum is not an argument against strategy. Sound strategic thinking remains essential to organisational success. Rather, the phrase is a warning about sequencing and priority: if you design a strategy without accounting for the cultural conditions in which it will be implemented, the culture will win. The informal rules, rituals, norms, and shared assumptions that constitute culture are more powerful day-to-day forces than any document or directive.
Edgar Schein, widely regarded as one of the foremost scholars of organisational culture, described culture as operating across three levels: visible artefacts (office layouts, rituals, language), espoused values (what the organisation says it believes), and underlying assumptions (what people actually believe, often unconsciously). The critical insight is that most change efforts operate at the level of artefacts and espoused values while leaving underlying assumptions untouched. A new strategy might introduce a new set of stated values, but if the underlying assumptions – about how decisions really get made, who is rewarded and why, what behaviours are actually tolerated – remain unchanged, the culture will absorb and neutralise the strategy.
This is why culture change cannot be mandated from above through a memo or a values poster in the lift lobby. Culture is reproduced through thousands of daily micro-interactions: how a manager responds when someone raises a concern, whether results are celebrated over relationships, whether risk-taking is quietly punished even when it is publicly championed. Changing culture means changing those micro-interactions, which means changing behaviour.
Why strategy fails when culture is neglected
The mechanisms by which culture undermines strategy are well documented. When an organisation attempts to implement a new strategy without addressing culture, it typically encounters one or more of the following failure patterns. The first is value conflict, where the behaviours the new strategy requires are incompatible with what the existing culture actually rewards. An organisation may announce a strategy centred on customer-centricity, but if internal processes still optimise for operational efficiency over customer outcomes, and if employees are measured and promoted on efficiency metrics, the strategy will make little headway.
The second failure pattern is what organisational psychologists call “immunity to change” – a concept developed by Robert Kegan and Lisa Laskow Lahey at Harvard. Organisations, like individuals, hold competing commitments that work against the stated goal of change. A management team may genuinely want to build a culture of psychological safety and candid feedback, while simultaneously holding an unexamined assumption that visible confidence and certainty signal competence. These competing commitments are rarely explicit, which is precisely why they are so hard to dislodge.
A third failure pattern concerns informal influence networks. Every organisation has a formal hierarchy and, running in parallel with it, an informal network of influencers – people whose opinions shape how others interpret and respond to change initiatives. When a strategy is rolled out without engaging these informal networks, and without understanding whether the informal culture is aligned or misaligned with the strategy’s requirements, the informal network will set the actual tone. Gartner research has consistently highlighted the role of informal employee networks in determining whether change lands or stalls, noting that change fatigue and cultural misalignment are amplified when change programmes ignore these dynamics.
Breaking culture down into observable behaviours
One of the most practical advances in thinking about culture change is the shift from treating culture as a monolithic, abstract entity to treating it as a portfolio of observable behaviours. Culture cannot be measured or changed directly, but behaviours can. When we decompose culture into its constituent behavioural elements, we move from the intangible to the manageable.
This behavioural lens asks a deceptively simple question: what would we see people doing differently if this cultural attribute were genuinely present? If we want a culture of continuous improvement, what specific behaviours does that require? It requires team leaders to regularly ask “what could we do better?” in team meetings rather than simply reviewing performance against targets. It requires individuals to surface problems early rather than waiting until they escalate. It requires managers to respond to raised concerns with curiosity rather than defensiveness. Each of these is observable, measurable, and coachable.
Prosci’s research into change management best practices similarly emphasises that the individual dimension of change – shifting what specific people actually do – is the most critical and most often underestimated element of any change effort. The ADKAR model, which focuses on Awareness, Desire, Knowledge, Ability, and Reinforcement, is essentially a behavioural framework: it maps the psychological and practical journey an individual must complete before a new behaviour becomes habitual. This is directly applicable to culture change, where the goal is not simply awareness of a new value, but the embedding of new habitual behaviours across the workforce.
Designing change that addresses cultural barriers
Designing change initiatives that genuinely address cultural barriers requires a deliberate diagnostic phase before any intervention is designed. This means mapping the current culture in behavioural terms – identifying the existing norms, the informal rules, and the unwritten expectations that govern how work actually gets done. It also means identifying which of those existing behaviours are enablers of the desired change and which are inhibitors.
John Kotter’s eight-step change model, while broad in scope, contains important cultural wisdom. The steps Kotter emphasises – creating a guiding coalition, communicating the vision, and generating short-term wins – are all, at their core, strategies for shifting the cultural environment in which the change must take root. Short-term wins are particularly significant from a cultural standpoint: they demonstrate that the new behaviours are not only desirable but achievable within the constraints of the existing organisation, which begins to shift the informal narrative from “this isn’t how things work here” to “maybe things really are changing.”
Beyond Kotter’s framework, effective cultural change design also requires attention to the structural elements that reinforce behaviour: performance management systems, recognition and reward mechanisms, hiring and promotion criteria, and meeting norms. These are the environmental levers that make new behaviours easy and old behaviours costly. A change initiative that asks people to behave differently without modifying the structural environment that shapes behaviour is asking them to swim against the current indefinitely. McKinsey’s influence model highlights that consistent role modelling from leaders, combined with reinforcing systems and capability building, creates the conditions for genuine and lasting behavioural change.
Leader behaviours as the primary culture signal
Of all the levers available to organisations trying to shift culture, leader behaviour is the most powerful and the most scrutinised. Employees watch what leaders do far more carefully than they listen to what leaders say. When a senior leader espouses collaboration but consistently makes unilateral decisions, the organisation learns that collaboration is a stated value, not a lived one. When a leader says psychological safety matters and then visibly dismisses someone who raises an uncomfortable truth in a leadership forum, the culture absorbs that signal within hours.
This is not merely anecdotal. Research published in the Harvard Business Review has consistently demonstrated that leader behaviour is the single most influential factor in shaping team and organisational culture. Leaders set the tone for what is acceptable, what is rewarded, what is discussed, and what is avoided. In the context of change management, this means that any culture change programme must include explicit attention to leader behaviour change – not just leader communication of the change message, but visible modelling of the new behaviours in everyday interactions.
Practically, this means change practitioners should work with leaders to identify two or three specific, observable behaviours they will commit to changing, and to build accountability mechanisms that make those commitments visible to their teams. It may mean introducing behavioural feedback loops for senior leaders, so that the gap between their intended behaviour and their actual impact is made visible in a psychologically safe way. This is difficult, uncomfortable work, but it is also the highest-leverage activity available to any change programme with cultural ambitions.
Measuring and sustaining behavioural change
One of the most common failure modes in culture change programmes is the absence of rigorous measurement. Culture is often treated as too soft or too complex to measure, which becomes a self-fulfilling prophecy: without measurement, it is impossible to demonstrate progress, which makes it difficult to sustain momentum and executive attention over the multi-year timelines that genuine culture change typically requires.
Effective measurement of behavioural change operates at multiple levels. At the individual level, 360-degree behavioural assessments and structured observations can track whether specific target behaviours are increasing in frequency and quality. At the team level, team health surveys, retrospective practices, and pulse checks on psychological safety indicators provide leading indicators of cultural shift. At the organisational level, tracking the decisions that get made, the stories that get told, and the behaviours that get rewarded or sanctioned reveals whether the cultural operating system is genuinely evolving.
Sustaining behavioural change over time requires active reinforcement. Research in behavioural psychology is unambiguous: behaviours that are not reinforced fade. This means change programmes must build reinforcement mechanisms into the operating rhythm of the organisation – recognition practices that celebrate the new behaviours, team rituals that embed the new ways of working, and performance conversations that explicitly reference the behavioural expectations associated with the desired culture. Without these reinforcement structures, even well-intentioned behavioural change reverts within three to six months as old habits and environmental cues reassert themselves.
How The Change Compass supports cultural and behavioural tracking
The Change Compass platform is designed to address the practical challenge that change practitioners face when managing multiple change initiatives simultaneously: understanding not just what changes are happening, but what collective impact those changes are having on people’s capacity, behaviours, and ways of working. This is directly relevant to the culture challenge, because culture is not shaped by any single change initiative – it is the cumulative product of everything the organisation asks people to do differently, all at once.
When an organisation is running dozens of concurrent change programmes, each with its own behavioural requirements, the risk of change saturation is high. People who are overwhelmed with change demands revert to their most deeply embedded habits – which are, by definition, the habits of the existing culture. The Change Compass enables change leaders to visualise the volume and pace of change across the organisation, identify where change saturation is occurring, and make evidence-based decisions about sequencing, prioritisation, and capacity management.
By tracking change impacts at the team and individual level, The Change Compass also supports the behavioural specificity that effective culture change requires. Rather than treating change as an organisational-level abstraction, the platform helps change practitioners understand which specific groups are being asked to change which specific behaviours, and at what pace – creating the conditions for targeted support, reinforcement, and measurement that are essential to embedding lasting cultural change.
Frequently asked questions
Did Peter Drucker actually say “culture eats strategy for breakfast”?
The phrase is widely attributed to Drucker, but there is no definitive written source confirming he coined it. It was popularised significantly by Mark Fields, former President of Ford Motor Company, who displayed it in the Ford war room around 2006. Regardless of its precise origin, the principle reflects ideas that are consistent with Drucker’s broader body of work on management, people, and organisational effectiveness.
How long does it take to change organisational culture?
Genuine, sustainable culture change typically takes three to seven years in large, complex organisations. This timeline reflects the time needed to shift underlying assumptions, embed new behaviours deeply enough that they become the default, and replace the informal stories and norms that sustain the existing culture. Visible behavioural change can occur more quickly – within 12 to 18 months in focused change programmes – but sustaining those changes and allowing them to become the new cultural baseline requires the longer timeframe.
What is the difference between culture change and behavioural change?
Culture change is the broader, longer-term shift in the shared norms, values, and assumptions that shape how an organisation functions. Behavioural change is the more immediate, observable shift in how specific individuals act in specific situations. The relationship between them is that culture change is achieved through – and evidenced by – sustained behavioural change at scale. You cannot change culture directly; you change it by consistently changing the behaviours that constitute and reproduce it.
Why do so many culture change programmes fail?
The most common reasons culture change programmes fail are: insufficient leadership commitment and visible role-modelling; failure to translate cultural aspirations into specific, measurable behavioural targets; structural systems (performance management, incentives, hiring) that continue to reward old behaviours; underestimating the time and persistence required; and treating culture change as a communications exercise rather than a sustained behavioural intervention. Change fatigue driven by too many concurrent changes also plays a significant role, as it prevents any single set of new behaviours from being sufficiently reinforced to become habitual.
Many parts of the world are starting to brace for an economic downturn. The Wall Street Journal and other major publications are forecasting a recession in the US. Some industries, such as technology, have already started cutting staff. Real estate prices have been dropping. We are still struggling with inflation. The writing is on the wall.
As companies start to tighten their belts, project investment is among the first expenditures to come under fire. Project and initiative investments are reviewed, consolidated, and cut to save money. Large companies typically invest millions to billions to execute their strategy, maintain competitiveness, and improve business effectiveness. In the project world, typical cuts mean cutting project funding, which means that change practitioners, like other project professionals, may be in the firing line.
As companies refocus on the critical operations of the business, the questions that get asked are: "What is the value of change management?" and "Can we save cost by cutting change management?" Managers will already hold a preconception of the value of change management when making this decision.
The challenge then becomes: what is ultimately the 'proof' of the value of implementing effective change? Many will argue that it is that employees are more engaged, managers are communicating the right messages, employees have the right skills, and people feel ready for the change. Ultimately, however, a project has a set of benefits it is targeted to achieve, and the question becomes what 'proof' there is that those benefits have been achieved.
For a lot of the work that change practitioners are involved in, the ‘proof’ is the change in the behaviours from A to B. For example, adopting different conversations with the customer, operating a different system, selling a new product, reporting on incidents, following the required steps in completing a form, etc. Ultimately the change in the behaviour results in the targeted benefits being achieved whether it is improved customer experience, cost savings, efficiency in operating a system, or generating greater insights through new data.
What are some of the ways to demonstrate that we are setting the course for ultimate behaviour realisation?
Clear identification of core behaviours
To implement behaviour change we need to know which behaviours we are focused on changing. The trick is not to try to compile an exhaustive list of every behaviour that needs to take place in the end state. Instead, focus on the core behaviours that will make the most difference in achieving the ultimate benefit.
For example, in a change that introduces new data insights, what are the core two or three behaviours leaders need to display in the end state to ensure those insights are captured and used to make better business decisions? They could be: being confident in interpreting the data and using any system prompts as required, highlighting the insight generated in planning meetings, and using the insight to make decisions that produce a better outcome for the organisation.
Measurement
Behaviour realisation needs to be measured; as the saying goes, "what gets measured gets managed". Behaviours may be measured through surveys, observation, system reports, and similar instruments.
Ongoing tracking
To successfully embed the new behaviour into business as usual, ongoing tracking is required. Tracking makes the status of the behaviour change visible, and what is visible becomes a goal to be focused on.
Tracking does not need to be cumbersome and overbearing. It could be as simple as incorporating the reporting into an existing weekly team meeting or a monthly planning meeting. It could also be a system-generated report that is sent to managers.
Our challenge as change practitioners in driving behaviour change becomes even more crucial during difficult financial times. We need to constantly demonstrate how our work links directly to benefit realisation. This may require stakeholder education. Are your stakeholders clear on the importance of behaviours in reaching the benefits? Do they understand the design that has been put in place to drive impacted groups toward the end state?
Most change programmes are built around a single implementation plan. The team agrees on a timeline, designs the training and communication approach around it, and executes. When reality diverges from the plan — and it usually does — the response is reactive: escalations, emergency rescheduling, scope cuts under pressure. The alternative is scenario planning: building multiple plausible futures into the programme design before commitment, so that when conditions shift, there is already a prepared response rather than an improvised one.
Scenario planning originated in military strategy and was adapted for business by Shell in the 1970s, where it helped the company anticipate the 1973 oil crisis and respond more effectively than competitors who had planned for a single future. In change management, the same logic applies. The organisations that navigate disruption well are rarely those that predicted the exact form it would take. They are those that had considered a range of possibilities and knew in advance how they would respond to each.
This article explains how to apply scenario planning specifically to change management and organisational transformation, with practical frameworks and examples.
Why single-path change planning fails under disruption
The fundamental problem with single-path planning is that it treats uncertainty as a temporary state that will resolve into a known future. In practice, the conditions affecting a change programme — stakeholder alignment, external environment, organisational capacity, technology stability — are genuinely uncertain throughout delivery. A change plan built on a single set of assumptions becomes increasingly unreliable as those assumptions are tested by events.
Research published in Harvard Business Review on risk management in complex programmes found that the most common cause of large-scale programme failures was not technical problems but planning rigidity: the inability to adjust when the programme environment changed in ways that had not been anticipated. Scenario planning addresses this by explicitly building in anticipated variation and pre-designing responses to it.
In change management specifically, the variables most likely to invalidate a single-path plan are: stakeholder readiness (whether affected groups are prepared to adopt at the planned pace), external disruption (regulatory changes, market events, or organisational crises that shift priorities), and change capacity (whether the organisation’s ability to absorb change remains constant across the programme timeline). All three are highly variable and largely outside the programme team’s control. Scenario planning is the mechanism for managing this variability deliberately.
The four-step scenario planning framework for change
Applying scenario planning to change management does not require a dedicated strategy team or elaborate modelling tools. The core process is straightforward and can be conducted in a facilitated workshop with the programme leadership team.
Step 1: Identify the critical uncertainties
Begin by identifying the two or three variables that are most uncertain and most consequential for the programme’s success. These are different from risks — risks are known negative events with probabilities attached. Uncertainties are genuinely unknown: you do not know whether they will materialise, and if they do, in what form. For most change programmes, the most significant uncertainties fall into two categories:
Stakeholder readiness and acceptance: Will the affected groups be ready and willing to adopt at the planned pace, or will resistance, capacity constraints, or comprehension gaps slow adoption?
External environment stability: Will the broader organisational and market context remain stable enough to support the planned timeline, or will competing priorities, regulatory changes, or external disruption force reprioritisation?
Step 2: Build the scenario matrix
Take the two most critical uncertainties and treat each as an axis, with two poles (high/low, favourable/unfavourable). This creates a 2×2 matrix with four distinct scenarios — four plausible futures for the programme. Each quadrant represents a coherent combination of conditions:
Scenario A (benefit achievement): High stakeholder readiness, stable environment. The programme proceeds broadly as planned, with good adoption and manageable issues.
Scenario B (adoption laggard): Low stakeholder readiness, stable environment. The technology or process change lands on time but adoption is slow. Benefits are delayed.
Scenario C (external disruption): High stakeholder readiness, unstable environment. People are ready but external events force timeline or scope changes.
Scenario D (compounded challenge): Low readiness, unstable environment. The most demanding scenario, requiring significant replanning and stakeholder intervention.
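The 2×2 classification above can be sketched in a few lines of code. This is an illustrative sketch only: the two 0–1 signal scores and the 0.5 threshold are assumptions introduced here for demonstration, not part of any standard scenario planning method.

```python
# Hypothetical sketch: map two uncertainty signals onto the four-quadrant
# scenario matrix. Signal scales and the 0.5 threshold are illustrative
# assumptions.

def classify_scenario(stakeholder_readiness: float,
                      environment_stability: float,
                      threshold: float = 0.5) -> str:
    """Classify a programme into one of the four scenarios.

    Both inputs are scores between 0 and 1, e.g. derived from readiness
    surveys and an environment-stability assessment.
    """
    ready = stakeholder_readiness >= threshold
    stable = environment_stability >= threshold
    if ready and stable:
        return "A: benefit achievement"
    if not ready and stable:
        return "B: adoption laggard"
    if ready and not stable:
        return "C: external disruption"
    return "D: compounded challenge"

print(classify_scenario(0.8, 0.7))  # A: benefit achievement
print(classify_scenario(0.3, 0.9))  # B: adoption laggard
```

In practice the value is less in the classification itself than in agreeing, as a leadership team, which observable signals feed each axis.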
Step 3: Develop response strategies for each scenario
For each scenario, define in advance: what early signals would indicate you are moving toward this scenario, what the adapted change approach would look like, and what decisions would need to be made. This is the step most scenario planning exercises skip, and it is the most valuable. Having a pre-designed response to Scenario B (adoption laggard) means that when the early signals appear — lower-than-expected training engagement, manager feedback suggesting confusion, delays in process sign-off — the change team can activate the prepared response rather than entering a reactive planning cycle.
McKinsey’s guidance on scenario planning emphasises that the value of scenario planning is not prediction but preparation: building the decision-making capacity to respond effectively when conditions shift, regardless of which scenario materialises.
Step 4: Define the monitoring and trigger points
Scenario planning is only useful if the programme team is actively monitoring for the early signals that indicate which scenario is developing. Define specific, observable indicators for each scenario and assign responsibility for monitoring them. Build a regular review point into programme governance — typically monthly — where the team assesses which scenario the programme is tracking toward and whether a response strategy needs to be activated.
Applying scenario planning to specific change contexts
Technology transformation programmes
Technology transformations are particularly vulnerable to single-path planning failures because they combine technical uncertainty (the system may not perform as specified) with human uncertainty (users may not adopt as planned). The most common failure mode is a technically successful go-live followed by poor adoption: the system works but people do not use it correctly, resulting in data quality problems, workarounds, and delayed benefits realisation.
Scenario planning for a technology transformation should explicitly include an adoption laggard scenario and pre-design the hypercare and remediation approach for that eventuality. This means having additional training resources, business process specialists, and floor-walking support ready to deploy if early adoption indicators fall below threshold — rather than trying to mobilise these resources reactively after the adoption problem has already been identified in a post-implementation review.
Organisational restructures
Restructures are the change context where external disruption scenarios are most consequential. A restructure announced in one set of market conditions may need to be significantly modified if those conditions change during implementation. Key leadership departures, regulatory interventions, or sudden competitive pressure can shift the rationale for the restructure in ways that require mid-programme replanning.
Scenario planning for a restructure should include explicit consideration of what triggers would cause the programme team to recommend pausing, accelerating, or descoping, and what the communication approach would be for each of those decisions. Having this pre-designed does not commit the organisation to a particular course of action, but it means that when the decision point arrives, the governance forum has a framework for making it rather than starting from scratch.
Using portfolio data to inform scenario assumptions
One of the most common weaknesses in change scenario planning is that the scenarios are built on qualitative assumptions rather than evidence. The stakeholder readiness axis, for example, is often defined by the programme sponsor’s optimism rather than by any objective data on the affected groups’ current change load, recent adoption history, or capacity.
Platforms like The Change Compass provide the portfolio-level change impact data that makes scenario assumptions evidence-based. By showing the cumulative change load on the groups most affected by a programme, alongside their adoption performance on recent changes, the platform provides a factual baseline for scenario planning: whether the adoption laggard scenario is a theoretical possibility or a near-certainty based on current capacity data. This data-grounded approach makes scenario planning significantly more useful to governance forums that are accustomed to making decisions based on evidence rather than professional intuition.
The organisations that get the most value from scenario planning are those that build it into their standard programme governance rather than treating it as an ad hoc exercise for high-risk programmes. When scenario planning is a standard agenda item in programme initiation — alongside the business case, risk register, and change impact assessment — it becomes a normal part of how the organisation thinks about change, not an exceptional response to exceptional circumstances.
Prosci’s best practice research on change management maturity consistently identifies proactive planning — including contingency planning and scenario development — as a hallmark of high-maturity change organisations. The investment in scenario planning at the outset of a programme is consistently lower than the cost of reactive replanning mid-delivery when the single-path plan breaks down.
Frequently asked questions
What is scenario planning in change management?
Scenario planning in change management is the practice of developing multiple plausible futures for a change programme and pre-designing response strategies for each, rather than building a single implementation plan. It is particularly valuable in complex or uncertain environments where stakeholder readiness, external disruption, or organisational capacity are likely to vary from initial assumptions.
How is scenario planning different from risk management?
Risk management focuses on known negative events and their probabilities. Scenario planning addresses genuine uncertainty — situations where you do not know whether a condition will materialise or what form it will take. Scenario planning is not about listing what might go wrong but about mapping the range of possible futures and preparing considered responses to each, including positive scenarios as well as challenging ones.
How many scenarios should a change programme develop?
Most change programmes benefit from four scenarios, built from a 2×2 matrix of the two most critical uncertainties. This number is practical for governance forums to understand and monitor. Fewer than four scenarios tends to collapse into a best-case/worst-case binary, which loses the nuance of the most common middle-ground situations. More than four scenarios tends to create analysis paralysis without proportionate benefit.
When in the programme lifecycle should scenario planning happen?
Scenario planning is most valuable at programme initiation, when options are still open and response strategies can be genuinely built into the programme design. It should be reviewed at major milestone points — typically at the end of each phase — to assess whether conditions have shifted and whether the programme is tracking toward a different scenario than initially anticipated. A rapid scenario review should also be triggered whenever a significant external disruption occurs that could affect the programme.
Change heatmaps have become the default measurement tool for change management practitioners across organisations of every size. They make volume visible – you can see at a glance which business units are touched by how many initiatives in any given month. That visibility is genuinely useful. But a heatmap represents only one dimension of a complex change landscape, and when organisations treat it as the primary, or worse the only, measurement instrument, they are navigating with an incomplete map. Volume without context tells you that a team is busy; it does not tell you whether that team is saturated, recovering, or approaching the threshold beyond which performance will deteriorate.
The good news is that there are five measurable, actionable ways to move beyond heatmaps toward a more sophisticated change measurement approach – one that gives leaders the full picture they need to make confident portfolio decisions. Each of these five ways builds on the heatmap foundation rather than replacing it, adding layers of insight that transform raw volume data into strategic intelligence.
Why heatmaps alone are insufficient
The appeal of the change heatmap is its simplicity. A colour scale from green to red, a grid of business units against a timeline, and suddenly the entire portfolio is visible on a single page. For organisations just beginning to manage change at a portfolio level, that visibility is a genuine step forward. The problem is not the heatmap itself – it is the assumption that presence equals impact. A heatmap shows that a change is happening to a particular group of people in a particular period. It does not show how much adaptation that change requires, how quickly the group can absorb it, or whether the accumulated load is approaching the point of failure.
Research from McKinsey has consistently found that large-scale transformation programmes fail at a high rate, with organisational fatigue and change overload among the most commonly cited root causes (McKinsey, “Losing from Day One”). A heatmap can show you that a transformation is in flight; it cannot show you whether the people carrying it are already running at capacity from the five other initiatives that appear alongside it on the same grid. That distinction matters enormously for sequencing, resourcing, and governance decisions. The five ways below are designed to close that gap.
Way 1 – Weight impacts by intensity, not just presence
The most immediate limitation of a standard heatmap is that it treats all changes as equal. A global enterprise resource planning migration sits in the same coloured cell as a minor update to an approval form. Both show up as a mark in the grid, but the adaptation burden they place on employees is orders of magnitude apart. The first requires people to learn new systems, change long-established workflows, and often restructure how entire functions operate. The second takes an afternoon to understand. Treating them identically in your measurement framework produces a picture that is technically accurate but practically misleading.
The solution is to introduce impact weighting – scoring each change not simply on whether it touches a group, but on how deeply it requires that group to change their knowledge, capability, and behaviour. The Prosci ADKAR model offers a practical framework for this: changes that require significant shifts in Awareness, Desire, Knowledge, Ability, and Reinforcement carry a substantially higher adaptation burden than those requiring only one or two of those dimensions. A system migration typically demands all five. A process clarification might only require updated Knowledge. When you weight your heatmap data by ADKAR-dimension intensity, the saturation picture that emerges is dramatically more accurate. High-intensity changes show up with proportionally greater weight, allowing you to identify genuine saturation risk even in periods when the raw count of initiatives looks manageable.
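As a rough sketch, ADKAR-based impact weighting can be as simple as counting the dimensions each change requires a group to shift, then summing the weighted load per business unit. The weighting scheme and sample data below are illustrative assumptions, not a prescribed Prosci scoring method.

```python
# Hypothetical sketch: weight each change by the number of ADKAR
# dimensions it touches, then aggregate the weighted load per business
# unit. Weights and sample data are illustrative assumptions.

from collections import defaultdict

ADKAR = {"awareness", "desire", "knowledge", "ability", "reinforcement"}

def impact_weight(dimensions_touched: set) -> int:
    """Weight = number of ADKAR dimensions the change requires to shift."""
    return len(dimensions_touched & ADKAR)

changes = [
    # An ERP migration demands all five dimensions...
    {"unit": "Operations", "dims": ADKAR},
    # ...while a form update only requires new knowledge.
    {"unit": "Operations", "dims": {"knowledge"}},
    {"unit": "Finance", "dims": {"knowledge", "ability"}},
]

load = defaultdict(int)
for change in changes:
    load[change["unit"]] += impact_weight(change["dims"])

print(dict(load))  # {'Operations': 6, 'Finance': 2}
```

Even this crude count separates the ERP migration from the form update, which an unweighted heatmap cell cannot do.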
Way 2 – Measure the pace of change, not just volume
Volume measurement captures how many changes are in flight at any point in time. Pace measurement captures something different and arguably more important: the interval between significant changes, and whether employees have enough time to stabilise before the next major disruption arrives. A heatmap that shows three major initiatives across a twelve-month period might look manageable – until you realise that all three land in the same six-week window, followed by six months of relative quiet. The volume is moderate, but the pace is brutal.
Human beings need time to consolidate change. Neurologically and psychologically, the process of embedding new behaviours and workflows requires a period of reduced pressure during which the new way of working becomes habitual rather than effortful. When the next significant change arrives before that consolidation has occurred, people are being asked to adapt from a position of instability rather than a position of readiness. Harvard Business Review research on change fatigue identifies precisely this dynamic – it is not just the number of changes that exhausts people, but the relentlessness of the pace. Measuring recovery windows between significant change events, and building those windows explicitly into your portfolio calendar, is one of the highest-leverage actions a change practitioner can take. The heatmap shows presence; pace measurement shows whether anyone has time to breathe.
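Measuring recovery windows is straightforward once go-live dates are recorded per group. The sketch below flags any gap between consecutive significant changes that falls under a minimum consolidation period; the dates and the 42-day minimum are illustrative assumptions, not research-backed thresholds.

```python
# Hypothetical sketch: compute recovery windows (in days) between
# significant go-live dates for one employee group, and flag any window
# shorter than an assumed minimum consolidation period.

from datetime import date

go_lives = sorted([date(2024, 3, 1), date(2024, 4, 5), date(2024, 9, 20)])
MIN_RECOVERY_DAYS = 42  # assumed consolidation period, to be calibrated

gaps = [(later - earlier).days
        for earlier, later in zip(go_lives, go_lives[1:])]
too_fast = [gap for gap in gaps if gap < MIN_RECOVERY_DAYS]

print(gaps)      # [35, 168]
print(too_fast)  # [35] -- one window is shorter than the minimum
```

A portfolio calendar annotated with these gaps makes the "volume is moderate but pace is brutal" pattern visible at a glance.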
Way 3 – Build capacity baselines for employee groups
Not all employee groups are equally capable of absorbing change at any given moment. A frontline team that has just completed a major system rollout has depleted change capacity. A corporate function that has been stable for two years has a full reservoir. A heatmap treats every cell in the grid as if the people behind it have identical absorptive capacity, which they manifestly do not. The third way to graduate beyond the heatmap is to establish capacity baselines for each major employee group, and then compare actual change load against those baselines rather than against an undifferentiated average.
Gartner research on change fatigue suggests that employees can effectively absorb approximately three concurrent major changes before performance and engagement begin to deteriorate materially. That figure is not universal – it varies by the nature of the work, the maturity of the organisation’s change capability, and the level of leadership support available – but it provides a useful starting point for establishing thresholds. The Change Compass platform enables organisations to set group-specific capacity thresholds and track actual change load against them in real time, generating alerts when a particular employee group is approaching or exceeding their capacity limit. This transforms the heatmap from a passive record of what is happening into an active management tool that flags risk before it becomes failure.
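A minimal version of group-specific capacity alerting can be sketched as a threshold comparison. The group names, thresholds, and loads below are illustrative assumptions; the default of three concurrent major changes echoes the Gartner heuristic cited above but should be calibrated per organisation.

```python
# Hypothetical sketch: compare each group's current change load against a
# group-specific capacity threshold and generate alerts. All figures are
# illustrative assumptions.

thresholds = {"Frontline": 3, "Corporate": 5}   # max concurrent major changes
current_load = {"Frontline": 4, "Corporate": 2}

DEFAULT_THRESHOLD = 3  # fallback for groups without a calibrated baseline

alerts = [
    f"{group}: load {load} exceeds threshold "
    f"{thresholds.get(group, DEFAULT_THRESHOLD)}"
    for group, load in current_load.items()
    if load > thresholds.get(group, DEFAULT_THRESHOLD)
]

print(alerts)  # ['Frontline: load 4 exceeds threshold 3']
```

The same comparison works with ADKAR-weighted load scores instead of raw counts, which is usually the more accurate basis.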
Way 4 – Integrate change data with operational performance metrics
Change measurement that lives only inside the change management function is change measurement that struggles to influence business decisions. Leaders who control the sequencing and resourcing of initiatives are, understandably, more responsive to data expressed in the language of business outcomes than data expressed in the language of change methodology. The fourth way to graduate from heatmaps is to connect your change load data with the operational performance metrics that your organisation already tracks and cares about: productivity indices, customer satisfaction scores, employee engagement survey results, voluntary attrition rates, and error or rework rates.
When you can demonstrate a correlation between periods of high change intensity for a particular group and subsequent dips in that group’s productivity or engagement scores, you are no longer making a theoretical argument about change capacity. You are showing the business cost of change saturation in terms that finance leaders, operations directors, and executive sponsors can immediately understand and act upon. McKinsey’s research on people and organisational performance consistently shows that organisations that treat change capacity as a measurable business variable – rather than a soft concern – achieve significantly better transformation outcomes. Integrating your change data with operational metrics is the step that makes that connection tangible and defensible.
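The correlation itself need not be sophisticated analytics: a Pearson coefficient between monthly change load and an operational metric is often enough to start the conversation. The sketch below uses only the standard library; the monthly figures are invented for illustration and prove nothing on their own.

```python
# Hypothetical sketch: correlate a group's monthly weighted change load
# with its engagement survey scores using Pearson's r. Data are
# illustrative assumptions.

import math

change_load = [2, 3, 6, 7, 5, 2]        # weighted changes per month
engagement = [78, 76, 65, 61, 70, 79]   # engagement score per month

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson_r(change_load, engagement)
print(round(r, 2))  # strongly negative: higher load, lower engagement
```

Correlation is not causation, of course, but a consistently strong negative relationship across several groups is exactly the kind of evidence that earns change capacity a place in executive conversations.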
Way 5 – Connect measurement to portfolio governance
Measurement without governance action is, ultimately, just reporting. The fifth and most consequential way to graduate from heatmaps is to connect your change saturation data directly to the portfolio governance processes where sequencing, prioritisation, deferral, and resourcing decisions are actually made. This means ensuring that change capacity data is a standing input to your portfolio review forums, that there are clear thresholds that trigger mandatory review of an initiative’s timing when capacity limits are breached, and that change practitioners have a formal voice at the table when those conversations occur.
In practice, this looks like presenting change load data alongside financial and risk data in portfolio dashboards, using saturation thresholds to inform go/no-go recommendations for new initiative launches, and having the evidence base to argue for deferral of a lower-priority initiative when a critical employee group is already operating at capacity. The Change Compass platform is specifically designed to support this governance integration – providing the portfolio-level visualisation, group-specific capacity tracking, and reporting outputs that change leaders need to participate credibly in executive governance forums. Measurement at this level shifts the change management function from a delivery support role into a genuine strategic capability, one that actively shapes the conditions in which transformation succeeds.
Building a change measurement maturity journey
These five ways are not a single leap – they are a maturity journey that organisations can progress through at a pace that reflects their current capability and the urgency of their change portfolio challenges. Most organisations begin with the heatmap because it is accessible and requires only basic data collection. Adding impact weighting is typically the logical next step, because it requires only a scoring framework applied to data you already have. Pace measurement comes next, as it requires a more disciplined approach to recording change timelines and recovery periods. Capacity baselining requires a modest investment in establishing thresholds and tracking systems. Operational integration requires collaboration across functions. Governance integration requires organisational authority and sustained commitment from executive sponsors.
Organisations that have completed this journey report qualitatively different conversations in their portfolio forums. Instead of debating whether a particular team is “too busy” based on subjective assessments, they are working from data that shows exactly how much change load that team is carrying, how that load compares to their established capacity threshold, what the pace of upcoming changes looks like, and what the likely operational impact of proceeding with the current schedule will be. That is the difference between change management as an art and change management as a discipline. The five ways above are the pathway from one to the other.
Frequently asked questions
How do I start weighting change impacts when we have never scored initiatives that way before?
The most practical starting point is to apply a simple three-tier classification – low, medium, and high intensity – based on the number of ADKAR dimensions a change requires employees to shift. Low-intensity changes require knowledge updates only. Medium-intensity changes require new knowledge and some adjustment to established behaviours. High-intensity changes require significant shifts across all five ADKAR dimensions. Even this rough classification will produce a materially more accurate picture of saturation than an unweighted count, and you can refine the scoring framework as your data matures.
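The three-tier classification above can be sketched in a few lines. This is a minimal illustration, assuming hypothetical tier weights of 1, 2, and 3 and a simple mapping from the number of shifted ADKAR dimensions to a tier – the names and cut-offs are assumptions to be refined as your data matures, not a standard scoring model.

```python
# Hypothetical sketch of a three-tier intensity classification based on how
# many of the five ADKAR dimensions a change asks employees to shift.
# Tier boundaries and weights are illustrative assumptions.

def classify_intensity(adkar_dimensions_shifted: int) -> tuple[str, int]:
    """Map a count of shifted ADKAR dimensions (0-5) to a (tier, weight) pair."""
    if adkar_dimensions_shifted <= 1:    # knowledge updates only
        return ("low", 1)
    if adkar_dimensions_shifted <= 3:    # new knowledge plus some behaviour change
        return ("medium", 2)
    return ("high", 3)                   # significant shifts across all dimensions

def weighted_change_load(changes: list[int]) -> int:
    """Sum the tier weights instead of counting every initiative as equal."""
    return sum(classify_intensity(d)[1] for d in changes)

# Two portfolios with the same raw count of four changes each:
mostly_light = [1, 1, 2, 1]   # weighted load of 5
mostly_heavy = [5, 4, 5, 2]   # weighted load of 11
```

Even this rough weighting separates a portfolio of mostly light changes from one of the same size dominated by high-intensity transformations, which an unweighted count treats as identical.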
What is a realistic capacity threshold for frontline employee groups?
The Gartner benchmark of approximately three concurrent major changes provides a useful starting point, but it should be calibrated to your specific context. Frontline roles with high operational pressure and limited discretionary time tend to have lower thresholds than knowledge worker roles with more flexible schedules. It is also worth distinguishing between the number of changes and the cumulative intensity of those changes – a frontline team might manage three low-intensity changes comfortably while struggling with even one high-intensity transformation on top of normal operational demands.
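The distinction between number of changes and cumulative intensity can be made concrete with a small sketch. This assumes the hypothetical 1–3 tier weights described above and an illustrative per-group intensity ceiling – both values would need calibrating to your own context rather than being taken as benchmarks.

```python
# Illustrative capacity check that treats a group as saturated when it
# breaches EITHER a count threshold or a cumulative-intensity threshold.
# Threshold values are assumptions to be calibrated per employee group.

from dataclasses import dataclass

@dataclass
class GroupCapacity:
    max_concurrent: int   # e.g. the ~3 concurrent major changes starting point
    max_intensity: int    # cumulative weighted load the group can absorb

def over_capacity(change_weights: list[int], cap: GroupCapacity) -> bool:
    """True when either the count or the summed intensity exceeds its threshold."""
    return (len(change_weights) > cap.max_concurrent
            or sum(change_weights) > cap.max_intensity)

frontline = GroupCapacity(max_concurrent=3, max_intensity=5)

# Three low-intensity changes (weight 1 each) fit within both thresholds...
three_light = over_capacity([1, 1, 1], frontline)   # False
# ...but one high-intensity transformation stacked on two medium/low changes
# breaches the intensity ceiling even though the raw count does not.
heavy_stack = over_capacity([3, 2, 1], frontline)   # True
```

Tracking both thresholds is what lets the capacity conversation distinguish "this team has three changes" from "this team is absorbing one transformation that costs as much as three ordinary changes".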
How do we get access to operational performance data to connect with our change load data?
This is primarily an organisational relationship challenge rather than a technical one. The change management function typically needs to establish working relationships with HR analytics, operations reporting, and finance teams to access relevant data sets. The most effective approach is to identify one or two high-visibility pilot cases where change saturation is a plausible contributing factor to an observable performance issue, and use those cases to build the business case for ongoing data integration. Once you can demonstrate the value of the connection, access tends to become much easier to negotiate.
How do we ensure change capacity data actually influences governance decisions rather than just being noted and ignored?
The single most important factor is executive sponsorship for the principle that change capacity is a legitimate portfolio constraint – equivalent in status to financial capacity or resourcing capacity. Without that sponsorship, change data tends to be noted and set aside when it conflicts with delivery timelines. With it, there is a formal basis for the change function to request that initiatives be deferred, sequenced differently, or resourced more heavily when capacity thresholds are breached. Building that sponsorship typically requires the kind of operational integration described in Way 4 – showing the business cost of ignoring capacity data is the most powerful lever for establishing governance authority.