I discovered these 5 surprises in managing an agile digital project

As someone who normally oversees the change management side of large programs and portfolios, I now find myself in the shoes of a project manager. Here’s the background. I manage a digital software-as-a-service business (The Change Compass) aimed at those driving multiple changes in their organizations. In terms of managing change deliverables and stakeholders, I was perfectly comfortable, having done this with some of the largest organizations in the world. However, I was not trained as a project manager, particularly not in managing a digital product.

Having worked on very large digital projects over the years, I’m familiar with the different phases of the project lifecycle and lean/agile/scaled agile methodologies. However, managing a digital project hands-on has revealed some very surprising lessons for me. I share them below.

1. The customer/user doesn’t always know best

Over the years we have received a lot of customer feedback about what worked and what didn’t, and we have iteratively morphed the application in line with customer wishes. However, a customer or user suggestion is not always what is best for them. There are some features we developed to enable users to build different reports; after lots of feedback and iterations, we found that users barely use these features at all. On the other hand, other features, designed based on our observations of how users actually behave, are used very frequently. In the design phase, some users commented that they were not sure these features would work. Yet after trialing them, they adopted them easily and have not made any suggestions or comments since.

It is probably similar to when the first iPhone was released. A lot of people were negative about the lack of a keyboard, arguing that losing the tactile pressing of buttons was a sure sign it would not work. Did Apple design the iPhone purely based on customer feedback? Did customers already know what they wanted and simply tell Apple? Nope. Yet the screen-only mobile phone with no or limited buttons is now standard across mobile phone design.

Example: In our digital project management experience with The Change Compass, we initially prioritized implementing a feature based on numerous customer requests. This feature allowed users to customize their dashboard layout extensively. However, after analyzing user behavior data post-launch, we discovered that this feature was rarely used by our target audience. Surprisingly, users preferred a simpler default layout that we had originally designed based on our understanding of their workflow and preferences. As a result, we decided to refine the default layout further and focus on enhancing features that aligned more closely with user needs and behaviors within our change management software.

To read more about avoiding key gaps in managing customer experience click here.

2. Setting clear expectations is critical


At The Change Compass, we have a very diverse and scattered team. We have our development team in India, a UX designer in Canada, a graphic designer in Europe, and Analysts in Australia. Most of our team members are quite familiar with agile practices. They are familiar with each phase of the agile life cycle, Kanban boards, iterating releases, etc. For our Ultimate Guide to Agile for Change Managers click here.

However, one big lesson I learned was the importance of setting clear and mutually agreed work deliverables. With such a diverse team composition comes a diverse understanding of the same concept. In agile, we try not to over-document, relying instead on discussions and ongoing engagement to achieve collaboration and clarity.

However, what I learned was that clear documentation is critical to ensure a crystal-clear understanding of the scope, what each deliverable looks like, what quality processes are in place to reach the outcome, the dependencies across different pieces of work, and what each person is and is not accountable for. All of this sounds like common sense. However, agile projects commonly err on the side of too little documentation, leading to frustration, confusion, and missed outcomes. In our experience, documentation is critical.

Example: At The Change Compass, we’ve learned the importance of setting clear and mutually agreed-upon work deliverables, especially with our diverse global team. Despite our team’s familiarity with agile practices, we realized that documentation is critical to ensure a crystal-clear understanding of project scope, deliverables, quality processes, dependencies, and individual accountabilities. By documenting these aspects thoroughly within our change management software, we’ve achieved better collaboration, clarity, and outcome achievement across our distributed team.

3. Boil everything down to its most basic meaning

In digital projects, there is a lot of technical jargon around back-end, front-end, and mid-layer design elements. Like any technology project, there is a natural inclination to become absorbed in finding the best technical solution. Since I did not have a technology background, I forced myself to become familiar with the various technical jargon of delivery very quickly to compensate.

However, what I found was that with such a diverse team, even within the technical team there is often misunderstanding about what a technical term means. On top of this, we have other non-technical team members such as analysts, UX designers, and graphic designers. We have experienced plenty of team miscommunication and frustration as a result of too much technical language.

To ensure the whole team is clear on what we are working on, how we are approaching it, and their roles along the way, we’ve tried hard to ‘dumb down’ technical jargon into basic language as much as possible. Yes, there is a basic set of digital vocabulary that all members should understand for delivery. Beyond this, we’ve kept things very simple to keep everyone on the same page. The same applies to non-technical language: graphic design terms that the techies may not understand can cause just as much confusion.

Example: In our digital project management endeavors with The Change Compass, we’ve encountered challenges due to technical jargon and miscommunications within our diverse team. To mitigate this, we’ve prioritized simplifying technical language into basic terms that everyone, including non-technical team members like Analysts, UX designers, and Graphic Designers, can understand. By keeping communication simple and clear, we ensure that everyone is on the same page regarding project objectives, approaches, and roles within our change management platform.

4. Team dynamics are still key … yes, even in a digital project


To get on the agile bandwagon, a lot of project practitioners invest heavily in training to become more familiar with how agile projects are run. While this is important, what I’ve found is that no matter the methodology, agile or non-agile, digital or non-digital, the basics remain: effective team dynamics are key to a high-performing project team.

Most of the issues we have faced are around team communications, shared understanding, how different team members work with each other, and of course cross-cultural perceptions and behaviors. Any effort we have placed in discussing and resolving team dynamics and behaviors has always led to improved work performance.

Example: Despite the focus on agile methodologies and digital tools, effective team dynamics remain crucial within The Change Compass. We’ve observed that issues around team communications, shared understanding, and cross-cultural perceptions can significantly impact project performance. By investing effort in discussing and resolving team dynamics and behaviors, we’ve consistently improved work performance and collaboration within our change management software, resulting in better outcomes for our clients.

5. The struggle of releasing something that isn’t perfect


As a typical corporate guy who has worked in various large multinationals, it is ingrained in me that quality assurance and risk management are key to any work outcome. Quality work ticks all boxes, has no flaws, and exposes the company to no risk. In the typical corporate world, flaws are to be avoided; thorough research, analysis, and testing are required to ensure quality is optimal.

The agile approach challenges this notion head-on. The assumption is that it is not possible to know exactly how the customer or user will react. Therefore, it makes sense to start with a minimum viable product and iterate continuously to improve, leveraging ongoing customer feedback. In this approach, what is released is expected to be imperfect, and cannot be perfect. The aim is to have something usable first, then work to gradually perfect it.

Example: As individuals with a background in corporate change management, we initially struggled with the agile approach of releasing minimum viable products (MVPs) within The Change Compass software. While ingrained in the notion of quality assurance and risk management, we learned to embrace the agile principle of continuous improvement. Instead of aiming for perfection upfront, we focus on releasing usable features and iterating based on ongoing customer feedback. This approach allows us to deliver value incrementally and adapt our change management software to evolving user needs and preferences.

While it makes sense in theory, I’ve personally found it very difficult not to try to tick all the boxes before releasing something to the customer. There are potentially hundreds of features or designs that could be incorporated to make the overall experience better. We all know that creating a fantastic customer experience is important. Yet an agile approach refrains from over-perfecting the customer experience upfront, relying instead on continuous improvement.

Ready to streamline your change management process and drive better outcomes with The Change Compass? Book a demo today to see how our software can help your organization succeed.

How to Lead Change and Drive Business Results Through Data

In most modern organisations, data drives decisions. Marketing teams track conversion rates to the decimal point. Finance teams model scenarios with precision. Operations leaders measure throughput, defect rates, and cycle times as a matter of course. Yet change management, a discipline that directly influences whether transformation programmes succeed or fail, has long operated on a different basis. Change leaders frequently rely on anecdote, stakeholder intuition, and high-level readiness surveys that tell them very little about what is actually happening on the ground. The result is a discipline that struggles to justify its value and, more critically, struggles to course-correct when things go wrong.

This gap is not simply a matter of preference or professional culture. It reflects a deeper structural challenge: change management has historically lacked the tools, frameworks, and shared standards required to turn complex human and organisational behaviour into reliable, actionable data. Where a project manager can point to schedule variance and earned value, a change leader has often had to rely on statements like “people seem engaged” or “resistance is lower than last quarter.” These observations may be accurate, but they do not give executives the confidence to invest further, adjust scope, or make time-sensitive decisions about programme delivery.

The good news is that this is changing. A growing number of change leaders are adopting data-driven approaches that connect change activity to measurable business outcomes. Platforms like The Change Compass are making it practical for organisations to collect, visualise, and act on change data in ways that were simply not possible a decade ago. This article explores why data maturity matters in change management, what good change data looks like in practice, and how change leaders can use it to earn executive confidence and drive results.

Download The Change Compass brochure to explore how the platform supports data-driven change leadership.

Why change management lags other disciplines in data maturity

Change management emerged largely from behavioural science, organisational psychology, and consulting practice rather than from the quantitative traditions of engineering or finance. The foundational models – Lewin’s unfreeze-change-refreeze, Kotter’s eight steps, the ADKAR model from Prosci – are deeply valuable, but they were designed as conceptual frameworks rather than measurement systems. This means that while they help practitioners think clearly about change, they do not inherently produce the kind of data that boards and executive committees use to evaluate business performance.

A 2023 Prosci benchmarking report found that organisations with excellent change management programmes are six times more likely to meet project objectives than those with poor change management. Despite this compelling evidence, many organisations still struggle to translate that finding into a data collection discipline within their own programmes. The challenge is partly methodological and partly cultural. Change practitioners are often stretched across multiple concurrent initiatives, leaving little capacity to design and maintain rigorous measurement systems. There is also a widespread belief that human behaviour is simply too complex to quantify in meaningful ways.

Gartner research on digital transformation has consistently highlighted that the human and organisational dimensions of change are the leading cause of programme failure, yet these dimensions receive the least structured measurement attention. When a technology implementation stalls, it is rarely because the software does not work – it is because adoption is lagging, training has not translated to behaviour change, or frontline managers are not reinforcing the new ways of working. Without data, these problems go undetected until they become crises. With data, they can be spotted early and addressed systematically.

What good change data actually looks like

Good change data is specific, timely, and connected to business outcomes. It goes beyond the typical “readiness survey” that asks employees whether they feel prepared for an upcoming change. While readiness surveys have their place, they represent only one dimension of what change leaders need to manage effectively. A robust change measurement system captures data across at least three categories: the volume and complexity of change hitting different parts of the organisation, the progress and effectiveness of change enablement activities, and early indicators of adoption and sustained behaviour change.

Change volume and complexity data helps leaders understand the cumulative burden being placed on different employee populations. A business unit that is simultaneously navigating a technology replacement, a restructure, and a new performance management framework is under far greater change pressure than one experiencing a single initiative. Without visibility into that cumulative load, leaders may unknowingly overload teams, driving disengagement, absenteeism, and productivity decline. The Change Compass platform was specifically designed to give organisations a consolidated view of their change portfolio, enabling leaders to see where change saturation is occurring and to sequence or reprioritise initiatives accordingly.

Enablement activity data tracks the completion and quality of change management deliverables such as stakeholder engagement sessions, training completions, communications sent, and manager briefings conducted. This data answers the question of whether the change programme is being executed as designed. Adoption indicators, by contrast, measure whether behaviour is actually shifting. These might include system login rates, process compliance metrics, quality scores, or customer satisfaction results that can be directly linked to the changes being implemented. Together, these three data streams give change leaders a genuinely comprehensive picture of how their programmes are progressing.
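One way to picture these three streams together is as a simple record per business unit. This is only an illustrative sketch: the field names, figures, and the gap calculation are hypothetical, not The Change Compass data model.

```python
from dataclasses import dataclass

@dataclass
class ChangeSnapshot:
    """Illustrative record combining the three data streams for one unit."""
    unit: str
    concurrent_initiatives: int   # volume/complexity: cumulative change load
    training_completion: float    # enablement: were activities executed as designed?
    system_login_rate: float      # adoption: is behaviour actually shifting?

    def execution_vs_adoption_gap(self) -> float:
        # A large gap suggests activities ran but behaviour has not shifted
        return self.training_completion - self.system_login_rate

snap = ChangeSnapshot("Call Centre", 12, 0.95, 0.60)
print(f"{snap.unit}: execution-to-adoption gap = {snap.execution_vs_adoption_gap():.0%}")
```

Holding all three streams side by side is what makes the diagnosis possible: high enablement numbers with low adoption numbers point to a learning-to-behaviour gap rather than a delivery gap.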

Using data to influence executive decision-making

One of the most important applications of change data is in executive communications. Senior leaders are accustomed to receiving data dashboards from finance, operations, and technology. When a change leader walks into a steering committee meeting with comparable data – showing adoption rates by business unit, change saturation scores by team, and leading indicators of programme risk – it fundamentally changes the conversation. Instead of providing subjective commentary, the change leader becomes a peer who is contributing to the evidence base the organisation uses to make decisions.

McKinsey research on large-scale transformation has found that programmes with strong senior sponsorship and clear performance data are substantially more likely to deliver their intended value. The data dimension matters because it gives sponsors something tangible to act on. When a change leader can show that adoption in one region is 40 per cent below target and identify the specific barriers driving that gap, a senior sponsor can intervene with authority and specificity. When the only information available is “adoption seems slower in the north,” the sponsor has no clear basis for action and is likely to default to pressure rather than problem-solving.

Building executive influence through data also requires change leaders to understand what executives actually care about. Board members and executive committees are typically focused on financial performance, risk exposure, customer outcomes, and employee engagement. Change data becomes far more compelling when it is framed in those terms. Instead of reporting that “80 per cent of managers have completed their briefings,” a data-driven change leader might show that business units with high manager engagement scores are tracking 25 per cent ahead of adoption targets, with a projected positive impact on the revenue run rate of the new system. That framing connects change activity to the things executives are accountable for delivering.

Connecting change metrics to business performance outcomes

The most sophisticated change measurement systems do not stop at tracking change activities – they create a line of sight between change management inputs and business performance outputs. This is sometimes called the change value chain, and building it requires deliberate design at the outset of a programme rather than as an afterthought at the end. Change leaders who wait until a programme is complete to evaluate its impact will always struggle to demonstrate causality. Those who define their measurement framework at the start, identifying which business metrics should move as a result of successful adoption, are in a far stronger position.

Consider a customer experience transformation programme designed to reduce complaint volumes and improve Net Promoter Score. A well-designed measurement framework for this programme would track not only whether employees have completed the required training, but also whether their interactions with customers are changing in observable ways – perhaps through call quality monitoring, customer feedback scores, or first-call resolution rates. If training completion is high but customer metrics are not improving, the data points clearly to a gap between learning and on-the-job application. That insight allows the programme team to investigate and address the specific barrier, whether it is inadequate coaching from team leaders, a process that does not support the desired behaviour, or a cultural norm that is overriding the training content.

A Harvard Business Review analysis of large transformation programmes found that only 30 per cent of them succeed in meeting their original objectives, and the primary differentiator between successful and unsuccessful programmes is not the quality of the strategy but the quality of execution, including the people and change dimensions. Connecting change metrics to business outcomes is the mechanism by which change leaders can demonstrate that they are not just managing the process of change but actively driving the conditions for success.

Building a data-driven change team

Shifting to a data-driven approach requires more than adopting a new platform or adding a measurement step to existing processes. It requires building a team capability and a team culture that treats evidence as the foundation of professional practice. This is a meaningful cultural shift for many change functions, which have traditionally valued qualitative insight, relationship skills, and experiential wisdom over analytical rigour. The most effective change teams combine both – they do not abandon the human judgment and empathy that good change practice requires, but they augment it with data that improves the quality and confidence of their decisions.

Practically, this means investing in data literacy across the change team. Change practitioners do not need to become data scientists, but they do need to understand how to design measurement frameworks, interpret dashboards, identify patterns in data, and communicate data-driven insights to different audiences. Organisations can support this through targeted skill development, pairing change practitioners with data or analytics colleagues, and building data collection and review into the standard rhythms of programme governance. The Change Compass platform supports this transition by providing change teams with visualisation tools and reporting capabilities that do not require deep technical expertise to operate.

Leadership commitment is equally important. When the head of change or the chief people officer consistently asks for data in programme reviews and holds teams accountable to evidence-based conclusions, it sends a clear signal about what is valued. Conversely, when leaders accept anecdote and opinion as the basis for major programme decisions, they inadvertently undermine the case for building measurement capability. The shift to data-driven change management is ultimately a leadership choice as much as a technical one, and it tends to succeed when it is championed from the top and embedded in the operating model of the change function.

How The Change Compass enables data-driven change leadership

The Change Compass was built specifically to address the data gap that has long held change management back. The platform provides change leaders with a consolidated, visual view of their organisation’s change portfolio, making it possible to assess the volume, complexity, and distribution of change across different business units and employee groups. This portfolio view is one of the most immediately useful capabilities for organisations running multiple concurrent programmes, because it surfaces change saturation risks that are otherwise invisible until they start driving disengagement or resistance.

Beyond portfolio visibility, The Change Compass enables teams to track change readiness and adoption metrics at a programme level, linking activity data to the business outcomes that executive sponsors care about. The platform’s reporting and visualisation features are designed to be accessible to change practitioners who are not data specialists, making it practical to generate executive-ready dashboards without relying on separate analytics support. This reduces the time change leaders spend compiling reports and increases the time they spend acting on what the data reveals.

The platform also supports the benchmarking of change performance over time and across programmes, helping organisations build an institutional understanding of what good change looks like in their specific context. Over time, this benchmarking capability enables more accurate scoping and resourcing of future programmes, reducing both the over-investment that comes from guessing conservatively and the under-investment that comes from underestimating complexity. For organisations serious about building a mature, data-driven change capability, The Change Compass provides both the infrastructure and the discipline to make it happen.

Frequently asked questions

What is the most important change management metric to track?

There is no single most important metric, because the right measures depend on the nature and objectives of the programme. However, adoption rate – the proportion of the target population that has shifted to the new ways of working – is consistently one of the most valuable indicators because it directly reflects whether the change is achieving its intended effect. Adoption data is most useful when it is disaggregated by business unit, role group, or geography, so that low-adoption pockets can be identified and addressed rather than masked by an average figure.
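A minimal sketch of why disaggregation matters, using made-up adoption figures for three hypothetical units:

```python
# Hypothetical adoption rates by business unit. The portfolio average
# clears the target, but disaggregation exposes a low-adoption pocket.
adoption = {"Retail": 0.92, "Operations": 0.88, "North Region": 0.48}
target = 0.75

average = sum(adoption.values()) / len(adoption)
print(f"Portfolio average: {average:.0%}")  # the headline figure masks the problem

# Flag units below target so they can be addressed, not averaged away
below_target = [unit for unit, rate in adoption.items() if rate < target]
print("Needs attention:", below_target)
```

The portfolio average sits comfortably above target while one unit is far below it, which is exactly the pocket an aggregate figure would hide.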

How can change leaders make the case for data investment to sceptical executives?

The most effective approach is to frame data investment in terms of risk reduction and return on programme investment. Executives who have experienced transformation programmes that failed to deliver expected benefits – a common experience, given the research findings on transformation success rates – are typically receptive to an argument that better change measurement would have identified the adoption gap earlier and enabled corrective action. Concrete examples from other organisations, combined with a clear proposal for how a measurement framework would work in practice, tend to be more persuasive than abstract arguments about data maturity.

How does change saturation data help organisations manage their portfolios?

Change saturation data quantifies the cumulative change burden being experienced by different employee groups at any given point in time. When this data is mapped across a programme portfolio, it reveals which teams are approaching or exceeding their capacity to absorb change effectively. Leaders can use this information to sequence initiatives more thoughtfully, delay lower-priority changes when teams are already under significant pressure, or target additional change management support to the most saturated groups. Without this visibility, organisations frequently over-burden their highest-performing teams – those most likely to be involved in multiple change programmes simultaneously – which can drive the very disengagement they are trying to avoid.

Can small change teams realistically adopt a data-driven approach?

Yes, and the investment required is often lower than practitioners expect. A data-driven approach does not require a large analytics team or a complex technology infrastructure. It starts with defining two or three meaningful metrics for each programme, establishing a simple collection method, and reviewing the data consistently in governance forums. Platforms like The Change Compass are specifically designed to be accessible to small and mid-sized change functions, providing out-of-the-box visualisation and reporting that does not require technical expertise to configure or maintain. Starting small and building measurement discipline gradually is far more effective than waiting for a perfect system before beginning.

The death of the change heat map

Change heatmaps are considered valuable by many organizations struggling with too much change. Heatmaps are easy to understand, and people intuitively get them without much explanation. The darker colours are bad, or too much change; the lighter colours are good, or less change. At least, this is the common interpretation.

Change heatmaps are commonly used to help decipher whether there is or there is not too much change going on. Stakeholder groups such as the program management office, senior managers, operations managers and other project professionals commonly use the heatmap to show the change ‘heat’ levels of a part of the organization.

So why is this article entitled ‘The death of the change heatmap’?

The problem with most organizations using change heatmaps is that the heatmap is often used blindly as a singular discussion point.

A program team would typically review the change heatmap with business stakeholders regarding the current slate of changes. Typical conversations focus on how many initiatives there are within a given month, and arguments start around whether this is indeed ‘too much change’ or ‘not too much change’. This can become quite rhetorical, since without a quantitative, numerical basis it is really anyone’s opinion.

In this type of discussion, the change manager is often the one raising the possibility that there is too much change going on. A typical response from senior managers is, “Yes, there is a lot of change, and this is what we need to get on with”. From the senior manager’s perspective, even when presented with a ‘red’ heatmap indicating too much change, the business is still functioning and has not collapsed. Therefore, the change manager must not be connected to the realities of the business (or at least, that is what goes through the senior manager’s head).

A discussion focused purely on whether the level of change is too much, too little, or just right adds limited value to the business. Are we really only concerned about the magnitude of change, without any other consideration? And how can anyone rely on individual opinions about whether there is too much change when those opinions may not be tied to actual business performance? Most change heatmaps are not quantitative summations of change impacts but individual opinions. How might we make this more scientific?

From ‘too much’/’too little’ discussion to story-telling

Change is about design. In planning for change we are designing what the change journey is going to be like. When we have a clear picture of what is going to change, and all the elements around when, where, what, who, how, etc. then we are able to truly paint a picture of what the change experience is going to be.

From this clear picture of the change experience (through having a singular integrated picture of change that is data-based), we are then able to start to tell stories of what will happen and what those experiences might look like.

For example, a story could be that the call centre will face significant challenges balancing customer call volumes with the 12 initiatives landing during November. Customer call volumes are anticipated to trend up, whilst 4 different types of training sessions are expected to take place as part of the rollout of 4 major initiatives. The teams supporting Product A will be particularly challenged, given 2 of the initiatives impact that team more than others. In addition, the desired behaviours the initiatives focus on are quite different: some reinforce compliance and process adherence, whilst others are about thinking outside the box and experimenting with new ways of working.

You can see from the above example that having a rich set of data about what changes are going to happen can paint a vivid picture of what will occur, and therefore of emerging challenges and opportunities. The conversations go significantly beyond the quantity and judgment of magnitude, to the types of experiences, possible operational and capacity challenges, behavioural expectations, and potential customer impacts.

How might we move on to more valuable discussions beyond the heatmap?

For those who have experienced situations where a heatmap discussion did not lead to any significant business outcomes, what else could we talk about?

  1. Link heatmap to quantitative business impacts

At a basic level, a heatmap can give stakeholders a foundational picture of how much change is occurring, provided there is a direct linkage to time and capacity impacts. For example, given there are 12 initiatives hitting the call centre in November, how much time will staff need to digest and embed this change within that month? 5 hours per week? And would this impact 1,000 call centre staff? How does this compare with manpower planning projections for November? Can the team service projected customer call volumes while also absorbing the 5 hours per week required of call centre agents?

To do this, we also need to ensure any generated ‘heat levels’ are calculated from the actual quantitative change impact levels of each initiative. With historical data on change impacts, it is also possible, and highly impactful, to draw correlations between change impact levels and the resulting business performance indicators.

  2. Dive deeper into what is really going on within a particular business.

Instead of focusing on the magnitude of change, zoom in on the specific risks or opportunities. Good questions to ask to paint a story around these include:

  • What has this part of the business experienced in the past and how might they perceive this set of changes?
  • What types of leadership do we have in this business? What are some of their characteristics? How do these impact how change will be implemented and embedded?
  • What support and engagement channels do they have? Do they have change champion networks that can support a multitude of initiatives?
  • How have the myriad changes been communicated to this part of the business?
  • Has there been a clear picture painted by the leaders of where they are going and how these changes will get them closer to their goals?
  • How might we better package, sequence or link the set of changes to enhance adoption?

  3. Show how the strategy is being implemented

A set of strategies will only come to life through successful change implementation. Again, armed with a clear, integrated single view of change, you can present a picture of which initiatives are impacting the organization under each of the strategic pillars the organization has invested in. Particular attention may be drawn to the extent to which certain strategic pillars make a greater impact on the organization at particular points in time.

This is one area in which senior managers will be highly interested. The discussion is no longer just about operational capacity; it moves into the strategic realm to ask what strategic change implementation looks like and how it compares with the intent. Are the changes we are planning proceeding at the right pace, or are we not executing fast enough? Is it expected that strategic pillar A exerts more impact in business unit A than in business unit B? Are there gaps in how we’ve designed the overall change journey to realize our objectives? What capabilities may be required for certain business units given the set of changes they will be going through?

So, you can see that the change heatmap should not be the singular focus in understanding multiple changes. If you focus only on the change heatmap with your stakeholders, be prepared for plenty of challenges and questions about the value of this artefact. This is not to say that the change heatmap should never be used or can never add insight. Instead, probe deeper into your data and paint a rich picture of what is going to happen, what this means to the business in tangible terms, and even how the organization’s strategy is being implemented.

For case studies of how some companies have identified risks and opportunities through painting a rich picture of what is going to happen to the business visit here …

Case study 1

Case study 2

Visual images paint a thousand words. If you would like more information on how to visually represent change in all its different facets to tell the full story of what your organization will go through, please contact us here.

Demonstrate the value of managing change – Case Study 2

Clearing the runway for change: A sales function success story

Managing change is not just about launching new projects, but also about fitting those changes into the day-to-day operations of the business, especially in high-pressure environments like sales. This was never clearer than when the sales function of a major insurer faced a month with thirteen different change initiatives on the horizon. With looming sales targets and the end of the financial year approaching, the team realized two of these changes were particularly major and urgent, each coming from different parts of the organisation.

The business knew that, based on past data, periods of concentrated change nearly always led to missed targets and extra work. But initially, it looked as if there was no way around the situation. Both major initiatives were tied to compliance deadlines and benefit realization, and pressure was mounting from all directions to deliver everything as soon as possible.

A turning point with clear, consolidated data

Unlike previous years, the team had a new advantage. Using The Change Compass, they could finally see, in a quantified and visual way, just how much change was heading toward the sales function. The full impact across all thirteen initiatives became impossible to ignore. Instead of relying on intuition or guesswork, leaders responded to real numbers and trends that told a clear story: trying to land all thirteen changes would stretch teams thin and likely reduce the benefits of each initiative.

With this understanding, the operations team joined with senior managers and the leads of the two major projects to re-evaluate priorities. Conversations became more productive and focused on what was truly essential during a high-demand period.

Refocus on value, not volume

Decisions were not easy. Stakeholders were keen to meet compliance obligations as fast as possible, but the data showed the risk of overwhelming the sales team and jeopardising both business performance and initiative success. Ultimately, the team decided to deprioritise less urgent projects, trimming the schedule from thirteen initiatives to just six.

This approach ensured the sales function had the capacity and energy to implement the most important changes successfully. The adjustment gave everyone involved a greater sense of control and provided confidence that compliance and business objectives could be met without excessive strain on staff.

Measurable impact for both sales and project outcomes

The result was powerful. The business avoided the exhaustion and confusion that comes with too much simultaneous change, and the two key projects received the attention and resources needed to succeed. Sales performance did not suffer, targets were not sacrificed, and the combined value of managing change in this way was estimated to be more than half a million dollars in avoided additional work.

This experience moved the conversation from ‘how much can we do’ to ‘what should we do now for the greatest impact’. These are decisions that support not just short-term success, but also build long-term resilience and a strong reputation for the sales function.

Here are some estimated figures for the value created during this one incident through The Change Compass:

  1. A total of half a million dollars saved in avoided reductions to initiative benefits
  2. More than $100,000 saved in frontline staffing disruptions
  3. An avoided drop of 10-20% in customer satisfaction (CSAT) levels
  4. Adoption rates protected at 75-85%, versus a projected fall to below 50%

A new way forward with The Change Compass

The real story here is the shift in mindset that The Change Compass enabled. With reliable and actionable information, teams did not have to rely on last-minute adjustments or gut feel. Senior managers could see the road ahead, make informed trade-offs, and ultimately protect both their people and their goals.

If you have faced times when your organisation juggled more change than your teams could bear, you know how valuable this kind of clarity can be. The Change Compass offers a way to navigate these challenges confidently, support the right conversations, and ensure every change effort delivers its intended value.

Imagine giving your business the tools to see the whole picture, act on what matters most, and achieve meaningful outcomes without compromising performance. With The Change Compass, that possibility is within reach. Take the first step to better change management and stronger business performance by putting insight at the heart of your decision-making.

Download the case study below:

Demonstrate Value of change 2

The Ultimate Guide in Designing a 5 star Change Journey

Designing a change journey is a design process. As in any design process, you need to know intimately the impacted people you are working with, the world they are in, and their expectations and behaviours. In this way, design is about anticipating and shaping people’s experience.

  • What are the elements of a 5 star change experience?
  • What is the role of data in design?
  • How does branding and communications shape the experience?
  • How do we anticipate people’s needs?

For a comprehensive guide on how to work through this design process and come up with a 5 star change journey, check out our guide.

Click here to download the guide.

Ultimate guide to 5 star change journey