Measuring behaviours is the ultimate metric that change practitioners should use to track the extent to which a change has been adopted by impacted stakeholders.
Whilst the relevant behaviours will vary widely depending on the initiative concerned, what are some tips for selecting the right behaviours to measure?
Check out our infographic on the top 4 elements to pay attention to when measuring behaviours as a part of change adoption metrics.
Change saturation is a common term used by change practitioners to describe a situation where too many changes are being implemented at the same time. The analogy is that of a cup with limited capacity: if too much change is poured into a fixed volume, the excess will not stay in the cup, i.e. it will not be ‘embedded’ as adopted change.
At the end of 2020, Pivot Consulting conducted extensive research in which they asked people in a range of different organisational roles about implementing change. When questioned about the key challenges to executing strategy and driving change, respondents identified change fatigue, or employees being overwhelmed by multiple initiatives, as one of the two most critical challenges. Change saturation is therefore not just a popular discussion topic but a serious focus area posing significant challenges to a range of organisations.
There are many common ways of understanding and approaching change saturation. However, not all of them are correct, and some are quite misleading. In this article, we review four key assumptions about change saturation that are misleading and should be directly challenged. These assumptions are often widely held, treated as ‘facts’, and left unquestioned.
In the following, we outline the key assumptions that should be challenged when approaching change saturation.
1. Change is disruption
The first assumption is that change is always ‘disruption’. Change can be dynamic, and there is a range of different types of change. Change therefore does not always need to be negative, cause chaos, or impede normal ways of working.
Take, for example, agile teams. Part of the work of an agile team is to drive continuous improvement. The team establishes regular routines to try something new, i.e. a change. They then execute it and examine the data to see the effect of the change on the business. For these teams, ‘planned’ changes are simply part of normal ways of working, and are therefore not necessarily viewed as ‘disruptions’.
On the other hand, change is also not always ‘negative’. Some changes may be there to make it easier for the employee or the customer. For example, it may be that the organisation is implementing system-driven automation to save employees time in entering manual information. These changes are typically welcomed by the impacted employees and are not perceived as ‘disruptions’ to their work. Instead, they are typically perceived as positive changes.
As a result, change needs to be understood through its specific impact on the various stakeholders, not simply labelled as ‘disruption’. A more useful way to understand the impact of changes on end stakeholders is to understand the various activities required for them to undergo the change and shift their behaviours.
For example, a customer service rep may need to attend training sessions and team briefings, review documentation, and receive team leader feedback over the course of the change journey. These activities may be ‘on top of’ existing business routines, or they may be part of those routines and therefore not add to the ‘saturation level’.
2. Change capacity is determined by capability
It is a commonly held belief that change capacity is determined by change capability at individual, team and organisational levels. Yes, factors such as change leadership, individual change capability and skills can improve change capacity. However, change capacity is not only determined by capability.
Indeed, there are other factors that determine change capacity.
Humans have a limited attention span. When too many things are happening at once, we can only focus on a few of them. Many studies show that if we keep switching focus between tasks, we lose full focus and attention, leaving us prone to mistakes.
This also applies to learning. The more we spread ourselves across multiple tasks, the less able we are to tune out distractions and engage in deeper processing and learning.
What about thinking about multiple initiatives? According to University of Oregon researchers, professors Edward Awh and Edward Vogel, the human brain has a built-in limit on the number of discrete thoughts it can entertain at one time; for most individuals, the limit is four. No matter how much capability development one invests in, there is a limit to how much capacity can be created. There is therefore a cap on the extent to which capability can lift change capacity. After all, no matter how skilful someone is, biological constraints remain.
The level of expectation about the extent to which one can change can also determine the outcome. Studies have shown that an individual’s negativity or positivity can affect the outcome: the more negative an individual is about the outcome, the more negative the outcome tends to become. However, if expectations are unrealistically high, they may lead to disappointment.
Think back to the impact of Covid, and how virtual working arrangements that would have seemed almost impossible became a reality overnight. What companies had imagined would take 10 years to achieve was suddenly achieved out of necessity. The expectation that there is no other way, and no choice, leads to acceptance of the change scenario.
3. Basing saturation points purely on opinions
As change practitioners, we often aim to be the ‘people’ representative. Many think of themselves as the ‘social worker’ or ‘welfare worker’ who is there to be the voice of employees. Whilst it is true that we need to be the voice of the people, ‘people’ should include not just employees but a range of stakeholders, including managers.
Especially when the change environment is complex and challenging, there may be a tendency for people to ‘over-inflate’ the reality of the situation. Sometimes it may be easier to call out that there is too much change in the hope that this feedback will result in less change volume, thereby making work ‘easier’.
Change practitioners need to be aware of political biases, and of the tendency for people to report feedback that is not substantiated by data. Interviews with stakeholders may need to be supplemented by surveys or focus groups to test the validity of the results. We should not simply assume that everything stakeholders tell us is ‘truth’, especially since there can be political motivation behind biased reporting.
4. Focus on capability vs systems and processes to manage saturation
An overt focus on capability, knowledge and skills may leave gaps in the overall ability to manage change saturation. Skills and competencies are just one of many elements that support change execution. Beyond this, effective organisations also need the right systems and processes in place to support ongoing change execution.
Such systems and processes include:
Learning operations processes, whereby there is a clear set of steps for the business to communicate, undertake, and embed training/learning activities. These include the right channels to organise people’s capacity to attend sessions, communication channels covering the nature of scheduled training sessions, and monitoring of the effectiveness of those sessions
Communication processes include having a range of effective channels that promote dynamic communication between employees and managers, as well as across different business units and teams.
Data and reporting mechanisms to visualise change impacts, measure change saturation levels, and report on change delivery tracking and change adoption progress
Governance established to examine change indicators, including change saturation and identified risks, and to make critical decisions on sequencing, prioritisation, and capacity mitigation
Skills and competencies are one element, but without processes and systems in place to execute the change and track and report on change saturation, limited business outcomes will be achieved.
Outlined in this article are just four of the common assumptions about change saturation that are misleading; there are many others. The key for change practitioners is not to rely blindly on ‘methodologies’ or concepts, but to focus on data and facts to make decisions. Managing change saturation needs to be data-driven; otherwise, stakeholders (senior managers in particular) may easily dismiss any change saturation claims. Armed with the right data and insights, the change practitioner has the power to influence a range of change decisions to achieve an optimal outcome for the organisation.
Measuring the change adoption of stakeholders is one of the most important parts of the work of change practitioners. It is the ultimate ‘proof’ of whether the change interventions have been successful or not in achieving the initiative objectives. It is also an important way in which the progress of change management can clearly be shown to the project team as well as to various stakeholder groups.
Measurement takes time, focus and effort; it is not a quick exercise. It requires precise measurement design, a reliable way of collecting data, and data visualisation that stakeholders can easily understand.
With the right measurements of change adoption, you can influence the direction of the initiative, create impetus amongst senior stakeholders, and steer the organisation toward a common goal to realise the change objectives. Such is the power of measuring change adoption.
The myth of the change management curve
One of the most popular graphs in change management, often referred to as the ‘change curve’, is the Kubler-Ross model. The model was designed by psychiatrist Elisabeth Kubler-Ross to describe the experience of terminally ill patients in her book ‘On Death and Dying’. For whatever reason, it has gained popularity and application in change management.
There is little evidence to back the model even in psychological research, and no known research supports it when applied to change management. On the contrary, McKinsey research suggests that in effectively managed initiatives and transformations, stakeholders do not go through this ‘valley of death’ journey at all.
The ‘S’ curve of change adoption
If the ‘change curve’ is not the correct chart to follow with regard to change adoption, then what is the right one to refer to?
The ‘S’ curve of change adoption is one that can be referenced. It is well supported by research on technology and new product adoption. It follows a typically slow start, then a significant climb in adoption, and finally a flattening at the end. An example is the speed of adoption of key technologies in U.S. households since the 1900s.
The shape of the S curve will differ across change contexts. For example, suppose you are working on a fairly minor process change, where there is no big leap from the current process to the new one. In this case, the curve would be expected to be much gentler, since the complexity of the change is significantly lower than that of adopting a complex new technology.
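The S curve is commonly modelled as a logistic function. Here is a minimal sketch of that idea; all parameter values (midpoint week, steepness) are illustrative assumptions, not figures from any particular study:

```python
import math

def adoption_rate(t, midpoint=26.0, steepness=0.3):
    """Logistic 'S' curve: fraction of the stakeholder group that has
    adopted the change by week t. midpoint is the week at which half
    the group has adopted; steepness controls how sharp the climb is."""
    return 1.0 / (1.0 + math.exp(-steepness * (t - midpoint)))

# A lower steepness flattens the climb, mirroring the point above that
# a minor process change produces a gentler curve than a complex
# technology adoption.
weeks = range(0, 53, 13)  # sample a one-year rollout quarterly
minor_change = [round(adoption_rate(t, steepness=0.1), 2) for t in weeks]
complex_change = [round(adoption_rate(t, steepness=0.4), 2) for t in weeks]
```

Plotting actual adoption data against a fitted curve like this can show whether an initiative is tracking ahead of or behind its expected trajectory.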
Going beyond what is typically measured
Most change practitioners focus on the easier and more obvious measures, such as stakeholder perceptions, change readiness, and training completion. Whilst these are of value, they measure aspects of the change process rather than adoption itself. They can be viewed as forward-looking indicators of progress toward eventual change adoption, rather than the adoption itself.
To really address head-on the topic of measuring adoption, it is critical to go beyond these initial measures toward those elements that indicate the actual change in the organisation. Depending on the type of change this could be system usage, behaviour change, following a new process or achieving cost savings targets.
Project benefit realisation
It goes without saying that to really measure change adoption the change practitioner must work closely with the project manager to understand in detail the benefits targeted, and how the prescribed benefits will be measured. The project manager could utilise a range of ways to articulate the benefits of the project. Common benefit categories include:
Business success factors such as financial targets on revenue or cost
Product integration measures such as usage rate
Market objectives such as revenue target, user base, etc.
These categories above are objectives that are easier to measure and tangible to quantify. However, there could also be less tangible targets such as:
Product or solution leadership
There are various economic methods of determining the targeted benefit objectives. These include payback time (the length of time from project initiation until the cumulative cash flow becomes positive), net present value, and internal rate of return.
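Payback time and net present value can be computed directly from a project's forecast cash flows. Here is a minimal sketch; the cash flow figures and the 10% discount rate are purely illustrative assumptions:

```python
def payback_period(cash_flows):
    """Number of periods until cumulative cash flow turns positive.
    cash_flows[0] is the (negative) initial investment. Returns None
    if the project never pays back within the forecast horizon."""
    cumulative = 0.0
    for period, cf in enumerate(cash_flows):
        cumulative += cf
        if cumulative > 0:
            return period
    return None

def npv(rate, cash_flows):
    """Net present value: each period's cash flow discounted back
    to today at the given rate, then summed."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Illustrative project: 100k invested up front, benefits ramping up.
flows = [-100_000, 30_000, 40_000, 50_000, 60_000]
payback = payback_period(flows)     # cumulative turns positive in year 3
project_npv = npv(0.10, flows)      # positive NPV at a 10% discount rate
```

Internal rate of return is simply the discount rate at which `npv` returns zero, typically found numerically.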
The critical part for change practitioners is to understand what the benefit objectives are and how benefits will be tracked and measured, and then to interpret what steps are required to get there. These include any change management steps required to move from the current state to the future state.
Here is an example mapping of the change management steps required for different benefit targets:

Project benefits targeted: Increased customer satisfaction and improved productivity through implementing a new system.

Likely change management steps required:
Users able to operate the new system
Users able to improve customer conversations leveraging new system features
Users proactively using new system features to drive improved customer conversations
Managers coaching and providing feedback to users
Benefit tracking and communications
Customer communication about the improved system and processes
Decreased customer call waiting time

Change management measures:
% of users who passed the training test
System feature usage rate
Customer issue resolution time
User feedback on manager coaching
Monthly benefit tracking shared and discussed in team meetings
Customer satisfaction rate
Customer call volume handling capacity
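Several of these measures, such as the training pass rate and the feature usage rate, can be computed straightforwardly from raw tracking records. A minimal sketch follows; the record fields, the data, and the usage threshold are all hypothetical, for illustration only:

```python
# Hypothetical monthly tracking records; field names are illustrative
# assumptions, not taken from any specific system.
users = [
    {"name": "A", "passed_training": True,  "feature_logins": 14},
    {"name": "B", "passed_training": True,  "feature_logins": 0},
    {"name": "C", "passed_training": False, "feature_logins": 3},
    {"name": "D", "passed_training": True,  "feature_logins": 9},
]

def pct(numerator, denominator):
    """Percentage, rounded to one decimal place; 0.0 for an empty group."""
    return round(100.0 * numerator / denominator, 1) if denominator else 0.0

training_pass_rate = pct(sum(u["passed_training"] for u in users), len(users))

# Count a user as 'using' the new feature if they logged in at least
# once this month (the threshold here is an assumption).
usage_rate = pct(sum(u["feature_logins"] > 0 for u in users), len(users))
```

Tracking these percentages month over month gives the adoption trend that feeds benefit tracking discussions.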
Measuring behavioural change
For most change initiatives there is an element of behaviour change, especially for more complex changes. Whether the change involves implementing a system, changing a process or launching a new product, behaviour change is involved. In a system implementation context, the behaviour may be the different ways users operate the system in performing their roles. For a process change, there may be operating steps that differ from the previous ones. The focus on behaviour change aims to zoom in on the core behaviours that must change for the initiative outcome to be achieved.
How do we identify these behaviours in a meaningful way so that they can be identified, described, modelled, and measured?
The following are tips for identifying the right behaviours to measure:
Behaviours should be observable. They are not thoughts or attitudes; behaviours need to be observable by others
Aim to target the right level of behaviour. Behaviours should not be so minute that they are tedious to measure (e.g. clicking a button in a system), nor so broad that they are hard to measure overall (e.g. proactively understanding customer concerns)
Behaviours are usually exhibited after some kind of ‘trigger’. For example, when a customer agent hears certain words from the customer, such as ‘not happy’ or ‘would like to report’, they may need to treat the call as a customer complaint by following the new customer complaint process. Identifying these triggers will help you measure the behaviours
Achieve a balance by not measuring too many behaviours since this will create additional work for the project team. However, ensure a sufficient number of behaviours are measured to assess benefit realisation
Behaviour change can seem all-encompassing and elusive, but it need not be. Rather than focusing on a wide set of behaviours that may take significant time to sift through, focusing on ‘micro-behaviours’ can be more practical and measurable. Micro-behaviours are simply small, observable, stepping-stone behaviours, as opposed to a cluster of behaviours.
For example, a typical behaviour change goal for customer service reps may be to improve customer experience or establish customer rapport. Breaking these broad behaviours down into small, specific behaviours makes them much easier to target and achieve results against.
For example, micro-behaviours to improve customer rapport may include:
Use the customer’s name, “Is it OK if I call you Michelle?”
Build initial rapport, “How has your day been?”
Reflect on the customer’s feeling, “I’m hearing that it must have been frustrating”
Agree on next steps, “Would it help if I escalate this issue for you?”
Each of these micro-behaviours may be measured using call-listening ratings and may either be a yes/no or a rating based assessment.
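A yes/no call-listening checklist like this can be aggregated into both a per-call score and a per-behaviour adoption rate. Here is a minimal sketch; the behaviour field names and sample ratings are hypothetical:

```python
# Hypothetical checklist keys for the micro-behaviours described above;
# a reviewer rates each as True (observed) or False per call.
MICRO_BEHAVIOURS = [
    "used_customer_name",
    "built_initial_rapport",
    "reflected_customer_feeling",
    "agreed_next_steps",
]

def call_score(ratings):
    """Fraction of micro-behaviours observed on a single call."""
    return sum(ratings.get(b, False) for b in MICRO_BEHAVIOURS) / len(MICRO_BEHAVIOURS)

def behaviour_adoption(calls, behaviour):
    """Share of calls on which one micro-behaviour was observed;
    tracked over time, this is the per-behaviour adoption measure."""
    return sum(c.get(behaviour, False) for c in calls) / len(calls)

# Two illustrative rated calls.
calls = [
    {"used_customer_name": True, "built_initial_rapport": True,
     "reflected_customer_feeling": False, "agreed_next_steps": True},
    {"used_customer_name": True, "built_initial_rapport": False,
     "reflected_customer_feeling": False, "agreed_next_steps": False},
]
```

The per-behaviour view is often the more actionable one, since it shows which specific micro-behaviour needs coaching attention.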
Establishing reporting process and routines
After designing the right measurements for change adoption, the next step is to design the right reporting process. Key considerations in planning and executing the reporting process include:
Ease of reporting is critical, and you should aim to automate where possible to reduce the overhead burden and manual work involved. Whenever feasible leverage automation tools to move fast and not be bogged down by tedious work
Build expectations on contribution to measurement. Rally stakeholder support so that the data contributions required to measure and track change adoption are clear
Design eye-catching and easy to understand dashboard of change adoption metrics.
Design reinforcing mechanisms. If your measurement requires people’s input, ensure you design the right reinforcing mechanisms to obtain the data you are seeking. Human nature is such that people will often err on the side of not contributing to a survey unless there are explicit consequences for not filling it out.
Recipients of change adoption measurement. Think about the distribution list of those who should receive the measurement tracking. This includes not just those who are in charge of realising the benefits (i.e. business leaders), but also those who contribute to the adoption process, e.g. middle or first-line managers.
An important part of measuring change is being able to design change management surveys that measure what they set out to measure. Designing and rolling out such surveys is a core part of a change practitioner’s role. However, little attention is often paid to how valid and well designed a survey is. A poorly designed survey can be meaningless, or worse, misleading. Without the right understanding from survey results, a project can easily go down the wrong path.
Why do change management surveys need to be valid?
A survey’s validity is the extent to which it measures what it is supposed to measure; it is an assessment of accuracy. This applies whether we are talking about a change readiness survey, a change adoption survey, an employee sentiment pulse survey, or a stakeholder opinion survey.
What are the different ways to ensure that a change management survey can maximise its validity?
Face validity. The first way in which a survey’s validity can be assessed is its face validity. Good face validity means that, in the view of your targeted respondents, the questions measure what they are intended to measure. If your survey measures stakeholder readiness, then it is about those stakeholders agreeing that your survey questions measure what they are intended to measure.
Predictive validity. If you want your survey questions to be scientifically shown to have high validity, you may want to find and leverage survey questionnaires that have gone through statistical validation. Predictive validity means that scores on your survey predict, or correlate with, an established outcome or a statistically validated measure. This may not be the most practical option for most change management professionals.
Construct validity. This is about to what extent your change survey measures the underlying attitudes and behaviours it is intended to measure. Again, this may require statistical analysis to ensure there is construct validity.
At the most basic level, it is recommended that face validity is tested prior to finalising the survey design.
How do we do this? A simple way to test face validity is to run your survey past a select number of ‘friendly’ respondents (potentially your change champions), ask them to complete it, and then meet to review how they interpreted the meaning of the survey questions.
Alternatively, you can pilot the survey with a smaller group of respondents before rolling it out to a larger group. Either way, the aim is to confirm that respondents interpret your survey questions with the intent you designed them to have.
Techniques to increase survey validity
1. Clarity of question-wording.
This is the most important part of designing an effective and valid survey. The question wording should be such that any person in your target audience reads and interprets the question in exactly the same way.
Use simple words that anyone can understand, and avoid jargon where possible unless the term is commonly used by all of your target respondents
Use short questions where possible to avoid any interpretation complexities, and also to avoid the typical short attention spans of respondents. This is also particularly important if your respondents will be completing the survey on mobile phones
Avoid using double-negatives, such as “If the project sponsor can’t improve how she engages with the team, what should she avoid doing?”
2. Avoiding question biases
A common mistake in writing survey questions is wording them in a way that is biased toward one particular opinion. Such questions assume the respondents already hold a particular point of view, and may not allow them to select the answers they would like to select.
Some examples of potentially biased survey questions (if these are not follow-on questions from previous questions):
Is the information you received helping you to communicate to your team members?
How do you adequately support the objectives of the project?
From what communication mediums do your employees give you feedback about the project?
3. Providing all available answer options
Writing an effective survey question means thinking through all the options a respondent may come up with, then incorporating those options into the answer design. Avoid answer options that are overly simple and may not meet respondents’ needs in terms of choice.
4. Ensure your chosen response options are appropriate for the question.
Choosing appropriate response options may not always be straightforward. There are often several considerations, including:
What is the easiest response format for the respondents?
What is the fastest way for respondents to answer, and therefore the best way to increase the response rate?
Does the response format make sense for every question in the survey?
For example, if you choose a Likert scale, choosing the number of points in the Likert scale to use is critical.
If you use a 10-point Likert scale, is this going to make it too complicated for the respondent to interpret between 7 and 8 for example?
If you use a 5-point Likert scale, will respondents likely resort to the middle, i.e. 3 out of 5, out of laziness or not wanting to be too controversial? Is it better to use a 6-point scale and force the user not to sit in the middle of the fence with their responses?
If you use a 3-point Likert scale (for example, High/Medium/Low), will it provide sufficient granularity? If too many items are rated Medium, it becomes hard to compare answers across items.
5. If in doubt leave it out
There is a tendency to cram as many questions as possible into a survey, because change practitioners would like to learn as much as possible from respondents. However, this typically leads to poor outcomes, including poor completion rates. So, when in doubt, leave the question out and focus only on those questions that are absolutely critical to measuring what you aim to measure.
6. Open-ended vs closed-ended questions
To increase the response rate, it is common practice to use closed-ended questions where the user selects from a prescribed set of answers. This is particularly the case when you are conducting quick pulse surveys to sense-check the sentiments of key stakeholder groups. Whilst this is great to ensure a quick, and painless survey experience for users, relying purely on closed-ended questions may not always give us what we need.
It is always good practice to have at least one open-ended question to allow the respondent to provide other feedback outside of the answer options that are predetermined. This gives your stakeholders the opportunity to provide qualitative feedback in ways you may not have thought of.
Writing an effective and valid change management survey is a critical skill that is often glossed over. Being aware of the above six points will go a long way toward ensuring that your survey measures what it is intended to measure. As a result, the survey results will be more robust to potential criticism, more valid, and more trusted by your stakeholders.
Change saturation is one of the popular search items when it comes to measuring change management. How do we effectively measure change saturation without resorting to personal opinions? And how might we formulate effective recommendations that are logical and that stakeholders can action immediately?
Use this recipe to measure change saturation using The Change Compass.