The MoSCoW technique for prioritizing initiative requirements and features
The MoSCoW method of prioritization is widely used by business analysts, project managers and software developers. The focus is on identifying, and agreeing with key stakeholders, which requirements should be focused on ahead of others. This prioritization then enables a better outcome by concentrating the team's efforts on the most important aspects of the solution, given limited time and cost.
MoSCoW stands for: Must Have, Should Have, Could Have, and Won't Have.
There is significant opportunity for change practitioners to adopt this technique to better prioritize a range of change interventions. Too often, change activities are planned in response to stakeholder requests, and not as the result of a prioritized assessment of which activities provide the best outcome relative to others.
1. Must Haves:
These are the core, fundamental requirements without which the end outcome cannot be achieved. They are non-negotiable: if they are missing, the goals of the project cannot be met.
For example, when implementing a new system, users must know that the system is going to replace the previous one, and why. Users must also know how to operate the new system before the old one is switched off.
2. Should Haves:
These are features or requirements with a high priority for reaching the project outcome. They are often important features that add to the user or customer experience. However, they are not a must, and given time or cost constraints they can be deprioritized.
For example, for a new system implementation it may be highly desirable to give users access to a sandbox so they can play with the features before launch, improving their readiness. Given a large number of users, it may also make sense to conduct a large-scale awareness campaign to broadcast the arrival of the new system.
3. Could Haves:
These are nice-to-haves given sufficient resources such as time and budget. These requirements are not critical and can easily be deprioritized as needed.
For example, when implementing the new system it may be nice to run coaching workshops with users prior to go-live to offer additional learning support for those who need more help. Support materials such as cheat sheets and booklets could also help users embed the ins and outs of the new system.
4. Won’t Haves (or Would Haves):
These are potential features or requirements that may be looked at in the future if sufficient resources become available. They are lowest in the order of priority, meaning they will not make a significant impact on the outcome of the project.
For example, refresher training sessions could be offered to some users after go-live. Depending on the organization and previous experience, an 'embedment campaign' could also be scheduled to drive continued usage of the system. Given the cost involved, these are deemed lowest priority.
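The four buckets above can be sketched as a simple data structure that a change team filters when time or budget shrinks. This is a minimal illustration, not a tool from the article; the intervention names are hypothetical examples drawn from the system-replacement scenario described above.

```python
# Hypothetical change interventions bucketed by MoSCoW priority
# for a system-replacement initiative. Names are illustrative only.
moscow_plan = {
    "must": ["announce system replacement", "train users before cutover"],
    "should": ["sandbox access", "awareness campaign"],
    "could": ["coaching workshops", "cheat sheets"],
    "wont": ["refresher training", "embedment campaign"],
}

PRIORITY_ORDER = ["must", "should", "could", "wont"]

def interventions_within_capacity(plan, capacity):
    """Take interventions in MoSCoW order until capacity (a count) runs out."""
    selected = []
    for bucket in PRIORITY_ORDER:
        for item in plan[bucket]:
            if len(selected) >= capacity:
                return selected
            selected.append(item)
    return selected
```

When capacity drops, the must-haves survive first and the won't-haves fall away first, which is the whole point of the technique.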
By prioritizing change management approaches and interventions this way, we adopt a structured method for determining which activities to invest in to get the right outcome. Clarity about which interventions are core and foundational, versus desirable or nice-to-have, is important to the success of the initiative. It can also prevent disagreement about, or questioning of, the change approach further down the line, since the approach follows a structured process agreed with stakeholders.
A critical part of agile is being able to iterate and continuously improve in order to deliver an optimal solution. Rather than one large change release, an agile project breaks the change down into smaller releases. Each release goes through an iterative process: test, collect data, evaluate, and use the learning to improve the next release.
If an agile approach is appropriate we should also adopt this same approach in how we deliver change management activities. This means that we should be running a series of experiments to test, learn, document and improve on how we deliver change to the organization.
This contrasts with how most change managers develop and deliver a change approach. The standard approach is to collect information about the change, talk to key stakeholders, and then form a view, based on previous experience, of what change approach would work for this initiative. This approach is then presented to stakeholders for their blessing before execution.
Below is an example of planning and running experiments in an agile environment from Alexander Osterwalder, the founder of Strategyzer. First comes designing the experiment and shaping its hypothesis; then running the test, which involves looking at the outcome data, learning from the experiment, and making decisions based on the outcome.
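The design-test-learn-decide loop can be made concrete with a small sketch: an experiment record that fixes the hypothesis and success criterion up front, then turns observed data into a decision. The fields and threshold below are illustrative assumptions, not part of Osterwalder's material.

```python
from dataclasses import dataclass

@dataclass
class Experiment:
    hypothesis: str           # what we believe, stated at design time
    metric: str               # what we will measure
    success_threshold: float  # minimum observed value to call it a win

    def decide(self, observed: float) -> str:
        # Compare outcome data against the threshold agreed before the test,
        # so the decision is not reshaped to fit the result afterwards.
        return "adopt" if observed >= self.success_threshold else "revise"

# Hypothetical communications experiment for a change initiative.
exp = Experiment(
    hypothesis="A shorter email increases intranet visits",
    metric="click-through rate",
    success_threshold=0.10,
)
```

Writing the threshold down before the test is what makes this an experiment rather than an after-the-fact rationalization.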
Referenced from Alexander Osterwalder.
In this first part of a series on practical agile applications for change managers we focus on communications.
Communicating for change is a critical part of managing change and is also one that can easily be tested using a series of experiments.
Campaign Monitor has outlined a series of email elements that can easily be tested. These include:
Date and time
Call to action
Digital businesses also often conduct A/B testing, whereby two different versions of content are designed and delivered at the same time for the duration of the test. At the conclusion of the experiment we can look at the results to see which version performed better based on audience responses.
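One standard way to judge "which one did better" is a two-proportion z-test on the response rates of the two variants. This is a generic statistical sketch using the normal approximation, not a method prescribed by the article or by any particular email tool; the sample numbers in the usage below are invented.

```python
import math

def ab_test_z(conversions_a, n_a, conversions_b, n_b):
    """Two-proportion z-test: is variant B's response rate different from A's?

    Returns (z, p_value) under the pooled normal approximation; a small
    p-value suggests the difference is unlikely to be chance alone.
    """
    p_a = conversions_a / n_a
    p_b = conversions_b / n_b
    p_pool = (conversions_a + conversions_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical campaign: 40/500 clicks for version A, 65/500 for version B.
z, p = ab_test_z(40, 500, 65, 500)
```

With these made-up numbers the difference clears the conventional 0.05 significance bar; with small internal audiences, differences often will not, which is itself a useful learning.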
How do we measure communications experiments?
There are several ways to do this:
Readership – For intranet pages, your corporate affairs rep can usually access readership statistics
Surveys – Send surveys to the audience to ask for feedback
Focus groups – Run small focus groups for feedback
There is one area in which corporate teams can learn from digital businesses: using digital tools to measure and track communications. For example, you can send out emails promoting a new intranet page, and then check how many users actually visited the site. The results can serve as an initial experiment before launching the email to a wider audience.
There are plenty of external tools, such as ActiveCampaign or Mailchimp, with features such as:
A/B testing results
Send emails at certain times or dates
Automatic email responses
Target particular segments
View and click rates
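The view and click rates these tools report reduce to a few simple ratios, which can also be computed by hand from raw send and response counts. A minimal sketch, with invented campaign numbers in the usage:

```python
def email_metrics(sent, opened, clicked):
    """Basic campaign metrics that most email tools report."""
    open_rate = opened / sent                       # how many recipients opened
    click_through_rate = clicked / sent             # clicks relative to all sends
    click_to_open_rate = clicked / opened if opened else 0.0  # clicks among openers
    return {
        "open_rate": open_rate,
        "click_through_rate": click_through_rate,
        "click_to_open_rate": click_to_open_rate,
    }

# Hypothetical campaign: 1,000 emails sent, 250 opened, 50 clicked through.
metrics = email_metrics(1000, 250, 50)
```

Tracking these across successive sends is what turns a one-off communication into a measurable experiment.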
It is not difficult to build a drip-email series of interactions with your stakeholders based on their responses (or lack of response), as the following diagram shows.
It’s feasible to use these tools for a project where you can run a series of experiments and measure outcomes to support your change iterations.
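The branching logic behind such a drip series is simple: each stakeholder's last response determines the next message. The step names below are hypothetical illustrations, not the workflow of any specific tool.

```python
def next_drip_step(opened_last: bool, clicked_link: bool) -> str:
    """Pick the next message in a drip series from the stakeholder's response.

    Step names are illustrative placeholders for messages a change team
    might design, not features of a particular email platform.
    """
    if clicked_link:
        return "send_followup_with_details"    # engaged: go deeper
    if opened_last:
        return "send_call_to_action"           # read but didn't act: nudge
    return "resend_with_new_subject_line"      # no response: retry differently
```

Each branch is itself a spot to experiment, for example A/B testing the new subject line for non-responders.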
As someone who normally oversees the change management side of large programs and portfolios, I now find myself in the shoes of a project manager. Here's the background: I now manage a digital software-as-a-service business (The Change Compass) aimed at those driving multiple changes in their organizations. In terms of managing change deliverables and stakeholders, I was perfectly comfortable, having done this with some of the largest organizations in the world. However, I was not trained as a project manager, and particularly not in managing a digital product.
Having worked on very large digital projects over the years, I'm familiar with the phases of the project lifecycle and with lean, agile, and scaled agile methodologies. However, managing a digital project hands-on has revealed some surprising learnings, which I share below.
The customer/user doesn’t always know best
Over the years we have received a lot of customer feedback about what worked and what didn't, and we have iteratively morphed the application in line with customer wishes. However, a customer or user suggestion is not always what is best for them.
We developed some features to enable users to build different reports. However, after lots of feedback and iteration, we found that users don't actually use these features much at all. On the other hand, other features, designed from our observations of how users behave, are used very frequently. In the design phase, some users commented that they were not sure these features would work. Yet after trialing them they adopted them easily and have not made any suggestions or comments since.
It is probably similar to when the first iPhone was released. A lot of people were negative about the lack of a keyboard, claiming that the absence of tactile button presses was a sure sign it would not work. Did Apple derive the iPhone purely from customer feedback? Did customers already know what they wanted and simply tell Apple? No. Yet the screen-only phone with no or few buttons is now the standard across mobile phone design.
To read more about avoiding key gaps in managing customer experience click here.
Setting clear expectations is absolutely critical
At The Change Compass we have a very diverse and scattered team: a development team in India, a UX designer in Canada, a graphic designer in Europe, and analysts in Australia. Most of our team members are quite familiar with agile practices, including each phase of the agile life cycle, Kanban boards, iterating on releases, etc. For our Ultimate Guide to Agile for Change Managers click here.
However, one big lesson I learnt was the importance of setting clear, mutually agreed work deliverables. With such a diverse team composition comes diverse understandings of the same concept. In agile, we try not to over-document, relying instead on discussion and ongoing engagement to achieve collaboration and clarity.
However, what I learnt was that clear documentation is absolutely critical to ensure crystal-clear understanding of the scope, what each deliverable looks like, what quality processes are in place to reach the outcome, the dependencies across different pieces of work, and what each person is and is not accountable for. All of this sounds like common sense. However, it is common for agile projects to err on the side of too little documentation, leading to frustration, confusion, and missed outcomes. In our experience, documentation is critical.
Boil everything down to its most basic meaning
In digital projects there is a lot of technical jargon across backend, frontend, and mid-layer design elements. Like any technology project, there is a natural inclination to become absorbed in the question of the best technical solution. Since I did not have a technology background, I forced myself to become familiar with the various technical jargon very quickly to compensate.
However, what I found was that with such a diverse team, even within the technical team there is often misunderstanding about what a technical term means. On top of this, we have non-technical team members such as analysts, a UX designer, and a graphic designer. We have experienced many team miscommunications and frustrations as a result of too much technical language.
To ensure the whole team is clear on what we are working on, how we are approaching it, and their roles along the way, we've tried hard to 'dumb down' technical jargon into basic language as much as possible. Yes, there is a basic set of digital language necessary for delivery that all members should understand; beyond this, we keep things very simple to keep everyone on the same page. The same applies to non-technical language: for example, graphic design terms that the techies may not understand can equally cause misunderstanding.
Team dynamics are still key … yes, even in a digital project
To get on the agile bandwagon, many project practitioners invest deeply in training to become more familiar with how agile projects are conducted. While this is valuable, what I've found is that no matter the project methodology, agile or non-agile, digital or non-digital, the basics remain: effective team dynamics are key to a high-performing project team.
Most of the issues we have faced are around team communications, shared understanding, how team members work with each other, and of course cross-cultural perceptions and behaviours. Any effort we have put into discussing and resolving team dynamics and behaviours has always led to improved work performance.
Releasing something that isn't perfect is hard
As a typical corporate guy who has worked in various large multinationals, it is ingrained in me that quality assurance and risk management are key to any work outcome. Quality work ticks all the boxes, has no flaws, and exposes the company to no risks. In the typical corporate world, flaws are to be avoided; thorough research, analysis, and testing are required to ensure quality is optimal.
The agile approach challenges this notion head on. The assumption is that it is not possible to know exactly what the customer or user reaction is going to be. Therefore, it makes sense to start with a minimum viable product, and iterate continuously to improve, leveraging ongoing customer feedback. In this approach, it is expected that what is released will not be perfect and cannot be perfect. The aim is to have something that is usable first. Then, work to gradually perfect it.
Whilst in theory it makes sense, I’ve personally found it very difficult not to try and tick all boxes before releasing something to the customer. There are potentially hundreds of features or designs that could be incorporated to make the overall experience better. We all know that creating a fantastic customer experience is important. Yet, an agile approach refrains from aiming to perfect the customer experience too much, instead, relying on continuous improvement.
Most of us work in organizations where change is the constant, and where at any one time there is a myriad of changes. What happens when there are a lot of changes being worked on at once? How do effective organizations manage change in this common environment? And what plays out when an organization adopts agile within it? Here we will illustrate how one organization effectively manages lots of changes within an agile environment.
Meet Company A, a typical financial services organization. Like most financial services organizations, it is undergoing multiple changes. In change management theory, most writing is concerned with managing one change at a time. The reality for many organizations is that there are lots of changes, sometimes up to hundreds at a given point in time. This is not limited to formal 'projects' with formalized governance and resources in place to plan and deliver an initiative. From a user-centric lens, change is any initiative that alters a current way of working. This includes product changes, marketing campaigns, process changes, and role changes.
Like other organizations, Company A has several business units, each with a range of initiatives that mainly impact their own business unit, although some also impact other business units. At the same time, a company-wide governance group determines which initiatives are to be funded centrally and are of higher priority, depending on the initiative's benefit case, strategic importance, and overall business value.
Towards the end of every year, a phenomenon emerges in this company. In an agile environment, many self-driven agile teams are iterating on various changes. Many of the initiatives deliver changes that impact frontline staff who work directly with customers. Most of these initiatives aim to implement the change before the end of the calendar year, as the peak customer volume for this financial services firm tends to be between December and February. The idea is that if changes roll out before December, they land in time to capture the peak volumes and therefore provide a quick realization of benefits.
However, in true agile form, there are bound to be delays in each iteration. As each agile team works through and iterates on the change, there are often technical delays, or the team realizes that the scope requires a longer period to deliver. As a result, it is common for the eventual go-live to slip. When several initiatives aim to go live before December, this becomes a peak change-impact period for the business: too many initiatives trying to launch at the same time, causing operational performance challenges and business risks.
So how has Company A managed this situation?
Armed with quantitative data on the impacts of every program, project, and initiative, the picture was clear in terms of what this meant for the business. Frontline staff, as well as their team leaders, would require significant time away from normal duties to understand, digest, and embed the various changes. This presents real challenges in ensuring the right resourcing, given the number of hours required to undergo the changes. The business also had historical data on what happened the last time this level of change hit the business and what it meant for business performance, along with data on the overall operational environment, including customer volumes and performance trends.
A series of governance sessions was organized to zoom in on this specific scenario, consisting of project delivery managers, change management, business leaders, and other support professionals (e.g. initiative risk). The sessions focused on discussing the various business risks and how to mitigate them, including prioritizing agreed critical initiatives, understanding sequencing implications, and de-prioritizing non-critical initiatives. With each meeting there were also continuing delays for some initiatives (again, in true agile form). As these potential or actual delays were socialized and shared, stakeholders kept updating their plan of attack to ensure there was an effective way of managing the situation.
The role of the change practitioner
The change management professional's role in this context is to lead and facilitate the discussion among governance bodies and stakeholders so that there is clarity about what the data is telling us, what options we have to deal with it, and the agreed actions forward. Program managers play a role in sharing ongoing initiative progress. Business stakeholders also play a critical role in understanding, accepting, and agreeing to any actions, as well as sharing any shifting business performance priorities. However, the change professional plays the key role, as the core of the problem is about managing and coordinating the amount and pace of change at a given point in time.
There was a series of solutions proposed to manage this overall peak change period:
1) Less critical initiatives were either pushed out or stopped. This led to various re-planning exercises
2) Higher priority initiatives were clarified and agreed
3) A set of communication and engagement actions was proposed to better engage the impacted teams, helping them join the dots across the myriad of changes and what these meant
4) Careful and continual monitoring and reporting of business performance was emphasized to track the outcome of the changes
The case of Company A is a very common scenario for organizations in an agile environment. Initiatives cannot operate in silos if we adopt a user lens in managing change and its impacts. This case illustrates how critical it is to have strong data that tells a clear story of what is going to happen to the business and what it means. Data enables effective and strategic conversations. Data also provides significant power and value in putting change management in the driving seat of business management.