How to write a change management survey that is valid

An important part of measuring meaningful change is designing change management surveys that actually measure what they set out to measure, such as the level of understanding of the change. Designing and rolling out change management surveys is a core part of a change practitioner’s role. However, little attention is often paid to how valid and how well designed a survey is. A poorly designed survey can be meaningless, or worse, misleading. Without the right understanding from survey results, a project can easily go down the wrong path. A well-designed survey, by contrast, is a powerful tool for ensuring a smooth transition for the change initiative.

Why do change management surveys need to be valid?

A survey’s validity is the extent to which it measures what it is supposed to measure. Validity is an assessment of its accuracy. This applies whether we are talking about a change readiness survey, a change adoption survey, employee engagement, employee sentiment pulse survey, or a stakeholder opinion survey.

What are the different ways to ensure that an organizational change management survey maximises its validity?

Face validity. The first way in which a survey’s validity can be assessed is its face validity. A survey has good face validity when, in the view of your targeted respondents, its questions appear to measure what they are intended to measure. If your survey is measuring stakeholder readiness, good face validity means those stakeholders agree that your questions measure what they are intended to measure.

Predictive validity. If you really want to ensure that your survey questions are scientifically shown to have high validity, you may want to search for and leverage survey questionnaires that have gone through statistical validation. Predictive validity means that scores on your survey correlate with the outcomes they are meant to predict, such as later adoption behaviour. This may not be practical for most change management professionals.

Construct validity. This is about to what extent your change survey measures the underlying attitudes and behaviours it is intended to measure. Again, this may require statistical analysis to ensure there is construct validity.

At the most basic level, it is recommended that face validity is tested prior to finalising the survey design.

How do we do this? A simple way to test face validity is to run your survey by a select number of ‘friendly’ respondents (potentially your change champions) and ask them to complete it, followed by a meeting to review how they interpreted the meaning of the survey questions.

Alternatively, you can run the survey with a smaller pilot group of respondents before rolling it out to a larger group. In either case, the aim is to confirm that your respondents interpret the survey questions with the same intent you had in writing them.

Techniques to increase survey validity

1. Clarity of question-wording.

This is the most important part of designing an effective and valid survey, and a critical part of the change management strategy. Questions should be worded so that any person in your target audience reads and interprets them in exactly the same way.

  1. Use simple words that anyone can understand, and avoid jargon where possible unless the term is commonly used by all of your target respondents
  2. Use short questions where possible to avoid any interpretation complexities, and also to avoid the typical short attention spans of respondents. This is also particularly important if your respondents will be completing the survey on mobile phones
  3. Avoid using double-negatives, such as “If the project sponsor can’t improve how she engages with the team, what should she avoid doing?”

2. Avoiding question biases

A common mistake in writing survey questions is to word them in a way that is biased toward one particular opinion, which can lead to biased employee feedback. Such wording assumes that respondents already hold a particular point of view, so the question may not allow them to select the answers they would actually like to select.

Some examples of potentially biased survey questions (if these are not follow-on questions from previous questions):

  1. Is the information you received helping you to communicate effectively to your team members through appropriate communication channels?
  2. How do you adequately support the objectives of the project?
  3. From what communication mediums do your employees give you feedback about the project?

3. Providing all available answer options

Writing an effective employee survey question means thinking through all the answer options a respondent may come up with regarding the upcoming change, and then incorporating these options into the answer design. Avoid answer sets that are overly simplistic and do not give respondents the range of choices they need.

4. Ensure your chosen response options are appropriate for the question.

Choosing appropriate response options may not always be straightforward. There are often several considerations, including:

  1. What is the easiest response format for the respondents?
  2. What is the fastest way for respondents to answer, and therefore increase my response rate?
  3. Does the response format make sense for every question in the survey?

For example, if you choose a Likert scale, choosing the number of points in the Likert scale to use is critical.

  1. If you use a 10-point Likert scale, is this going to make it too complicated for the respondent to interpret between 7 and 8 for example?
  2. If you use a 5-point Likert scale, will respondents likely resort to the middle, i.e. 3 out of 5, out of laziness or a desire not to appear controversial? Is it better to use a 6-point scale and force respondents off the fence?
  3. If you use a 3-point Likert scale, for example High/Medium/Low, will it provide sufficient granularity? If too many items are rated Medium, it becomes hard for you to compare answers across items.

5. If in doubt leave it out

There is a tendency to cram as many questions in the survey as possible because change practitioners would like to find out as much as possible from the respondents. However, this typically leads to poor outcomes including poor completion rates. So, when in doubt leave the question out and only focus on those questions that are absolutely critical to measure what you are aiming to measure.

6. Open-ended vs closed-ended questions

To increase the response rate of change readiness survey questions, it is common practice to use closed-ended questions where the user selects from a prescribed set of answers. This is particularly the case when you are conducting quick pulse surveys to sense-check the sentiments of key stakeholder groups. Whilst this is great for ensuring a quick and painless survey experience, relying purely on closed-ended questions may not always give us what we need.

It is always good practice to have at least one open-ended question to allow the respondent to provide feedback outside of the predetermined answer options. This gives your stakeholders the opportunity to provide qualitative feedback in ways you may not have thought of. This may include signs of employee resistance, opinions about the work environment or new ways of working, or requests for additional support.

To read more about how to measure change visit our Knowledge page under Change Analytics & Reporting.

Writing an effective and valid change management survey for a specific change initiative is a critical skill that is often glossed over. Being aware of the above six points will go a long way toward ensuring that your survey addresses areas of concern in a way that aligns with your change management process and strategy, and measures what it is intended to measure. As a result, the survey results will be more resistant to criticism and will provide information that your stakeholders can trust.

The Critical Gap in Customer Experience Management Most Companies Miss

Customer experience management dominates strategic conversations across banking, utilities, telecoms, and retail. Companies invest heavily in CRM systems, digital channels, and customer journey mapping. Yet a fundamental gap persists: the lack of integrated visibility into how company-wide change initiatives shape customer perceptions.

This guide reveals why traditional approaches fall short, quantifies the risks of disconnected change efforts, and provides a practical roadmap for creating a true single view of the customer through change impact integration.

What Prevents Companies from Achieving a Single View of the Customer?

Recent research confirms persistent challenges in customer experience management. A 2024 Forrester study found 48% of enterprises still struggle with unified customer data across channels and departments. Similarly, Gartner reports 52% cite building cohesive new experiences as their top barrier.

The core issue lies beyond siloed CRM data. Companies lack visibility into the cumulative impact of concurrent initiatives—product changes, pricing adjustments, IT rollouts, regulatory communications—that collectively define customer reality.

Why Traditional CRM Approaches Fall Short

CRM systems excel at marketing automation, sales tracking, and contact centre efficiency. However, they capture only transactional interactions, missing the broader context of organisational change.

Traditional CRM Focus Limitations

  • Marketing campaign data
  • Sales conversion metrics
  • Service interaction logs
  • Customer segmentation profiles

These systems overlook how product updates, pricing shifts, or compliance communications alter customer perceptions between tracked touchpoints.

The Missing Piece: Change Impact Tracking

The critical gap involves mapping all customer-impacting initiatives into a unified view. This includes marketing campaigns plus operational changes affecting service delivery.

Change Initiatives Shaping Customer Experience

  • Product lifecycle changes (end-of-life, new features)
  • Pricing and billing adjustments
  • IT system rollouts impacting service access
  • Regulatory compliance communications
  • Employee training initiatives influencing service quality
  • Partner or supplier changes affecting delivery

Without this integrated picture, companies cannot anticipate cumulative customer confusion or frustration.

Traditional CRM vs Change Impact Data vs Integrated CX View

Data Source | Focus | Customer Insight | Strategic Value
CRM Systems | Marketing, sales, service transactions | Individual touchpoints | Tactical optimisation
Change Impact Data | Company initiatives affecting customers | Planned experience shifts | Risk anticipation
Integrated View | Combined datasets | Holistic customer reality | Strategic CX orchestration

This table illustrates why isolated CRM investments yield incomplete results.

Risks of Disconnected Change Initiatives

Without integrated change visibility, companies create conflicting customer signals that erode trust and satisfaction. Real-world examples illustrate the consequences.

Common Customer Confusion Scenarios

  • One department ends a credit card product while sales teams push aggressive uptake targets
  • IT rollout disrupts online banking while marketing promotes digital-first convenience
  • Pricing changes coincide with loyalty program promotions, confusing value messaging
  • Regulatory communications clash with personalised marketing campaigns

These disconnects compound across multiple initiatives, overwhelming customers.

Financial Impact of Poor CX Coordination

The stakes are substantial. Recent studies quantify the cost:

  • Forrester 2024: Companies lose $1,200+ per negative customer experience
  • Gartner 2025: 42% of telecom households report negative experiences from conflicting communications
  • McKinsey: Utilities face 28% churn risk from uncoordinated service disruptions

Cumulative impact across customer bases represents millions in lost revenue annually.

[Image: Customer experience of change impacts]

The Solution: Integrated Customer Change Impact Management

Create a unified view combining CRM data with change impact analytics for holistic CX orchestration.

Core Components of Integrated CX Visibility

  1. Centralised Change Repository: Track all customer-impacting initiatives across departments
  2. Customer Segmentation Mapping: Align change impacts with specific personas and journeys
  3. Timing & Volume Analysis: Visualise change saturation by customer segment over time
  4. Impact Correlation Engine: Link initiatives to expected CX outcomes and risks
  5. Strategy Alignment Dashboard: Compare planned changes against customer experience goals
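As a rough illustration of what one entry in a centralised change repository might hold, here is a hedged Python sketch; the field names and example values are assumptions for illustration, not a prescribed schema:

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical record shape for a centralised change repository entry.
@dataclass
class ChangeInitiative:
    name: str
    owner_department: str
    segments: list                 # customer segments affected
    start: date
    end: date
    impact_level: str = "Medium"   # Low / Medium / High
    cx_risks: list = field(default_factory=list)

entry = ChangeInitiative(
    name="Billing platform migration",     # invented example initiative
    owner_department="IT",
    segments=["Mass Market", "Digital Native"],
    start=date(2025, 6, 1),
    end=date(2025, 8, 31),
    impact_level="High",
    cx_risks=["temporary loss of e-billing access"],
)
print(entry.name, entry.impact_level)
```

Capturing every initiative in a common shape like this is what makes the later segmentation mapping and saturation analysis possible.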

5 Strategic Benefits

  • Anticipate cumulative customer confusion before rollout
  • Optimise change sequencing to minimise disruption peaks
  • Align departmental initiatives with unified CX strategy
  • Quantify ROI from coordinated vs siloed change efforts
  • Enable proactive service recovery planning

Customer Change Impact Matrix Example

Customer Segment | Product Change | Pricing Shift | IT Rollout | Regulatory Comm. | Total Impact Score
Premium Banking | Medium | High | Low | Medium | High
Mass Market | Low | High | High | Low | High
Digital Native | High | Low | High | Low | High

This matrix reveals saturation risks by segment.
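A matrix like this can be totalled programmatically. The Python sketch below is illustrative only; the numeric weights and the High/Medium/Low banding thresholds are assumptions you would calibrate to your own portfolio:

```python
# Hypothetical scoring: qualitative ratings mapped to numbers, then totalled
# per segment. Weights and banding thresholds are illustrative assumptions.
LEVELS = {"Low": 1, "Medium": 2, "High": 3}

matrix = {
    "Premium Banking": ["Medium", "High", "Low", "Medium"],
    "Mass Market":     ["Low", "High", "High", "Low"],
    "Digital Native":  ["High", "Low", "High", "Low"],
}

def total_impact(ratings):
    """Sum the numeric impact of concurrent initiatives and band the result."""
    score = sum(LEVELS[r] for r in ratings)
    band = "High" if score >= 7 else "Medium" if score >= 5 else "Low"
    return score, band

for segment, ratings in matrix.items():
    print(segment, total_impact(ratings))  # all three segments band as High (score 8)
```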

Implementation Roadmap for Integrated CX Change Management

Phase 1: Foundation (0-3 Months)

  • Inventory all customer-impacting initiatives across departments
  • Map initiatives to customer segments and journey touchpoints
  • Establish cross-functional CX governance council
  • Build baseline change impact repository

Phase 2: Integration (3-6 Months)

  • Connect change data with existing CRM/customer systems
  • Deploy change saturation dashboards by segment
  • Implement automated conflict detection alerts
  • Launch pilot optimisation for high-risk periods
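Automated conflict detection can start very simply: flag any two initiatives that hit the same customer segment in overlapping time windows. A minimal Python sketch, with invented initiative names and dates:

```python
from datetime import date

# Hypothetical initiative records: (name, segment, start, end)
initiatives = [
    ("Pricing update", "Mass Market", date(2025, 3, 1), date(2025, 3, 31)),
    ("Online banking migration", "Mass Market", date(2025, 3, 15), date(2025, 4, 15)),
    ("Loyalty promo", "Premium Banking", date(2025, 3, 1), date(2025, 3, 10)),
]

def detect_conflicts(items):
    """Flag pairs of initiatives hitting the same segment in overlapping windows."""
    conflicts = []
    for i, (n1, s1, a1, b1) in enumerate(items):
        for n2, s2, a2, b2 in items[i + 1:]:
            if s1 == s2 and a1 <= b2 and a2 <= b1:   # same segment, dates overlap
                conflicts.append((n1, n2, s1))
    return conflicts

print(detect_conflicts(initiatives))
# [('Pricing update', 'Online banking migration', 'Mass Market')]
```

A real implementation would read from the change repository and map segments via the customer segmentation model, but the pairwise overlap check is the core of the alert logic.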

Phase 3: Optimisation (6-12 Months)

  • Embed CX alignment reviews in initiative approval processes
  • Scale predictive impact modelling across portfolio
  • Establish continuous improvement feedback loops
  • Benchmark against industry CX leaders

Governance and Success Factors

Essential Governance Elements

  • Executive sponsorship with direct profit/loss accountability
  • Cross-departmental representation in change review forums
  • Standardised change impact assessment templates
  • Monthly portfolio saturation reporting to leadership

Critical Success Metrics

  • Reduction in customer confusion complaints (25% target)
  • Improved Net Promoter Score during change periods
  • 30% faster issue resolution through proactive planning
  • Higher departmental collaboration scores

Frequently Asked Questions (FAQ)

What is the biggest gap in customer experience management?
Lack of integrated visibility into how company-wide change initiatives collectively shape customer perceptions and experiences.

Why do CRM systems alone fail to deliver unified CX?
CRM captures transactions but misses operational changes like product updates, pricing shifts, and IT rollouts that define customer reality.

How much do poor CX experiences cost companies?
Recent studies show $1,200+ lost per negative experience, with millions annually across customer bases in banking and utilities.

What does integrated CX change management look like?
Centralised change repositories, customer segmentation mapping, saturation dashboards, and strategy alignment analytics working together.

How do you identify customer change saturation risks?
Use impact matrices showing concurrent initiatives by segment, highlighting high-risk periods needing sequencing adjustments.

What is the first step toward CX change integration?
Conduct an inventory of all customer-impacting initiatives across departments to establish baseline visibility.

5 things Eames taught me about agile project delivery

Ray and Charles Eames, legendary mid-century designers, developed creative processes remarkably aligned with modern agile methodologies. Their approach emphasised iteration, resource respect, and systems thinking, offering valuable lessons for today’s project teams facing complex delivery challenges.

This guide explores five key Eames principles and their direct application to agile project delivery. Change practitioners and project leaders gain practical insights to enhance iteration, stakeholder engagement, and systemic success.

What Agile Principles Did Eames Champion?

The Eames duo’s design philosophy prefigured agile concepts by decades. Their methods focused on practical experimentation, collective wisdom, and holistic systems. These are core tenets of contemporary agile delivery.

These principles translate directly to project environments, improving outcomes across technology rollouts, process changes, and organisational transformations.

1. Not Reinventing the Wheel: Leverage Collective Experience

Eames avoided starting from scratch, instead building on proven materials and techniques. Agile teams benefit similarly by tapping organisational knowledge rather than isolated innovation.

Practical Applications in Agile Delivery

  • Previous rollout lessons: Review past implementations of similar products or services to anticipate adoption challenges and success factors.
  • Stakeholder group insights: Consult colleagues experienced with specific audience dynamics and communication preferences.
  • Solution design patterns: Adapt approaches proven effective in prior technical or process solutions.
  • Timeline strategies: Apply scheduling techniques refined through previous deadline pressures.
  • Learning intervention successes: Reuse effective training content, delivery methods, and evaluation frameworks.

This principle prevents redundant effort while accelerating delivery through proven foundations.

2. Continuous Testing and Learning: Iterative Refinement

The Eames process featured constant prototyping and feedback, mirroring agile’s iterative cycles. Every team member, not just designers, contributes to this learning loop.

Change Management Testing Examples

  • Message validation: A/B test communications with target audiences to measure resonance and engagement.
  • Learning content trials: Pilot training modules with sample groups, gathering feedback on structure, clarity, and delivery medium.
  • Impact assessment accuracy: Validate change impact analysis directly with end users rather than proxies alone.

Digital tools enable scalable testing, ensuring solutions evolve toward optimal fit-for-purpose outcomes.
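For the message A/B testing described above, a basic two-proportion z-test is often enough to tell whether one version genuinely out-performed the other. This is a minimal standard-library sketch; the engagement counts are invented for illustration:

```python
from math import sqrt, erf

def two_proportion_z(clicks_a, n_a, clicks_b, n_b):
    """Two-sided z-test for a difference between two engagement rates."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    p_pool = (clicks_a + clicks_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Normal-approximation p-value from the error function
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return round(z, 2), round(p_value, 4)

# Hypothetical: message A opened by 120/400 recipients, message B by 90/400
print(two_proportion_z(120, 400, 90, 400))  # z ≈ 2.41, p ≈ 0.016: a real difference
```

With samples this small, anything short of p < 0.05 is better read as "no detectable difference" than as a tie.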

3. Respecting the Materials at Hand: Understand Your Resources

Eames emphasised the importance of recognising the capabilities and limitations of available resources. In agile project delivery, this means deeply understanding people, systems, processes, and stakeholder capacities.

Applying Resource Respect in Agile Projects

  • Assess team skills and system maturity before designing interventions.
  • Adapt project plans based on stakeholder readiness and local constraints.
  • Support change leads in gauging the ability levels of different groups to absorb new processes.
  • Tailor communication and training to maximise relevance and effectiveness given resource realities.

This approach builds realistic, sustainable change strategies aligned with organisational strengths and challenges.

[Image: Eames agile change management design thinking process]

4. Generating New Perspectives and Ideas Through Play and Fun

The Eameses valued play as a creative catalyst, fostering new ideas and fresh perspectives. Agile teams benefit from incorporating elements of play, fun, and experimentation into their work.

Practical Ways to Embed Play in Agile Delivery

  • Run hackathons or innovation sprints encouraging out-of-the-box thinking.
  • Design team-building activities that mix fun with purposeful reflection on project goals.
  • Use gamification techniques to increase engagement in learning and adoption tasks.
  • Foster a psychologically safe environment where experimentation and mistakes are accepted as learning opportunities.

Play enhances creativity, collaboration, and morale, supporting higher-quality outcomes.

5. Eventually Everything Connects: Embrace Systems Thinking

The Eameses stressed seeing the broader picture and understanding how various elements interlink to form a larger system. This mindset is vital in agile delivery, where dependencies and impacts extend beyond single teams or projects.

Systems Thinking in Agile Projects

  • Map connections among processes, systems, communications, training, and branding to ensure cohesive delivery.
  • Identify how multiple change initiatives intersect and impact shared stakeholders or resources.
  • Help stakeholders understand how different initiatives support broader organisational strategies.
  • Use system maps and visualisations to support planning, risk assessment, and communication.

This holistic awareness prevents siloed work and promotes integrated, effective change.

Implementation Roadmap for Eames-Inspired Agile Delivery

Applying These Principles in Modern Projects

Quick-Start Actions for Teams

  • Conduct knowledge audits to capture previous rollout experiences across the organisation.
  • Schedule regular testing cycles for communications, training, and impact assessments.
  • Map resource capabilities and limitations during project kickoff planning.
  • Plan quarterly innovation sessions incorporating play and experimentation elements.
  • Create visual system maps showing project interconnections and dependencies.

Building Organisational Support

  • Train change leads in resource assessment and systems thinking techniques.
  • Establish cross-project knowledge sharing forums.
  • Integrate Eames principles into agile training and certification programs.
  • Use success stories to demonstrate ROI from iterative testing and collective learning.

These steps embed timeless design wisdom into contemporary delivery practices.

Cultural Considerations for Success

Overcoming Common Barriers

Success requires psychological safety for experimentation and leadership support for non-traditional approaches. Traditional organisations may resist play-based innovation, requiring champions to demonstrate tangible benefits first.

Scaling Across Teams

Start with pilot projects showcasing measurable improvements in delivery speed, stakeholder satisfaction, and adoption rates. Use these case studies to expand practice organisation-wide.

Measuring Impact

Track metrics like iteration cycle time reduction, stakeholder engagement scores, knowledge reuse rates, and cross-project collaboration frequency to validate principle effectiveness.

Frequently Asked Questions (FAQ)

What makes Eames principles relevant to modern agile delivery?
Their focus on iteration, collective wisdom, resource respect, creativity through play, and systems thinking directly addresses contemporary project complexity and delivery challenges.

How do you implement continuous testing in change management?
Use A/B testing for messages, pilot training modules with user groups, and validate impact assessments directly with end users to refine approaches iteratively.

Why is systems thinking essential in agile projects?
Modern initiatives rarely operate in isolation. Understanding interconnections prevents siloed work and ensures cohesive delivery across multiple changes.

How can teams incorporate play into serious projects?
Run hackathons, gamify learning tasks, and design team activities blending fun with purposeful project reflection to boost creativity and morale.

What is the first step in applying ‘not reinventing the wheel’?
Conduct knowledge audits capturing previous rollout lessons, stakeholder insights, and proven solution patterns across the organisation.

Read our ultimate guide to agile for change managers.
