
Archive for February, 2018


The Evaluation Phase

The Evaluation Phase is to some extent a misnomer, because in OD evaluation is a continuous and constant process of assessment. Formative evaluation takes place throughout the OD cycle so that the OD practitioner can make adjustments to keep things on track and respond to new information that becomes available as a consequence of each interaction and intervention. Setting a plan and blindly sticking to it fails to take into account the system in which the OD programme is being delivered. It is therefore essential that mechanisms are put in place to evaluate progress and take stock of what has been achieved so far. Since organizations are social constructs, the very act of changing the conversation and language through dialogic interventions will result in a shift in the organizational culture. Keeping track of how the organization is evolving will mean that plans crafted following the initial investigation need to be adjusted or scrapped completely. Regular communication of progress with the sponsor, steering group and key stakeholders is essential to keep track of developments and ensure continued support for the programme.

A summative evaluation provides the process for assessing the extent to which the OD intervention has delivered the outcomes agreed during the Contracting Phase. The metrics used should have been identified and agreed prior to programme delivery. The Evaluation Phase is essential in understanding whether there has been a return on investment (ROI) from the OD intervention, whether the work delivered has been effective and whether resources have been used efficiently. Evaluation measures also ensure that the value delivered by the OD programme is captured and recognition is given for the resulting achievements. Monitoring the changes occurring within the organization galvanises trust in the OD programme and provides the energy to keep going with the tools and techniques. Continuous measurement of the outcomes provides the basis for confidence that the OD tools and techniques can deliver the desired organizational change, and that the achievement of sustainable organizational performance can be attributed to the investment in people-led change.

Just as there are positive forces for change in every organization, there are also negative or counter forces that will seek to reduce or reverse changes. The organization is a system, so although change in one part will impact another part of the organization, not changing part of an organization can prevent change from happening somewhere else. For example, during a transformation programme in a UK manufacturer a new customer relationship management (CRM) system was introduced. The commercial teams were given technical training on how to use the technology. However, there was no specific intervention to support their line managers in embedding the use of the CRM back in the workplace, nor any attempt to give meaning to the training beyond which buttons needed to be pressed. The line managers continued to demand that their teams work in the old way, resulting in a doubling of the employees’ workload. Although the employees used the system, they lacked an understanding of how the CRM system fitted into the wider organizational process. This led to a multitude of processing and data errors, resulting in tens of millions of pounds’ worth of financial adjustments every month. Nine months later an intervention using OD tackled the managerial stakeholders’ attitudes to the new system, and employees were engaged in behaviour change and sense-making about why good data was so important. However, the damage to the change programme and the financial performance of the organization in the intervening period was significant. A full diagnosis and employment of OD tools and techniques prior to the original technical training would have circumvented these issues. This occurrence highlights the importance of good diagnosis at the start of the OD programme and the attention that should be paid to ongoing evaluation. It also conveys the serious consequences of getting change wrong, and the relevance of evaluation in enabling the OD practitioner to respond to issues promptly.

In addition to monitoring outputs of the OD programme during its delivery phase, evaluation provides the information required to ensure that change is reinforced and stabilized as part of the organization’s culture. Sustainable change to the organization can take years to secure, and the desired outcomes may not be apparent in the initial weeks or months in which the more visible changes occur.

Process evaluation is also necessary to ensure that both the practitioner and client learn lessons from the way in which the OD programme has been implemented and activities have been delivered. By analysing and distilling learning from previous projects the OD practitioner is able to hone their skills and improve the effectiveness of their interventions. Key questions to be addressed during the Evaluation Phase are:

  • What required outcomes identified at the beginning of the OD process did the intervention achieve?
  • Given the resource committed to the intervention were the desired outcomes achievable?
  • What metrics were used to evaluate the OD intervention and were they suitable for measuring progress?
  • What tracking mechanisms, methods and approaches were used in reviewing the progress of the OD programme and who had responsibility and ownership of the data?
  • What responsibilities and ownership could internal change agents have in gathering evaluative data?
  • How can the change process be reinforced by the choices made regarding the evaluative process in use?
  • Following the analysis of evaluation data, how can the actions or the intervention approach deliver the outcomes required?
  • What worked? Why?
  • What did not work? Why?
  • How does this impact the design of future interventions?

    The OD Tool-Kit – What you need for the Evaluation Phase

    At each stage of the OD cycle it is possible to evaluate progress. At the end of the Diagnostic Phase it is possible to evaluate what to do next based on the new knowledge generated through the diagnosis. After each intervention it is possible to evaluate what happened, what was delivered and what progress has been achieved against outcomes. It is also important to re-evaluate what to do next based on the new knowledge generated by the intervention. At the end of the programme, outcomes can be measured and an ROI calculated. Process evaluation is required at every stage of the OD cycle. At all points it is possible to determine whether the goal is any closer to being achieved. The techniques and methods required for the Evaluation Phase of the OD cycle are:

  • Social Science research design in measurement and statistical methods
  • Knowledge of organizational metrics and outcome measures
  • Cost-Benefit Analysis
  • Stakeholder analysis
  • Evidence Based decision-making
  • Return on Investment calculations
  • Peer-to-Peer Learning methodologies
  • Report writing and presentation Skills
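As a simple illustration of the ROI and cost-benefit calculations listed above, the sketch below shows one common way of expressing them. The figures are entirely hypothetical; in a real OD evaluation the benefit figure would come from the metrics agreed with the sponsor during the Contracting Phase (for example, the value of reduced rework or fewer financial adjustments).

```python
def roi_percent(benefit: float, cost: float) -> float:
    """ROI as a percentage: net benefit relative to programme cost."""
    return (benefit - cost) / cost * 100


def benefit_cost_ratio(benefit: float, cost: float) -> float:
    """Ratio of total measured benefit to total programme cost."""
    return benefit / cost


# Hypothetical programme: £250k invested, £400k of measured benefit.
cost = 250_000
benefit = 400_000

print(f"ROI: {roi_percent(benefit, cost):.0f}%")                       # 60%
print(f"Benefit-cost ratio: {benefit_cost_ratio(benefit, cost):.2f}")  # 1.60
```

A ratio above 1.0 (or an ROI above 0%) indicates the programme returned more than it cost, though in practice attributing a benefit figure to people-led change is the harder part of the exercise.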


  • OD evaluation is a continuous and constant process of assessment
  • The evaluation phase is essential in understanding whether there has been a return on investment from the OD intervention
  • Evaluation provides the information required to ensure that change is reinforced and stabilized as part of the organization’s culture.
  • Process evaluation is also necessary to ensure lessons are learnt from the way in which the OD programme has been implemented and activities have been delivered.