We embed systems for continual review and improvement to ensure we learn from experience.
1.1 Considering evaluation at all stages
Principle 1.1: We integrate evaluative practices into our end-to-end processes to build a culture of continuous improvement.
Embedding evaluation and evaluative thinking into our processes and systems is key to building a culture of learning and improvement. Evaluative thinking is the continuous practice of taking an inquisitive, critical, unbiased and reflective approach to evidence and decision-making. Evaluation is not a standalone or one-off process. It is an important element throughout the lifecycle of policies, programs and other activities.
This section outlines how considering evaluation at different stages of the policy cycle adds value in different ways.
The following stages are numbered for clarity. However, these stages are part of a continuous, ongoing cycle that will repeat regularly. Sometimes the cycle may happen at a small scale, such as when introducing a slight change to existing processes, monitoring performance and checking that things are working as expected. At other times it will be a larger cycle, such as when we develop, implement and formally evaluate a new policy.
The role of evaluation at each stage
1. Understanding the context
Why is intervention needed?
Past evaluation activities and lessons learned can provide context and inform appropriate solutions.
2. Designing options
What are the options?
Clarify the what, why, how and when of the proposed intervention to develop a strong theory of change and determine appropriate evaluation arrangements.
3. Implementation
How will success be measured?
Identify the metrics that will show whether the intended outcomes are being achieved, and plan how and when this data will be collected.
4. Evaluation and evolution
Is it on track?
Assess progress and performance regularly using available data through ongoing monitoring and periodic program health checks.
Did it work?
Where appropriate, formal evaluations help us learn what works, what does not, and how things can be improved.
How can it be improved?
Use learnings from evaluation activities to adjust and improve.
Stage 1: Understanding the context
When considering a change or intervention, it is important to understand the problem being addressed and its broader context. With that understanding, we can consider what, if any, intervention is needed.
Evaluation can provide a valuable source of insight and evidence at this stage. The lessons learned from past evaluation activities provide important insights about what works and what does not in different contexts. This evidence can be used to inform policy impact analysis. It also provides information for decision makers to support the need for an intervention and inform the appropriate solution.
Stage 2: Designing options
Past evaluations and lessons learned can provide key insights when designing options for new policies and programs. They give policy makers evidence of what has and has not worked, and what the department has learned from past experience.
As part of the design stage, evaluative thinking supports the development of robust policy frameworks and impact analysis. It does this by considering how a policy or program is expected to lead to change (known as the 'theory of change').
A theory of change is a description of how and why a desired change is expected to happen in a particular context. For programs and policies, the theory of change helps explain how and why a government activity is expected to address an identified need.
Articulating the theory of change can help clarify what effects the proposed policy or program is intended to have, how they are expected to happen and when.
This includes:
- what is being invested (inputs)
- who we need to involve (participants)
- what activities are planned (outputs)
- how these activities are expected to lead to change over time (outcomes).
Determining how to measure the success of these outcomes can also help clarify design options.
Like the initiatives they assess, evaluation activities take different forms and scales. It is therefore important to consider the most appropriate approach, and whether resources should be dedicated to formal evaluation over the life of the initiative. Pillar 2 below provides more guidance on determining the type and scope of evaluation.
Undertaking this forward thinking and setting up the right evaluation approach at this stage will strengthen the following stages.
Stage 3: Implementation
Once the evaluation approach has been considered during the design of options, developing a monitoring and evaluation framework is a key part of implementation. The framework ensures everything is in place to support successful monitoring and evaluation.
The framework includes:
- a program logic illustrating the policy or program’s theory of change and intended outcomes
- a data collection plan identifying how we will measure progress against expected outcomes, including what data we will collect and when
- a plan for future evaluation outlining planned evaluation activities, including the timing of formal evaluations (where appropriate) or program health checks.
A monitoring and evaluation framework may also include tailored questions for collecting responses from grantees, users or participants to show progress towards program outcomes.
The appropriate level of detail for each component of the framework will vary depending on the policy or program and its context. All policies and programs must be monitored, regardless of whether a formal evaluation is planned.
Stage 4: Evaluation and evolution
Once the policy or program has been implemented, its performance is monitored over its lifecycle by collecting data in line with the monitoring and evaluation framework.
The framework is intended to be a living document, kept up to date with any changes to the program or its planned activities.
Where the plan for future evaluation includes a formal evaluation, the evaluation will be subject to evaluation-specific governance arrangements, including being listed in the forward work plan and annual schedule of evaluations. Once the evaluation is completed, the Evaluation Unit will add the report to the internal library of completed evaluation reports. It will also monitor the implementation of recommendations through regular reporting to the Performance and Risk Committee (refer to Section 2.3 Evaluation governance).
Where the plan for future evaluation is limited to monitoring with no formal evaluation, the policy area should schedule and conduct periodic program health checks. These health checks involve reviewing the data collected through monitoring to assess the program’s performance. They are not subject to evaluation-specific governance arrangements but should be appropriately managed by the policy area within existing reporting lines. This includes reporting to the Performance and Risk Committee where appropriate.
The information gathered through evaluative practices:
- lets us know whether a policy or program is on track
- helps improve the policy or program
- maximises the value Australians receive from the investment of public funds
- creates an evidence base to inform the government’s future policy and program decisions.
1.2 Learning from experience
Principle 1.2: We learn from experience by deliberately seeking out, acting on and sharing evaluation evidence.
Completing an evaluation or program health check is an important step, but not the last step. It is important to translate the lessons learned from evaluation into action. This may include:
- shaping the design of new activities
- informing decisions about whether to continue existing activities
- making incremental changes in response to lessons learned
- sharing insights and learnings.
While evaluation activities may focus on a specific initiative, the insights gained have broader relevance. To maximise the value of evaluation activities, lessons learned need to be discussed openly and shared widely when appropriate.
A library of completed reports from formal evaluations is maintained by the Evaluation Unit and is available to all staff in the department. Under this strategy, a register of evaluation recommendations and lessons learned will also be developed for internal use. Governance arrangements to support this strategy will include greater reporting and tracking of outcomes from evaluations (see Principle 2.3 for more information).