Pillar 2: Evidence and accountability

We take responsibility for understanding the impact of our work through best-practice evaluation.

2.1 Government requirements

Principle 2.1: Our approach to evaluation aligns with government requirements. 

Evaluating government activities helps ensure the effective and efficient use of public funding. Several legislative and government requirements apply to evaluation in the Australian Government.

The requirement to evaluate arises from our obligations under the Public Governance, Performance and Accountability Act 2013 (PGPA Act) to measure, assess and report on performance.

The Commonwealth Evaluation Policy, which sits under the PGPA Act, provides further guidance. It sets out the principles guiding evaluation activity, including that evaluations need to be:

  • fit for purpose
  • useful
  • robust, ethical and culturally appropriate
  • credible
  • transparent where appropriate.

To ensure evaluation is integrated into policy development, the Budget Process Operational Rules (BPORs) include requirements for both evaluation and policy impact analysis. The BPORs require all new policy proposals (NPPs) to show that an evaluation plan will be established in the early stages of implementation, in line with the Commonwealth Evaluation Policy. In the department, this plan is referred to as a monitoring and evaluation framework.

Taken together, the impact analysis and evaluation requirements mean that, through the Budget process, the department is expected to:

  • use evidence and the findings of previous evaluations to identify the policy problem and why the government needs to take action 
  • set out options to address the problem and their costs and benefits
  • clearly explain objectives, expected outcomes and impact of the preferred policy option over the short, medium and long term 
  • establish baseline data and robust performance monitoring early so changes can be measured and assessed over time
  • ensure credible data and evidence are collected throughout implementation to support future evaluations, reviews and performance assessments.

The government's establishment of the Australian Centre for Evaluation has renewed the focus on evaluation in the APS. The centre aims to increase the volume, quality and impact of evaluations across the APS and to build evaluation capability across Australian Government entities.

2.2 Type and scale of evaluations

Principle 2.2: We prioritise evaluation effort strategically and efficiently to maximise its utility. 

Evaluation can take many forms; there is no one-size-fits-all approach.

'It is not feasible, cost-effective or appropriate to fully evaluate all government activities and programs. The cost of evaluation must be balanced against the risk of not evaluating, noting that sometimes performance monitoring by itself will be sufficient to meet the performance reporting requirements under the Public Governance, Performance and Accountability Act 2013.'

Commonwealth Evaluation Toolkit (Australian Centre for Evaluation)

We prioritise evaluation effort and resources using a decision-making framework (see the table below). This enables us to make a strategic, risk-based assessment of appropriate evaluation activities.

To determine what evaluation activity may be required, including its scope and timing, we need to consider:

  • the characteristics of the policy or program
  • other assurance activities that have been, or will be, undertaken
  • the context in which the evaluation will be undertaken, including related policy developments.

The Evaluation Unit will work with responsible areas to assess the evaluation category for each initiative and the evaluation activity required. This is done while the NPP or program proposal is being considered, allowing appropriate resourcing to be provided.

Once the evaluation category and required activity have been determined, the Evaluation Unit will track the activity and provide regular reporting. This ensures strategic oversight of evaluation activities across the department. Evaluation activities will be included in the responsible area's business planning.

Framework for determining appropriate evaluation activity for policies and programs
Category A
Indicative characteristics of policy or program
  • Significant funding
  • High risk
  • May be flagship initiative
  • Government priority
  • Strategically significant
Evaluation activity
  • Formal evaluation/s supported by robust ongoing monitoring
  • Additional data collection
  • Robust mixed methods, including stakeholder consultation
  • May involve economic modelling and/or cost-benefit analysis
Evaluation governance
  • Undertaken independently of the policy or program area (Evaluation Unit or external consultant)
  • Reference group (SES level, may include external members)
  • Public release of final report (with Ministerial agreement)
  • Reporting on implementation of findings

Category B
Indicative characteristics of policy or program
  • Moderate funding
  • Medium risk
  • May be pilot initiative
  • Some strategic significance or level of priority for government
Evaluation activity
  • At least one formal evaluation during life of initiative, supported by ongoing monitoring
  • Additional data collection
  • May involve survey of participants and/or qualitative interviews with stakeholders
Evaluation governance
  • Undertaken by policy area (either in-house or with external partner) in collaboration with program area, with advice from Evaluation Unit
  • Reference group (EL2 or SES Band 1 level)
  • Findings may be reported

Category C
Indicative characteristics of policy or program
  • Small funding
  • Low risk and strategic significance
  • Similar initiatives have been previously evaluated or reviewed
Evaluation activity
  • Periodic program health checks, as part of ongoing monitoring
  • Desktop research
  • Analysis of internal data
  • Existing stakeholder engagement
Evaluation governance
  • Undertaken by responsible policy or program area
  • Outcomes reported through existing reporting channels

Purpose-driven evaluation

To maximise usefulness, all formal evaluations in the department need a clear purpose. Evaluations are shaped by:

  • the purpose and context of the initiative (including where it is in its lifecycle)
  • the information needs of stakeholders
  • ethical and cultural considerations. 

Broad focus areas may include:

  • Design – Did we do the right thing?
  • Implementation – Did we do things in the right way?
  • Impact – Did it work?

These focus areas inform the types of questions to ask in an evaluation and the most appropriate methodologies. Evaluations that focus on an initiative's impact must be scheduled so that the expected outcomes have had time to occur.

The Evaluation Unit will provide internal resources and guidance for staff on planning, scoping and conducting formal evaluations. This includes determining evaluation questions, appropriate methodologies and other considerations such as ethics and privacy. The resources will also include guidance on program health checks.

Using external expertise

The department may not always have the in-house expertise or capacity to undertake evaluation activities.

When needed, we may engage external partners (such as academics or research providers) to conduct or help with an evaluation. These partnerships are an opportunity to draw on specialist expertise and build our internal evaluation capability.

2.3 Evaluation governance

Principle 2.3: We integrate oversight and accountability into our evaluation practices to produce credible and robust evidence.

We all have a role to play in ensuring evaluation is part of our policy and program processes as we seek continuous improvement. The table at the end of this section provides an overview of evaluation roles and responsibilities in the department.

Department-level oversight

Appropriate governance helps ensure accountable and credible evaluations.

The Performance and Risk Committee (PaRC) is responsible for strategic oversight of evaluation. It also provides advice to the Executive Board on the current status and approach to evaluation in the department, including interactions with other assurance activities. Through these governance arrangements, PaRC is responsible for ensuring the department is undertaking monitoring and evaluation in line with this strategy and Australian Government requirements. The Evaluation Unit supports this oversight function.

PaRC endorses the annual schedule of evaluations. This includes:

  • considering any requests to reschedule or cancel planned evaluations
  • prioritising where in-house evaluation and external expertise are used across the schedule.

PaRC only considers requests to cancel planned evaluations in very limited circumstances. Once PaRC has endorsed the schedule, it will be presented to the Executive Board for approval.

The Evaluation Unit in the Analysis and Insights Division provides strategic management of evaluation activities in the department. This includes:

  • monitoring and reporting on evaluation activities across the department, including implementation of evaluation recommendations
  • managing the annual schedule of evaluations and forward program
  • determining the appropriate evaluation activity and governance arrangements for individual policies and programs
  • providing guidance and advice on evaluation design, frameworks, plans and methodologies 
  • undertaking some high-priority evaluations, as agreed with PaRC 
  • maintaining the register of evaluation recommendations and lessons learned and the library of completed evaluation reports
  • liaising with the Australian Centre for Evaluation and DISR portfolio entities.

Activity-level oversight

As well as the form of evaluations, we must consider the appropriate level of oversight for specific evaluation activities. This needs to account for context and manage issues such as the risk of bias and ethical or cultural considerations.

For formal evaluations, the Evaluation Unit will work with the responsible team to create a reference group of appropriate decision-makers. The reference group will oversee the evaluation from beginning to end. The level of the reference group is informed by the evaluation category of the policy or program being evaluated (see the framework table above).

For less formal evaluation activities, such as ongoing monitoring or program health checks, oversight through existing reporting channels may be sufficient, combined with the departmental oversight set out above.

Overview of evaluation roles and responsibilities throughout the policy lifecycle
Governance and strategic oversight

Plan for formal evaluations
  Evaluation Unit
  • Coordinate annual schedule of evaluations
  Responsible policy or program area
  • Ensure evaluation is included in divisional business planning and appropriately resourced
  Other key stakeholders
  • Performance and Risk Committee: Endorse annual schedule of evaluations
  • Executive Board: Approve the annual schedule

Maintain consistent approach to evaluation
  Evaluation Unit
  • Liaise with other enabling areas and portfolio entities
  • Engage with ACE and ensure adherence to Commonwealth requirements
  Responsible policy or program area
  • Conduct monitoring and evaluation in line with this strategy and supporting guidance
  Other key stakeholders
  • Performance and Risk Committee: Provide strategic oversight and connect with other enabling areas
  • Australian Centre for Evaluation: Oversee Commonwealth evaluation approach

Stage 1: Understanding the context

Consider evidence from evaluation activities
  Evaluation Unit
  • Provide guidance and support, including access to lessons learned and reports from previous evaluations
  Responsible policy or program area
  • Seek out and consider past evaluations and lessons learned in developing evidence-based policy proposals
  Other key stakeholders
  • Australian Centre for Evaluation: Facilitate evaluation knowledge sharing across the APS

Stage 2: Designing options

Determine appropriate evaluation approach for proposal
  Evaluation Unit
  • Provide advice on appropriate evaluation approach and costings where appropriate
  • Assess evaluation category of proposed policy
  Responsible policy or program area
  • Determine high-level outcomes and metrics for success
  • Allocate resourcing for evaluation where appropriate
  Other key stakeholders
  • Strategic Policy Division: Coordinate budget process
  • Office of Impact Analysis: Conduct threshold assessment for NPPs and provide advice on impact analysis requirements

Stage 3: Implementation

Establish monitoring and evaluation framework
  Evaluation Unit
  • Provide guidance on developing the framework
  Responsible policy or program area
  • Develop the framework and gain SES sign-off
  Other key stakeholders
  • Data and Evaluation Branch: May facilitate access to appropriate data sources for monitoring performance

Stage 4: Evaluation and evolution

Monitor performance
  Evaluation Unit
  • Provide advice as required
  Responsible policy or program area
  • Collect data and track progress in line with the framework
  • Update the framework as needed to reflect any changes
  • Report on performance
  Other key stakeholders
  • Strategic Policy Division: Facilitate entity-level performance measurement and reporting

Conduct program health check (Category C)
  Evaluation Unit
  • Provide advice and tools to support health check
  Responsible policy or program area
  • Undertake periodic health checks as scheduled in the framework, primarily using existing program data
  • Manage oversight and reporting through BAU governance mechanisms
  • Make improvements to program where appropriate
  Other key stakeholders
  • Other stakeholders vary depending on context

Conduct formal evaluation (Category A or B)
  Evaluation Unit
  • Support policy area to scope, plan and conduct or commission evaluation in line with the Commonwealth Evaluation Policy and this strategy
  • Provide tools and templates to support evaluation
  • Take part in evaluation reference group
  Responsible policy or program area
  • Establish reference group to oversee evaluation
  • Scope, plan and conduct or commission formal evaluation, including information gathering, analysis and recommendations
  • Ensure methodology is appropriate, robust and ethical
  Other key stakeholders
  • Australian Centre for Evaluation: May undertake a formal impact evaluation of a priority program in partnership with DISR
  • Analysis and Insights Division: May provide support where technical skills are required for data and/or economic analysis

Post-evaluation activities

Share learnings
  Evaluation Unit
  • Add final report to library of completed evaluation reports
  • Share lessons learned internally and promote use
  Responsible policy or program area
  • Seek Ministerial approval to publish final evaluation report/insights
  Other key stakeholders
  • Performance and Risk Committee: Note lessons learned from completed evaluations

Implement recommendations
  Evaluation Unit
  • Monitor and report on implementation of recommendations
  Responsible policy or program area
  • Respond to evaluation recommendations
  • Implement program-specific recommendations as appropriate
  Other key stakeholders
  • Performance and Risk Committee: Oversee implementation of recommendations

Capability building

Build staff evaluation capability
  Evaluation Unit
  • Provide training, tools and guidance to build staff skills
  • Lead DISR Evaluation Community of Practice
  Responsible policy or program area
  • Ensure staff have appropriate time, skills and knowledge to meet monitoring and evaluation requirements, including through performance discussions and skills planning
  Other key stakeholders
  • Australian Centre for Evaluation: Lead APS-wide Evaluation Community of Practice and maintain Commonwealth Evaluation Toolkit