Evaluation maturity matrix

Background

A maturity matrix is a self-assessment tool that helps an organisation grade itself against a particular capability. The matrix does this by dividing the capability into focus areas, set against varying levels of maturity. Presented as a table, a maturity matrix shows an organisation the stages it needs to advance through to achieve greater maturity over time.

Developing and maintaining evaluation maturity is an ongoing process that must be balanced with other organisational objectives. While there is no policy requirement to reach a certain level of maturity by a specified time, the government has emphasised the importance of growing evaluation capability across the Australian Public Service.

Use of the maturity matrix

The DISR evaluation maturity matrix can be used to:

  • determine both the current and target levels of evaluation maturity against 6 focus areas
  • provide a common understanding of evaluation culture, capacity, and practice
  • develop strategies to address capability gaps, improve systems and better target critical resources to ensure a strong evaluation culture, in line with the Commonwealth Evaluation Policy.

Levels of maturity

This maturity matrix sets out 4 levels of evaluation maturity. These are described at a high level below, adapted from the Australian Evaluation Society seminar, Developing and implementing an effective evaluation maturity model (May 2023).

Beginning

  • Evaluation is not well understood.
  • Evaluative practices are underdeveloped.
  • Evaluation is ad hoc and not planned.
  • When evaluation is done, it delivers limited benefit to the department or its stakeholders.

Developing

  • There is a general understanding of the role of evaluation.
  • Evaluative practices are growing, but inconsistent.
  • There are examples of good practice, but the department and its stakeholders do not get the full benefit of evaluation.

Embedded

  • Evaluation is largely integrated in business functions.
  • Evaluative practices are established and consistent.
  • The department commissions and conducts evaluation well and strategically builds and uses its evidence base.

Leading

  • Evaluations present evidence and insights on impact and change.
  • Evaluative practices are exemplary.
  • The department and its stakeholders benefit greatly from its evaluation activity.
  • Others regard the department as a leader in this field.

Pillar 1: Establishing evaluative practices

Culture

Beginning

  • Low awareness of benefits.
  • Seen as a compliance activity.
  • Fear of negative findings and recommendations may lead to a perception of ‘mandatory optimism’ regarding program performance.
  • Decision makers do not consider evaluation a priority and rarely use evaluations as evidence for decision-making.

Developing

  • Some appreciation of the benefits.
  • Increasingly viewed as a useful tool for the department, not simply a compliance activity.
  • Decision makers start to seek evaluation evidence to support decisions, but it is not easy to find.

Embedded

  • Seen as an important component of sound policy and program design and delivery.
  • Decision makers use evaluation evidence in decision-making. They openly communicate evaluation findings and lessons learned.

Leading

  • Considered integral to all aspects of the department’s work and the benefits are widely recognised.
  • Decision makers share a clear vision for evaluation in the department.
  • Evidence and opportunities for improvement are constantly sought.
  • Strategic decisions are routinely informed by evaluation evidence and insights.

Planning

Beginning

  • Evaluation planning is basic and of variable quality.
  • Frequency and quality of evaluation are lacking.
  • Insufficient resources are often allocated.

Developing

  • Planning for evaluation and performance monitoring is integrated at the program design stage.
  • Guidelines for prioritising and scaling evaluation activity are used.
  • Adequate resources are allocated to evaluation activities for strategically significant and highest-risk programs only.

Embedded

  • Evaluation activities are planned and conducted as a fundamental part of policy and program design and delivery.
  • Priority programs are formally evaluated.
  • Evaluations use fit-for-purpose methodologies.
  • Adequate resources and time are allocated for evaluation across the department.

Leading

  • Evaluations motivate improvements in program design and policy implementation.
  • Evaluation data is systematically used to make policy decisions.
  • Resource requirements for evaluations are thoroughly planned.
  • Allocated resourcing consistently enables high-quality, fit-for-purpose evaluation.

Pillar 2: Evidence and accountability

Governance

Beginning

  • Lack of evaluation policies, procedures or governance mechanisms.
  • Monitoring and evaluation activities are inconsistent and often not proportionate to the scale and risk of an initiative.
  • Accountability for evaluation activities is not clear.

Developing

  • Some policies, procedures and governance mechanisms exist, but are not consistently understood.
  • Activities are usually proportionate to the scale and risk of an initiative.
  • Some understanding of responsibility for evaluation throughout the policy or program lifecycle.

Embedded

  • Monitoring and evaluation is regularly undertaken in line with departmental evaluation standards.
  • Strategic oversight is exercised at a departmental level.
  • Evaluation is understood as a shared responsibility and roles are clear throughout the policy and program lifecycle.

Leading

  • Monitoring and evaluation processes are formalised and performed regularly as part of expected workload.
  • Accountability is clear and exercised throughout the organisation.
  • Monitoring and evaluation roles and responsibilities are clearly documented and valued at all levels.

Knowledge

Beginning

  • Evaluation findings and recommendations are held by policy areas and not widely available.
  • No process for sharing knowledge from evaluation to support broader learning.
  • No follow-up on the implementation of recommendations.
  • Ineffective use of existing data for evaluation purposes.

Developing

  • Findings and recommendations are held centrally.
  • Ad hoc processes to support learning and sharing knowledge.
  • Some data systems provide useful performance information for evaluation purposes.
  • Opportunities are identified to strengthen the collection and use of administrative data for evaluation purposes.

Embedded

  • Findings and recommendations are easily accessible for staff.
  • Established processes to support learning and share knowledge from evaluations.
  • Evaluation insights are shared externally where appropriate.
  • Staff are consistently able to collect and analyse data to assess performance.

Leading

  • Evaluation knowledge is strategically managed across the department.
  • Internal processes to support learning and sharing are regularly reviewed and updated.
  • Regular meta-analysis of findings and recommendations.
  • Findings have influence outside the department.
  • The department is recognised for effectively using a wide variety of data for evaluation purposes.

Pillar 3: Increasing capability

Capacity

Beginning

  • Evaluation skills and understanding are limited, despite pockets of expertise.

Developing

  • Most staff have a foundational understanding of evaluation concepts and basic skills to assess progress against program outcomes.

Embedded

  • General evaluation skills are widespread.
  • Robust research and analytical methods are commonly used to assess outcomes.
  • Improved skills and knowledge in developing quality measures of success.

Leading

  • The department consistently applies robust research and analytical methods to assess impact and outcomes.
  • There are experienced and capable staff able to undertake evaluative work throughout the department.

Support

Beginning

  • Limited opportunities for staff to develop their evaluation skills and understanding.
  • Internal support and guidance is not available.

Developing

  • Some time is available for building evaluation skills, at the expense of other priority work.
  • Internal guidance material is developed but not widely accessed.
  • Some internal support for evaluation activities.

Embedded

  • Staff have sufficient time to build evaluation skills and understanding on an ad hoc basis.
  • Evaluation guidance materials are a valuable resource for staff.
  • Dedicated support for evaluation activities.

Leading

  • All staff have regular, planned opportunities for evaluative capability building.
  • Demonstrated commitment to continuous learning and improvement throughout the department.
  • Comprehensive support and guidance available.

Adapted from the Department of Industry, Science, Energy and Resources (2017), Evaluation Strategy 2017–2021, pp. 38–39.

Informed by the Australian Evaluation Society seminar, Developing and implementing an effective evaluation maturity model (May 2023).