Artificial intelligence transparency statement

Date published:
28 February 2025
Date updated:
26 February 2025

Our department is supporting Australia to become a leader in developing and adopting trusted, secure and responsible Artificial Intelligence (AI). 

Our own adoption of AI adheres to the Voluntary AI Safety Standard, which consists of 10 voluntary guardrails. These guardrails include transparency and accountability requirements and describe what developers and deployers of AI systems should do to meet them.

The Digital Transformation Agency’s Policy for the responsible use of AI in government came into effect on 1 September 2024. It sets out requirements for Australian Government agencies to engage with AI in a safe and responsible way.

The policy includes mandatory requirements for accountable officials and transparency statements. This statement details how we have implemented those requirements.

Governance

In January 2024, we formed an AI Governance Committee (AIGC) to provide central oversight of AI use in the department. The committee’s members represent a range of perspectives from across the department and are involved in developing AI policy or projects. The AIGC ensures:

  • we find ways to use AI to improve efficiency, build capability and innovate
  • appropriate governance of AI use and adherence to relevant legislation, policies and best practice
  • opportunities involving the use of AI are considered, safe and responsible
  • we identify and address all potential AI-related risks
  • appropriate training and usage policies are available.

AI accountable official 

Our Chief Information Officer is the accountable official responsible for implementing the policy. The AIGC supports the accountable official.

How we use AI

We use a small number of AI tools and functions to improve efficiencies and augment processes. These help staff focus on more complex and meaningful work.

The AIGC maintains visibility of AI use and classifies it according to the following usage patterns and domains:

  • Usage patterns: supporting human decision-making and administrative action, giving insights through analytics and improving workplace productivity so staff can focus on more complex work. 
  • Domains: service delivery, compliance and fraud detection, policy and legal, and corporate and enabling domains. 

We have a policy that guides all staff on: 

  • acceptable use of AI in our department  
  • ethical considerations 
  • Freedom of Information considerations 
  • record keeping 
  • privacy
  • roles and responsibilities when using AI.

We do not use AI in any instance where the public directly interacts with, or is significantly affected by, AI without the involvement of a human agent.

Staff review all AI tool outputs and treat them as drafts or starting points for further research, not as a basis for decision-making.

Our commitment

We will continue to refine and enhance our AI capabilities by maintaining centralised oversight and evaluation of AI tools through the AIGC.

This statement will evolve to align with changes in technology, legislation, policy and governance best practice. We will review it at least every 12 months and update it if our approach to AI changes, or if anything materially affects its accuracy.