The Australian Government is positioning Australia as a global leader in safe and responsible artificial intelligence (AI). The National AI Centre (NAIC) has developed the first iteration of the Voluntary AI Safety Standard to support these efforts.

The standard is a guide to best practice for Australian businesses, sectors and industries that develop, procure and deploy AI systems and services.

The standard has 10 voluntary guardrails to help users realise the benefits of AI and avoid the potential risks it can pose. It takes a human-centred approach to safe and responsible AI that is modular, agile and flexible.

The standard will help organisations to: 

  • protect people and communities from harms
  • avoid reputational and financial risks 
  • increase trust and confidence in AI systems, services and products
  • align with legal obligations and the expectations of the Australian population
  • operate more seamlessly in an international economy.

The standard follows the government’s interim response to the Safe and Responsible AI in Australia consultation. While it considers options for mandatory approaches, the government is taking steps now to address AI in high-risk settings. One of these steps is the Voluntary AI Safety Standard.

We will continue to update the standard over the next 6 months.