The Voluntary AI Safety Standard helps organisations develop and deploy AI systems in Australia safely and reliably.
This publication includes:
- the 10 voluntary AI guardrails and how to use them
- examples of when to apply the guardrails
- how we developed the standard
- the standard's foundational concepts and legal context
It also includes definitions, links to tools and resources, and information on how AI interacts with other business guidance and regulations.
About the standard
The Voluntary AI Safety Standard gives practical guidance to all Australian organisations on how to safely and responsibly use and innovate with artificial intelligence (AI). Through its Safe and Responsible AI agenda, the Australian Government is acting to ensure that AI systems developed and deployed in Australia, including in legitimate but high-risk settings, are safe and reliable.
The standard consists of 10 voluntary guardrails that apply to all organisations across the AI supply chain. The guardrails set out transparency and accountability expectations across the supply chain and explain what developers and deployers of AI systems should do.
The guardrails help organisations to benefit from AI while mitigating and managing the risks that AI may pose to organisations, people and groups.
Some of the language in the standard is technical. We chose this language because it aligns with international standards. This means Australian businesses that follow the Voluntary AI Safety Standard can be confident their practices are consistent with emerging international approaches.
This is the first iteration of the Voluntary AI Safety Standard. We will continue to update the standard over time.