The Voluntary AI Safety Standard gives practical guidance to all Australian organisations on how to safely and responsibly use and innovate with artificial intelligence (AI). Through the Safe and Responsible AI agenda, the Australian Government is acting to ensure that the development and deployment of AI systems in legitimate but high-risk settings in Australia is safe and reliable, while ensuring that the use of AI in low-risk settings can continue to flourish largely unimpeded.
In 2023, the government conducted public consultation through its discussion paper on ‘Safe and Responsible AI in Australia’. Its Interim Response outlined areas of government action, including:
- delivering regulatory clarity and certainty
- supporting and promoting best practice for safety
- ensuring government is an exemplar in the use of AI
- engaging internationally on how to govern AI.
The response also recognised the need to consider building AI capability in Australia.
To support and promote best practice, an immediate action was to work in close consultation with industry to develop a Voluntary AI Safety Standard. This standard complements the broader Safe and Responsible AI agenda, including developing options on mandatory guardrails for those developing and deploying AI in Australia in high-risk settings.
While there are examples of good practice across Australia, approaches are inconsistent. This causes confusion for organisations and makes it difficult for them to understand what they need to do to develop and use AI safely and responsibly. The standard establishes a consistent practice for organisations. It also sets expectations for what future legislation may look like as the government considers its options on mandatory guardrails.
The standard consists of 10 voluntary guardrails that apply to all organisations throughout the AI supply chain. The guardrails include testing, transparency and accountability requirements across the supply chain, and explain what developers and deployers of AI systems must do to comply with them. They help organisations to benefit from AI while mitigating and managing the risks that AI may pose to organisations, people and groups.