The Australian Government has published a proposals paper for introducing mandatory guardrails for AI in high-risk settings.
The paper outlines options for mandatory guardrails: preventative measures that would require developers and deployers of high-risk AI to take specific steps across the AI lifecycle.
We developed the guardrails in consultation with the temporary AI Expert Advisory Group, comprising experts in AI technologies, law and governance.
The proposals paper includes:
- a proposed definition of high-risk AI
- 10 proposed regulatory guardrails to reduce the likelihood of harms occurring from the development and deployment of AI systems
- regulatory options to mandate guardrails, building on current work to strengthen and clarify existing laws.
We are committed to ensuring the development and deployment of AI systems are safe and can be relied upon. We also want to allow AI use in low-risk settings to flourish largely unimpeded.
The consultation aims to ensure the proposed guardrails are fit for purpose, build public trust and give businesses the regulatory certainty needed to drive AI adoption across the economy.
We encourage feedback on the proposals from all Australians, including industry, academia, workers and the broader community.
This consultation is one of the immediate actions to come out of the government’s interim response to the Safe and Responsible AI in Australia consultation, released earlier this year.
The Australian Government has also released the first iteration of the Voluntary AI Safety Standard. The standard supports and promotes best-practice governance to help more businesses adopt AI in a safe and responsible way.