Responsible AI practices
Industry and government organisations use AI to make important decisions that affect people’s lives. AI can be very efficient at achieving business objectives, such as:
- automating data-heavy information gathering
- creating personalised customer experiences
- using natural language processing to improve support services.
But AI can have unintended negative impacts if it is not designed with appropriate checks and balances, including privacy loss, data breaches and ethical harms. The Australian Responsible AI Index found that 82% of businesses believed they were practising AI responsibly, yet fewer than 24% had actual measures in place to ensure this.
NAIC worked with Gradient Institute to develop Implementing Australia’s AI Ethics principles: A selection of responsible AI practices and resources.
This document aims to help bridge the gap between the Australian AI Ethics Principles and the business practice of responsible artificial intelligence (RAI).
AI ethics principles
The report explores practical steps for implementing the 8 AI Ethics Principles. It explains each practice and its organisational context, including the roles that are key to successful implementation. Effective practices include:
- impact assessments
- data curation
- fairness measures
- pilot studies.
Read the report to learn how to implement Australia's AI Ethics Principles and build responsible AI practices.
NAIC prepared and released this report while it was still part of CSIRO.