Australian AI ethics principles: Telstra tests 2 AI solutions

Telstra is exploring artificial intelligence (AI) to transform customer experience and make its business operations more efficient. The company participated in the Australian AI Ethics Principles pilot. See what it learnt.

Telstra uses AI to improve customer experience. Image credit: Telstra

Telstra participated in the Australian AI Ethics Principles pilot.

The company prepared this case study summarising its experiences in applying the Australian AI Ethics Principles. The case study is an example of how the company considered the principles. The principles are voluntary and organisations can choose to apply them in different ways.

Considering artificial intelligence ethics

Telstra’s aim in the pilot was to test how it could maintain responsible AI in practice while using AI to deliver benefits. These benefits included better digital customer experiences and increased efficiency of business operations. Telstra considered how best to apply the Australian AI Ethics Principles across 2 applications:

  • one AI solution developed in-house
  • another AI solution developed by a third party.

For commercial reasons, Telstra can’t disclose specific details of the 2 technologies. The principles provided useful guidance and led the company to change its internal governance and ethical AI training.

Putting AI ethics principles into practice

Telstra shared the following insights on how it applied the Australian AI Ethics Principles to the 2 AI solutions.

Principle 1: Human, social and environmental wellbeing

Carefully considering the objectives and actions of AI systems before deployment helps a business identify potential ethical impacts.

In this pilot, Telstra’s team analysed as many potential impacts of the system as possible, both positive and negative. This included looking beyond immediate impacts to the broader long-term or indirect impacts of deploying the AI, to determine how the company should manage them. For example, the third-party AI system delivered a short-term cost saving to business processes. But the team only allowed it to proceed because it also delivered a better experience and an equally fair, or even fairer, outcome for users.

Principle 3: Fairness

Businesses should minimise unfair discrimination that can result from bias, both explicit and implicit. To check for unfair discrimination, AI projects need appropriate auditing processes and criteria in place, with people overseeing the AI system’s deployment. Regularly consulting or testing with stakeholders may also help surface unintended biases.

Where AI makes decisions that significantly impact stakeholders, Telstra more closely scrutinises an AI system’s approach to fairness in its risk review.

Telstra’s experience during the pilot highlighted the importance of defining ‘fairness’ in these situations. Defining fairness often requires making a value judgement on:

  • which stakeholders are most at risk
  • specific protections to address potential impacts and meet legal requirements.

Even when stakeholders can reach agreement on what a ‘fair’ outcome is, translating this into a mathematical formulation is often not straightforward. Telstra recommends the AI community think further about how to translate fairness into actionable digital criteria.
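
As an illustration of the difficulty, the sketch below computes one common mathematical formalisation of fairness, the demographic parity difference: the gap in positive-outcome rates between two groups. The data and names are invented for illustration; Telstra has not disclosed which fairness criteria it uses.

```python
# Illustrative only: one common formalisation of 'fairness'. The data and
# names here are invented; they are not Telstra's systems or criteria.
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute gap in positive-outcome rates between group 0 and group 1."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

# Toy decisions (1 = offer approved) for customers in two groups.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])

gap = demographic_parity_difference(y_pred, group)
print(f"Demographic parity difference: {gap:.2f}")  # 0.75 vs 0.25 -> 0.50
```

Part of the difficulty Telstra describes is that common formalisations, such as demographic parity, equalised odds and calibration, generally cannot all be satisfied at once, so choosing between them is itself a value judgement.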

Principle 5: Reliability and safety

Telstra has processes and capabilities in place to ensure the reliability and safety of data and software systems. For AI, this means ensuring the robustness of an algorithm’s predictions and decisions to avoid unintended consequences.

AI systems depend heavily on data that is representative of, and fit for, the intended application. Common problems with data include:

  • relying on stale data (for example, continuing to use the same data for AI-driven product marketing after prices have changed)
  • using data collected from a non-representative sample population, whose patterns can’t be assumed to hold for a broader cohort (for example, testing an AI-driven offer only in an affluent suburb before rolling it out to the general customer base). One way to check for this kind of drift is sketched below.
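
One generic way to catch both problems is to compare the distribution of key features in the training data with live data, for example using the Population Stability Index (PSI). This is a minimal sketch with invented data and thresholds, not a description of Telstra’s actual checks.

```python
# Illustrative only: a Population Stability Index (PSI) check for drift
# between training data and live data. The feature, data and threshold are
# invented; this is not Telstra's actual practice.
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray,
                               bins: int = 10) -> float:
    """PSI between a training-time sample and a live sample of one feature."""
    # Bin both samples on the training data's quantile edges.
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf
    expected_frac = np.histogram(expected, edges)[0] / len(expected)
    actual_frac = np.histogram(actual, edges)[0] / len(actual)
    # Floor the fractions to avoid log(0) for empty bins.
    expected_frac = np.clip(expected_frac, 1e-6, None)
    actual_frac = np.clip(actual_frac, 1e-6, None)
    return float(np.sum((actual_frac - expected_frac)
                        * np.log(actual_frac / expected_frac)))

rng = np.random.default_rng(0)
train_income = rng.normal(70_000, 10_000, 5_000)  # e.g. pilot in an affluent suburb
live_income = rng.normal(55_000, 15_000, 5_000)   # broader customer base

psi = population_stability_index(train_income, live_income)
print(f"PSI = {psi:.2f}")  # a common rule of thumb: PSI > 0.25 signals major drift
```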

Businesses should carefully consider the following, as each may introduce errors into an AI system’s predictions:

  • how training data was acquired
  • how missing data is handled
  • how the target audience was measured
  • whether the data, and its characteristics, are necessary and appropriate for the intended application.

For Telstra, the key is to ensure data is accurate and relevant to the specific target and purpose.

Telstra also recommends that businesses monitor the ongoing performance of AI systems against suitable baselines or comparisons. They can then see whether using AI is an improvement on traditional manual decision-making or simple rule-based approaches. Telstra is exploring ways to do this systematically, and for the full lifecycle of the AI system.
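
For instance, a monitoring job might compare the AI system’s results with a simple rule-based baseline each reporting period and flag the system for review if the gap closes. The sketch below is a hypothetical illustration; the metric, margin and data are all invented.

```python
# Illustrative only: monitoring an AI system against a rule-based baseline.
# The outcomes, metric and review margin are invented placeholders.
from dataclasses import dataclass

@dataclass
class Outcome:
    predicted: bool  # what the system decided
    actual: bool     # what turned out to be correct

def accuracy(outcomes: list[Outcome]) -> float:
    return sum(o.predicted == o.actual for o in outcomes) / len(outcomes)

def review(ai: list[Outcome], baseline: list[Outcome], margin: float = 0.02) -> str:
    """Flag the AI system if it stops beating the baseline by a clear margin."""
    ai_acc, base_acc = accuracy(ai), accuracy(baseline)
    if ai_acc < base_acc + margin:
        return f"REVIEW: AI {ai_acc:.2%} vs baseline {base_acc:.2%}"
    return f"OK: AI {ai_acc:.2%} vs baseline {base_acc:.2%}"

# Toy monitoring data for one reporting period.
ai_results = [Outcome(True, True)] * 90 + [Outcome(True, False)] * 10
rule_results = [Outcome(True, True)] * 85 + [Outcome(False, True)] * 15
print(review(ai_results, rule_results))  # OK: AI 90.00% vs baseline 85.00%
```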

Principle 6: Transparency and explainability

Telstra considers that ‘transparency and explainability’ requires the following measures to achieve accountability:

  • disclosing where customers are engaging with, or affected by, an AI-enabled system or process, such as a virtual chatbot
  • being appropriately transparent about how AI is being used for any operational system
  • being able to explain how the AI’s decisions meet the needs of different people engaged with it.

In practice, it can be challenging to explain some complex AI technologies simply. Businesses should give stakeholders enough detail to build trust, and provide mechanisms that allow individuals to challenge decisions that affect them.

During the pilot, Telstra found that different people interacting with the system needed different types of transparency and explanation. Examples include:

  • Customers interacting with the company’s systems may want a non-technical explanation of their outcomes. They may also want advice on what they could do differently to change the outcome.
  • Deployers buying third-party AI may need access to the AI model to test it for fairness. They may also need to be able to explain which features had the most impact on a particular decision (one generic technique for this is sketched below).
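
As one generic example of the second need, permutation importance measures how much a model’s accuracy drops when each feature is shuffled, indicating which features most influence its decisions. The sketch below uses scikit-learn with invented data; Telstra has not said which explanation techniques it uses.

```python
# Illustrative only: permutation importance as one generic way to see which
# features most affect a model's decisions. The model, features and data
# are invented; they are not Telstra's systems.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                   # e.g. tenure, usage, spend
y = X[:, 0] + 0.1 * rng.normal(size=500) > 0    # outcome driven by feature 0

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, score in zip(["tenure", "usage", "spend"], result.importances_mean):
    print(f"{name}: importance {score:.3f}")  # 'tenure' should dominate
```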

Principle 8: Accountability

Telstra considers that ‘accountability’ requires identifying who is accountable at different levels of the organisation for:

  • the actions of an AI system
  • implementing the system components correctly
  • setting and balancing the system’s objectives.

This applies to in-house developed AI and AI purchased from third parties.

When Telstra purchases third-party systems with embedded AI, it remains responsible for their performance. Telstra takes steps to ensure these purchased AI technologies work in line with its ethical principles. Navigating how to share accountability with suppliers is not a simple exercise, and Telstra continues to refine its approach.

The pilot helped Telstra to articulate some minimum standards it expects of suppliers. In practice, this involved asking suppliers questions such as:

  • how a system was built
  • what data the system was trained on
  • how data is used in any retraining when the system is operating
  • whether appropriate assessments for bias have been undertaken
  • how often the assessments for bias will be updated and revised.

The pilot helped Telstra to identify a person who is ultimately accountable for each decision to purchase, deploy or on-sell a third-party AI system.

Applying the ethics principles to a working system required the combined efforts of several parts of Telstra’s organisation. Telstra is implementing updated governance arrangements to ensure this cross-company collaboration happens.

Benefits and impacts

Telstra has taken steps to ensure the Australian AI Ethics Principles are applied to AI systems, especially for decisions with significant impacts on stakeholders:

  • Telstra developed a Responsible AI Policy, applicable to the Telstra Group worldwide, and a detailed guidance document to implement it. These outline standards and expectations for due diligence when acquiring, using and selling AI.
  • Telstra broadened the remit of its existing Risk Council for AI and Data (RCAID) to cover AI use cases. This cross-functional body must approve AI systems (including third-party systems) that inform decisions with significant impacts on people.

Moving forward, Telstra is:

  • reviewing supplier governance processes to ensure that any third-party suppliers of AI solutions are meeting internal requirements
  • setting up role-based responsible AI training for employees and contractors involved in the development or procurement of AI systems.

Contact Telstra

If you have any questions about this example, or would like to learn more about Telstra’s AI systems or ethical AI journey, please contact DataAIGovernance@team.telstra.com.