Australian AI ethics principles: Insurance Australia Group reviews its AI claim decisions

Insurance Australia Group Limited (IAG), Australia’s largest general insurer, is evaluating its artificial intelligence (AI) decisions against its existing algorithmic ethics framework and the Australian AI Ethics Principles.

Image: IAG employees check a damaged vehicle. IAG developed a total loss ‘early warning’ message for car insurance customers by combining AI with business process automation. Image credit: IAG

IAG participated in the Australian AI Ethics Principles pilot.

The company prepared this case study summarising its experiences in applying the Australian AI Ethics Principles. The case study is an example of how the company considered the principles. The principles are voluntary and organisations can choose to apply them in different ways.

IAG is using AI to improve the claims process for NRMA Insurance customers.

In using AI, IAG also aims to reduce the emotional trauma that a car accident can bring to its customers. Severe car accidents can be particularly challenging. For example, it is not always immediately clear whether a car can be repaired or will be declared a total loss – what many of us know as a ‘write off’. A final decision can sometimes take over 2 weeks. If the final decision is a total loss and this comes as a surprise, the customer may not be ready to buy a new car. Customers may also find deductions for things like excesses confusing.

IAG’s artificial intelligence solution

IAG developed a total loss ‘early warning’ message to customers by combining AI with business process automation. When the information about a car accident suggests a total loss is reasonably likely, customers receive a text message soon after lodging a claim. The message lets them know about the potential outcome and the claims process.

The AI ‘early warning’ model uses a combination of data including:

  • details about the accident
  • whether the accident required a tow truck
  • existing information about the customer’s car collected when they purchased the policy.

IAG constructed a model using around 340,000 historic finalised claims from NRMA, of which around 50,000 were total losses.
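
As a rough illustration only (the dataset, feature names and model choice below are assumptions, not IAG’s actual pipeline), a total loss classifier of this kind could be trained on historical claims data along the following lines:

```python
# Illustrative sketch: train a binary 'total loss' classifier on historical
# claims. The file, column names and model choice are hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

claims = pd.read_csv("historic_claims.csv")          # hypothetical dataset
features = ["vehicle_age", "vehicle_market_value",   # assumed feature names
            "required_tow", "impact_severity"]
X, y = claims[features], claims["was_total_loss"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)
scores = model.predict_proba(X_test)[:, 1]  # confidence that a claim is a total loss
```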

To ensure it would not adversely affect IAG customers in a material way, the company undertook an ethical review of the model.

Considering artificial intelligence ethics

IAG has a robust review process for evaluating AI-driven decisions against ethical principles. This process helps to identify potential issues and risks even where an AI application appears benign and fairly uncontroversial.

Undertaking this ethics review also helps IAG to:

  • identify which principles are most relevant, and which are less important for the project
  • determine how it should best respond to potential negative impacts
  • consider any appropriate trade-offs within the system design.

An important part of the process is for IAG to identify and carefully analyse potential harms to customers.

In this situation, the review team considered harms from:

  • receiving an early warning when a customer’s claim does not end in a total loss (false positives)
  • failing to send an early warning for claims that do end in a total loss (false negatives).

While the team considered both forms of error relatively benign, they deemed false positives more likely to cause harm than false negatives. They therefore set a high model-confidence threshold for sending an early warning, ensuring a low false positive rate. The team will evaluate the actual impact of incorrect messages over time as these mistakes occur. If the cost of receiving a wrong message proves small, this may justify lowering the threshold so more customers receive a beneficial message.
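
As a sketch of how such a threshold could be chosen (not IAG’s actual method), one approach is to sweep candidate confidence thresholds on held-out claims and keep the lowest one that still meets a target false positive rate:

```python
# Sketch only: choose the lowest confidence threshold whose false positive
# rate stays under a chosen ceiling. The scores and outcomes below are dummy
# placeholders for a real model's held-out predictions.
import numpy as np

def pick_threshold(scores, y_true, max_fpr=0.01):
    """Return the lowest threshold with a false positive rate <= max_fpr."""
    negatives = max(np.sum(y_true == 0), 1)
    for t in np.sort(np.unique(scores)):
        false_positives = np.sum((scores >= t) & (y_true == 0))
        if false_positives / negatives <= max_fpr:
            return t
    return 1.0  # no threshold qualifies: never send the warning

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1_000)                    # dummy outcomes
scores = np.clip(0.4 * y_true + rng.random(1_000), 0, 1)   # dummy confidences
print(pick_threshold(scores, y_true, max_fpr=0.01))
```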

IAG will regularly test, monitor and evaluate the relative costs and benefits of applying the AI solution in various scenarios. This will include assessing error distribution across the population. The team will also review communication to customers as part of ongoing monitoring.

Putting AI ethics principles into practice

IAG’s established algorithmic ethics framework has a similar purpose and scope to the Australian AI Ethics Framework. The company shared how it applied the Australian principles to this AI solution.

Principle 1: Human, social and environmental wellbeing

The project’s overall intent was to benefit IAG’s customers, with no other conflicting objectives. However, the team acknowledged it might also reduce IAG’s operational expenses in handling claims. They clearly documented all these benefits at the project outset.

Principle 2: Human-centred values

When developing new solutions for customers, IAG adopts a human-centred design process. This ensures human values are actively considered when developing new AI solutions, and that staff design AI systems in line with these values.

An important step in early development is to test potential designs at a small scale. This means any unintended harms can be addressed before a solution scales up. For this project, the relevant team tested communications on a small scale with a sample of customers (and modified them following feedback) before the full launch. This gave the team confidence in the system’s positive impact.

The system was designed to have limited power. It makes no final choices that materially affect people – it merely offers targeted information to assist customers. This design means that the system doesn’t fundamentally challenge human autonomy or other core values.

Principle 3: Fairness

For new AI systems, teams at IAG consider fairness by conducting a detailed analysis of potential harms and benefits.

In this example, the relevant team identified:

  • the circumstances that could give rise to harms
  • the potential degree of harm caused
  • how these harms are distributed across the population.

Where possible, IAG measures harm using objective data.

The team used this analysis to weigh up the relative costs and benefits of various system design options and select the most appropriate solution. In this case, a message being sent in error was judged to have the most potential for harm. As a result, the initial threshold for triggering the ‘early warning’ notification was set relatively high. This meant fewer notifications were sent, and reduced the risk of a message being sent in error. This design decision will be regularly reviewed, particularly in light of system performance data.

The team also reviewed the expected distribution of potential harms and benefits for various vulnerable or protected groups. They didn’t find any material areas of concern.
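
As a simple sketch of this kind of distributional check (the group labels and columns are illustrative assumptions, not IAG’s framework), false positive rates could be compared across customer groups:

```python
# Sketch: compare false positive rates across customer groups to check
# whether any group bears a disproportionate share of erroneous warnings.
# The groups, columns and values are illustrative assumptions.
import pandas as pd

results = pd.DataFrame({
    "group":      ["metro", "metro", "regional", "regional"],
    "warned":     [True, False, True, False],    # early warning sent?
    "total_loss": [False, False, False, True],   # actual claim outcome
})

false_positive = results["warned"] & ~results["total_loss"]
not_total_loss = ~results["total_loss"]
fpr_by_group = (false_positive.groupby(results["group"]).sum()
                / not_total_loss.groupby(results["group"]).sum())
print(fpr_by_group)  # a large gap between groups would warrant review
```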

IAG ensured this process was clearly documented to explain and justify the decisions made. (IAG presented a detailed paper on this approach at the 2021 All-Actuaries Virtual Summit.)

Principle 4: Privacy protection and security

The early warning prediction uses data that the customer provided directly at the point of claim and at policy inception. This customer data is subject to IAG’s privacy and security controls. The prediction doesn’t currently draw on external data.

IAG considered that its practices aligned with customers’ reasonable expectations that claims would be assessed to the best of the company’s ability using the data customers provide. If the AI model had used additional data, there would have been a greater need to consider appropriate privacy and security protocols.

Principle 5: Reliability and safety

IAG tested the system at different stages of development and deployment. This ensured the system operated in line with its intended objectives and that any problems or unintended harms were rectified.

When IAG’s developers had to make trade-offs, they thoroughly documented and presented them to system owners for a clear decision. This was particularly important in light of fairness considerations. Following this process, the person responsible for the system signs off on any system updates.

After the initial launch, automatic monitoring picked up a degradation in system performance. This allowed IAG to quickly adjust the decision threshold to preserve the system’s accuracy. Without automatic monitoring, the degradation may not have been noticed for some time. This demonstrates the importance of system monitoring to help maintain ongoing reliability and safety.
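
A minimal sketch of such monitoring (the baseline, tolerance and sample data are illustrative values, not IAG’s system) could compare the live precision of the warnings against a baseline and raise an alert when it drops:

```python
# Sketch only: alert when the live precision of early-warning messages
# drops below a baseline, prompting a review of the decision threshold.
# The baseline, tolerance and sample outcomes are illustrative.
def check_precision(recent_outcomes, baseline=0.90, tolerance=0.05):
    """recent_outcomes: list of (warning_sent, was_total_loss) pairs."""
    warned = [was_total_loss for sent, was_total_loss in recent_outcomes if sent]
    if not warned:
        return "No warnings sent in this window"
    precision = sum(warned) / len(warned)
    if precision < baseline - tolerance:
        return f"ALERT: precision {precision:.2f} has degraded; review threshold"
    return f"OK: precision {precision:.2f}"

print(check_precision([(True, True), (True, False), (True, True), (False, True)]))
```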

Principle 6: Transparency and explainability

This AI system doesn’t significantly impact customers, since it only provides information and doesn’t make any material decision regarding a customer’s claim. Given the limited negative risks to customers, IAG didn’t consider it necessary to make the model transparent or explainable. Instead, they prioritised the model’s overall accuracy.

Given the application of AI in this situation is fairly benign, IAG considered that its customers didn’t need or want explanations of how AI was used to inform the notices they received. If individual customers wished to query the notification or complain about it, the company could provide them with suitable information or an explanation.

Principle 7: Contestability

IAG assessed that this AI system did not significantly impact customers, as it didn’t make, or materially assist in, IAG’s decisions. The AI system only provided information and didn’t influence the final claim determination. This meant contestability (the ability for customers to challenge decisions) didn’t require such rigorous consideration.

IAG notes that contestability in the insurance sector is protected under existing regulations. Customers can access internal and external dispute resolution schemes for any complaint they might have – including about claims processing. This includes AI systems that inform or contribute to contestable or reviewable decisions.

IAG encourages customers to contact them if they have any questions about the notification. The company then acts on feedback where needed.

Principle 8: Accountability

The model’s development and deployment had a clear business owner who took responsibility for any problems arising from the end-to-end process. This aligns with the principle of accountability.

The team that developed the system regularly monitors its outcomes. Any material updates to the system must be re-evaluated under IAG’s existing algorithmic ethics framework and receive sign-off from the responsible business owner. Monitoring of this particular system has resulted in a minor adjustment to the thresholds for triggering the early warning notification to customers. This happened in response to a small reduction in model accuracy, and demonstrates the value of monitoring and accountability.

Benefits and impacts

This system has had a positive overall impact on IAG’s customers.

IAG measured the impact using their established Net Promoter Score (NPS) survey system. The company observed a large uplift in survey scores from the intervention. To date, there has been no observable negative impact on groups of customers from the system. The team will continue to monitor the potential for this to emerge as more customers are affected.

IAG had an established algorithmic ethics framework before the pilot, with similar purpose and scope to the Australian AI Ethics Framework. Involvement in the pilot has provided the company with a useful mechanism to test its established framework against a high-profile external benchmark. This has allowed IAG to make minor refinements to some aspects of their existing framework:

‘Based on our experience prior to, and during, the pilot, we consider the Australian AI Ethics Framework to be a good starting point for organisations wishing to implement ethical and responsible AI systems. We encourage other organisations to consider adopting the principles in their work.

‘We also observe that while high-level principles can be useful, challenges and uncertainties can emerge when such principles are put into practice. We encourage organisations and government to discuss practical aspects of implementation, via case studies and further guidance material. This helps to provide the community with greater clarity on the practical steps to take when implementing ethical AI systems, complementing higher level principles.’

IAG is committed to developing a range of AI systems in the near future, in line with ethical standards.

Contact IAG

If you have any questions about this example or are interested to learn about IAG’s AI systems or ethical AI journey, please contact corporateaffairs@iag.com.au.