AI systems have specific characteristics that amplify risks
AI systems span a wide range of technical approaches. Organisations can use them for many tasks, such as helping with prediction, classification, optimisation or content generation. At their core, AI systems are software-based tools.
AI systems fall broadly into 2 types, each with different strengths and risks:
- Narrow AI systems are designed and trained to perform a specific task. Most AI systems in use today fall into this category. Such systems can perform well within their narrow range of activities, sometimes better than humans, but they cannot perform tasks outside that range. Examples include chess engines, recommender systems, medical diagnostic systems and facial recognition systems.
- General-purpose AI systems are designed and trained to handle a broad range of tasks and are therefore flexible. Their use is not limited to a specific function, so they can more easily be used for purposes their designers may not have considered. Examples include large language models and systems such as OpenAI’s ChatGPT series.
Both narrow and general-purpose AI systems are built and operate differently from traditional software systems. These differences mean that using an AI system for a particular task may amplify existing risks when compared with traditional software.
For example, in traditional software systems, developers explicitly define all the logic governing a system’s behaviour. This relies on explicit knowledge, with conscious human engagement at every stage of the software design and development process. As a result, traditional software systems are easier for humans to control, predict and understand.
Developers of AI systems take a different approach. They typically define an objective and constraints, select a dataset and apply a machine learning algorithm to produce an AI model that achieves the specified objective. While such models often outperform comparable traditional software systems, this development approach means AI models are often less transparent, less interpretable, and more complex to test and verify. This amplifies risks and can lead to harm, particularly in contexts where it is important to understand and explain how an output was reached, or to constrain the range of potential outputs for safety reasons. The sketch below illustrates the contrast.
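To make the contrast concrete, the minimal sketch below (in Python, using the scikit-learn library) places an explicitly coded decision rule next to a model trained from data. The lending scenario, features, thresholds and figures are hypothetical illustrations, not part of the standard.

```python
# Illustrative sketch only. The lending scenario, thresholds, features and
# figures are hypothetical examples chosen for contrast, not part of the standard.
from sklearn.linear_model import LogisticRegression

# Traditional software: a developer writes the decision logic explicitly,
# so its behaviour can be read, predicted and tested line by line.
def approve_loan_rule_based(income: float, debt: float) -> bool:
    # Figures are in thousands of dollars.
    return income > 50 and debt < 10

# AI system: the developer specifies an objective (predict approvals),
# selects a dataset and applies a machine learning algorithm. The resulting
# model's decision logic is inferred from data rather than written by hand.
training_features = [[60, 5], [30, 20], [80, 2], [25, 15]]  # [income, debt] in $'000
training_labels = [1, 0, 1, 0]  # historical outcomes: 1 = approved, 0 = declined

model = LogisticRegression()
model.fit(training_features, training_labels)

# The model's behaviour emerges from the training data, which is why it is
# harder to interpret, test and verify than the explicit rule above.
prediction = model.predict([[55, 8]])
```

In the rule-based function every threshold is visible and auditable; in the trained model the decision boundary is inferred from the data, which is why transparency, testing and verification call for different techniques.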
The specific characteristics of general-purpose AI systems can amplify existing risks and harms or pose new ones for an organisation. General-purpose AI systems are more prone to unexpected and unwanted behaviour or misuse. This is because of the greater flexibility of their interactions, the reduced predictability of their capabilities and behaviour, and their reliance on large and diverse training data. For example, large language models can be used, deliberately or inadvertently, to manipulate or misinform consumers. They can also pose novel intellectual property challenges, both for the data they are trained on and for the outputs they generate.
The standard supports a risk-based approach to AI harm prevention
As with all software, AI systems vary in the level of risk and the type of harm they pose. Some, like an algorithm on a website that suggests reordering based on stock levels, tend to be lower risk. The potential harms are confined to a customer taking longer to receive a product. Others, like a tool that prioritises job applicants for an interview process or makes financial lending decisions, have far greater potential to create harm. For instance, they may deny a suitable applicant the opportunity of a job or bank loan, or even systematically and unlawfully discriminate against a group of people.
The standard supports a risk-based approach to managing AI systems. It does this by supporting organisations – starting with AI deployers – to take proactive steps to identify risks and mitigate the potential for harm posed by the AI systems they deploy, use or rely on.
The standard prioritises safety and the mitigation of harms and risks to people and their rights.
A human-centred perspective on the harms of AI systems
Organisations should assess the potential for these risks and harms to people, groups and society:
- Harm to people. This includes infringements on personal civil liberties, rights, and physical or psychological safety. It can also include economic impacts, such as lost job opportunities because of algorithmic bias in AI recruitment tools or the unfair denial of services based on automated decision-making.
- Harm to groups and communities. AI systems can exacerbate discrimination or unwanted bias against certain sub-groups of the population, including women, people with disability, and people from multicultural backgrounds. This can lead to social inequality, undermine equality gains and result in unjust treatment. This is particularly pertinent for recommender algorithms that amplify harmful content.
- Harm to societal structures. AI systems can profoundly affect broader societal elements, such as democratic participation or access to education. AI systems that spread misinformation could undermine electoral processes, while algorithms used in education could widen the digital divide.
The standard can also be used to identify, prevent and minimise other risks that may affect an organisation. Organisations often analyse these risks against the potential for reputational damage, regulatory breach and commercial losses (Figure 3).