AI Risk Management

Protect your organization against potential risks posed by AI systems throughout all lifecycle stages

What is AI Risk Management?

AI Risk Management is the process of identifying, assessing, and managing the novel risks associated with AI systems. Risk management can address both the technical and governance aspects of a system in order to reduce financial risk, reputational risk, and legal liability, both by implementing guardrails to prevent harms from occurring and by establishing robust mechanisms for responding when unforeseen risks do materialize. These mechanisms include clear lines of accountability and liability throughout the supply chain.

AI Risk Management can also help to ensure compliance with ethical and legal standards, as well as with internal and external policies. Through this compliance, AI Risk Management can help to balance the innovation and safety of AI systems by ensuring that the appropriate safeguards are in place to innovate responsibly, with social and technical best practices in mind.

What Technical AI Risks can be managed?

Broadly, there are five key technical risks that should be managed:

  • Robustness – the risk that the algorithm fails in unexpected circumstances or when under attack.
  • Bias – the risk that the algorithm treats individuals or groups unfairly or results in disproportionately negative outcomes for a certain group.
  • Explainability – the risk that the system or its decisions may not be understandable to users, deployers, or developers.
  • Efficacy – the risk that an AI system may underperform for its intended use case.
  • Privacy – the risk that the algorithm may leak sensitive or personal data.
For each of these risk verticals, the breakdown below sets out why the risk is important, how it can be measured, and how it can be mitigated.

Robustness

Why it is important: Robustness can help to prevent harm and adverse effects on human well-being.

How to measure:
  • Assess how the algorithm performs on unseen data (a minimal check is sketched below)

How to mitigate:
  • Retrain the model on new data
  • Take steps to make the model more generalizable
  • Continuously monitor for signs of failure
  • Use adversarial training
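
As an illustration of the measurement bullet above, here is a minimal sketch that compares a model's accuracy on held-out data against the same data under input noise, a crude proxy for unexpected circumstances. The dataset, model choice, and noise scale are all hypothetical:

```python
# A minimal robustness check: compare accuracy on clean vs. perturbed
# inputs. The dataset, model, and noise scale are illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
clean_acc = accuracy_score(y_test, model.predict(X_test))

# Simulate "unexpected circumstances" with Gaussian input noise.
rng = np.random.default_rng(0)
noisy_acc = accuracy_score(
    y_test, model.predict(X_test + rng.normal(scale=0.5, size=X_test.shape))
)

print(f"clean accuracy: {clean_acc:.3f}, noisy accuracy: {noisy_acc:.3f}")
# A large gap suggests the model may fail under distribution shift.
```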

Bias

Why it is important: Ensuring that a system is free from bias can prevent preferential or discriminatory treatment of individuals and ensures fairer outcomes. Bias can have multiple sources, including the datasets used to train the model and the use and weighting of features to predict outcomes; the appropriate mitigation depends on the source of the bias.

How to measure:
  • Assess the performance of the system across different groups
  • Assess the differences in positive outcomes across different groups (see the sketch below)
  • Examine whether features in the model are correlated with subgroup membership
  • Examine the representativeness of the training data

How to mitigate:
  • Debias the training data or make it more representative
  • Adjust the model so that predictors associated with subgroup membership have less influence
  • Introduce constraints into the model to limit the influence of protected attributes on outcomes
  • Amend the outputs of the model to make the predictions fairer
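
To make the outcome-rate comparison concrete, here is a minimal sketch assuming binary decisions and a single protected attribute; the groups, predictions, and the 0.8 rule of thumb are illustrative rather than a universal legal threshold:

```python
# Compare positive-outcome rates across groups and compute the
# disparate impact ratio. Labels and groups are illustrative placeholders.
import numpy as np

y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])   # model decisions
group = np.array(["A", "A", "A", "B", "B", "B", "A", "B", "A", "B"])

rates = {g: float(y_pred[group == g].mean()) for g in np.unique(group)}
print("positive-outcome rate per group:", rates)

# Disparate impact: ratio of the lowest to the highest selection rate.
# A common (context-dependent) rule of thumb flags ratios below 0.8.
di = min(rates.values()) / max(rates.values())
print(f"disparate impact ratio: {di:.2f}")
```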

Explainability

Why it is important: When systems are explainable, how the system came to a decision can be understood and explicated to various stakeholders.

How to measure:
  • Examine the documentation and communication processes
  • Examine how the model's features contribute to its outputs

How to mitigate:
  • Implement robust documentation procedures
  • Communicate system purposes, inputs, outputs, and limitations
  • Use tools to examine how features are weighted (see the sketch below)
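
As one way to examine how features are weighted, the sketch below inspects the learned coefficients of a linear model; for non-linear models, tools such as permutation importance play a similar role. The data and model are hypothetical:

```python
# Inspect how features are weighted in a simple linear model.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Coefficient magnitude indicates how strongly each feature pushes
# predictions toward the positive class; sign indicates the direction.
for i, coef in enumerate(model.coef_[0]):
    print(f"feature_{i}: weight = {coef:+.3f}")
```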

Efficacy

Why it is important: An efficacious system maximizes its value by performing efficiently and accurately.

How to measure:
  • Measure the performance of the system using metrics such as accuracy, precision, and recall (see the sketch below)
  • Measure system efficiency through run times or energy used

How to mitigate:
  • Improve model generalization
  • Regularly monitor performance
  • Collect additional training and test data
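
These measurement bullets map directly onto standard tooling; here is a minimal sketch with a hypothetical model and dataset, using run time as a simple efficiency proxy:

```python
# Measure predictive performance (accuracy, precision, recall) and a
# simple efficiency proxy (inference run time). Inputs are illustrative.
import time

from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

start = time.perf_counter()
y_pred = model.predict(X_test)
elapsed = time.perf_counter() - start

print(f"accuracy:  {accuracy_score(y_test, y_pred):.3f}")
print(f"precision: {precision_score(y_test, y_pred):.3f}")
print(f"recall:    {recall_score(y_test, y_pred):.3f}")
print(f"inference time: {elapsed:.4f}s for {len(X_test)} predictions")
```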

Privacy

Why it is important: Appropriate privacy practices can prevent data breaches and unlawful processing, and can ensure that individuals consent to the use of their data.

How to measure:
  • Assess the type of data the algorithm uses to generate an output and the amount of data stored and used by the model
  • Determine whether data minimization techniques have been applied
  • Survey how data is stored, retained, and deleted

How to mitigate:
  • Use decentralized/federated models
  • Anonymize or pseudonymize data (see the sketch below)
  • Reduce the amount of training data or perturb it
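
As one illustration of pseudonymization, the sketch below replaces a direct identifier with a salted hash so that records can still be linked without exposing the raw value. The field names and salt handling are hypothetical; in practice the salt would be managed as a secret, and stronger techniques (e.g., differential privacy for perturbation) may be warranted:

```python
# Pseudonymize a direct identifier with a salted hash.
# Field names are illustrative; store the salt as a managed secret.
import hashlib

SALT = b"replace-with-a-securely-stored-secret"

def pseudonymize(identifier: str) -> str:
    return hashlib.sha256(SALT + identifier.encode("utf-8")).hexdigest()

record = {"email": "jane@example.com", "age_band": "30-39"}
record["email"] = pseudonymize(record["email"])
print(record)  # the raw email no longer appears in the record
```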

What are the steps involved in AI Risk Management?

Risk mapping

An inventory is created of the systems in use, and each is evaluated for inherent risk based on the type of system and the context in which it is used. Systems with implications for critical decisions that can affect someone's life chances are considered high risk, while systems with few implications for someone's life chances are considered low risk. These risks can also be divided into the technical risks outlined above and governance risks such as accountability, management, compliance, and associated processes.
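
To make this concrete, here is a minimal sketch of such an inventory with a crude inherent-risk tiering rule; the systems, fields, and tiering logic are all hypothetical and would be defined by an organization's own framework:

```python
# A minimal AI system inventory with a crude inherent-risk tiering rule.
# Systems, fields, and the tiering logic are illustrative placeholders.
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    purpose: str
    affects_life_chances: bool  # e.g., hiring, credit, or healthcare decisions

def inherent_risk(system: AISystem) -> str:
    return "high" if system.affects_life_chances else "low"

inventory = [
    AISystem("cv-screener", "rank job applicants", affects_life_chances=True),
    AISystem("doc-tagger", "label internal documents", affects_life_chances=False),
]

for s in inventory:
    print(f"{s.name}: inherent risk = {inherent_risk(s)}")
```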

Verification

In-depth qualitative and quantitative assessments are carried out to gather additional information about any strategies that have already been put in place to measure or mitigate risks to determine the residual risk associated with the system across the different risk verticals. This could, therefore, result in the risk level of the system being updated from the inherent risk level, ideally remaining the same or being lowered.

Mitigation

Once the residual risk of a system has been evaluated and outstanding risks identified, steps can be taken to mitigate these risks. Recommendations provided in this step are bespoke and targeted, based on the specifications of the system and the specific risks associated with it.

Assurance

Once mitigations have been effectively implemented, the system is assured and can be given certification when conducted in accordance with specific frameworks.

Monitoring

Once assured, a system should be continuously monitored and reevaluated periodically, but particularly after major updates, to ensure that any changes made to the system do not introduce additional risks. Monitoring can also help to verify that mitigations continue to be effective.
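
One simple monitoring check is to compare live performance against the level recorded at assurance time and flag degradation for reevaluation. Here is a minimal sketch; the baseline, tolerance, and batch figures are hypothetical:

```python
# Flag performance degradation relative to an assurance-time baseline.
# The baseline accuracy, tolerance, and batches are illustrative.
BASELINE_ACCURACY = 0.91   # recorded when the system was assured
TOLERANCE = 0.05           # acceptable drop before escalation

def check_batch(correct: int, total: int) -> None:
    accuracy = correct / total
    if accuracy < BASELINE_ACCURACY - TOLERANCE:
        print(f"ALERT: accuracy {accuracy:.3f} below threshold; "
              "trigger reevaluation of the system and its mitigations")
    else:
        print(f"ok: accuracy {accuracy:.3f}")

check_batch(correct=880, total=1000)  # within tolerance
check_batch(correct=810, total=1000)  # triggers an alert
```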


Why is AI Risk Management important?

AI Risk Management is important for preventing harms resulting from the use of AI without the appropriate guardrails, and can help to increase trust in systems. AI Risk Management can also support with creating an inventory of all of the AI systems in use within an organization, creating greater visibility throughout the organization and helping to map out needs or redundancies.

There have already been a number of high-profile cases in which AI and automated systems have caused actual or potential harm or violated existing laws because the appropriate risk management practices were not implemented and compliance was not prioritized. AI risk management can help such mistakes to be avoided, upholding the integrity of AI systems and building trust.

Is AI Risk Management Legally required?

Under the EU AI Act, systems that are considered high-risk (those that can have a significant impact on the lives of users) will be required to comply with stringent obligations, one of which is implementing a risk management system. While the standards for the EU AI Act that will define exactly what the risk management system must look like are still being developed by CEN-CENELEC, the current text states that the risk management system must be continuous and iterative, and in place throughout the entire lifecycle of the system.

Moreover, there are laws emerging in the US that require risk management of AI systems, including Colorado’s SB 21-169, which prohibits unfair discrimination resulting from the use of external consumer information sources, algorithms, and predictive models used in insurance practices. Specifically, the law restricts insurers’ use of external consumer information and data sources - such as credit scores, social media habits, locations, purchasing habits, home ownership, educational attainment, occupation, licensures, civil judgments, and court records - as well as algorithms and predictive models using this data. The Colorado Commissioner of Insurance is required to consult with stakeholders to develop rules for specific insurance types and insurance practices, and the rulemaking process is currently underway for life insurance and private passenger auto insurance.

Regulation 10-1-1 regarding life insurance went into effect on 14 November 2023, under which life insurers using external consumer information sources, algorithms, and predictive models must establish a governance and risk management framework to support policies, procedures and systems to determine whether the use of such data could result in unfair discrimination. The insurer must also create and document a process for selecting and overseeing all external resources and third-party vendors as part of this framework.  

What about the NIST AI RMF?

The National Institute of Standards and Technology's AI Risk Management Framework 1.0 (AI RMF 1.0) is a voluntary framework that aims to reduce and manage the harms AI systems can cause to people, organizations, and the wider ecosystem. Essentially, the AI RMF provides a guidance-based framework for organizations developing and deploying trustworthy AI systems across use cases and sectors.

Within this framework, NIST identifies 7 key characteristics of trustworthy AI systems: safe; secure and resilient; explainable and interpretable; privacy-enhanced; fair, with harmful bias managed; accountable and transparent; and valid and reliable.

Guidance for implementing these 7 characteristics is provided in the RMF Core, which sets out a series of actions organizations can adopt and adapt to effectively manage the risks of their algorithms. Specifically, the core is based around four key pillars:

Govern

Establishing policies, procedures, accountability structures, and engagement protocols, as well as prioritizing workforce Diversity, Equity, and Inclusion considerations

Map

Categorizing and mapping AI systems contextually, comparing a system's capabilities, risks, benefits, and relative impact with appropriate benchmarks

Measure

Applying context-specific metrics and methods to evaluate a system's trustworthy characteristics over time, as well as critiquing analytic methods

Manage

Using such methods and metrics to triage and respond to AI risks, manage third-party risks and benefits, and document and monitor risk mitigation plans

When is the best time to implement AI Risk Management?

It is never too early or too late to implement AI Risk Management – it can be applied throughout the lifecycle of AI systems. During the design and development stage, AI risk management can help to ensure that safeguards are built into a system by anticipating and mitigating possible risks, and that there are failsafes in the event that something does go wrong.

Pre-deployment, AI risk management can ensure that AI systems have the appropriate documentation and means for transparency and accountability to enable confident use of the system and a clear route to recourse if any issues arise.  Once deployed, AI risk management is important in the ongoing monitoring of systems to ensure that they perform and are being applied as intended, and so that any negative consequences are spotted and mitigated early, before they potentially result in a serious impact or become a liability.  

Schedule a demo with us to get more information
