24/03/2025
Insights Blog

The European Insurance and Occupational Pensions Authority (“EIOPA”) has recently published a consultation paper on its draft opinion on artificial intelligence (“AI”) governance and risk management (PDF, 533KB).

Although the only insurance-specific high-risk use case in the AI Act is the use of AI for risk assessment and pricing in relation to life and health insurance, EIOPA emphasises that other uses of AI systems in insurance will continue to be subject to existing sectoral legislation. The objective of the draft opinion is to provide clarity on the main principles and requirements in insurance sector legislation that should be considered in relation to AI systems used by insurers that are not prohibited or categorised as high-risk under the AI Act.

Proportionality

Solvency II, the IDD and DORA already require (re)insurers to have in place effective systems of governance and risk management that are proportionate to the nature, scale and complexity of their operations, together with product oversight and governance arrangements. The existing frameworks can be adapted to deal with AI systems by (re)insurers taking two steps:

  1. Conduct an impact assessment – insurers should assess the risks associated with their AI use cases and develop governance and risk management measures that are adequate and proportionate to them, taking into account criteria such as large-scale data processing, the sensitivity of the data involved, the extent to which the AI can act autonomously and the potential adverse impact the system could have. Insurance-specific issues should also be considered, such as whether a particular line of business is mandatory insurance or important for financial inclusion, prudential considerations (such as whether the AI is used in critical activities that could affect business continuity or the financial position of an undertaking), legal obligations and reputational risks.
  2. Develop proportionate measures – taking into account the nature, scale and complexity of the AI use case, (re)insurers should develop proportionate measures to ensure responsible use of the AI system. EIOPA emphasises that the proportionality principle is applicable to all the governance and risk management measures described in the draft opinion. Measures should be tailored to each use case – for example, where the output from an AI system cannot be comprehensively explained, measures such as human oversight may be used to compensate for this.

Governance and risk management system

Responsible use of AI depends on a combination of risk management measures. (Re)insurers should define and document the approach to the use of AI within the business, and the relevant policy should be regularly reviewed, particularly as the use of AI systems changes over time. (Re)insurers should consider the following key areas (based on EIOPA’s AI Governance Principles, PDF, 1,227KB), which are complementary and inter-dependent:

  1. Fairness and ethics – (Re)insurers should adopt a customer-centric approach to the use of AI to ensure customers are treated fairly and in accordance with their best interests. The outcomes of AI systems should be regularly monitored and audited, and adequate mechanisms should be in place to enable customers to seek redress when they have been negatively affected.
  2. Data governance – (Re)insurers should implement a data governance policy which is aligned with the potential impact of the AI use case. Data used to train and test AI systems must be accurate (no material errors and free of bias), complete (representative of the population and sufficient historical information), and appropriate (consistent with the purposes for which it is to be used).
  3. Documentation and record keeping – (Re)insurers should keep appropriate records of the training and testing data and modelling methodologies to ensure reproducibility and traceability.
  4. Transparency and explainability – (Re)insurers should adopt the measures necessary to ensure the outcomes of AI systems can be meaningfully explained (e.g. by avoiding “black box” algorithms or by using complex AI systems only for limited purposes, such as fine-tuning mathematical models). Supplementary explainability tools may be used, but the limitations of these tools must also be considered. Explanations should also be adapted to the audience (e.g. clear, simple and non-technical language for consumers vs. comprehensive explanations for regulators and auditors).
  5. Human oversight – the board remains responsible for the use of AI within the business, relying on the compliance and internal audit functions to ensure that such use complies with applicable law. (Re)insurers may decide to appoint AI officers to provide oversight and advice to other functions. Human oversight should also contribute to the removal of possible biases.
  6. Accuracy, robustness and cybersecurity – these factors should be proportionate to the nature, scale and complexity of the AI system. Such systems should also be resilient against attempts by unauthorised third parties to alter their use.

EIOPA is holding an online public hearing on the consultation paper on 8 April 2025 (12pm-2:30pm GMT), with responses to be submitted via an online survey by 12 May 2025.