22/11/2024

The EU AI Office released the first draft of the General-Purpose AI Code of Practice (the “Code”) on 14 November 2024. The Code is intended to help Providers govern the development and deployment of AI in an ethical, responsible and proactive manner ahead of the August 2025 compliance deadline for General-Purpose AI (“GPAI”) models.

Key Points to Note:

  • The purpose of the Code is to support the compliance of Providers of GPAI models with the EU AI Act.
  • The Code details rules on both Transparency and Copyright for all Providers of GPAI models.
  • For Providers of more complex and higher-risk GPAI models, the Code also outlines a taxonomy of systemic risks, risk assessment measures, and technical and governance mitigation measures which must be applied.
  • The Code is expected to complete four drafting rounds, with the final version to be published by 1 May 2025.

Key Elements of the Code:

  • Transparency: Providers must maintain comprehensive documentation covering the design, development and deployment of their AI systems; disclose detailed information on the datasets and methodologies used in AI model development, particularly for high-risk applications; and establish complaint-handling procedures to address concerns from rightsholders.
  • Copyright Compliance: Providers must put in place a Copyright Policy that is consistent with EU copyright law. Measures should also be established to ensure that third-party datasets used for model development adhere to EU copyright law.
  • Risk Assessment for GPAI Models with Systemic Risk: Providers must conduct continuous, thorough risk assessments throughout the AI lifecycle to identify and mitigate systemic risks, using robust risk analysis methodologies and mapping risk indicators.
  • Risk Mitigation for GPAI Models with Systemic Risk: Providers must adopt, implement, and make publicly available robust safety and security frameworks to proactively assess and mitigate systemic risks associated with AI models.
  • Governance Risk Mitigation for GPAI Models with Systemic Risk: Providers must establish robust governance frameworks to ensure ongoing oversight and monitoring of AI development and deployment. Such frameworks are likely to rely heavily on an appropriate Risk Committee, which will be tasked with identifying, monitoring, managing and mitigating the systemic risks posed by the Provider’s GPAI models.
  • Incident Reporting and Whistleblowing: Providers are required to report serious incidents to the European AI Office and implement whistleblowing protections to encourage transparency and accountability.

What Next for Providers?

The deadline for compliance with the EU AI Act’s requirements for GPAI models is August 2025. Providers, especially those whose GPAI models pose systemic risk, should:

  • Create an AI Inventory: While an inventory is not an obligation under the EU AI Act, it would be nearly impossible for Providers to identify and address risks arising from GPAI models without knowing where those models exist within the organisation.
  • Categorise and Prioritise Risks: Systemic risks, including safety, security, and ethical considerations, must be identified and ranked by severity.
  • Identify Risks Arising: The use of AI tools and systems can amplify existing risks, particularly Model Risk, Data Governance Risk and Third-Party Risk. Providers should ensure that all risks attaching to their GPAI models are identified.
  • Integrate Risk Ownership: The risks arising from the development and deployment of AI should be clearly assigned to a suitably experienced and qualified management body to ensure accountability at the highest levels of the organisation. Leadership must actively embed AI governance into daily operations by assigning clear responsibilities at the executive level and implementing robust monitoring frameworks.
  • Allocate Dedicated Resources: Adequate resources must be designated for managing systemic risks, and governance frameworks should be regularly reviewed and updated to reflect evolving challenges.
  • Promote Proactive Governance: Providers must implement governance practices that build accountability and trust, ensuring the responsible development and deployment of AI systems.

How can we help? 

We offer specialised legal and consulting expertise and support to help organisations implement the requirements of the EU’s Digital Strategy, including the EU AI Act.

We provide the necessary legal expertise and operational know-how to guide you through the process, including:

  • Assisting with the design and development of required policies and procedures, including Acceptable Use and Copyright Policies.
  • Advising on the development and implementation of robust risk assessment and governance frameworks.
  • Providing bespoke training programmes at Board and/or employee level to ensure awareness and understanding of ethical AI practices and compliance requirements.
  • Supporting your implementation projects as part of the wider EU Digital Strategy, e.g. the EU AI Act, the Digital Services Act, the Digital Markets Act, and the EU Data Act.

If you would like to discuss any of the services mentioned above, please get in touch with your usual Arthur Cox contact(s), or any member of the Governance and Consulting Services or Technology and Innovation Groups.