The EU AI Act
The AI Act represents the first comprehensive legal framework for artificial intelligence, addressing associated risks and obligations for providers.
The AI Act sets clear requirements for AI developers and deployers, aiming to reduce administrative and financial burdens, particularly for SMEs. Part of a broader policy package, the AI Act ensures the safety and fundamental rights of individuals and businesses, while promoting AI innovation and investment across the EU. The regulation adopts a risk-based approach, categorising AI systems into four levels of risk, with stringent obligations for high-risk applications to ensure transparency, accountability, and human oversight.
Earlier this year, our Technology and Innovation Group created a five-part video series on the AI Act, led by Rosemarie Blake, collated into this podcast episode, which discusses the AI Act, the timeline of obligations, general purpose AI, enforcement, GDPR interplay and compliance obligations for actors in the AI value chain.
Introduction
The AI Act represents the first comprehensive legal framework for artificial intelligence, addressing associated risks and obligations for providers. It sets clear requirements for AI developers and deployers, aiming to reduce administrative and financial burdens, particularly for SMEs. Earlier this year, our Technology and Innovation Group created a five-part video series on the AI Act, led by Rosemarie Blake, collated into this podcast episode, which discusses the AI Act, the timeline of obligations, general purpose AI, enforcement, GDPR interplay and compliance obligations for actors in the AI value chain.
Rosemarie Blake
Hello and welcome to the first of our bite-sized series on the AI Act. Today we’re going to be looking at an introduction to the AI Act and I’m joined by my colleague Ciara Anderson from the Technology and Innovation Group.
Ciara Anderson
Hi.
Rosemarie Blake
So before we get into the detail on the scope of the AI Act and who will be subject to it, it might be helpful to give a brief overview of the Act.
Ciara Anderson
Yes, exactly. So we’ll start with the timeline. The Act was published on 12 July 2024 and it entered into force on 1 August. It has a graduated timeline, so it’s generally applicable in 24 months, which brings us to 2 August 2026, with some exceptions. The provisions on prohibited AI systems come into force sooner, within six months, which brings us to 2 February 2025. Separately, the provisions on general purpose AI will be applicable in 12 months, so 2 August 2025. And the Act will apply to certain high-risk AI systems, those embedded in products covered by existing EU harmonisation legislation, 36 months after entry into force, so full compliance for those systems is required by 2 August 2027.
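Each of those dates is a fixed offset from entry into force on 1 August 2024, with the obligations applying from the day after the offset elapses. As a purely illustrative sketch (assuming Python and the third-party python-dateutil package), the milestones can be reproduced like this:

```python
# Illustrative only: deriving the application dates above from the
# entry-into-force date. Obligations apply from the day after each
# offset elapses. Requires the third-party python-dateutil package.
from datetime import date
from dateutil.relativedelta import relativedelta

ENTRY_INTO_FORCE = date(2024, 8, 1)

MILESTONES_IN_MONTHS = {
    "prohibited AI practices": 6,         # 2 February 2025
    "general purpose AI provisions": 12,  # 2 August 2025
    "general application": 24,            # 2 August 2026
    "certain high-risk AI systems": 36,   # 2 August 2027
}

for obligation, months in MILESTONES_IN_MONTHS.items():
    applicable_from = ENTRY_INTO_FORCE + relativedelta(months=months, days=1)
    print(f"{obligation}: applicable from {applicable_from:%d %B %Y}")
```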
Rosemarie Blake
A lot of dates to contend with there, Ciara, thank you for bringing us through those. It’s also worth mentioning that there’s a grandfathering provision in relation to high-risk AI: high-risk AI systems already in use prior to 2 August 2026 won’t be subject to compliance unless there are significant changes in their design, and high-risk AI systems used by public authorities will be required to comply by 2 August 2030. So what about the new governance architecture?
Ciara Anderson
So there are a couple of governing bodies. First, we have the AI Office, which sits within the Commission. Its remit is enforcing the common rules of the AI Act across the EU, and it’s also monitoring general purpose AI models. It will be supported by a scientific panel of independent experts that will help it in its enforcement activities. Then, separate to the AI Office, we have the AI Board, which is made up of Member State representatives. They’re charged with ensuring consistent application of the Act, and they’re empowered to issue guidelines and make recommendations, for example. There’s also going to be an advisory forum of stakeholders, who might be, for example, members of industry, there to provide technical expertise both to the Board and to the Commission.
Rosemarie Blake
Thanks very much Ciara for that breakdown. So perhaps we can discuss who exactly the AI Act will impact. Can you tell us a little bit about who the operators are throughout the AI value chain?
Ciara Anderson
Sure. So there are a few different roles, different hats that an organisation can wear, and different provisions apply to each. At the very beginning of the value chain, as you say, you have a provider of an AI system, then you have a deployer of an AI system, and then you have the concepts of an importer and a distributor of an AI system. You also have the concept of an authorised representative in the EU, which is applicable for non-EU entities.
Rosemarie Blake
Okay, understood. So that’s all the operators. Can you tell us a bit more about each of those operators and their roles?
Ciara Anderson
Sure. So again, starting with the provider. A provider is an entity that either develops an AI system itself or commissions its development, with a view to placing that system on the market under its own name or trademark, whether free of charge or for payment. A deployer is an entity that uses an AI system, so it doesn’t create it, but it uses it under its own authority. The only small exception to that is personal or non-professional use, which is similar to the household exemption under the GDPR. So that’s our provider and our deployer. Then you have an authorised representative within the EU: the authorised rep has agreed to carry out the provider’s obligations within the EU, and I think that’s a concept we have under other laws too. Then you have an importer, which imports an AI system, essentially someone else’s system, into the EU, and you have a distributor, which is essentially an intra-EU distributor of an AI system. So there are a few different hats an organisation can wear.
Rosemarie Blake
Thanks very much, Ciara. So let’s talk about extraterritorial effect. What does the AI Act say in terms of operators who are outside the EU?
Ciara Anderson
Great question. So, as is probably evident from the discussion of the rules, it does have extraterritorial effect. Article 2 explicitly says that the Regulation applies to providers of AI systems who place them on the market or put them into service in the EU, wherever they are based. So that’s the extraterritorial effect, but it actually goes even further: providers and deployers based outside the EU will also be subject to the Act where the output produced by their systems is used by people in the EU.
Rosemarie Blake
Thanks, Ciara and I guess that’s become the theme of lots of EU legislation to ensure that anything coming into the European market is aligned with European policy and values.
Ciara Anderson
Exactly, exactly.
Rosemarie Blake
So when we look at the responsibilities of the various actors throughout the value chain, clearly it’s vital for an organisation to figure out where they sit on that chain. Is it possible to be both the provider and the deployer under the AI Act?
Ciara Anderson
Yeah. The same organisation may wear different hats at the same time. So you might create an AI system, in which case you’d be a provider, but then you might use your own system within your organisation, in which case you’d be a deployer.
Rosemarie Blake
Thanks very much for that. In terms of other legislation that’s coming down the track, it’s probably worth mentioning the measures proposed by the EU to make sure the product liability regime is up to scratch when it comes to AI: the AI Liability Directive and the proposed revisions to the Product Liability Directive. Can you talk us through a little of what those entail?
Ciara Anderson
Yep. So the AI Liability Directive was proposed as a complement to the AI Act. It was intended to allow claimants to recover for non-contractual, fault-based damage caused by a breach of safety rules or by unlawful discrimination arising from algorithms embedded in AI systems. However, it hasn’t been approved or come to pass yet, and it’s been subject to a little bit of doubt; the Commission is currently doing a study to determine whether there really is a gap between the AI Act and the revised Product Liability Directive. I guess the thinking is that perhaps a specific new law isn’t actually required for AI liability and it might otherwise be covered.
Rosemarie Blake
Grand, so the exceptionalism piece, it might not be quite necessary when it comes to AI. We might just be home and dry when it comes to the revised PLD. So what about the PLD then?
Ciara Anderson
So the Product Liability Directive has been around for 40 years, so it definitely needed a refresh, particularly to address the digital consumer products that we all use. This revised Product Liability Directive was approved by the European Parliament in March just gone, and it’s expected to be approved shortly by the Council. The whole idea is that the amendments are designed to ensure that the product liability regime applies to the specific features of AI systems used in consumer products. The concept of a product under the Product Liability Directive means all movables, even if they’re interconnected with or integrated into something else, so that would cover things like digital manufacturing files, as an example, and also software. Software isn’t specifically defined under the Product Liability Directive, but the recitals make clear that it is covered, which would bring AI systems within scope.
Rosemarie Blake
Thanks, Ciara and is there anything else we should note about those amendments?
Ciara Anderson
The definition of damage has also been expanded for this digital world, so it will cover the loss or corruption of data. The Directive now explicitly covers AI systems. It expands the scope of liability and the range of parties that may be liable, and it also adjusts some of the provisions applicable to the global supply chain. So all in all, it’s making it easier for claimants and representative bodies to bring claims, on an individual or collective basis, for damage caused by AI systems that are part of consumer products.
Rosemarie Blake
Thanks very much, Ciara. So I guess we’ll have to watch this space when it comes to the PLD. That’s all we have time for today. Thank you very much to Ciara for joining us.
Ciara Anderson
Thanks.
Rosemarie Blake
And thank you very much for listening.
Rosemarie Blake
Hello and welcome to this instalment of our video series on the Artificial Intelligence Act. Today we’re going to be doing a deep dive into the requirements to do with general purpose AI. Joining me for this discussion is my colleague Rory Curtis in the Technology and Innovation Group.
Rory Curtis
Thanks, Rose.
Rosemarie Blake
So, before we get into the detail of the general purpose AI requirements, can you set the scene in terms of how general purpose AI is set out in the Act according to the EU legislators?
Rory Curtis
Yeah. So general purpose AI, or “GPAI” as it’s often referred to, is sometimes also called foundation models, although the term foundation models hasn’t actually made it into the final text of the AI Act. Conceptually speaking, a general purpose AI model is a model that is capable of performing a wide variety of tasks and can easily adapt to new situations. It is generally trained on large volumes of data and can be used for lots of different tasks without needing much fine-tuning. When you look at the actual text of the Act, the definition of GPAI uses really expansive terms like “significant generality”: it is a model capable of performing a wide range of distinct tasks that can be integrated widely downstream. These are really broad terms, and it remains to be seen how the courts and the competent supervisory authorities will interpret them.
Rosemarie Blake
Yeah, and I suppose that largely reflects the fact that, back in 2021, the European Commission was more focused on regulating AI systems with a specific, tangible purpose. Then, as the legislation made its way through Parliament, in November 2022 we had the explosion of large language models like ChatGPT. So now we’ve got Chapter 5 of the AI Act, which is there to broadly address general purpose AI requirements.
Rory Curtis
Yeah, and I would make the point that although it’s very expansive, it demonstrates a willingness on the part of the EU to engage in the regulation of powerful AI models and to ensure that AI is safe and trustworthy, and that we don’t end up with AI models that pose large-scale systemic risks or bias. I’d also note that for the large majority of GPAI models, the obligations that apply will be relatively limited. It’s only in respect of AI models that are said to pose systemic risks that a significant number of obligations will apply.
Rosemarie Blake
Yeah. Thanks, Rory and so I guess looking at a more aerial view of the compliance obligations, can you tell us how the concept of general purpose AI fits into the overall compliance obligations and the risk based approach as set out in the AI Act?
Rory Curtis
Sure. As we mentioned before, there’s an entire chapter of the Act dedicated to general purpose AI models and sort of similar to what we’ve said about high-risk AI, most of the obligations apply to providers of general purpose AI models. So when you look at the obligations that apply, it’s tiered so in respect of all general purpose AI model providers, there’s a certain base level of obligations that will apply. So this includes regulatory cooperation, transparency, compliance with existing copyright laws, and then when you go to general purpose AI models with systemic risks, there’s a greater level of obligations that apply. So it includes cybersecurity requirements, incident reporting, performance of model evaluations and mitigation of risks.
Rosemarie Blake
Thanks, Rory. So it’s clear that that systemic risk consideration is going to be quite important and I wonder, could you tell us a bit more about the threshold at which a general purpose AI model is considered to pose systemic risk?
Rory Curtis
Yeah. So a general purpose AI model is considered to have systemic risk in two broad situations. The first is where the AI model has high impact capabilities, and the criteria for what counts as high impact capabilities are still to be determined. The second is where it is considered by the Commission to have systemic risk, a view based on advice from technical experts.
Rosemarie Blake
Okay, so we’re likely to see more guidance from the Commission in that space in terms of what high impact capabilities look like and then we also have a presumption of systemic risk in Article 51 and I wonder, could you talk us through how that plays out?
Rory Curtis
Sure. So this applies when the general purpose AI model is trained using a cumulative amount of compute greater than 10 to the power of 25 floating point operations (FLOPs). In that situation, the AI model will be presumed to have systemic risk. This threshold is based on existing models such as OpenAI’s GPT-4 and Google DeepMind’s Gemini. The reason the threshold is set at this level is that we don’t really know much about the capabilities of AI models that go beyond it, and if this FLOPs level is reached, the provider of the AI model is required to notify the Commission within two weeks.
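For a rough sense of scale, training compute for large dense models is often approximated as 6 × parameters × training tokens. The sketch below uses that approximation with entirely hypothetical figures to check a model against the 10^25 FLOPs presumption; it is purely illustrative, not a compliance methodology:

```python
# Purely illustrative: checking a model against the AI Act's 10^25 FLOPs
# presumption of systemic risk, using the common ~6 * parameters *
# training-tokens approximation for dense transformer training compute.
# The model figures below are hypothetical.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Rough estimate: ~6 FLOPs per parameter per training token."""
    return 6 * n_parameters * n_training_tokens

# Hypothetical model: 1 trillion parameters, 10 trillion training tokens.
flops = estimated_training_flops(1e12, 1e13)
print(f"Estimated training compute: {flops:.1e} FLOPs")

if flops > SYSTEMIC_RISK_THRESHOLD_FLOPS:
    # Presumption of systemic risk applies; the provider must notify
    # the Commission within two weeks.
    print("Above 1e25 FLOPs: presumed to have systemic risk.")
else:
    print("Below the threshold: no presumption of systemic risk.")
```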
Rosemarie Blake
Thanks, Rory. So the basic chain of logic there is that more FLOPs equals more power, and that’s where the increased compliance burden attaches. Then, when it comes to general purpose AI models, there’s an exception for general purpose AI that’s based on an open source model. Could you talk to us a bit more about what that exception looks like?
Rory Curtis
Yeah. So open source GPAI models are exempt from certain transparency obligations. For example, they don’t have to provide the up-to-date technical documentation that would enable downstream providers to integrate the AI into their own systems, but they do still have to provide certain other information, such as general information on the content used to train their models, and they’ll also have to comply with existing copyright law and those sorts of general obligations as well.
Rosemarie Blake
Thanks, Rory. So if you’re looking to benefit from the exemption, you’d need to show, one, that your model doesn’t pose systemic risk and, two, that it meets the open source criteria, with the relevant details made publicly available. We might wrap up there by looking briefly at who is going to oversee the regulation of general purpose AI models in the new governance architecture set out by the AI Act.
Rory Curtis
Yeah, so overall we have a number of bodies. The main one regulating these powerful AI models will be the EU AI Office, which will also be responsible for setting and disseminating standards and best practices. It’ll be assisted in this respect by an independent panel of technical experts. Then you’ll have the EU AI Board, which is made up of representatives from each Member State; they’ll be responsible for coordinating with the Member States and assisting them with implementing the AI Act in their own jurisdictions. Additionally, there will be an advisory forum of stakeholders, which will assist and advise the EU AI Board in complying with its obligations.
Rosemarie Blake
So we can look forward to a considerable amount of secondary legislation coming down the tracks. Thanks very much for the discussion today. That’s all we’ve got time for in this bite-sized session of our video series. Thank you very much for tuning in. I’m sure we’ll have a lot more to discuss as the graduated timeline of the AI Act takes effect.
Rosemarie Blake
Hello and welcome to this episode of our bite-sized series on the AI Act. Today we’re going to be talking about enforcement mechanisms under the Act and to do this I’m joined by my colleague Lukas Mitterlechner in the Technology and Innovation Group.
Lukas Mitterlechner
Thanks, Rose.
Rosemarie Blake
So, Lukas, to level set, the Act relies on a double-layered enforcement mechanism split between the Commission and the Member States. Perhaps you could talk to us about how that’s set up?
Lukas Mitterlechner
Yes, sure. So, as you’ve said, the European Commission has the overall responsibility to ensure the correct implementation of the AI Act, including through additional secondary legislation such as codes of conduct and further delegated acts. The Commission also has exclusive powers to supervise and enforce the provisions relating to general purpose AI, and the AI Office will be responsible for implementation in relation to general purpose AI. The Office will also have assistance from other bodies, namely a scientific panel of independent experts, the European Artificial Intelligence Board, which will have representatives from each Member State, and a forum of stakeholders.
Rosemarie Blake
Thanks, Lukas. So that’s EU level. What about enforcement at Member State level?
Lukas Mitterlechner
Yes. So in tandem with the EU governance architecture, Member States will have to establish or appoint one or more competent authorities to act as a market surveillance authority and a notifying authority, respectively. They’ll be tasked with implementing the rules governing AI systems at the national level under the coordination of the Commission. So it doesn’t replicate the one-stop-shop mechanism found in the GDPR; instead, it’s more aligned with how existing product harmonisation rules are enforced.
Rosemarie Blake
Okay, thanks for that and in terms of existing market surveillance authorities, what does the Act say about that?
Lukas Mitterlechner
Yeah, so in respect of product safety, for products covered in Annex 1, so that’s your medical devices, civil aviation and children’s toys, the existing MSAs for those products will be designated as the MSAs under the AI Act. Then for high-risk AI systems used in areas like law enforcement, border management, justice and democracy, the Act provides that Member States shall designate either data protection authorities or another competent authority protecting fundamental rights.
Rosemarie Blake
Got it. What about AI and financial services?
Lukas Mitterlechner
So for financial services, the MSA will be the national financial supervisory authority, for high-risk AI used or developed by a regulated entity in direct connection with the provision of financial services. In Ireland, that authority would be the Central Bank of Ireland. Member States can also derogate from either of those positions, provided that coordination is ensured with existing regulatory bodies at the national level.
Rosemarie Blake
Okay, thanks very much for that. So there’s a potential expansion of scope there for existing market surveillance authorities. So can you talk us through the timeline of the MSAs and when they’ll be designated?
Lukas Mitterlechner
Yes, so we’ll find out in the next 12 months. So before the 2nd of August next year, we’ll know who the national authorities are for each member state.
Rosemarie Blake
Okay, 2nd of August 2025 and once they’re designated, can you talk to us a bit more about what their obligations will look like?
Lukas Mitterlechner
Yeah, sure. So they’ll have reporting obligations to the Commission and the relevant national competition authorities on competition law matters, and then separately they’ll have to report on the use of prohibited practices and the measures taken against them.
Rosemarie Blake
Okay, so they’re going to get information based on a reporting schedule and then in terms of their investigative powers, what do those look like?
Lukas Mitterlechner
So at a high level, MSAs and notifying authorities have investigative powers and can send reasoned requests for information to deployers and providers. They can also carry out evaluations of AI systems which they think present a risk, and they can re-evaluate systems classified as non-high-risk when they have reason to believe that those AI systems are in fact high-risk.
Rosemarie Blake
Okay, so pretty broad powers then to commence evaluations and on those evaluations, what happens in the event of a finding of non-compliance?
Lukas Mitterlechner
There are a few different options available depending on the infringement, like corrective actions, withdrawals or product recalls and the imposition of penalties, but if non-compliance persists, the MSA can also take further measures to prevent the AI system from being on the market. It’s also worth mentioning that if an authority considers that an operator’s non-compliance isn’t restricted to its own jurisdiction, there’s a duty to inform the Commission and the other Member States of its evaluation and the measures taken against the operator.
Rosemarie Blake
Okay, so there’s an element of cross border notification there. So in terms of enforcement activity, can you just confirm what role the Commission is going to play?
Lukas Mitterlechner
Yeah. So the Commission will coordinate the action of the market surveillance authorities and oversee the measures that they take. If the MSA of a Member State raises an objection against a measure taken by another authority, or if the Commission thinks that any measure is contrary to EU law, the Commission can evaluate this objection and decide whether the measure is justified or not. MSAs and the Commission can also propose joint investigations regarding high-risk AI systems.
Rosemarie Blake
Okay, thanks for that. So presumably once there’s been an investigation and a finding of infringement, consideration will then be given to penalties and fines. Can you talk us through what the tiered levels of fines look like?
Lukas Mitterlechner
Yes. So fines can be imposed for breaches of the Act, and these fines vary based on the obligations breached. For non-compliance with the prohibitions on certain AI practices, fines can be up to €35 million or 7% of the company’s annual worldwide turnover in the preceding financial year, whichever is higher. For non-compliance with obligations relating to high-risk systems, fines can be up to €15 million or 3% of the company’s annual worldwide turnover in the preceding year, whichever is higher. Lastly, for supplying incorrect or misleading information to a national competent authority in reply to a request for information, fines can be up to €7.5 million or 1% of the company’s annual worldwide turnover in the preceding year, whichever is higher. Several factors will be taken into account when deciding the severity of the fine, such as the nature, gravity and duration of the infringement, as well as the number of people affected. Member States can also lay down their own rules on penalties and other enforcement measures, which may include warnings and non-monetary measures.
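To make the “whichever is higher” mechanics concrete, each tier’s maximum fine is simply the greater of a fixed amount and a percentage of worldwide annual turnover, as in this illustrative sketch with hypothetical figures:

```python
# Illustrative sketch of the AI Act's tiered fine caps: each cap is the
# higher of a fixed amount and a percentage of the company's annual
# worldwide turnover for the preceding financial year.

FINE_TIERS = {
    "prohibited_practices": (35_000_000, 0.07),   # up to EUR 35m or 7%
    "high_risk_obligations": (15_000_000, 0.03),  # up to EUR 15m or 3%
    "incorrect_information": (7_500_000, 0.01),   # up to EUR 7.5m or 1%
}

def max_fine(tier: str, annual_worldwide_turnover: float) -> float:
    """Return the maximum possible fine for a tier: whichever cap is higher."""
    fixed_cap, turnover_pct = FINE_TIERS[tier]
    return max(fixed_cap, turnover_pct * annual_worldwide_turnover)

# Hypothetical company with EUR 2bn turnover breaching the prohibitions:
print(f"EUR {max_fine('prohibited_practices', 2_000_000_000):,.0f}")
# -> EUR 140,000,000 (7% of turnover exceeds the EUR 35m fixed cap)
```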
Rosemarie Blake
Thanks, Lukas. So that’s similar to what we’ve seen under the GDPR, with aggravating and mitigating factors taken into account when imposing penalties. What about the right of complaint, then, or the right to compensation? What does the Act say about that?
Lukas Mitterlechner
Yeah, so on the individual compensation side, that’s not actually found in the AI Act. There’s no private right of action like the right to compensation under Article 82 of the GDPR; that’s more the remit of additional legislation such as the proposed AI Liability Directive and the revised Directive on liability for defective products. In terms of collective actions, the Act will be added to the list of legislative instruments actionable on the basis of the Representative Actions Directive, meaning it can also be used as a basis for collective damages actions in Europe. With regard to complaints, an individual or an organisation who believes that the rules of the Act aren’t being complied with can file a complaint with the relevant authority under Article 85 of the Act, and that authority will then have to take the complaint into account when conducting its monitoring activities.
Rosemarie Blake
Thanks very much, Lukas, for taking us through that. That brings us to the end of this episode of our bite-sized series on the AI Act. Thank you very much for listening.
Rosemarie Blake
Hello and welcome back to this episode of our video series on the AI Act. Today we’re going to be discussing the interplay between the AI Act and the GDPR, and to provide some insight into this, I’m joined by my colleague Aoife Coll in the Technology and Innovation Group.
Aoife Coll
Thanks, Rose.
Rosemarie Blake
So Aoife, we know one of the main differences between the AI Act and the GDPR is the scope of application. The AI Act applies to actors across the AI value chain, like providers and deployers, as well as other participants like distributors and importers, and it applies to those actors when they place an AI system on the market or put it into service in the EU, or where its output is used in the EU, irrespective of the actor’s location. In contrast, the GDPR applies to controllers and processors who process personal data in the EU, or who monitor the behaviour of data subjects in the EU or offer goods and services in the EU.
Aoife Coll
Essentially, this means that AI systems that process non-personal data, or AI systems that process personal data of data subjects located outside the EU, can fall within the scope of the AI Act but not within the scope of the GDPR.
Rosemarie Blake
Okay, so if we turn to the interplay between the two pieces of legislation, is there anything else we should think about in terms of how they’re both set up to operate at a fundamental level?
Aoife Coll
Yes. So as most people will know, the GDPR is designed to protect the fundamental right to data protection, and data subjects can exercise their rights against controllers in respect of the processing of their personal data. In contrast, the AI Act is more closely modelled on the product harmonisation regime, and as such it takes a harm-based approach.
Rosemarie Blake
Okay, thanks for that, Aoife. Obviously the GDPR also sets up a particular right in respect of automated processing: the right under Article 22 not to be subject to a decision based solely on automated processing, including profiling. How does the AI Act build on this?
Aoife Coll
Yes. So the concept of human oversight is referenced in the Act in terms of the design of an AI system, and it makes clear that that oversight should apply across the whole life cycle of an AI system and that it should be factored in before the system is put onto the market. This will also apply to automated profiling, which is always considered a high-risk use case under the AI Act.
Rosemarie Blake
Okay, so where you might already be conducting profiling that requires a human in the loop for decisions made about natural persons, you’re going to have to build on that compliance lift, as you’ll be within the high-risk use cases for the purposes of the AI Act.
Aoife Coll
Exactly.
Rosemarie Blake
Thanks very much for explaining that, Aoife. So let’s discuss what’s been happening more recently in the privacy space in the context of deploying general purpose AI models.
Aoife Coll
Yes, so this is definitely a hot topic at the moment. We’ve seen the Garante’s enforcement action in Italy in respect of ChatGPT, and closer to home we’ve seen the Irish Data Protection Commission’s recent engagement with X in respect of its AI model Grok, and more recently the DPC has opened an inquiry into Google around its compliance with data protection impact assessment obligations with regard to its AI system. So when controllers are using personal data to train an AI model, it’s essential that they are able to demonstrate to the regulator that they’ve factored in certain compliance measures, such as transparency and having an adequate lawful basis. Depending on the model itself, you might need to think about other things: for example, if it’s a model that’s often used by children, you might need to consider age-gating.
Rosemarie Blake
Thanks very much for explaining that, Aoife. I think a good example of a recent decision in respect of age-gating, children and regulatory expectations for AI models that potentially interact with children is the Snap case and the ICO’s decision, which we saw earlier in the summer. A preliminary enforcement notice was issued by the ICO in that case, concluding that Snap hadn’t met the requirements of Article 35 in relation to its chatbot, which interacted primarily with 13 to 17 year old users. So as I mentioned, it’s a good blueprint for organisations looking to conduct DPIAs, in terms of the level of detail required and observations on particular areas of concern when it comes to AI models and children. Then, in terms of those other compliance requirements, perhaps you could take us through what the AI Act says about fundamental rights impact assessments, and whether, in terms of the substance of FRIAs, deployers will be able to leverage, say, the contents of DPIAs when completing those assessments?
Aoife Coll
Yes, in part they will. The AI Act says that where the fundamental rights impact assessment obligations are already met through compliance with the data protection impact assessment obligations, the fundamental rights impact assessment shall complement that DPIA.
Rosemarie Blake
Okay, so there’s the potential for a single integrated document to be used when you’re subject to both of those compliance requirements. We’re also going to see a template from the AI Office in due course, which will be of some help to organisations that need to complete both assessments. Then, in terms of the substance of what an FRIA has to cover, perhaps you could talk us through those requirements?
Aoife Coll
Sure. So an FRIA has to cover a number of things, including details of the purpose of the system, the categories of people likely to be affected by its use, the likely risks, and the measures to be put in place if those risks arise.
Rosemarie Blake
Okay, so it’s clear that that DPIA is still going to act as a strong compliance tool in respect of demonstrating your privacy analysis, but it’s not going to get you all the way home for compliance under the AI Act.
Aoife Coll
Unfortunately not.
Rosemarie Blake
Okay, well, thank you very much for providing us with all those insights, Aoife; that takes us to the end of this video. Thank you again to Aoife for joining us. Next time we’ll be discussing compliance obligations throughout the AI value chain. And thank you very much for listening.
Rosemarie Blake
Hello and welcome back to our video series on the AI Act. Today we’re going to be talking about compliance obligations in the AI value chain, and to do this, I’m joined by my colleague Vivian Spies in the Technology and Innovation Group.
Vivian Spies
Thanks, Rose.
Rosemarie Blake
So, Vivian, when we talk about compliance obligations in the AI value chain, the bulk of those obligations sit with the provider, the person who’s developing the AI. I wonder, could you talk to us a bit more about what those compliance obligations actually look like?
Vivian Spies
Sure. It’s all about the conformity assessment. Before the system goes on the market, providers must subject it to a conformity assessment, which has to demonstrate that the system complies with the requirements for trustworthy AI.
Rosemarie Blake
Okay, so that’s ensuring things like data quality are met, the right level of traceability is there, the right documentation is in place, and also ensuring compliance with principles like transparency, human oversight and accuracy, and that the right cybersecurity measures are in place. That’s quite a lot to consider. In terms of when the product is actually on the market or in service, what else do providers need to consider?
Vivian Spies
Providers of high-risk AI systems will have to implement quality and risk management measures to ensure their compliance with the new requirements and to minimise risk for users and affected persons, even after a product is placed on the market.
Rosemarie Blake
Thanks very much, Vivian. And then in terms of evidencing that those measures are in place, what else do providers need to think about when interacting with the regulatory authority to make sure those standards are met?
Vivian Spies
Yes, of course. So there are further requirements in terms of record keeping and cooperation with supervisory authorities. Providers have to ensure that the required documentation, particularly the conformity assessment documentation, is accurately completed and retained for a period of at least 10 years. There’s also a fairly standard requirement to cooperate with regulatory authorities, and if the provider is not in the jurisdiction of the authority, it has to make sure it has an authorised representative appointed to liaise with the supervisory authority.
Rosemarie Blake
Thanks very much, Vivian, for outlining those requirements. As we move on then through the value chain onto deployers, I wonder, could you talk to us a bit more about what their obligations will look like?
Vivian Spies
Their obligations are less onerous, but still significant. They include things like complying with the provider’s instructions for use, ensuring the right level of human oversight is in place and, connected to the oversight requirement, making sure the inputs into the system are sufficiently representative. Deployers also have ongoing monitoring and evaluation obligations and must maintain usage logs. They’ll also have to complete a data protection impact assessment where personal data is involved, and they should use certain information provided by the provider to complete it.
Rosemarie Blake
Thanks very much, Vivian. There’s also a transparency requirement, isn’t there, in relation to workplaces, when you’re using high-risk AI, I wonder, could you talk to us a little bit about what that looks like?
Vivian Spies
Yes, that’s right. If you’re using high-risk AI systems in the workplace, deployers must inform worker representatives and affected workers about that use.
Rosemarie Blake
Okay. And are there any other positive obligations that a deployer has to meet if they’re engaging with a high-risk AI system?
Vivian Spies
Deployers also have to inform individuals about high-risk AI systems that make or are helping with decisions which could impact their lives.
Rosemarie Blake
Thanks for outlining that, Vivian. So although there’s less of a compliance lift if you’re putting an AI system into use as a deployer, there are still some fairly significant obligations for deployers to be mindful of if they’re bringing AI into the business, including relying on the provider to give you detailed information so that you can fulfil your own requirements. Then, in terms of additional requirements for deployers, there’s also the obligation to conduct a fundamental rights impact assessment. Can you tell us a bit more about what that looks like?
Vivian Spies
Yes. The requirement to conduct a fundamental rights impact assessment is limited to a certain subset of deployers, so it’s not a blanket obligation. That subset is essentially deployers who are public bodies or private entities providing public services, and deployers like banks and insurers conducting credit decisioning and insurance pricing.
Rosemarie Blake
Thanks very much for outlining that, Vivian. And is there going to be any guidance issued by the AI Office in relation to the conduct of those assessments?
Vivian Spies
Yes, there is. Once the FRIA is complete, the deployer is required to notify the supervisory authority of the results, and the AI Office is going to provide a template questionnaire to help deployers comply with this obligation in a simplified way. Deployers can submit this template on completion of the assessment.
Rosemarie Blake
Thanks, Vivian. So we’ll need to keep a watching brief on when that questionnaire is issued by the AI Office. Turning then to other actors in the AI value chain, are there any circumstances where they could become subject to the broader compliance obligations of a provider?
Vivian Spies
Yes, there are. The Act sets out a few scenarios where this can occur, and while it specifically references deployers, it also encompasses any distributor, importer or other third party. Any of those operators can be considered to be a provider if:
A) they put their name or trademark on a high-risk AI system already on the market or in service;
B) they make a substantial modification to a high-risk AI system already in use; or
C) they modify the intended purpose of a high-risk AI system.
Rosemarie Blake
Thanks, Vivian. It’s really helpful to know that the compliance obligations can essentially change as use cases evolve: if changes are made to the system, that can trigger a change in the status of an operator in the AI value chain. You mentioned the other actors in the value chain there, and I wonder, could you talk us through what their specific compliance obligations will look like, as compared to the actors we’ve discussed, like providers and deployers?
Vivian Spies
So importers will generally be obliged to verify that the provider has complied with its primary obligations, such as carrying out the relevant conformity assessment. Distributors have a responsibility not to make a high-risk AI system available on the market if they think the system is not in compliance with the Act’s requirements. Importers and distributors will undertake these tasks by verifying the provider’s declaration of conformity and the conformity marking on the AI system. Finally, authorised representatives are required to perform the tasks specified in the mandate they receive from the provider.
Rosemarie Blake
Thank you very much for outlining all of those requirements, Vivian. So that brings us to the end of this episode of our video series. Thanks again to Vivian for joining us and thank you all for listening.