The AI Act represents the first comprehensive legal framework for artificial intelligence, addressing associated risks and obligations for providers. It sets clear requirements for AI developers and deployers, aiming to reduce administrative and financial burdens, particularly for SMEs. Part of a broader policy package, the AI Act ensures the safety and fundamental rights of individuals and businesses, while promoting AI innovation and investment across the EU. The regulation adopts a risk-based approach, categorising AI systems into four levels of risk, with stringent obligations for high-risk applications to ensure transparency, accountability, and human oversight.

This video series, curated by our Technology and Innovation Group, discusses the timeline of obligations, the interplay between the AI Act and the GDPR, the AI value chain and General Purpose AI.

What is the AI Act and what is the timeline of obligations?

In this episode of our new series on the AI Act, Rosemarie Blake and Ciara Anderson, Senior Associates in our Technology and Innovation Group, discuss the timeline for the application of obligations and the different participants along the AI value chain.

The AI Act and General Purpose AI

Rosemarie Blake, Senior Associate, and Rory Curtis, Associate in our Technology and Innovation Group, discuss the requirements for General Purpose AI (also referred to as GPAI) as outlined in the Act. They also consider the risk thresholds and forthcoming compliance obligations for providers of GPAI.

The AI Office and Enforcement

In this episode of our AI series, Rosemarie Blake, Senior Associate, and Lukas Mitterlechner, Foreign Registered Lawyer in our Technology and Innovation Group, explore the EU governance architecture and the layered enforcement mechanism between the EU and Member States.

The AI Act and GDPR Interplay

In this episode of our AI series, Rosemarie Blake and Aoife Coll, Senior Associates in our Technology and Innovation Group, discuss the interplay between the AI Act and the GDPR.

They explore critical topics such as automated processing, human oversight and model training, and consider the scope of fundamental rights assessments.

Compliance Obligations for Actors in the AI Value Chain

The EU AI Act, the world’s first comprehensive AI regulation, was adopted in 2024, with prohibitions on certain AI practices posing unacceptable risk applying from February 2025. In this episode of our series on the AI Act, Rosemarie Blake, Senior Associate, and Vivian Spies, Foreign Registered Lawyer in our Technology and Innovation Group, discuss the compliance obligations for different actors in the AI value chain.

The First Compliance Deadline

The first compliance deadline under the EU’s AI Act arrives on 2nd February 2025. Rosemarie Blake, Senior Associate in our Technology and Innovation Group, discusses the upcoming rules on banned AI systems. She also delves into the AI literacy obligations, including practical guidance on what to watch out for in the coming months.

Video Transcript
Rosemarie Blake

Hi, I’m Rosemarie Blake. I’m a Senior Associate in the Technology and Innovation Group in Arthur Cox. On the 2nd of February, 2025, the rules on banned AI systems and AI literacy come into force. The big ticket item there obviously relates to the banned AI systems. For a finding of non-compliance, organisations can be liable for a fine of up to €35 million or 7% of global annual turnover, whichever is the higher. In terms of the types of technology that could come within the scope of the banned use cases, these are things like inferring emotions. This could come up in the context of an employee satisfaction survey if an AI technology is used to conduct sentiment monitoring. Another example that people may be familiar with is fraud prevention techniques. With this type of technology, you could come close to the banned use case involving predicting the risk of a natural person committing a criminal offence. If you’re procuring AI technology, it’ll be crucial to ensure you’ve got an understanding of the risk taxonomy within the AI Act. Looking then at AI literacy: to ensure compliance with this requirement, providers and deployers must ensure that all staff, or anyone operating AI systems on their behalf, have a sufficient level of AI literacy.

There are three key things to bear in mind there. The first is to assess the level of AI literacy that’s already in your organisation: what is your maturity level? Build from there. Secondly, build on existing policies and procedures where you can and leverage compliance from that point. Lastly, keep an eye out for emerging policy and guidance in this space. We don’t yet have the guidelines on prohibited AI use cases, but they are likely to be published in the coming months. Also, the Code of Practice on General Purpose AI is due to be published in May 2025.