
The EU AI Act: Compliance 101

On February 13, two European Parliament committees endorsed the provisional agreement on the EU AI Act, which establishes guardrails for the use of AI across a range of industries.

Feb 21, 2024

The next step is for the Act to be formally approved by the European Parliament and the Council, the representative body of the 27 member states, which is due to take place in April 2024 at the end of the Parliament's legislative period. Member states will then have two years to bring the law into full effect.

How we’re thinking about the EU AI Act at Sema:

  • Sema evaluates the EU AI Act as a Critical risk.
  • If your organization has a geographical connection to the EU and is using, or considering using, AI in any of the applicable ways, we recommend developing a plan within the next month and implementing it within the following two months.

The path forward

Sema’s Compliance Standards Database can help your organization prepare. Here’s a snapshot pertaining to the EU AI Act for the following industries: Computer Software, General Business Software, Govtech, Healthtech, Manufacturer, Other, and Tech-Enabled Services.

The information below will help you understand the specific compliance requirements of the EU AI Act in greater depth.

High Risk Gen AI will face higher compliance standards

High-risk AI systems will be regulated rather than banned. These include systems used in critical infrastructure (e.g., power grids and hospitals), systems that help make decisions about people’s lives (e.g., employment or credit rating), and systems that have a significant impact on the environment.

High-risk AI systems are artificial intelligence systems that may adversely affect safety or fundamental rights. They are divided into two categories:

Artificial intelligence systems used in products covered by the EU’s product safety legislation. These include toys, automobiles, medical devices and elevators.

Artificial intelligence systems that fall into eight specific areas, which will have to be registered in an EU database:

  1. biometric identification and categorization of natural persons;
  2. management and operation of critical infrastructure;
  3. education and vocational training;
  4. employment, worker management and access to self-employment;
  5. access to and use of essential private and public services and benefits;
  6. law enforcement;
  7. migration management, asylum, and border control;
  8. assistance in legal interpretation and application of the law.

All high-risk artificial intelligence systems will be evaluated before being put on the market and throughout their life cycle.

High Risk Foundation Models will face higher compliance standards

As a subset of high-risk AI, foundation models (referred to in some summaries as ‘basic models’) learn from large amounts of data, use that data to perform a wide range of tasks, and have applications in a variety of domains. Providers of these models will need to assess and mitigate the possible risks associated with them (to health, safety, fundamental rights, the environment, democracy, and the rule of law) and register their models in the EU database before they are released to the market.

Different risk levels will face different compliance standards

Minimal / low risk AI applications will require user transparency and consent: limited-risk artificial intelligence systems should meet minimum transparency requirements that enable users to make informed decisions. Users should be informed when they are interacting with AI, and after interacting with an application they can decide whether they wish to continue using it. This includes artificial intelligence systems that generate or manipulate image, audio, or video content (e.g., deepfakes).

Requirements (Applicable for Foundation Model Providers): Registry

[Article 39, item 69, page 8 as well as Article 28b, paragraph 2g, page 40]. In order to facilitate the work of the Commission and the Member States in the artificial intelligence field as well as to increase the transparency towards the public, providers of high-risk AI systems other than those related to products falling within the scope of relevant existing Union harmonization legislation, should be required to register their high-risk AI system and foundation models in an EU database, to be established and managed by the Commission. This database should be freely and publicly accessible, easily understandable and machine-readable. The database should also be user friendly and easily navigable, with search functionalities at minimum allowing the general public to search the database for specific high-risk systems, locations, categories of risk under Annex IV and keywords. Deployers who are public authorities or European Union institutions, bodies, offices and agencies or deployers […]

Requirements (Applicable for Foundation Model Providers): Provider Name

[Annex VIII, Section C, page 24]. Name, address and contact details of the provider.

Requirements (Applicable for Foundation Model Providers): Model Name

[Annex VIII, Section C, page 24]. Trade name and any additional unambiguous reference allowing the identification of the foundation model.

Requirements (Applicable for Foundation Model Providers): Data Sources

[Annex VIII, Section C, page 24]. Description of the data sources used in the development of the foundation model.

Requirements (Applicable for Foundation Model Providers): Capabilities and Limitations

[Annex VIII, Section C, page 24]. Description of the capabilities and limitations of the foundation model.

Requirements (Applicable for Foundation Model Providers): Risks and Mitigations

[Annex VIII, Section C, page 24 and Article 28b, paragraph 2a, page 39]. The reasonably foreseeable risks and the measures that have been taken to mitigate them as well as remaining non-mitigated risks with an explanation on the reason why they cannot be mitigated.

Requirements (Applicable for Foundation Model Providers): Compute

[Annex VIII, Section C, page 24]. Description of the training resources used by the foundation model including computing power required, training time, and other relevant information related to the size and power of the model.

Requirements (Applicable for Foundation Model Providers): Evaluations

[Annex VIII, Section C, page 24 as well as Article 28b, paragraph 2c, page 39]. Description of the model’s performance, including on public benchmarks or state of the art industry benchmarks.

Requirements (Applicable for Foundation Model Providers): Testing

[Annex VIII, Section C, page 24 as well as Article 28b, paragraph 2c, page 39]. Description of the results of relevant internal and external testing and optimisation of the model.
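Taken together, the Annex VIII, Section C items above (provider name, model name, data sources, capabilities and limitations, risks, compute, evaluations, and testing) amount to a structured record about the model. The sketch below is purely illustrative of how a provider might organize that information internally before registration; the field names are our own shorthand, not the Act’s official schema.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical internal record mirroring the Annex VIII, Section C fields.
# Field names are illustrative shorthand, not the Act's official schema.
@dataclass
class FoundationModelRegistration:
    provider_name: str          # Provider Name: name, address and contact details
    model_name: str             # Model Name: trade name / unambiguous reference
    data_sources: List[str]     # Data Sources used in development
    capabilities: str           # Capabilities of the model
    limitations: str            # Known limitations
    risks_and_mitigations: str  # Foreseeable risks, mitigations, residual risks
    compute_description: str    # Training resources: compute, training time, size
    evaluation_results: str     # Performance on public / industry benchmarks
    testing_summary: str        # Internal and external testing and optimisation

# Example entry with made-up values.
example = FoundationModelRegistration(
    provider_name="Example AI GmbH, Berlin (contact@example.eu)",
    model_name="ExampleLM-1",
    data_sources=["Licensed news archive", "Public-domain books", "Filtered web crawl"],
    capabilities="General-purpose text generation and summarisation",
    limitations="Not suitable for high-stakes decisions without human review",
    risks_and_mitigations="Bias audits on employment-related prompts; residual risks documented",
    compute_description="512 GPUs for 30 days; total compute documented in model card",
    evaluation_results="Results on public benchmarks recorded in the model card",
    testing_summary="Internal red-teaming plus an external safety evaluation",
)
print(example.model_name)
```

Keeping this information in one structured place also makes it easier to reuse for the documentation and registry obligations discussed below.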

Requirements (Applicable for Foundation Model Providers): Generated Content Transparency

[Annex VIII, 60g, page 29]. Generative foundation models should ensure transparency about the fact the content is generated by an AI system, not by humans.

Requirements (Applicable for Foundation Model Providers): Pre-market Compliance

[Article 28b, paragraph 1, page 39]. A provider of a foundation model shall, prior to making it available on the market or putting it into service, ensure that it is compliant with the requirements set out in this Article, regardless of whether it is provided as a standalone model or embedded in an AI system or a product, or provided under free and open source licenses, as a service, as well as other distribution channels.

Requirements (Applicable for Foundation Model Providers): Data Governance

[Article 28b, paragraph 2b, page 39]. Process and incorporate only datasets that are subject to appropriate data governance measures for foundation models, in particular measures to examine the suitability of the data sources and possible biases and appropriate mitigation.
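In engineering terms, this obligation suggests keeping a per-dataset record of suitability checks, bias findings, and mitigations. Here is a minimal, hypothetical sketch of such a record; the schema is our own illustration and is not prescribed by the Act.

```python
from dataclasses import dataclass

# Hypothetical per-dataset governance record; the fields echo the obligations
# quoted above (source suitability, bias examination, mitigation), but the
# schema itself is our own illustration, not part of the Act.
@dataclass
class DatasetGovernanceRecord:
    dataset_name: str
    source_description: str
    suitability_assessment: str  # why this source is appropriate for the model
    bias_findings: str           # possible biases identified during examination
    mitigation_applied: str      # e.g., filtering, re-weighting, exclusion

record = DatasetGovernanceRecord(
    dataset_name="Filtered web crawl",
    source_description="Publicly crawlable web pages, deduplicated",
    suitability_assessment="Broad language coverage; suitable for general pretraining",
    bias_findings="Over-representation of English-language, US-centric content",
    mitigation_applied="Up-sampling of under-represented languages; quality filtering",
)
```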

Requirements (Applicable for Foundation Model Providers): Energy

[Article 28b, paragraph 2d, page 40]. Design and develop the foundation model, making use of applicable standards to reduce energy use, resource use and waste, as well as to increase energy efficiency, and the overall efficiency of the system. This shall be without prejudice to relevant existing Union and national law and this obligation shall not apply before the standards referred to in Article 40 are published. They shall be designed with capabilities enabling the measurement and logging of the consumption of energy and resources, and, where technically feasible, other environmental impact the deployment and use of the systems may have over their entire lifecycle.
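The final sentence of this provision expects systems to be capable of measuring and logging energy and resource consumption. The sketch below shows one very rough way a training pipeline might append such records; the constant power figure is a stand-in, since real measurements would come from hardware or cluster telemetry.

```python
import json
import time
from datetime import datetime, timezone

def log_training_energy(run_id: str, gpu_count: int, avg_gpu_power_watts: float,
                        started_at: float, log_path: str = "energy_log.jsonl") -> dict:
    """Append a rough energy-consumption record for a training run.

    avg_gpu_power_watts is an assumed average; in practice it would come
    from hardware telemetry rather than a constant.
    """
    elapsed_hours = (time.time() - started_at) / 3600.0
    estimated_kwh = gpu_count * avg_gpu_power_watts * elapsed_hours / 1000.0
    record = {
        "run_id": run_id,
        "logged_at": datetime.now(timezone.utc).isoformat(),
        "gpu_count": gpu_count,
        "elapsed_hours": round(elapsed_hours, 2),
        "estimated_kwh": round(estimated_kwh, 1),
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record
```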

Requirements (Applicable for Foundation Model Providers): Quality Management

[Article 28b, paragraph 2f, page 40]. Establish a quality management system to ensure and document compliance with this Article, with the possibility to experiment in fulfilling this requirement.

Requirements (Applicable for Foundation Model Providers): Upkeep

[Article 28b, paragraph 3, page 40]. Providers of foundation models shall, for a period ending 10 years after their foundation models have been placed on the market or put into service, keep the technical documentation referred to in paragraph 1(c) at the disposal of the national competent authorities.

Requirements (Applicable for Foundation Model Providers): Law-abiding Generated Content

[Article 28b, paragraph 4b, page 40]. Train, and where applicable, design and develop the foundation model in such a way as to ensure adequate safeguards against the generation of content in breach of Union law in line with the generally acknowledged state of the art, and without prejudice to fundamental rights, including the freedom of expression.

Requirements (Applicable for Foundation Model Providers): Training on Copyrighted Data

[Article 28b, paragraph 4c, page 40]. Without prejudice to national or Union legislation on copyright, document and make publicly available a sufficiently detailed summary of the use of training data protected under copyright law.
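Operationally, this points toward publishing an aggregated summary of copyrighted training material rather than the raw data itself. The following sketch shows one hypothetical way to derive such a summary from an internal dataset inventory; the record format and level of detail are assumptions, since the Act does not prescribe a schema.

```python
from collections import Counter

# Hypothetical dataset inventory; in practice this would come from a
# data-governance catalogue rather than a hard-coded list.
dataset_inventory = [
    {"name": "Licensed news archive", "license": "commercial licence", "copyrighted": True},
    {"name": "Public-domain books", "license": "public domain", "copyrighted": False},
    {"name": "Filtered web crawl", "license": "mixed / unknown", "copyrighted": True},
]

def summarize_copyrighted_sources(inventory: list) -> str:
    """Group copyrighted training sources by licence basis for a public summary."""
    counts = Counter(d["license"] for d in inventory if d["copyrighted"])
    lines = ["Summary of training data protected under copyright law:"]
    for licence, n in sorted(counts.items()):
        lines.append(f"- {n} dataset(s) used on the basis of: {licence}")
    return "\n".join(lines)

print(summarize_copyrighted_sources(dataset_inventory))
```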

Requirements (Applicable for Foundation Model Providers): Adherence to General Principles

[Article 4a, paragraph 1, page 142-3]. All operators falling under this Regulation shall make their best efforts to develop and use AI systems or foundation models in accordance with the following general principles establishing a high-level framework that promotes a coherent human-centric European approach to ethical and trustworthy Artificial Intelligence, which is fully in line with the Charter as well as the values on which the Union is founded:

a) ‘human agency and oversight’ means that AI systems shall be developed and used as a tool that serves people, respects human dignity and personal autonomy, and that is functioning in a way that can be appropriately controlled and overseen by humans.

b) ‘technical robustness and safety’ means that AI systems shall be developed and used in a way to minimize unintended and unexpected harm as well as being robust in case of unintended problems and being resilient against attempts to alter the use or performance of the AI system so as to allow unlawful use by malicious third parties.

c) ‘privacy and data governance’ means that AI systems shall be developed and used in compliance with existing privacy and data protection rules, while processing data that meets high standards in terms of quality and integrity.

d) ‘transparency’ means that AI systems shall be developed and used in a way that allows appropriate traceability and explainability, while making humans aware that they communicate or interact with an AI system as well as duly informing users of the capabilities and limitations of that AI system and affected persons about their rights.

e) ‘diversity, non-discrimination and fairness’ means that AI systems shall be developed and used in a way that includes diverse actors and promotes equal access, gender equality and cultural diversity, while avoiding discriminatory impacts and unfair biases that are prohibited by Union or national law.

f) ‘social and environmental well-being’ means that AI systems shall be developed and used in a sustainable and environmentally friendly manner as well as in a way to benefit all human beings, while monitoring and assessing the long-term impacts on the individual, society and democracy.

For foundation models, the general principles are translated into and complied with by providers by means of the requirements set out in Articles 28 to 28b.

Requirements (Applicable for Foundation Model Providers): System is Designed so Users Know its AI

[Article 52(1) Paragraph 1 - not in the Compromise text, but invoked in 28(b), paragraph 4a, page 40]. Providers shall ensure that AI systems intended to interact with natural persons are designed and developed in such a way that natural persons are informed that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use. This obligation shall not apply to AI systems authorised by law to detect, prevent, investigate and prosecute criminal offences, unless those systems are available for the public to report a criminal offence.
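For a conversational product, this can be as simple as ensuring the first response in any session states that the user is talking to an AI system. The snippet below is a minimal sketch; the function name and disclosure wording are ours, not language mandated by the Act.

```python
# Disclosure wording is illustrative, not text mandated by the Act.
AI_DISCLOSURE = "You are interacting with an AI system, not a human."

def send_chatbot_reply(user_message: str, generate_reply, first_turn: bool) -> str:
    """Prepend an AI disclosure to the first reply of a conversation.

    `generate_reply` is any callable mapping a user message to model output.
    """
    reply = generate_reply(user_message)
    if first_turn:
        return f"{AI_DISCLOSURE}\n\n{reply}"
    return reply

# Example usage with a stand-in model:
print(send_chatbot_reply("Hello!", lambda msg: "Hi, how can I help?", first_turn=True))
```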

Requirements (Applicable for Foundation Model Providers): Appropriate Levels

[Article 28b, paragraph 2c, page 39]. Design and develop the foundation model in order to achieve throughout its lifecycle appropriate levels of performance, predictability, interpretability, corrigibility, safety and cybersecurity assessed through appropriate methods such as model evaluation with the involvement of independent experts, documented analysis, and extensive testing during conceptualisation, design, and development.

Foundation Model Compliance

Companies building foundation models will have to draw up technical documentation, comply with EU copyright law and detail the content used for training. The most advanced foundation models that pose “systemic risks” will face extra scrutiny, including assessing and mitigating those risks, reporting serious incidents, putting cybersecurity measures in place and reporting their energy efficiency.

Unacceptable Risk

Anything presenting a clear threat to fundamental rights falls in the "unacceptable risk" tier and is not permitted.

AI systems in this category include predictive policing, emotion recognition systems in the workplace, manipulation of human behavior to circumvent free will, and categorization of people based on characteristics such as political or religious persuasion, race, or sexual orientation.

With a six-month lead time to compliance, AI developers will need to ensure that any feature falling within the unacceptable-risk category is quickly removed from their products. Obligations for "high risk" AI phase in later, on the timeline outlined below.

Chatbot Interaction Disclosure

The official text states that users should be informed when interacting with a chatbot.

Disclosure of artificially generated or manipulated content

The Act requires AI systems that generate or manipulate text, image, audio or video content (such as a deepfake tool) to disclose that the content has been artificially generated or manipulated. This also applies to “artistic, creative, satirical, fictional, analogous work,” in which case the “transparency obligations are limited to disclosure of the existence of such generated or manipulated content” in a way “that does not hamper the display or enjoyment of the work.”
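One way to operationalize this disclosure duty is to attach a machine-readable label to every piece of generated or manipulated media, in addition to any visible notice. The sketch below uses plain metadata fields of our own devising; existing content-provenance standards could serve the same purpose but are not required by the quoted text.

```python
from datetime import datetime, timezone

def label_generated_content(content_id: str, media_type: str, tool_name: str,
                            artistic_work: bool = False) -> dict:
    """Build a disclosure record for AI-generated or AI-manipulated content.

    Field names are illustrative; they are not a schema defined by the Act.
    """
    label = {
        "content_id": content_id,
        "media_type": media_type,  # "text", "image", "audio" or "video"
        "ai_generated_or_manipulated": True,
        "generated_by": tool_name,
        "labelled_at": datetime.now(timezone.utc).isoformat(),
    }
    if artistic_work:
        # For artistic, creative, satirical or fictional works, disclosure may be
        # limited so that it does not hamper display or enjoyment of the work.
        label["disclosure_mode"] = "unobtrusive"
    return label

print(label_generated_content("img-001", "image", "ExampleDiffusion", artistic_work=True))
```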

Transparency: In-House Tools vs Commercialized

The transparency guidelines only concern tools that are commercialized, not software that is used in-house by companies.

Timeline for Compliance (Post-Publication in Official Journal)

- Prohibitions on specified categories of banned AI: Six months – late 2024

- Provisions for high impact general purpose AI with systemic risk: 12 months – summer 2025

- Provisions on the obligations on high-risk AI: 12 months – summer 2025

- Provisions dealing with governance and conformity bodies: 12 months – summer 2025

- All other provisions: 2 years – summer 2026
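To see how these milestones translate into calendar dates, the small sketch below derives each deadline from an assumed entry-into-force date. The date used here is a placeholder chosen to match the article's "late 2024 / summer 2025 / summer 2026" estimates; the actual deadlines depend on when the Act is published in the Official Journal.

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months (day clamped to 28 to stay valid)."""
    month_index = d.month - 1 + months
    return date(d.year + month_index // 12, month_index % 12 + 1, min(d.day, 28))

# Placeholder entry-into-force date (not a confirmed figure); chosen so the
# output lines up with the article's "late 2024 / summer 2025 / summer 2026".
entry_into_force = date(2024, 6, 1)

milestones = {
    "Prohibitions on banned AI categories": 6,
    "High-impact general-purpose AI with systemic risk": 12,
    "Obligations on high-risk AI": 12,
    "Governance and conformity bodies": 12,
    "All other provisions": 24,
}

for name, months in milestones.items():
    print(f"{name}: applies from {add_months(entry_into_force, months)}")
```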

Keeping track of global GenAI compliance standards 

Periodically, Sema publishes a no-cost newsletter covering new developments in GenAI code compliance. The newsletter shares snapshots and excerpts from Sema’s GenAI Code Compliance Database. Topics include recent highlights of regulations, lawsuits, stakeholder requirements, mandatory standards, and optional compliance standards. The scope is global.

You can sign up to receive the newsletter here.

About Sema Technologies, Inc. 

Sema is the leader in comprehensive codebase scans, with organizations representing over $1T in enterprise software value evaluated to inform our dataset. We are now accepting pre-orders for AI Code Monitor, which translates compliance standards into “traffic light warnings” for CTOs leading fast-paced and highly productive engineering teams. You can learn more about our solution by contacting us here.

Disclosure

Sema publications should not be construed as legal advice on any specific facts or circumstances. The contents are intended for general information purposes only. To request reprint permission for any of our publications, please use our “Contact Us” form. The availability of this publication is not intended to create, and receipt of it does not constitute, an attorney-client relationship. The views set forth herein are the personal views of the authors and do not necessarily reflect those of the Firm.

Want to learn more?
Learn more about AI Code Monitor with a Demo

Are you ready?

Sema is now accepting pre-orders for GBOMs as part of the AI Code Monitor.
