
GenAI code compliance newsletter: December 15, 2023

Sema’s Gen AI code compliance newsletter provides recent highlights of regulations, legislation, lawsuits, stakeholder requirements, mandatory standards, and optional compliance standards that organizations may, should, or must follow when using Generative AI in the software development lifecycle. 


The scope is global, with a goal to provide a real-time picture of the evolving regulatory landscape. Each edition shares a snapshot and excerpts from Sema’s proprietary Gen AI Compliance Database.  

The newsletter will be distributed regularly, at no cost. You can subscribe by signing up here.

The Database defines

  • Compliance standards as documents (legislation, regulations, acts, orders, policies, etc.) that can generate optional or mandatory actions for organizations.
  • Components as specific "rules" within each Compliance Standard. Each rule generates the possibility of potential, specific action. Compliance Standards can have one or more Components.

We welcome your feedback and requests for specific Compliance Standards, or geographies to prioritize.

Snapshot of Sema’s Database
Compliance Standards: 36
Components: 144
Geographies Covered: 21
Industries Covered: 10

Noteworthy highlights

  1. EU AI Act
  2. Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence
  3. German Copyright Law

EU AI Act

As part of its broader digital strategy, the EU is regulating artificial intelligence (AI) to ensure better conditions for the development and use of this technology. AI has the potential to create many benefits, including better healthcare; safer and cleaner transport; more efficient manufacturing; and cheaper and more sustainable energy.

On December 8, 2023, negotiators from the European Parliament and the bloc’s 27 member countries reached an agreement, making Europe the first continent to set clear rules for the use of AI. The agreement is pending formal approval by the European Parliament and the Council, with a vote scheduled for April 2024. Member states will then have two years to transpose the AI law into national law.

EU AI Act components

  • High risk AI systems will need to be regulated (not banned): Examples include critical infrastructure (e.g., power grids, hospitals), systems that help make decisions regarding people’s lives (e.g., employment or credit rating), and systems that have a significant impact on the environment.
  • High risk basic models will face higher compliance standards: Basic AI models, a subset of high risk AI, do not ‘create’ but instead learn from large amounts of data, use it to perform a wide range of tasks, and have applications in a variety of domains.
  • Different industries will face different compliance standards.
  • Minimal / low risk AI applications will require user transparency and consent: Limited-risk artificial intelligence systems should meet minimum transparency requirements that enable users to make informed decisions.
  • Foundation model requirements:
    • Registry: Providers are required to register their high-risk AI systems and foundation models in an EU database, to be established and managed by the Commission.
    • Provider name: Trade name and any additional unambiguous reference allowing the identification of the provider.
    • Model name: Trade name and any additional unambiguous reference allowing the identification of the foundation model.
    • Data sources: Description of the data sources used in the development of the foundation model.
    • Capabilities and limitations: Description of the capabilities and limitations of the foundation model.
    • Risks and mitigations: The reasonably foreseeable risks and the measures taken to mitigate them, as well as remaining non-mitigated risks with an explanation of why they cannot be mitigated.
    • Compute: Description of the training resources used by the foundation model, including computing power required, training time, and other relevant information related to the size and power of the model.
    • Evaluations: Description of the model’s performance, including on public benchmarks or state-of-the-art industry benchmarks.
    • Testing: Description of the results of relevant internal and external testing and optimisation of the model.
    • Member states: Member states in which the foundation model is or has been placed on the market, put into service, or made available in the Union.
    • Downstream documentation: Foundation model providers should have information obligations and prepare all necessary technical documentation so that potential downstream providers can comply with their obligations under this Regulation.
    • Machine-generated content: Generative foundation models should ensure transparency about the fact that the content is generated by an AI system, not by humans.
    • Pre-market compliance: A provider of a foundation model shall, prior to making it available on the market or putting it into service, ensure that it is compliant with the requirements set out in this Article, regardless of whether it is provided as a standalone model or embedded in an AI system or a product, provided under free and open source licenses, as a service, or through other distribution channels.
    • Data governance: Process and incorporate only datasets that are subject to appropriate data governance measures for foundation models, in particular measures to examine the suitability of the data sources and possible biases, and appropriate mitigation.
    • Energy: Design and develop the foundation model making use of applicable standards to reduce energy use, resource use, and waste, as well as to increase energy efficiency and the overall efficiency of the system.
    • Quality management: Establish a quality management system to ensure and document compliance with this Article, with the possibility to experiment in fulfilling this requirement.
    • Upkeep: Providers of foundation models shall, for a period ending 10 years after their foundation models have been placed on the market or put into service, keep the technical documentation referred to in paragraph 1(c) at the disposal of the national competent authorities.
    • Law-abiding generated content: Train, and where applicable, design and develop the foundation model in such a way as to ensure adequate safeguards against the generation of content in breach of Union law, in line with the generally acknowledged state of the art, and without prejudice to fundamental rights, including the freedom of expression.
    • Training on copyrighted data: Without prejudice to national or Union legislation on copyright, document and make publicly available a sufficiently detailed summary of the use of training data protected under copyright law.
    • Adherence to general principles: All operators falling under this Regulation shall make their best efforts to develop and use AI systems or foundation models in accordance with general principles establishing a high-level framework that promotes a coherent human-centric European approach to ethical and trustworthy Artificial Intelligence.
    • System is designed so users know it is AI: Providers shall ensure that AI systems intended to interact with natural persons are designed and developed in such a way that natural persons are informed that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use.
    • Appropriate levels: Design and develop the foundation model to achieve, throughout its lifecycle, appropriate levels of performance, predictability, interpretability, corrigibility, safety, and cybersecurity, assessed through appropriate methods such as model evaluation with the involvement of independent experts, documented analysis, and extensive testing during conceptualisation, design, and development.
    • Foundation model compliance: Companies building foundation models will have to draw up technical documentation, comply with EU copyright law, and detail the content used for training.
  • Unacceptable risk: Anything presenting a clear threat to fundamental rights falls in the "unacceptable risk" tier and is not permitted. AIs in this category include predictive policing, workplace emotional recognition systems, manipulation of human behavior to circumvent free will, or categorizing people based on qualities including political or religious persuasion, race, or sexual orientation.
  • Chatbot interaction disclosure: Users should be informed when interacting with a chatbot.
  • Disclosure of artificially generated or manipulated content: AI systems that generate or manipulate text, image, audio, or video content (such as a deep-fake tool) must disclose that the content has been artificially generated or manipulated.
  • Transparency - In-house tools vs commercialization: The transparency guidelines concern only tools that are commercialized, not software that is used in-house by companies.

Standard: Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence

Executive Order (EO) issued on October 30, 2023 by President Biden. The EO addresses the purposes and potential of AI, along with governing policies and principles. While no direct legislation is contained within the executive order, the EO tasks key governmental organizations and agencies with the development of legislation and policies.

Executive Order components

  • NIST guidelines and best practices for safe, secure and trustworthy AI systems: Within 270 days of the EO. NIST to develop a companion resource to the AI Risk Management Framework, NIST AI 100-1, for generative AI.
  • NIST guidelines for red-teaming: Within 270 days of the EO. NIST to establish appropriate guidelines, including procedures and processes to enable developers of AI, especially of dual-use foundation models, to conduct AI red-teaming tests to enable deployment of safe, secure, and trustworthy systems.
  • USPTO director - patent examination and addressing "inventorship": Within 120 days of the EO. USPTO Director to publish guidance to USPTO patent examiners and applicants addressing inventorship and the use of AI, including generative AI, in the inventive process, including illustrative examples in which AI systems play different roles in inventive processes and how, in each example, inventorship issues ought to be analyzed.
  • Copyright scope of protection: Within 270 days of the EO. USPTO to issue recommendations to the President. The recommendations shall address any copyright and related issues discussed in the United States Copyright Office’s study, including the scope of protection for works produced using AI and the treatment of copyrighted works in AI training.
  • Human-oversight of AI-generated output application: Independent regulatory agencies encouraged to use full range of authority in support of protecting American consumers from fraud, discrimination, and threats to privacy and to address other risks that may arise from the use of AI.
  • HHS AI task force - human oversight: Within 90 days of the EO. The Secretary of Health and Human Services (HHS) will establish a task force that will, within 365 days of its creation, create a strategic plan that will promote AI and AI-enabled technologies.
  • HHS AI task force - protection of personally identifiable information: HHS Task force will address incorporation of safety, privacy, and security standards into the software-development lifecycle for protection of personally identifiable information.
  • Secretary of education - AI in education: Within 365 days of the EO. The Secretary of Education will develop resources, policies, and guidance, including: "the development of an “AI toolkit” for education leaders implementing recommendations from the Department of Education’s AI and the Future of Teaching and Learning report, including appropriate human review of AI decisions, designing AI systems to enhance trust and safety and align with privacy-related laws and regulations in the educational context, and developing education-specific guardrails."
  • Promoting responsible utilization of generative AI: As generative AI products become widely available and common in online platforms, agencies are discouraged from imposing broad general bans or blocks on agency use of generative AI.
  • Companion resource to secure software development framework: Within 270 days of the EO. Secretary of Commerce to develop a companion resource to the Secure Software Development Framework to incorporate secure development practices for generative AI and for dual-use foundation models.
  • Critical infrastructure: Cross-sector risks: Within 90 days of the EO (and annually thereafter). Evaluate and provide to the Secretary of Homeland Security an assessment of potential risks related to the use of AI in critical infrastructure sectors involved, including ways in which deploying AI may make critical infrastructure systems more vulnerable to critical failures, physical attacks, and cyber attacks, and shall consider ways to mitigate these vulnerabilities.
  • Secretary of Commerce: Identifying the existing standards, tools, methods, and practices: Within 240 days of the EO. Identify the existing standards, tools, methods, and practices, as well as the potential development of further science-backed standards and techniques.
  • USPTO Director - considerations at the intersection of AI and IP: Within 270 days of the EO. USPTO Director to issue additional guidance to USPTO patent examiners and applicants to address other considerations at the intersection of AI and IP, which could include, as the USPTO Director deems necessary, updated guidance on patent eligibility to address innovation in AI and critical and emerging technologies.
  • Administrator of General Services - increase agency investment in AI: Within 30 days of the EO. To increase agency investment in AI, the Technology Modernization Board shall consider, as it deems appropriate and consistent with applicable law, prioritizing funding for AI projects for the Technology Modernization Fund for a period of at least 1 year.
  • Administrator of General Services - facilitate agencies’ access to commercial AI capabilities: Within 180 days of the EO. To facilitate agencies’ access to commercial AI capabilities, the Administrator of General Services, in coordination with the Director of OMB, and in collaboration with the Secretary of Defense, the Secretary of Homeland Security, the Director of National Intelligence, the Administrator of the National Aeronautics and Space Administration, and the head of any other agency identified by the Administrator of General Services, shall take steps consistent with applicable law to facilitate access to Federal Government-wide acquisition solutions for specified types of AI services and products, such as through the creation of a resource guide or other tools to assist the acquisition workforce. Specified types of AI capabilities shall include generative AI and specialized computing infrastructure.

Standard: German Copyright Law

The copyright law currently in force in Germany. German Copyright Law components include:

  • Pure vs blended GenAI code - pure GenAI code will not get copyright protection: "Therefore, works created solely by AI systems are not amenable to copyright protection. If there is a sufficiently large human influence on the act of creation, only the natural persons behind the AI would be recognised as authors."

Sign up for the newsletter

You can subscribe by signing up here.


About Sema Technologies, Inc. 

Sema is the leader in comprehensive codebase scans with over $1T of enterprise software organizations evaluated to inform our dataset. We are now accepting pre-orders for AI Code Monitor, which translates compliance standards into “traffic light warnings” for CTOs leading fast-paced and highly productive engineering teams. You can learn more about our solution by contacting us here.

Disclosure

Sema publications should not be construed as legal advice on any specific facts or circumstances. The contents are intended for general information purposes only. To request reprint permission for any of our publications, please use our “Contact Us” form. The availability of this publication is not intended to create, and receipt of it does not constitute, an attorney-client relationship. The views set forth herein are the personal views of the authors and do not necessarily reflect those of the Firm.


Are you ready?

Sema is now accepting pre-orders for GBOMs as part of the AI Code Monitor.
