Sema is introducing the GenAI Risk Report.
This Report summarizes and prioritizes risks from various uses of GenAI. Example risks may pertain to finance teams, operations efforts, legal considerations, security challenges, reputational issues, or other challenges.
The risk report includes an overview and count of identified Critical and High risk items, along with steps to take to reduce or prevent the risk.
Lastly, the Report explains the Risk Report methodology and details the current Critical (3) and High Risk (1) items.
Future editions will add additional risks from GenAI usage and may update the risk levels based on changes in the operating and regulatory environment.
GenAI Risk Report overview
Sema has identified three Critical GenAI risks (Critical = highest level of risk, applicable with any GenAI usage):
- High Risk AI Systems based on the EU AI Act.
- GenAI code is less secure than human-written code.
- Data Leakage: Coders using GenAI may leak proprietary information to external GenAI tools.
Sema has identified one High GenAI risk (High = second highest level of risk, applicable with substantial GenAI usage):
- Pure GenAI code will not get copyright protection.
Detail: Critical risks
GenAI code is less secure than human-written code.
Geographic scope = All
Functional scope = GenAI in Software Development
Industry scope = All
Description: Multiple studies have indicated that GenAI code is less secure than human code. An October 2023 Cornell University study found that 36% of GenAI-generated code introduced significant security warnings (CWEs). A December 2022 Stanford University study found that developers believed GenAI code to be safer while actually introducing more security vulnerabilities.
Mitigation / remediation: use SAST/DAST and CVE code scanning tools on production code.
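Commercial SAST tools apply hundreds of rules plus data-flow analysis, but the core idea of flagging insecure patterns and mapping them to CWE identifiers can be sketched in a few lines. The rules and CWE mappings below are illustrative assumptions, not a complete or production-grade rule set.

```python
import re

# Illustrative insecure-pattern rules mapped to CWE IDs. A real SAST
# tool ships far more rules and performs semantic analysis, not just
# line-by-line regex matching.
RULES = [
    (re.compile(r"\beval\s*\("), "CWE-95: eval() on dynamic input"),
    (re.compile(r"shell\s*=\s*True"), "CWE-78: OS command injection risk"),
    (re.compile(r"(?i)(password|secret)\s*=\s*['\"]"), "CWE-798: hard-coded credential"),
]

def scan_source(source: str):
    """Return (line_number, warning) pairs for lines matching a rule."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, warning in RULES:
            if pattern.search(line):
                findings.append((lineno, warning))
    return findings

snippet = 'password = "hunter2"\nresult = eval(user_input)\n'
for lineno, warning in scan_source(snippet):
    print(f"line {lineno}: {warning}")
```

A check like this could gate a CI pipeline by failing the build when any findings are returned, regardless of whether the flagged code was human- or GenAI-written.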
High Risk AI Systems based on the EU AI Act
Geographic scope = European Union.
Functional scope = GenAI as Product Functionality, GenAI as Internal Tools, GenAI provided by Vendor (likely).
Industry scope = any industry where GenAI is used in High Risk systems.
Description: The near-final EU AI Act defines certain uses of GenAI systems as High Risk if “they may adversely affect security or fundamental rights.” Such systems will need regulatory approval before they can be placed on the market.
Mitigation / remediation: Companies should not use or offer High Risk GenAI systems in the EU until the regulatory landscape is settled and the approval process is finalized.
Data leakage: Coders using GenAI may leak proprietary information to external GenAI tools
Geographic scope = All
Functional scope = GenAI in Software Development
Industry scope = All
Description: Developers may inadvertently share proprietary information while seeking guidance from GenAI tools. For example, a developer may share actual company code snippets, methods, or feature descriptions.
Mitigation / remediation: Developer training. Selection of safe GenAI tools for developers (GenAI in the SDLC). Banning of non-approved GenAI tools. Monitoring the code for examples of GenAI usage where GenAI tools are not permitted.
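One lightweight safeguard alongside training and tool approval is a redaction pass that strips likely-proprietary material from a prompt before it leaves the company network. The marker words and patterns below are assumptions for illustration; a real deployment would route traffic through an approved GenAI gateway with a maintained rule set.

```python
import re

# Patterns suggesting proprietary content; illustrative only.
SENSITIVE = [
    re.compile(r"(?i)\b(internal|confidential|proprietary)\b"),
    re.compile(r"(?i)api[_-]?key\s*[:=]"),
    re.compile(r"\b[A-Za-z0-9+/]{32,}={0,2}\b"),  # long base64-like tokens
]

def redact_prompt(prompt: str) -> str:
    """Replace any line matching a sensitive pattern with a placeholder."""
    cleaned = []
    for line in prompt.splitlines():
        if any(p.search(line) for p in SENSITIVE):
            cleaned.append("[REDACTED]")
        else:
            cleaned.append(line)
    return "\n".join(cleaned)
```

Redacting whole lines rather than just the matched token is a deliberately conservative choice: it loses some context but avoids leaking the text surrounding a credential or confidential marker.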
Detail: High risks
Pure GenAI code will not get copyright protection.
Geographic scope = USA.
Functional scope = GenAI in Software Development
Industry scope = All
Description: A US court case determined that works that were solely created by GenAI will not get copyright protection.
Mitigation / remediation: Track GenAI code usage. Educate developers on the importance of blending / modifying code. Keep any code where copyright protection may be sought to less than 50% Pure GenAI; the rest can be NotGenAI or Blended GenAI.
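Assuming provenance labels are recorded per line (the "genai" / "blended" / "human" labels here are hypothetical; in practice the tags would come from a tracking tool such as a code monitor), checking the Pure GenAI share against the 50% guideline is a one-pass calculation:

```python
def pure_genai_share(lines):
    """Fraction of tagged lines whose origin is pure GenAI.

    Each element of `lines` is (code, origin), where origin is one of
    "genai", "blended", or "human" (hypothetical provenance labels).
    """
    if not lines:
        return 0.0
    genai = sum(1 for _, origin in lines if origin == "genai")
    return genai / len(lines)

tagged = [
    ("def total(xs):", "genai"),
    ("    # validated against billing rules", "human"),
    ("    return sum(x.amount for x in xs)", "blended"),
]
share = pure_genai_share(tagged)
print(f"Pure GenAI share: {share:.0%}")
if share >= 0.5:
    print("Warning: blend or modify before seeking copyright protection")
```

The same routine could run per file or per repository, flagging only the units where copyright protection will actually be sought.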
Methodology
Geographic scope is based on known regulations or legislation, whether final or pending. Legislation or regulation that failed to pass in a previous legislative / policy-making session is excluded.
Functional scope: there are four categories for how GenAI can be used.
- GenAI as Product Functionality: for example, selling a product that uses GenAI to recommend whom to hire
- GenAI as Internal Tools: for example, a data science team using GenAI to analyze resumes received from HR to decide whom to hire
- GenAI provided by Vendor: for example, buying a software subscription to a product that uses GenAI to recommend whom to hire
- GenAI in Software Development: for example, developers using GenAI tools like GitHub Copilot or ChatGPT to code faster
Risk levels are assigned as follows:
- Critical: likely to cause financial, operational, security or legal risk if any GenAI is used
- High: likely to cause financial, operational, security or legal risk if substantial amounts of GenAI are used
- Medium: potential to cause financial, operational, security or legal risk
- Low: little potential to cause financial, operational, security or legal risk
GenAI Risk Report
More details and continuous updates from the GenAI Risk Report are available through the AI Code Monitor and the GenAI Compliance Tracker. Reply to learn more.
Sign up for the newsletter
You can subscribe by signing up here.
About Sema Technologies, Inc.
Sema is the leader in comprehensive codebase scans, with over $1T worth of enterprise software organizations evaluated to inform our dataset. We are now accepting pre-orders for AI Code Monitor, which translates compliance standards into “traffic light warnings” for CTOs leading fast-paced and highly productive engineering teams. You can learn more about our solution by contacting us here.
Disclosure
Sema publications should not be construed as legal advice on any specific facts or circumstances. The contents are intended for general information purposes only. To request reprint permission for any of our publications, please use our “Contact Us” form. The availability of this publication is not intended to create, and receipt of it does not constitute, an attorney-client relationship. The views set forth herein are the personal views of the authors and do not necessarily reflect those of the Firm.