
Trustworthy AI reading list: December 28

With 2024 around the corner, Gen AI code adoption is a top priority for technology executives and their broader teams. The productivity benefits are becoming clear, as specific use cases of Gen AI code materialize and proliferate.

Dec 28, 2023

Consider Goldman Sachs, which has identified dozens of use cases for Gen AI to integrate into the organization’s core business units. With respect to Gen AI code, Goldman Sachs has identified the following use cases:

  • Writing code from English-language commands
  • Generating documentation
  • Automating parts of the code-writing process

As of May 2023, Goldman Sachs estimated that developers were accepting up to 40% of code recommended by Gen AI, with productivity gains in the low double digits.

“The technology is changing so fast in front of our eyes that I think it’s almost like the limit is ourselves and being able to rationalize it,” explained Marco Argenti, chief information officer of Goldman Sachs Group Inc., in a recent interview with The Wall Street Journal.

Goldman Sachs isn’t alone. At Sema, we are seeing similar trends across our customer base of enterprise organizations and leading investors. As we build toward the launch of AI Code Monitor in 2024, we are evaluating use cases for Gen AI code. Specifically, we are researching exactly how regulatory decisions may impact Gen AI code adoption at firms with considerations similar to Goldman Sachs’s.

How can CTOs balance the productivity benefits of Gen AI code with compliance guardrails? Understanding the bigger picture can help. Here are some resources that are sparking our curiosity here at Sema.

How nations are losing a global race to tackle AI’s harms

Source: New York Times

As a CTO, it’s helpful to understand how compliance and governance efforts fit into a broader regulatory picture.

In short, AI innovation is proliferating faster than policymakers can keep up. 

This NYTimes article provides thoughtful commentary on the problem, along with a recent historical summary of the challenges that governments are navigating.

“At the root of the fragmented actions is a fundamental mismatch. AI systems are advancing so rapidly and unpredictably that lawmakers and regulators can’t keep pace. That gap has been compounded by an AI knowledge deficit in governments, labyrinthine bureaucracies and fears that too many rules may inadvertently limit the technology’s benefits.”

As CTOs determine the role of Gen AI in their development team’s workflows, it will be helpful to understand the bigger-picture guardrails being implemented at the national level. With this situational awareness, technology teams can prevent technical debt due to compliance violations.

AI is about to face many more legal risks — here’s how businesses can prepare

Source: Fortune

It’s not just up to regulators to determine best practices for governing AI code. Practical guidance will also come from the courts.

“We’re more likely to get guidance from the courts than we are for a big pronouncement from a legislator,” says Jordan Jaffe, a partner at Wilson Sonsini Goodrich & Rosati’s San Francisco office, in an interview with Fortune.

Several prominent lawyers participated in interviews with Fortune.

“I think overregulation is as bad as underregulation,” says Danny Tobey, chair of DLA Piper’s AI practice. “Companies should be really focused on the differences between generative AI and traditional AI.”

“There has to be C-suite responsibility for material use of AI in the corporation,” says Brad Newman, a litigation partner resident in Baker McKenzie’s Palo Alto, Calif., office. 

One of Newman’s recommended solutions is for companies to designate a chief AI officer to assess how AI is implemented, establish lawful and transparent technology practices, and bring coders into the decision-making fold.

One potential solution is the implementation of a formal policy, such as the National Institute of Standards and Technology (NIST) AI Risk Management Framework, released in January 2023.

It will be advantageous for CTOs, chief risk officers, heads of regulatory affairs, and General Counsel to monitor pending court cases.

Insights from the pending Copilot class action lawsuit

Source: Finnegan

One important lawsuit to track is the pending class action lawsuit against GitHub Copilot, OpenAI, and Microsoft (GitHub’s parent company), pertaining to alleged violations and breaches of the Digital Millennium Copyright Act (DMCA) and open source licenses.

Here’s Finnegan’s take on the controversy and why the legal dispute will be important to follow. In short, CTOs must determine whether the business needs of implementing Copilot outweigh the legal risks.

“The Copilot case highlights the legal complexities surrounding the use of AI-generated code from tools like Copilot that have been trained on copyrighted materials. Code subject to open-source licenses is still copyright-protected, and the terms and limitations set forth under the open-source licenses govern the code’s use. As discussed, OSS licenses carry diverse obligations, usually including complex attribution requirements that differ by code. For AI companies and companies using AI, determining the content of training sets and whether AI tools directly reproduce code or independently create it remains challenging, especially when companies are dealing with millions of lines of code, if not more.

For software developers, refraining from using tools like Copilot until the lawsuit is resolved is the safest way to avoid an action for breach of OSS terms. But in the current competitive software development market, this recommendation may be impractical. Companies that choose to proceed with using AI-assisted tools should therefore exercise caution and avoid unnecessary risks.”

Microsoft announces new Copilot Copyright Commitment for customers

Source: Microsoft

Equally important to the Copilot case is Microsoft’s response to its customers. Understandably, Copilot customers are concerned about the risk of IP infringement claims that arise from Gen AI code. In response, Microsoft announced its Copilot Copyright Commitment, which extends the company’s existing intellectual property indemnity support to commercial Copilot services.

The announcement, published by Microsoft Vice Chair and President Brad Smith and Hossein Nowbar, CVP and Chief Legal Officer, elaborates: 

“Specifically, if a third party sues a commercial customer for copyright infringement for using Microsoft’s Copilots or the output they generate, we will defend the customer and pay the amount of any adverse judgments or settlements that result from the lawsuit, as long as the customer used the guardrails and content filters we have built into our products.”

More details pertaining to Copilot Copyright Commitment and the rationale for offering it are accessible via this link.

Copyright Office affirms fourth refusal to register generative AI work

Source: IPWatchdog 

Going deeper into the topic of copyright for Gen AI code, it is important to monitor decisions by the United States Copyright Office (USCO). On December 11, 2023, the Office’s Review Board released a letter “affirming the USCO’s refusal to register a work created with the use of artificial intelligence (AI) software,” according to Franklin Graves at IPWatchdog.

“The decision to affirm the refusal marks the fourth time a registrant has been documented as being denied the ability to obtain a copyright registration over the output of an AI system following requests for reconsideration.”

This decision affirms the USCO’s stance that generative works “are not protectable and therefore considered public domain.”

Graves notes a key takeaway: a fully developed legal strategy is necessary when communicating with the USCO.

The AI regulations that aren’t being talked about

Source: Deloitte

Among CTOs, compliance officers, and regulatory affairs teams, a key question pertains to nascent patterns among governments navigating Gen AI regulation.

“Directly regulating a fast-moving technology like AI can be difficult,” explains the publishing team at the Deloitte Center for Government Insights.

“Take the hypothetical example of removing bias from home loan decisions. Regulators could accomplish this goal by mandating that AI should have certain types of training data to ensure that the models are representative and will not produce biased results, but such an approach can become outdated when new methods of training AI models emerge. Given the diversity of different types of AI models already in use, from recurrent neural networks to generative pretrained transformers to generative adversarial networks and more, finding a single set of rules that can deliver what the public desires both now, and in the future, may be a challenge.”

To arrive at a recommended set of best practices, Deloitte analyzed an OECD database of 1,600 Gen AI policies, spanning everything from regulations to research grants to national strategies. The team then mapped those policies into a visualization that groups together policies frequently adopted in tandem within countries.

“Rather than trying to find a set of rules that can make AI models deliver the right outcomes in all circumstances, our data suggests regulators should focus on incentivizing those desired outcomes,” explains the Deloitte team.

The report further recommends best practices for regulators to consider. Enterprise CTOs and governance teams will value these suggestions.

‘Worlds of possibilities’ in a multidisciplinary approach to AI

Source: McKinsey

Regulation, governance, and ethics are inextricably linked. In a recent interview with McKinsey, Stanford computer science professor Dr. Fei-Fei Li provided a practical approach to governance models that prioritize human dignity. Li, the founding director of the Stanford University Institute for Human-Centered Artificial Intelligence, proposes a framework with three parts:

  • Part 1: AI is a multidisciplinary field (i.e., scientific discovery, economic impact, a superpower for education and learning)
  • Part 2: The most important use of a tool as powerful as AI is to augment humanity, not to replace it
  • Part 3: Intelligence is nuanced and complex

“I think putting guardrails and governance around the powerful technology is necessary and inevitable,” Li elaborates. “Some of this will come in the form of education. We need to educate the public, policy makers, and decision makers about the power, the limitation, the hype, and the facts of this technology.”

Enterprise CTOs have an important role in establishing these norms, simply by sharing data, providing anecdotes, and participating in civic discourse.

“A more urgent issue that many don’t see from a risk point of view—but I do—is the lack of public investment. We have an extreme imbalance of resources in the private sector versus the public sector.”

AI Incidents Monitor (AIM)

Source: OECD

As AI legislation gains traction, policymakers need evidence. Responding to this critical (and practical) need, the OECD has assembled a database of AI Incidents.

“Over time, AIM will help to show patterns and establish a collective understanding of AI incidents and their multifaceted nature and serve as an important tool for trustworthy AI,” writes the OECD.

At the time of our compiling this reading list at Sema, AIM has aggregated a total of 6,871 incidents and 39,685 articles. The database supports a taxonomy for AI principles, industries, affected stakeholders, harm types, and severity.

The resource is particularly helpful for C-suite teams seeking to wrap their minds around the human risks of generative AI more broadly.

The generative world order: AI, geopolitics, and power

Source: Goldman Sachs

Judgment calls pertaining to Gen AI governance will ultimately culminate in standards that define the global geopolitical order. Every decision, precedent, and governance-building initiative by CTOs requires forethought and care.

“The emergence of generative AI marks a transformational moment that will influence the course of markets and alter the balance of power among nations,” writes the research team at Goldman Sachs Office of Applied Innovation.

“Increasingly capable machine intelligence will profoundly impact matters of growth, productivity, competition, national defense and human culture.”

The authors elaborate:

“Escalating competition between the US and China, wars in Europe and the Middle East, and shifting global alliances have ushered in the most unstable geopolitical period since the Cold War. At the same time, we are experiencing what may be the most significant innovation since the internet: the rise of generative artificial intelligence. With the public release of ChatGPT on November 30, 2022, the defining geopolitical and technological revolutions of our time collided.”

Here we are, in a landscape in which micro and macro decisions are inextricably linked.

“The great power AI competition focuses on hardware, data, software, and talent,” explains the research team at Goldman Sachs.

Code Compliance Newsletter

Periodically, Sema publishes a no-cost newsletter covering new developments in Gen AI code compliance. The newsletter shares snapshots and excerpts from Sema’s code compliance database. Topics include recent highlights of regulations, lawsuits, stakeholder requirements, mandatory standards, and optional compliance standards. The scope is global.

You can sign up to receive the newsletter here.

About Sema Technologies, Inc. 

Sema is the leader in comprehensive codebase scans with over $1T of enterprise software organizations evaluated to inform our dataset. We are now accepting pre-orders for AI Code Monitor, which translates compliance standards into “traffic light warnings” for CTOs leading fast-paced and highly productive engineering teams. You can learn more about our solution by contacting us here.

Disclosure

Sema publications should not be construed as legal advice on any specific facts or circumstances. The contents are intended for general information purposes only. To request reprint permission for any of our publications, please use our “Contact Us” form. The availability of this publication is not intended to create, and receipt of it does not constitute, an attorney-client relationship. The views set forth herein are the personal views of the authors and do not necessarily reflect those of the Firm.

Want to learn more?
Request a demo of AI Code Monitor

Are you ready?

Sema is now accepting pre-orders for GBOMs as part of the AI Code Monitor.
