GenAI is pushing CTO leadership to evolve: Here's how 3 are navigating the shift

Dec 12, 2023
7 min read

Earlier this year, McKinsey published a forecast that GenAI could add the equivalent of $2.6 trillion to $4.4 trillion to the global economy annually, with about 75% of the value across the 63 use cases it analyzed falling into four areas: customer operations, marketing and sales, software engineering, and R&D.

Understandably, this macro perspective is the driving force behind current GenAI adoption trends in the software development lifecycle (SDLC), placing new pressure on CTOs to balance productivity gains against risks to their engineering organizations.

Here’s a look at how CTOs are evaluating the resulting challenges and tradeoffs.

Defining the risk vs. reward balancing act

It’s a tale of two realities for software development teams.

On one side of the equation, large language models (LLMs) present clear and tactical benefits to the SDLC, particularly with respect to automating redundant tasks, introducing new problem-solving toolkits to developer workflows, reducing technical debt, and freeing up engineering resources to focus on complex initiatives.

On the other side of the equation, with the proliferation of AI tooling for the SDLC, CTOs are encountering new terrain with respect to governance, risk, and security. The core challenge with AI comes down to the unknown unknowns of a fast-moving market in which GenAI technology evolves faster than best practices can be defined.

There are seemingly contradictory rules of thumb at play. Organizations need to be speeding up and slowing down, all at the same time, against the backdrop of regulatory frameworks that aren’t yet established.

Move too fast, and engineering infrastructure risks technical debt, major security issues, and compliance problems that could result in heavy fines. Move too slowly, and a company’s technical foundations become obsolete.

What is the practical path forward? 

How CTOs from SAP, the City of Seattle, and Nationwide are taking action

Establishing a long-term vision for adaptability

In moments of fast-paced, compounding change, it helps to pause, listen, and look beyond momentary discussions to focus on the long term. This perspective-setting requires a high degree of pragmatism, especially while the regulatory future of GenAI is yet to be determined.

“As CTO of SAP, I witness firsthand the potential business benefits and ground-breaking use cases that GenAI brings to the table,” explained Juergen Mueller, CTO at SAP, in an article for CIO. “However, with this transformative technology also come questions that keep tech leaders awake at night.”

At SAP, long-horizon planning — especially considering AI’s regulatory uncertainty — prioritizes the fundamentals of a thoughtful work culture that supports human ingenuity, wellbeing, and happiness at work. 

“As CTOs and CIOs, we must provide resources and strategies to enable employees to thrive. At SAP for example, we have set up an internal ‘AI playground’ that is very popular with employees,” elaborates Mueller. “More than 80% of our early adopters using GitHub Copilot say it has increased their development productivity.” 

In 2023, SAP began productizing applications of GenAI for specific enterprise agility bottlenecks (e.g., writing job descriptions, analyzing data, summarizing business-critical information, and providing coding assistance). These capabilities are integrated into an internal PaaS (platform-as-a-service) solution with a SaaS (software-as-a-service) interface called Joule. Positioned as an AI assistant, Joule studies the role of each user and incorporates contextual data to make judgment calls.

Part of these judgment calls involves risk mitigation.

“Stringent technical and process controls must be in place to detect any potential tampering of trusted data sources used for training, fine-tuning, and embeddings,” writes Mueller.

Committing to operationalizing self-governance, self-regulation, and privacy

The White House’s recent Executive Order establishes a framework for safe, secure, and trustworthy artificial intelligence. In specific terms, the Executive Order directs 26 actions across 8 categories to guide the ethical implementation of AI:

  1. New standards for AI safety and security
  2. Protecting Americans’ privacy
  3. Advancing equity and civil rights
  4. Standing up for consumers, patients, and students
  5. Supporting workers
  6. Promoting innovation and competition
  7. Advancing American leadership abroad
  8. Ensuring responsible and effective government use of AI

These north-star objectives are only as practical as the operational blueprint required to implement them, and a well-defined policy serves as an organization-specific blueprint. One example to consult: Seattle’s Mayor and Interim CTO recently released a generative artificial intelligence policy for the City.

“The governing principles include innovation and sustainability; transparency and accountability; validity and reliability; bias and harm reduction and fairness; privacy enhancing; explainability and interpretability; and security and resiliency,” explains a recent article for GovTech.

“The new policy touches on many aspects of generative AI, highlighting key factors of responsible use for a municipality. This includes attributing AI-generated work, having an employee review AI work and limiting the use of personal information to develop AI products.”

This policy is part of a broader plan by the City of Seattle to address long-standing municipal issues, including racial equity, housing costs, and climate change. The City’s Generative AI Policy Recommendations, part of these bigger-picture human wellness objectives, were released in August 2023 following a six-month collaboration among City employees, according to GovTech. The policy is available for reference here and serves as an example for organizations, in both the public and private sectors, looking to operationalize President Biden’s executive order.

Connecting governance to engineering activity 

The promises and pitfalls of GenAI adoption require cross-functional problem-solving, and technical leaders will benefit from a protocol that systematizes dissent, critical inquiry, collaboration, and consensus. Nationwide, for example, has established an AI Steering Committee, led by the office of CTO Jim Fowler, to shepherd the responsible development of AI systems. The committee’s goal is both to support employees and to evaluate specific business use cases for the implementation of AI solutions.

In practice, here’s how Nationwide’s AI steering committee operates, according to a recent interview with Fowler (a brief sketch of this structure follows the list):

  • There are two teams: a blue team and a red team
  • The blue team evaluates the benefits of GenAI (e.g., productivity gains, higher levels of service)
  • The red team assesses potential risks (e.g., cybercrime, unethical use for malicious purposes, detrimental effects of bias)
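
As an illustration only, here is a minimal sketch of how a two-sided assessment like this might be captured as a reviewable record. The class, fields, and decision rule are hypothetical assumptions, not Nationwide’s actual tooling.

```python
from dataclasses import dataclass, field

@dataclass
class UseCaseAssessment:
    """Hypothetical record of one GenAI use case reviewed by a steering committee."""
    use_case: str
    blue_team_benefits: list[str] = field(default_factory=list)  # upside documented by the blue team
    red_team_risks: list[str] = field(default_factory=list)      # downside documented by the red team
    human_reviewer: str | None = None                            # a named sign-off is mandatory

    def ready_to_proceed(self) -> bool:
        # Both teams must have weighed in, and a human must have signed off.
        return (
            bool(self.blue_team_benefits)
            and bool(self.red_team_risks)
            and self.human_reviewer is not None
        )

# Example: evaluating a coding-assistant rollout
assessment = UseCaseAssessment(
    use_case="GenAI coding assistant for internal tooling",
    blue_team_benefits=["faster boilerplate", "higher service levels"],
    red_team_risks=["license contamination", "biased suggestions"],
    human_reviewer="jane.doe",
)
print(assessment.ready_to_proceed())  # True
```

The point of a record like this is less the code than the discipline: no use case advances on benefits alone.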

“In all cases, we have set guardrails, and we believe there must be a human in the loop,” explains Fowler. 

“Our bionic world is humans plus machines – we want to make sure that in any decision that is made specifically with generative AI, there is a human involved in vetting the output of the model. People in the loop provide empathy, judgment and reason needed to complete a project, the machine doesn’t have that capability.”
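
To make the human-in-the-loop guardrail concrete, here is a minimal sketch of an approval gate, assuming a generic review flow; the function name and exception are illustrative, not a description of Nationwide’s systems.

```python
class HumanReviewRequired(Exception):
    """Raised when GenAI output is used without a human sign-off."""

def release_genai_output(output: str, approved_by: str | None = None) -> str:
    # Guardrail: no model output ships until a named human has vetted it.
    if not approved_by:
        raise HumanReviewRequired("GenAI output requires a human reviewer before release.")
    return output

# Example: a reviewer vets a model-generated summary before it is used downstream
draft = "Model-generated summary of the claims backlog..."
vetted = release_genai_output(draft, approved_by="reviewer@example.com")
```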

Ultimately, developers know their engineering workflows best. Guardrails are guiding principles that help everyone make better decisions in the moment.

Conclusion

There are two common threads that connect the experiences above: peer review and people.

AI code is only as safe, valuable, and useful as the people making the decisions to implement it — and it’s this exact challenge that necessitates a higher degree of collaborative problem-solving between CTOs, developers, regulatory professionals, and other stakeholders.

The key to establishing best practices for GenAI is choosing ones that can withstand the test of time. Coding and development capabilities will change. What are the values that will outlast these shifts?

Sema is the leader in comprehensive codebase scans, with over $1T worth of enterprise software organizations evaluated to inform our dataset. We are now accepting pre-orders for AI Code Monitor, which translates compliance standards into “traffic light” warnings for CTOs leading fast-paced, highly productive engineering teams. Sign up for the waitlist to get notified when we launch publicly in Q1 2024.

