Chances are, you’re already thinking deeply about how your company can use Generative AI (GenAI) to increase efficiency, profits, and customer satisfaction. GenAI has the potential to be a true force multiplier for saving time, amplifying human ingenuity, and helping people become more productive. To actualize these benefits, though, it’s important to get ahead of the practical regulatory, security, and compliance risks.
However, using GenAI is not without risks, some only partially understood and many yet to be discovered. To prepare for the current and future challenges GenAI will introduce, your organization’s C-suite and board of directors will likely need to establish an AI Governance initiative to manage how GenAI can and should be used.
Here are 10 considerations for building your AI Governance program:
1. Build a performance-driven team
The team should be small enough to be decisive and reactive but large enough to represent the major departments in the organization. That likely means no fewer than five members and no more than 15, with a Goldilocks range of “ideally consisting of between seven and 12 members”, including a rotating seat to ensure a steady stream of fresh perspectives.
The departments represented may vary, depending on your organization, but OneTrust recommends you include representatives from “Legal, Ethics & Compliance, Privacy, Information Security & Architecture, Research & Development, and Product Engineering & Management.”
2. Establish and document why your company needs Generative AI
More than half of your employees are likely already using GenAI to complete their work, even if you don’t know about it. Here are some common uses for GenAI:
- Data analysis
- Customer success
- Assistive coding
- Content creation
- Hyperpersonalization
- Document summarization
- Meeting attendance and note-taking
3. Monitor regulations closely
With such new technology, it shouldn’t come as a surprise that rules and regulations are constantly changing. It’s essential to implement policy monitoring tools so you fully understand the regulatory landscape affecting your organization. As part of this process, work with your general counsel and regulatory affairs team to stay on top of updates.
4. Define clear policies
Once you’ve assembled the team and identified how you want to use GenAI, you need policies to guide employees. You don’t need to start from zero. These resources can help you identify a foundation to get started.
- “5 keys to include” from HRMorning
- “Setting an Acceptable Use Policy for Generative AI” from Ben Saunders, Chief Customer Officer at Mesh-AI
- “7 things to include in a workplace generative AI policy” from HR Dive
5. Map out GenAI usage
Whether you like it or not, there’s a very high chance that people in your company are already using GenAI, with some passing the work off as their own or using AI without disclosing it.
To counter this, focus on transparency and education so employees don’t hide their usage and can make the most of GenAI safely and securely. Making employees successful with GenAI should be your key goal, and that requires understanding when and where it is being used.
6. Establish accountability
Things will go wrong when you use GenAI. Like any computer system, GenAI tools are only as good as their inputs, and since those inputs come from fallible humans, their outputs can never be guaranteed to be fault-free any more than a person can be. In the future, “AI did it” will be a common excuse for avoiding blame for all kinds of failures, and that excuse-making leads to a loss of credibility and potentially severe legal repercussions. Defining in advance who owns GenAI outputs, and who answers for them when they fail, is the antidote.
7. Protect company data and IP
GenAI requires prompts and data to produce results, so a critical question when using these tools is what information can be shared with them, and how. Data privacy matters in every industry, some more than others, and feeding a tool the wrong data can make the results useless or even harmful. Understanding how to use AI while protecting privacy is one of the core jobs of your governance team, and there are many tools available to help.
8. Ensure fairness, equity, and access
GenAI can give some employees a substantial advantage over their peers. Those who use it successfully will produce more and better work, opening up advancement and pay opportunities their colleagues might miss out on.
Training programs help level that playing field. There are plenty of guides available to get you started with training and, not surprisingly, plenty of consultants providing educational services.
9. GenAI, customers, and clients
Another key consideration is what and when you communicate with your clients about your use of GenAI. When does it make sense to inform customers and clients about GenAI usage?
Transparency is critical, but so is protecting the company, and regulation will help establish the line between what is disclosed and what is kept private. Without a doubt, consumers want to know when GenAI is used to provide the products, services, and communications they receive, and the FTC is starting to take note.
10. Measuring the effectiveness of GenAI
You’re going through all this trouble to use GenAI, but is it all worth it? The only way to know where it’s helping is to measure and compare uses of GenAI. There are several excellent guides on KPI best practices for measuring the success of GenAI.
At the end of the day, building a framework for assessing the value of GenAI is the critical task of your AI Governance program.
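As a concrete illustration, one simple way to compare GenAI uses is to track the same KPI before and after adoption and report the percent change per team. The sketch below is a minimal, hypothetical example; the metric (average hours per task) and the team figures are invented for illustration, not drawn from any real benchmark.

```python
def percent_change(baseline: float, current: float) -> float:
    """Percent change of `current` relative to `baseline` (negative = reduction)."""
    return (current - baseline) / baseline * 100

# Hypothetical KPI: average hours to complete a standard task, per team.
baseline_hours = {"support": 6.0, "engineering": 10.0, "marketing": 4.0}
with_genai_hours = {"support": 4.5, "engineering": 8.0, "marketing": 2.0}

# Negative values indicate time saved after introducing GenAI.
report = {
    team: round(percent_change(baseline_hours[team], with_genai_hours[team]), 1)
    for team in baseline_hours
}
print(report)
```

Even a rough comparison like this gives the governance team a shared, numeric basis for deciding where GenAI is paying off and where it is not.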
Keeping track of global GenAI compliance standards
Periodically, Sema publishes a no-cost newsletter covering new developments in GenAI code compliance. The newsletter shares snapshots and excerpts from Sema’s GenAI Code Compliance Database. Topics include recent highlights of regulations, lawsuits, stakeholder requirements, mandatory standards, and optional compliance standards. The scope is global.
You can sign up to receive the newsletter here.
About Sema Technologies, Inc.
Sema is the leader in comprehensive codebase scans with over $1T of enterprise software organizations evaluated to inform our dataset. We are now accepting pre-orders for AI Code Monitor, which translates compliance standards into “traffic light warnings” for CTOs leading fast-paced and highly productive engineering teams. You can learn more about our solution by contacting us here.
Disclosure
Sema publications should not be construed as legal advice on any specific facts or circumstances. The contents are intended for general information purposes only. To request reprint permission for any of our publications, please use our “Contact Us” form. The availability of this publication is not intended to create, and receipt of it does not constitute, an attorney-client relationship. The views set forth herein are the personal views of the authors and do not necessarily reflect those of Sema.