New Capabilities, New Risks, and New Opportunities: the Impact of AI on M&A and Private Equity Insurance

Aug 28, 2024

Introduction

The rapid adoption of Artificial Intelligence (AI) is reshaping every sector, functional area and organization type.

This transformation is especially pronounced in the realms of Private Equity (PE) and Mergers and Acquisitions (M&A), for three reasons.

  1. First, at the fund level, the elite, data-driven, and outcome-focused talent concentrated at PE firms leads to highly rigorous assessments of new technologies and approaches for investment.
  2. Second, during due diligence, tight timelines and voluminous materials make experimentation with new tools essential.
  3. Third, at the portfolio company level, given the tight nexus between ownership and management that PE ownership provides, PE-backed firms (“portfolio companies”) are incentivized and encouraged to experiment quickly and adopt high-value operational improvements.

As AI changes the practices of PE funds and portfolio companies, and of Mergers and Acquisitions more generally (in other words, as new capabilities emerge), the providers of insurance products will need to adapt too.

Some of those adaptations are necessary to address new risks from AI, and ensure that risk allocations are properly balanced between insurers and the insured.

But incorporating the rapid adoption of AI into insurance offerings is not just about managing the downside risk. There are significant new opportunities for carriers given the rise of this new technology.

Automotive, Aviation, Cyber and Climate Risk Insurance all came about as the result of significant global technology changes. Our research indicates a similar opportunity for insurers for AI, given the size, breadth and depth of its adoption in the decade ahead.

This white paper:

  • Recaps how AI is transforming PE and M&A, for investors, due diligence and value creation / post merger integration
  • Establishes a nine-part framework for new and modified risks stemming from AI usage relevant to carriers
  • Identifies potential policy-specific changes for Reps and Warranties (R&W), Directors and Officers (D&O), Errors and Omissions (E&O), and Cyber Insurance products that insurers may adopt, and product-agnostic capabilities to keep pace with AI adoption by insured parties
  • Presents a “thought starter” list of six new AI insurance products that could respond to this changing landscape

New Capabilities: The Rapid Rise of AI in PE and M&A

AI tools are rapidly being adopted up and down the PE and M&A value chain, including deal sourcing, due diligence, and company operations via value creation and Post Merger Integration (PMI).

Deal Sourcing

AI tools are increasingly used to identify potential acquisition targets by analyzing vast datasets, including market trends and competitor activities. This automation allows for a more comprehensive market scan and the identification of high-potential targets that may have been overlooked by human analysts.

AI adoption is already significant and will only increase. According to recent research by SS&C Intralinks:

  • About one-third of firms are early adopters of AI.
  • 43% have invested in AI training for their deal teams.
  • 32% plan to restructure their deal teams as AI adoption accelerates.
  • 97% of M&A professionals believe AI will profoundly impact their operations and processes.

Due Diligence

AI usage in due diligence is expanding in two areas: using AI as a diligence aid, and diligencing the potential target’s own use of AI.

As a diligence aid, AI can process enormous datasets more quickly and identify potential risks and opportunities that might be overlooked by human analysts.

  • AI systems can identify subtle patterns and correlations in due diligence data, uncovering hidden financial trends and operational inconsistencies that human analysts might overlook, thus providing critical insights for decision-making.
  • AI-powered due diligence software can efficiently identify critical contractual clauses like 'change-of-control' and 'non-compete' provisions, as well as flag potential legal, financial, or documentation risks.
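
As a minimal sketch of this clause-flagging pattern, the snippet below asks a general-purpose LLM to surface change-of-control and non-compete provisions in a contract excerpt. It uses OpenAI’s Python SDK purely for concreteness; the prompt wording, the flag_clauses helper, and the sample excerpt are illustrative assumptions rather than a description of any particular vendor’s diligence product.

```python
# Illustrative sketch only: prompt text, helper name, and excerpt are assumptions.
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY is configured in the environment

SYSTEM_PROMPT = (
    "You are assisting with M&A due diligence. Identify any change-of-control "
    "or non-compete provisions in the contract excerpt provided. Quote each "
    "clause and add a one-sentence risk note, or reply 'None found'."
)

def flag_clauses(contract_excerpt: str) -> str:
    """Ask a general-purpose LLM to flag diligence-relevant clauses."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": contract_excerpt},
        ],
        temperature=0,  # reduces, but does not eliminate, run-to-run variation
    )
    return response.choices[0].message.content

# Example usage with a fabricated excerpt:
print(flag_clauses(
    "Section 9.2: This Agreement may be terminated by Licensor upon any "
    "change of control of Licensee without Licensor's prior written consent."
))
```

In practice, outputs like this serve as review aids for human analysts rather than final determinations, which is one reason the oversight risks discussed later in this paper still apply.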

When used wisely, the impact can already be significant. Document review in due diligence, for example, is particularly ripe for AI assistance. Cambridge Investment Management estimated an 85% reduction in document review during diligences. This is in line with our research estimates of a 6.5X-11X return on investment for AI tooling in document review, agnostic to use case.

Beyond today’s uses, the due diligence industry is only at the beginning of discovering the creative and impactful ways that AI will facilitate due diligence. For example, a recent Bain whitepaper highlighted a diligence in which general-purpose GenAI was used to replicate a potential acquisition’s technology:

In a matter of days, the team built a series of prototypes using OpenAI’s GPT-4 API and other open-source models. They then tested these “competitors” against the target’s solution and found that all of them performed significantly better in a number of ways. This allowed the fund to quickly make a call on the opportunity.

The second area of AI in diligence is evaluating targets’ use of AI. In-house Operating teams, Advisory firms, and other M&A ecosystem players have had to develop this expertise practically overnight. The importance of top-tier technologists, both AI specialists and Tech generalists, who can “talk tech” and also “talk business,” has never been higher; these professionals are in all-time-high demand.

Today, most assessment of targets’ AI risks is qualitative and interview-based, but quantitative diligence tools are becoming more widespread.

For example, in Fall 2023, several late-stage PE firms and global acquirers identified the security, IP, and maintainability risks of Generative AI code in targets’ codebases. They commissioned a vendor to build a software solution to manage that risk. After a few months of testing, that solution had expanded into all of their diligences by Summer 2024.

Value Creation and Post Merger Integration

The depth of AI adoption at the investor / acquirer level for sourcing and due diligence is significant. But the breadth of that adoption pales in comparison to usage at the target / acquisition level during the Value Creation and / or Post Merger Integration phases. This is due to the number of operating functions across industries that can benefit from general-purpose or specialized AI tools.

A partial list of use case examples includes:

  1. Predictive Analytics for Business Forecasting and Risk Assessment: AI models analyze vast amounts of historical data to predict potential outcomes, helping firms make more informed decisions. This is particularly useful in identifying patterns that might suggest future financial performance, market trends, and potential risks.
  2. Content Creation: Advanced language models are being employed to automate content creation, whether that content is in human languages or computer languages. The adoption of GenAI to assist software developers is a particularly ripe application; our research estimates a 41X return on investment from software developers adopting GenAI tools.
  3. Process Optimization / Automation: AI algorithms quickly sift through enormous datasets, identifying patterns and potential issues that human analysts might miss. This capability is particularly valuable in sectors with complex operational environments, where efficiency and compliance risks need to be meticulously managed. AI-powered tools can also effectively serve customers / users with standard requests, leading, for example, to an explosion in AI usage in customer support functions.
  4. Customer Insights Generation: AI can analyze customer feedback, social media, and other text sources faster and in greater depth to gauge market sentiment about products or services (see the sketch after this list). This insight can be crucial in assessing customer satisfaction and predicting market reactions to new offerings.
  5. Decision Support Systems: AI systems provide data-driven recommendations, scenario analyses, and risk assessments in Healthcare (diagnoses), Retail (fraud, recommendations), Insurance (claims), Financial Services (underwriting), Manufacturing (process optimization), and beyond.
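
As a minimal sketch of the customer-insights use case in item 4, the snippet below classifies a handful of invented feedback snippets with an off-the-shelf sentiment model from the open-source Hugging Face transformers library; the feedback strings and the choice of library are illustrative assumptions, not a recommendation of a specific tool.

```python
# Illustrative sketch only: the feedback strings below are invented examples.
from transformers import pipeline

feedback = [
    "The new checkout flow is so much faster, love it.",
    "Support took three days to answer a simple billing question.",
    "Pricing is fair but the mobile app keeps crashing.",
]

# Downloads a small default English sentiment model on first use.
classifier = pipeline("sentiment-analysis")

# Print each snippet with its predicted sentiment label and confidence score.
for text, result in zip(feedback, classifier(feedback)):
    print(f"{result['label']:>8} ({result['score']:.2f})  {text}")
```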

Two notes are important here.

First, AI is not a panacea; not every proposed use case has positive ROI or is worth the offsetting risks. But it is noteworthy how many use cases already have positive ROI, despite the relatively early stage of general-purpose AI tools. As internal and external experimentation continues, the number of use cases where the “juice is worth the squeeze” will only increase.

And second, from a risk management perspective for insurers, whether or not an insured’s use of AI is an optimal one is immaterial; the mere fact that AI is being used can generate risk. To cite but one recent example, Google’s automatic AI summarization inaccurately recommended drinking urine to pass kidney stones. Whether or not Google should have released the tool, it most certainly did so, generating risk for the organization and its stakeholders.

New Risks: How Risk Profiles Are Changing Due to PE / M&A AI Usage

AI adoption in PE and M&A simultaneously introduces new risks and new flavors of old risks for investors, operating organizations, and insurers.

These risks can be grouped into nine categories:

  1. General Inaccuracy. AI tools can be confidently wrong, producing answers that look correct without caveats or acknowledgement of the underlying uncertainty.
  2. Bias, or Directed Inaccuracy. AI results can give consistently rather than randomly inaccurate answers, with significant implications. For example, an AI system used in hiring might inadvertently and inappropriately favor certain candidates over others based on biased training data.
  3. Unpredictability. Unlike traditional, deterministic data analysis tools, AI models are probabilistic and may not produce the same answer twice when presented with identical data (see the sketch following this list). These tools can also evolve on their own over time as they process new data, unlike traditional analytics, which require human intervention to reset standards.
  4. Liability Attribution. As AI systems take on more decision-making roles, questions of liability become more complex. Who is responsible for an error from an AI system that is based on a general-purpose AI tool (Party A), modified by a specialist software provider (Party B), and put into use by a business (Party C)?
  5. Intellectual Property / Output Ownership. Intellectual Property experts have identified that courts and regulatory bodies are at the beginning of the journey of deciding who owns what from products made in part or in full from AI.
  6. Ethical Considerations and Reputational Risks. Facilitating or accidentally implementing AI uses that have negative ethical implications can expose organizations to reputational risks with real-world implications.
  7. Security. AI tools require more data and the integration of data that previously may have been siloed. This increases the risk of inadvertent security breaches while also presenting an even more attractive target to bad actors.
  8. Shifting Regulatory Requirements. Organizations that use or provide AI tools will have to stay compliant as the regulatory landscape evolves; the recently enacted EU AI Act is just the tip of the iceberg.
  9. Lack of Historical Data. For risk underwriters in particular, there is a specific risk caused by the newness of AI usage, making it challenging to predict potential losses and set appropriate premiums.
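
To make the Unpredictability category (item 3 above) concrete, the short sketch below sends an identical prompt to a general-purpose LLM twice; with default sampling settings, the two answers will usually differ in wording and sometimes in substance, unlike a deterministic analytics query run twice against the same data. The question text is an invented example, and OpenAI’s SDK is used purely for illustration.

```python
# Illustrative sketch only: the question text is an invented example.
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY is configured in the environment

QUESTION = (
    "In two sentences, summarize the biggest risks of relying on AI "
    "during M&A due diligence."
)

# Identical input, two runs: the responses typically differ, illustrating
# why reproducing AI-assisted analysis can be harder than re-running a
# deterministic query.
for run in (1, 2):
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": QUESTION}],
    )
    print(f"Run {run}: {response.choices[0].message.content}\n")
```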

With this framework in mind, consider the following examples of risk profiles changing based on specific AI usage in M&A and PE.

  • The AI-powered tools used to assist with due diligence or with operational practices like hiring or risk underwriting could exhibit General Inaccuracy or Directed Inaccuracy (Bias).
  • When organizations over-rely on AI without proper human oversight, AI can inadvertently facilitate sub-optimal diligence or operational decisions. Repeated studies have shown that humans are overly trusting of AI outputs.
  • Unlike deterministic processes, probabilistic determinations can make it very difficult to reproduce results and / or provide sufficiently clear explanations of high-stakes diligence decisions.
  • On the Intellectual Property side, a strict reading of current US Copyright Office guidance suggests that software generated with AI tools will not receive copyright protection. If GenAI code generation continues to advance at the predicted accelerated rate (consider AWS CEO Matt Garman stating that “it's possible that most developers are not coding” by 2026), the potential consequence could be an expensive, heavily litigated change to software copyright protection.

New Opportunities, Part 1: Implications for Existing Insurance Products

Section Introduction

As discussed above, the rapid adoption of AI tools has meant new and enhanced capabilities for investors, acquirers, and operators – not to mention insurers themselves.

Along with those new capabilities have come new risks, and new flavors of old risks.  

Even as companies are pressing ahead with AI adoption, their internal risk managers understand how much uncertainty and risk is being introduced.

As one CTO told us: “I’m hearing from my CEO how can I go faster, and from my Chief Counsel how do I know we’re protected?”

For forward-looking insurers, these new AI risks present new opportunities: to adapt existing insurance products to meet the market demands and underwrite risks properly, and to introduce new products. 

A sample of potential AI-driven changes to existing insurance products offered to PE and M&A firms is covered in this section, along with product-agnostic capabilities that insurers would be wise to build, buy, or facilitate. Initial ideas on new products are covered in the following section.

Reps and Warranties (R&W)

Changes to R&W Insurance policies and practices due to AI could include:

  • Adding specific provisions addressing AI-related representations and warranties, including exclusions or limitations for certain risks
  • Accounting for liabilities arising from the use of improperly sourced or biased data in AI models used during the deal process
  • Developing new underwriting methodologies to assess the accuracy and reliability of AI-generated financial projections
  • Accounting for increased complexity in evaluating representations related to intellectual property
  • Verifying the completeness and accuracy of AI-related disclosure

Directors and Officers (D&O)

Potential changes include:

  • Explicitly addressing AI governance and oversight responsibilities in D&O policies
  • Underwriting risks from potential claims related to AI oversight and AI-driven decision-making
  • Considering pricing adjustments, e.g. premium incentives for firms with robust, demonstrated AI risk management processes, and accounting for increased claim frequency and severity
  • Adding explainability coverage, for example, for situations where the inability to explain AI-driven decisions leads to legal challenges

Errors and Omissions (E&O)

Potential changes include:

  • Accounting for AI-generated advice and decisions in policy documents
  • Developing expertise to determine liability when errors involve both human judgment and AI recommendations
  • Developing expertise to assess the reliability and performance standards of AI models
  • Considering pricing adjustments in light of changing claim volumes, a task made more difficult by the lack of historical data described above
  • Eventually, offering customers Non-Use Protection, that is, coverage against claims that a firm failed to utilize available AI technologies, resulting in suboptimal outcomes

Cyber Insurance

Several of the above changes also apply to Cyber. Additional changes could include:

  • Explicitly addressing AI-specific vulnerabilities and potential breaches
  • Expanding coverage for AI model and data theft and misuse, as well as other attack vectors
  • Developing methodologies to quantify business interruption losses resulting from AI model manipulation or theft
  • Considering new coverage sublimits or exclusions for certain high-risk AI implementations or data types

Product-Agnostic Capabilities

For each of these products, insurers could benefit from acquiring or enhancing the following capabilities, through a combination of building (developing capabilities in-house), buying (partnering with vendors), or facilitating (joining industry-, government-, or academia-led consortia).

  • Standardizing assessments of individual GenAI tools and use cases, both for safety against the nine risk dimensions above and for efficacy
  • Standardizing assessments of GenAI implementations
  • Quantifying and tiering assessments of the role of “humans in the loop” for any high-risk AI systems or applications
  • Developing standards for overall AI risk exposure. Great standards (clear, actionable, and understandable at many levels of technical and risk expertise) would effectively drive tradeoffs not only for the C-Suite, Boards, and insurers, but also for managers and front-line employees

New Opportunities, Part 2: Looking Ahead to New Insurance Products

New Opportunities, Part 1, covered opportunities for insurers to identify, manage, and price the risk for their existing products serving PE and M&A.

But what about new products?

As noted above, Automotive, Aviation, Cyber and Climate Risk Insurance all came about as the result of significant global technology changes.

What opportunities exist for Insurers to help organizations manage their unique and evolving needs as a result of the massive AI disruption?

Here are six thought starters for new insurance products.

  1. AI Implementation Insurance
    • Covers risks associated with integrating AI systems in acquired companies or portfolio firms
    • Addresses potential disruptions or losses during the AI adoption process
    • Includes coverage for retraining costs and temporary productivity losses
  2. AI Decision Liability Coverage
    • Protects against claims arising from AI-driven business decisions
    • Covers potential losses due to algorithmic bias or AI system failures
    • Includes provisions for legal defense in cases challenging AI-driven decisions
  3. AI Intellectual Property Defense Insurance
    • Covers legal costs related to AI-generated IP disputes
    • Addresses the unique challenges of determining ownership and infringement in AI-created content
    • Includes coverage for defending against claims of AI model or algorithm theft
  4. AI Model Performance Insurance
    • Covers financial losses resulting from underperforming AI models
    • Includes provisions for model validation and ongoing monitoring
    • Offers protection against "black box" risks where AI decision-making processes are not fully transparent
  5. AI Ethics and Compliance Insurance
    • Covers risks related to ethical issues arising from AI use in M&A and PE
    • Includes coverage for regulatory fines and penalties related to AI compliance
    • Offers protection against reputational damage from AI-related ethical breaches
  6. AI Insurance
    • Blanket coverage encompassing all of the above

These new insurance products represent significant opportunities for insurers to differentiate themselves in the market and address the evolving needs of M&A and PE firms as they increasingly adopt AI technologies.

Conclusion: Mitigating the Risks, Capturing the Opportunities

As AI continues to reshape the PE and M&A landscape, the role of insurance in mitigating risks and facilitating innovation becomes increasingly critical.

By embracing these changes and proactively addressing the challenges they present, carriers can not only protect their clients but also drive the responsible and effective integration of AI into these industries and processes.

The work for carriers who want to take a market-leading position to adapt to and shape AI adoption in the decade ahead is not small, but neither is the opportunity.

The future of PE and M&A risk management will be shaped by those who can successfully balance the transformative power of AI with prudent risk management.

Learn More

About Sema  

Sema is a software company that “bridges the gap” between technical and non-technical audiences to explain and evaluate software development and code.

Sema’s Comprehensive Codebase Scans have been used to evaluate $1T worth of software organizations for globally-leading software investors and acquirers.

Sema is also the global leader in GenAI coding usage standards and the inventor of the Generative AI Bill of Materials, which tracks how much Generative AI code has been included inside a codebase, to manage the risks from coders inserting code written by LLMs.

Sema’s Policy and Research Team is working on a broad series of whitepapers related to AI and Insurance, and welcomes hearing about which topics matter most for future research. The Team also welcomes feedback and ideas on the next iteration of this whitepaper.

Want to learn more?
Learn more about AI Code Monitor with a Demo

Are you ready?

Sema is now accepting pre-orders for GBOMs as part of the AI Code Monitor.
