As the world pours billions into artificial intelligence, questions arise about whether the AI surge might be inflating a bubble—and what that could mean for company leaders if things don’t turn out as hoped.
A recent report from the United Nations Conference on Trade and Development predicts the global AI market could reach $4.8 trillion by 2033, a dramatic leap from its current size. This outlook has sparked heavy investment in AI talent, data centers, and automation across industries.
But for the people in charge, the company directors and officers, the worry isn't whether AI will change business, but whether their decisions on managing, reporting, and funding AI projects during this rapid growth could later draw criticism from shareholders.
Liability insurance providers are paying close attention. Some warn AI-related lawsuits may soon outpace claims connected to COVID-19 and cryptocurrency, which have been top issues in recent years.
Eric Wedin, head of financial lines for North America at Allianz Commercial, points out that while federal securities cases linked to COVID-19 and crypto have dominated the scene for years, AI has the potential to become the next big cause of legal trouble for company leaders.
Concerns about an AI bubble are not new. Last year, OpenAI CEO Sam Altman wondered aloud if investors might be overly excited about AI. Google’s CEO Sundar Pichai recently said the AI boom has "elements of irrationality," warning that no company would be safe if the bubble bursts.
Investors are now watching closely how companies spend on AI and what returns they promise. Wedin notes that if projects don’t deliver as expected or companies lag behind competitors, shareholders might hold directors responsible.
Often, this accountability shows up as lawsuits claiming companies made misleading statements or exaggerated growth prospects. Such claims tend to spike during rapid tech changes.
Nora Hattauer, head of financial lines at Zurich North America, highlights the unprecedented scale of AI investments, especially in building data centers. She stresses that boards need clear rules for managing these large commitments. Underwriters are demanding more details on accounting, balance sheets, and cash flow related to AI spending.
One major legal risk lies in disclosure. Overstated or missed earnings forecasts are a common trigger for securities lawsuits. Wedin says AI-related claims are already rising, with 13 cases filed in the first half of 2025 alone, nearly matching the total for all of 2024. Many suits argue companies overstated AI benefits or downplayed risks.
Even when companies include legal protections in their disclosures, shareholders and plaintiffs' lawyers will closely examine leaders' choices if results fall short of expectations.
Adding to the challenge is a shifting regulatory environment. In December 2025, the US Securities and Exchange Commission’s Investor Advisory Committee suggested new AI-specific disclosure rules. Around the world, different regulators are taking varied and sometimes overlapping approaches, making it tough for leaders of multinational companies to keep up.
Boards also face a tough balancing act with AI. Move too fast, and they might be accused of mismanagement; move too slowly, and they risk missing out on strategic chances. Rushing AI tools into use without proper testing—such as checking for bias or errors—can leave leaders open to legal claims. On the other hand, being seen as overly cautious could lead to lawsuits as well.
Experts say documenting why and how AI decisions are made is one of the best ways for boards to protect themselves. Relying on outside experts in finance, technology, and law, and recording these inputs in meeting minutes, can strengthen their case if questioned later.
Wedin also points to the importance of thorough reporting from company management, strong governance, formal risk controls, and ongoing education for directors.
Hattauer advises boards to regularly review AI projects since assumptions and market conditions can change over time. She recommends preparing for surprises and being ready to adjust strategies if needed.
Another area raising eyebrows is how companies finance their AI plans. Zurich has noticed more use of alternative arrangements, like special-purpose vehicles and off-balance-sheet deals, to fund large data centers. While not necessarily wrong, these approaches make insurers wary. Boards must clearly explain why these structures are used and how risks are managed.
Liquidity is a concern, especially for smaller companies eager to keep pace. Boards need to assess realistically whether their finances can support their AI ambitions, particularly if revenue lags.
Beyond infrastructure, governance extends to how AI is used in day-to-day operations. This includes hiring, underwriting, and customer interactions, where new rules and risks around bias and errors are emerging.
Insurance brokers say underwriters look for solid evidence that boards are making careful decisions, relying on qualified experts, providing honest disclosures, and understanding financing challenges.
Following these fundamentals can help reduce risk for both companies and their leaders.