What could go wrong with AI in your org?
Hi Everyone,
Just 2 years ago, 12% of S&P 500 companies mentioned AI as a material risk in their public filings. In 2025, that number hit 72%.
These filings focus on implementation failures, employees leaking data into AI tools, vendor exposure, and regulatory uncertainty – the same risks smaller companies face, often with fewer resources to manage them.
So we put together five questions your board, investors, or leadership team will likely raise, and what a solid answer looks like for each.
What's showing up in the filings
The Conference Board / ESGAUGE study breaks the most commonly reported risks into four categories:
- Reputational risk – failed rollouts, biased outputs, customer-facing mistakes
- Cybersecurity risk – AI expanding attack surfaces while giving adversaries better tools
- Third-party and vendor exposure – reliance on cloud providers, SaaS platforms, and external AI partners
- Data breaches and unauthorized access – sensitive customer or business data exposed through AI-driven systems
Can you answer these 5 questions?
These risks don't only apply to public companies. If you're using AI anywhere in your business, you need to manage them too. And it's much better to get on the front foot now than to have your board or investors catch you on the back foot later.
Five questions to help you guard against AI risks:
1. Do we know what AI tools our people are actually using?
Your team is almost certainly using AI tools you haven't approved.
They're pasting client data, financial details, and internal documents into ChatGPT and similar tools – often from personal accounts on personal devices.
A solid answer here means you have a list of approved tools, a clear policy on what data can go into them, and some way to check whether people are following it.
2. What happens if one of our AI tools gets something wrong?
A biased hiring recommendation, a hallucinated customer response, or a flawed financial analysis can do real damage.
You should know which AI outputs reach customers, partners, or regulators directly, and which ones must be reviewed by a person first.
If AI touches high-stakes decisions like pricing, hiring, credit, or compliance, someone should check the output before it leaves the building.
3. Who is accountable for AI decisions?
Each AI use case should have a clear owner — someone who approved it, who monitors it, and who is accountable if it fails.
In your company, that might be the CTO, the head of operations, or whoever owns the process in which the AI sits.
4. What's our exposure through vendors?
If you're using AI through a SaaS platform, a cloud provider, or an outsourced service, their risk is your risk.
You should know which vendors run your data through AI, how long they keep it, whether they use it to train their models, and what your contract says if something goes wrong.
5. Are we tracking the regulatory changes that affect us?
The EU AI Act is now partially in force, with fines that can reach €35 million or 7% of global annual turnover.
In the U.S., AI regulation is shifting to the state level, with rules that vary by jurisdiction and industry.
Someone in your organization should be watching what's changing in the places where you operate, and you should have a rough list of which AI systems might be affected.
Go deeper
👉 PwC: Board Oversight of AI – practical guidance on what boards should ask management about AI strategy and risk
👉 Orrick: The EU AI Act — 6 Steps to Take Before August 2026 – a step-by-step breakdown of what companies need to do before the next compliance deadline
👉 Harvard Law School Forum: AI Risk Disclosures in the S&P 500 – deeper analysis of reputational, cybersecurity, and regulatory risks in actual corporate filings
Coming up tomorrow
Tomorrow, we're breaking down how to move finance from backward-looking reports to forward-looking decision support.
See you tomorrow!
P.S. Could you answer all 5 of these questions right now for your company? If not, which one would be the hardest?