How Boards Are Starting to Use AI to Navigate Complexity, Risk and Strategic Decisions
Artificial intelligence is reshaping companies, industries and entire business models. Yet one of its most consequential effects has barely been discussed: how AI is starting to transform the boardroom itself.
Until recently, board-level conversations about AI were almost exclusively about oversight: cybersecurity, ethics, regulation, data privacy, operational risk. Directors were expected to supervise AI adoption across the organisation while remaining largely external to the technology. That assumption is breaking down.
More and more organisations are experimenting with AI not as something to be governed, but as something that supports the act of governing. AI is starting to enter the boardroom: helping directors prepare meetings, synthesise complex information, identify emerging risks and, in some cases, challenge assumptions during strategic discussions. AI is not yet replacing the board. It is beginning to change how the board prepares, asks questions, supervises and thinks.
The shift is happening at a moment when governance is becoming markedly more demanding. Directors are expected to navigate geopolitical fragmentation, technological disruption, the climate transition, cyber threats, regulatory complexity and rising stakeholder expectations, all under intense time pressure and persistent information overload. In many companies, board members receive several hundred pages of materials ahead of each meeting. Anyone who has sat on a board knows that access to information is no longer the constraint. The constraint is the capacity to process complexity without losing strategic clarity or sound judgement.
The State of Play
The starting point is sobering. According to a global McKinsey survey of directors, 66% of boards report “limited to no knowledge or experience” with AI, and nearly one in three say AI does not even appear on their agendas. Only around 15% of boards currently receive AI-related metrics, and fewer than 25% of companies have board-approved, structured AI policies. The gap between what AI is doing inside organisations and what boards understand about it is wider than most directors would like to admit.
And yet the strategic implications are becoming increasingly difficult to ignore. Research from MIT CISR found that companies with digitally and AI-savvy boards outperform their peers on return on equity by 10.9 percentage points, while companies without such boards lag behind their industry average. While this evidence is largely based on U.S.-led research and should not be read as proof of causality, it does suggest that board-level digital and AI fluency is increasingly associated with stronger corporate performance. At the same time, governance practices still appear to trail boardroom discussion. According to recent NACD survey data, focused primarily on U.S. public company boards, around 62% of boards now dedicate agenda time to AI discussions, yet only about 27% have formally embedded AI oversight into committee structures or governance frameworks. In Europe, where regulatory expectations around AI, data governance and accountability are already more explicit, this gap may become even harder to justify. Many boards, in other words, are still in learning mode at precisely the moment when AI is moving from a technological topic to a strategic and governance reality.
Meanwhile, the early adopters are no longer hypothetical. In April 2026, Lloyds Banking Group became what is believed to be the first FTSE 100 company to deploy a specialist AI “board bot” in its boardroom. The system ingests Lloyds’ board packs and supporting documents, and helps directors interrogate them in conversational language: summarising long reports, highlighting inconsistencies and drawing connections between agenda items on topics from cybersecurity and sustainability to financial performance and M&A. Earlier and more symbolic precedents go back over a decade: Deep Knowledge Ventures appointed the algorithm VITAL to its board in 2014, Tieto added the AI agent Alicia T. to a leadership team with voting rights in 2016, and Rakuten introduced a “Robo-Director” for strategic planning. A 2025 Harvard Business Review study based on focus groups with more than 50 board chairs from companies including ASM, Lazard, Nestlé, Novo Nordisk, Randstad, Sandoz and Shell confirms that pioneering boards are quietly experimenting with AI in three distinct ways: assisting individual directors, supplying the full board with better information, and, most ambitiously, having AI participate in discussions itself.
Three Stages of Board Maturity: Reactive, Proactive, Transformative
Before looking at specific use cases, it helps to borrow a frame. A 2025 study published in the California Management Review by researchers at Berkeley proposes an AI Governance Maturity Matrix that classifies boards into three stages of readiness, evolving across five dimensions of governance practice. The framing is helpful because it shows that “doing something on AI” is not a binary; it is a trajectory.
- Reactive boards deal with AI on an ad hoc basis. They respond to incidents, regulatory pressure or specific management proposals when these arise, but there is no structured approach. AI typically appears on the agenda as a recurring “topic” rather than as an integrated lens on strategy and risk. According to the study, this is where most boards still sit, including most of the top 50 US companies by market capitalisation.
- Proactive boards have moved from awareness to structure. They have established dedicated committees or assignments, AI reporting protocols, defined risk appetites and clearer expectations of management. AI literacy is part of board induction. The board has a real view of the company’s AI inventory, its key dependencies and its main exposures. AI is being used in some board processes, typically board-pack summarisation or risk dashboards.
- Transformative boards fully integrate AI into strategic governance. AI shapes how directors prepare, how the board interrogates management, how risks are anticipated and how decisions are documented. Crucially, transformative boards combine adoption with an explicit map of where AI is used and where human judgement remains paramount. They treat AI not as a tool to be governed, but as part of the governance architecture itself.
This trajectory matters because the use cases below do not all apply to every board at every stage. A reactive board attempting strategic sparring with AI before mastering board-pack analysis is unlikely to do either well. The point is to know where the board stands and to design adoption accordingly: sequenced, deliberate, and matched to real governance maturity.
Six Use Cases for AI in the Boardroom
With that frame in mind, six distinct uses of AI are taking shape at board level. Each one carries genuine opportunity and a corresponding risk. Treating them separately makes the trade-offs much clearer than discussing “AI in governance” as a single block.
1. Board pack analysis and meeting preparation
Opportunity. AI can summarise hundreds of pages of board materials, surface inconsistencies between papers, highlight the issues most relevant for each director, and generate tailored briefing notes. A small but growing set of governance-specific platforms (Diligent’s GovernAI, Nasdaq Boardvantage, Board Intelligence’s Lucia and others) are moving beyond summarisation toward interrogation, helping directors ask sharper questions before the meeting begins.
Risk. Confidentiality. Board materials contain the most sensitive information a company produces: strategic plans, M&A pipelines, succession discussions, executive compensation, litigation exposure. Generic AI tools are not appropriate for this material. Boards need specialist, audit-ready systems with watertight data isolation, strong provenance and clear audit trails.
2. Reducing the information asymmetry between management and the board
Opportunity. Boards depend, by design, on what management chooses to put before them. AI tools that cross-check management reporting against internal KPIs, external data, peer benchmarks and reputational signals give non-executive directors something they have historically lacked: an independent way of pressure-testing the narrative they are being told.
Risk. Governance friction. A 2025 ECGI working paper by Ferreira and Li argues that as CEOs increasingly use AI as a private advisor, their incentives to share information with the board may actually decline, with potential consequences for monitoring intensity and CEO turnover. Whether AI strengthens or weakens the board’s position depends on whether the board itself adopts it, and how openly it is used on both sides of the table.
3. Bias-checking and challenging board dynamics
Opportunity. Groupthink, confirmation bias, excessive alignment with management and insufficient challenge are among the most persistent governance risks, often most acute in high-performing organisations, where success itself becomes a barrier to questioning. AI used as an intellectual counterweight can flag missing perspectives, surface unstated assumptions and play a structured devil’s advocate role. Lloyds is explicitly using its board bot for this purpose, and Board Intelligence has hinted that future versions could intervene during meetings: “Hang on, I think you’re falling into this trap.”
Risk. Embedded bias in the tool itself. AI systems may reproduce hidden assumptions in their training data or in the organisation’s culture, a risk that is particularly acute in succession planning, executive evaluation, compensation and stakeholder prioritisation. A bias-checker that itself carries bias is worse than no bias-checker at all. Boards need to understand how the model was trained and on what.
4. Risk oversight and weak-signal detection
Opportunity. AI systems can monitor cyber threats, reputational signals, geopolitical developments, supply chain vulnerabilities and ESG inconsistencies in near real time, giving audit and risk committees earlier visibility than traditional reporting. The relevance is hard to overstate: data from the AI Incident Database shows reported incidents rose 26% in 2023 and a further 32% in 2024. The risk landscape is not only changing; it is accelerating.
Risk. False precision. AI systems can present incomplete or biased outputs with the appearance of certainty. A confident, plausible-sounding alert that is actually wrong can do more damage to board judgement than no alert at all. Risk committees need to triangulate AI-generated signals with human judgement and independent sources, not treat them as ground truth.
5. Strategic sparring: scenarios, devil’s advocacy, blind spots
Opportunity. Boards can use AI to test assumptions, simulate alternative scenarios and generate challenge questions before major decisions. The most useful prompts are rarely requests for answers; they are requests for better questions: What are we taking for granted? Whose voice is missing? What would have to be true for this strategy to fail? A 2025 HBR study by Stadler and Reeves, based on real boardroom experiments at Austrian company Giesswein, found that AI’s greatest value lies precisely here: not in producing perfect answers, but in disrupting routine thinking patterns and broadening the range of options considered. Used this way, AI becomes a sparring partner for directors who already know that the quality of a decision is largely determined by the quality of the questions that preceded it.
Risk. Hallucination, and the “completeness illusion.” A peer-reviewed Stanford study published in the Journal of Empirical Legal Studies found that leading domain-specialised AI tools hallucinate between 17% and 33% of the time. Stadler and Reeves observed a related risk in the boardroom: AI’s breadth can create misplaced confidence, leading directors to overlook key issues (legal implications, second-order consequences) precisely because the AI’s response felt comprehensive. One 2024 estimate placed business losses from AI hallucinations at $67.4 billion, with 47% of finance executives admitting they had acted on faulty AI content. At board level, a confidently wrong scenario is worse than no scenario.
6. Board workflow: minutes, follow-ups, decision traceability
Opportunity. AI can support agenda construction, automatic minute drafting, action-tracking and decision logs that link back to the underlying papers and discussions. This improves continuity between meetings, makes it easier to revisit how a decision was reached, and supports compliance and audit.
Risk. Accountability drift. If a recommendation, summary or even minute originates with AI, who is responsible when it turns out to be wrong? Fiduciary duty remains, by design, with the human beings around the table. Boards must make sure that AI-supported workflow does not blur the chain of accountability, particularly in regulated sectors.
Summary: Six Use Cases at a Glance
| Use case | Key opportunity | Key risk | Maturity needed |
| --- | --- | --- | --- |
| 1. Board pack analysis & meeting prep | Cuts cognitive load; sharper questions before the meeting | Confidentiality; needs specialist, audit-ready systems | Reactive → Proactive |
| 2. Reducing information asymmetry | Independent pressure-test of management’s narrative | CEO may withhold information if AI substitutes for board advice | Proactive |
| 3. Bias-checking & dynamics | Counterweight to groupthink; structured devil’s advocate | Tool may carry its own embedded bias | Proactive → Transformative |
| 4. Risk oversight & weak signals | Real-time visibility on cyber, ESG, geopolitics, reputation | False precision: confidently wrong alerts erode judgement | Proactive |
| 5. Strategic sparring & scenarios | Better questions, scenario testing, blind-spot detection | Hallucination: 17–33% error rates even in specialist tools | Transformative |
| 6. Board workflow & traceability | Continuity between meetings; cleaner decision logs | Accountability drift if chain of responsibility blurs | Reactive → Proactive |
The Cross-Cutting Risk: Confusing Sophistication with Wisdom
Across all six use cases there is a single risk that deserves to be named separately, because it is not technical. It is cognitive.
AI systems can create an illusion of precision and certainty even when their outputs are flawed. As Gartner’s Van Baker has put it, large language models are “fundamentally pattern recognition and pattern generation engines” with “zero understanding of the content they produce.” Pattern recognition, however advanced, is not the same as judgement. Accountability cannot be delegated to an algorithm.
That distinction matters deeply at board level. Directors are not responsible merely for optimising outcomes. They are responsible for navigating ambiguity, balancing competing stakeholder interests and exercising discernment in situations where values, ethics and long-term consequences cannot be reduced to data. The board’s job is not to be the smartest analyst in the room. It is to be the most responsible one.
There is also an emerging tension that deserves attention: the relationship between the board and management around AI itself. In some companies, CEOs are pushing back against boards they perceive as urging AI adoption too quickly, without distinguishing genuine value from hype. In others, the dynamic is reversed: management running ahead of a board that does not yet know what to ask. Both situations are governance risks. AI in the boardroom should not become a battleground; it should become a shared mechanism for clearer thinking on both sides of the table.
Looking Forward: What the Next Decade Will Demand of Boards
If the first wave of AI in governance was about oversight, and the current wave is about board-level adoption, the next wave will be about something harder: redesigning the board itself for a world in which AI is a permanent participant in corporate decision-making.
Five shifts are likely to define that next decade.
From AI literacy to AI fluency at board level
Today, only a small minority of boards have a director with genuine AI expertise. That is no longer sustainable. The standard will move from “someone on the board understands AI” to “every director understands enough to ask the right questions.” This does not mean technical training; it means the ability to interrogate a model’s assumptions, understand its limitations, and recognise when human judgement must override its output. Board induction programmes, continuous education and committee composition will all have to evolve.
From episodic AI discussion to structured AI governance
The gap between the share of boards that discuss AI (around 62%) and the share that have formally embedded AI in their committee charters (around 27%) will close, and quickly. Investors, regulators, proxy advisors and insurers will increasingly expect explicit accountability for how the board oversees AI risk and how it uses AI itself. Expect AI governance to migrate from “topic of the year” to a permanent, allocated responsibility, most likely shared across audit/risk and a strategy or technology committee.
From AI as oversight subject to AI as governance infrastructure
Within the next three to five years, AI-supported board packs, briefing notes, risk dashboards and decision logs will be standard rather than experimental, at least in large listed companies. The interesting design questions will move from “should we use AI?” to “how does AI enter our deliberations without quietly displacing them?” Designing the boundary between AI input and human deliberation will become a core governance skill in its own right.
From a single boundary (“AI does not vote”) to a graduated map of roles
Pippa Begg of Board Intelligence is right that giving AI a formal legal vote would be “a dangerous leap.” But the space between silent briefer and voting director is enormous. Over the coming decade, boards will need to articulate (and disclose) a graduated map: where AI prepares, where it challenges, where it monitors, where it merely informs, and where it is deliberately kept out. Expect governance codes (FRC in the UK, OECD principles, national codes) to begin formalising this map.
From individual board adoption to ecosystem-level governance
AI in the boardroom does not exist in isolation. Auditors, proxy advisors, regulators, institutional investors and insurers are all building their own AI tools, which will increasingly interact with the AI systems boards use. The next decade will likely see the emergence of a governance ecosystem in which multiple AI systems analyse each other’s outputs. Boards that have not thought carefully about the provenance, transparency and auditability of their own AI tools will find themselves at a disadvantage in that ecosystem.
A Hybrid Boardroom
The future boardroom will be neither fully human nor fully automated. It will be hybrid: an environment in which artificial intelligence supports analysis, synthesis and anticipation, while human beings remain responsible for judgement, ethics and accountability.
The opportunity is real, and the data suggests it is also material to long-term value creation. But so is the risk. AI can help boards see more, ask better and anticipate sooner; it can also make them more dependent, more confident and less responsible if it is not governed well.
The central challenge for boards over the next decade is therefore not whether to adopt AI, but whether they can use it to strengthen the quality of human decision-making — without gradually surrendering the very capacities that effective governance requires. The boards that succeed will not be the ones that adopt AI fastest. They will be the ones that adopt it most thoughtfully: with clear purpose, explicit boundaries, honest scepticism about its outputs and an unwavering grip on the responsibilities that no algorithm can carry.
Further Reading
Real-world cases
- Lloyds Banking Group — First FTSE 100 “board bot”, developed with Board Intelligence, to support directors with confidential material, meeting preparation and bias detection. Lloyds puts AI agent in the boardroom (Finextra, April 2026).
- Independent analysis of the Lloyds case, with technical and governance commentary on what AI agents in the boardroom actually require (data isolation, provenance, hallucination resistance, audit trails). Lloyds Puts an AI Agent in the Boardroom: What CX and AI Teams Should Notice (Conversational AI News, April 2026).
- Deep Knowledge Ventures — VITAL (2014). The pioneering, partly symbolic case of an algorithm appointed to support biotech investment decisions. Algorithm Appointed Board Director (BBC News, May 2014).
- Tieto / Tietoevry — Alicia T., an AI agent appointed to the leadership team of a new data-driven business unit, with a vote on business direction. Tieto appoints bot to leadership team (Finextra, October 2016).
- Editorial coverage of Board Intelligence’s expansion into AI-powered board reporting through the acquisition of Competent Boards. Board Intelligence acquires tech platform to provide boards with AI tools (Tech.eu, May 2025).
- Microsoft customer story documenting how Nasdaq integrated generative AI into Boardvantage, with detail on architecture, security and accuracy benchmarks. Nasdaq transforms the boardroom experience with AI integration built on Azure (Microsoft Customer Stories, November 2025).
Articles and academic papers
- Stanislav Shekshnia & Valery Yakubovich — How Pioneering Boards Are Using AI (Harvard Business Review, July–August 2025). Based on focus groups with 50+ board chairs from companies including ASM, Lazard, Nestlé, Novo Nordisk, Randstad, Sandoz and Shell.
- Christian Stadler & Martin Reeves — When AI Gets a Board Seat (Harvard Business Review, March 2025). Field experiment with Austrian company Giesswein on using AI as a strategic sparring partner; explores both the cognitive benefits and the “completeness illusion” risk.
- David F. Larcker, Amit Seru, Brian Tayan & Laurie Yoler — The Artificially Intelligent Boardroom (Stanford GSB / Harvard Law School Forum on Corporate Governance, April 2025).
- Daniel Ferreira & Jin Li — Artificial Intelligence in the Boardroom (ECGI Finance Working Paper No. 1087/2025; winner of the 2026 John L. Weinberg Center / IRRCi Research Paper Competition).
- Deloitte AI Institute — Governance of AI: A Critical Imperative for Today’s Boards (2nd edition) (Harvard Law School Forum on Corporate Governance, May 2025). Survey of 695 board members and C-suite executives in 56 countries.
- Paul DeNicola, Barbara Berlin & Ariel Smilowitz (PwC) — Using AI in the Boardroom — New Opportunities and Challenges (Harvard Law School Forum on Corporate Governance, November 2025).
- McKinsey — Elevating Board Governance Through AI Posture and Archetypes (December 2025). Source of the 66% / 15% / <25% figures and the MIT 10.9pp ROE finding.
- California Management Review — AI Governance Maturity Matrix: A Roadmap for Smarter Boards (May 2025). Source of the Reactive / Proactive / Transformative framework used above.
- NACD — Tuning Corporate Governance for AI Adoption (2025 Governance Outlook). Source of AI Incident Database trends (+26% in 2023, +32% in 2024).

