AI, Board Deliberation and Groupthink

Recently, I have been reflecting on how artificial intelligence might already be influencing, or could eventually influence, our human abilities. I’m not just referring to obvious aspects like productivity, automation, or efficiency, but something far more profound and subtle: the possibility that, by outsourcing certain cognitive tasks to AI, we may slowly diminish the very skills we must actively engage to maintain sound judgment.

Because many of the things we do when we genuinely think require effort: reading deeply, contrasting perspectives, identifying contradictions, building our own synthesis, formulating difficult questions, or holding uncertainty long enough not to rush toward an apparently reasonable conclusion.

At the same time, however, I also believe exactly the opposite. Artificial intelligence can enormously expand our cognitive capabilities if used well. It can help us explore perspectives we had not considered, detect invisible patterns, challenge our assumptions, identify weak signals or formulate better questions. Perhaps this is precisely one of the most interesting aspects of the entire debate: the same technology can produce radically different effects depending on how we integrate it into our thinking and decision-making processes.

The real question may not be whether AI replaces human intelligence, but whether we stop exercising the capabilities that sustain human judgment.

Boards Are Not Just Information-Processing Systems

When I bring this reflection into the context of Boards of Directors, the subject becomes even more fascinating.

Because a Board is not simply a group of experienced people sitting around a table making decisions. Or at least, it should not be. The real value of a Board emerges when different experiences, sensitivities, perspectives and interpretations of risk interact in such a way that the quality of collective judgment becomes better than the mere sum of individual contributions. This is precisely where deliberative quality becomes fundamental.

Years ago, I discovered the work of Irving Janis, the American social psychologist and Yale professor who developed the concept of groupthink. Janis studied major political decision-making failures, such as the Bay of Pigs invasion and the dynamics surrounding the Vietnam War, trying to understand how groups composed of highly intelligent and experienced individuals could nevertheless end up making profoundly flawed decisions. His conclusion was that, under certain conditions, the search for cohesion, unanimity or alignment could progressively reduce the group’s critical capacity and limit the real space available for questioning assumptions or introducing uncomfortable perspectives.

Since then, the theory has evolved and has been refined by many authors, yet the central idea remains extraordinarily relevant: highly competent groups can degrade the quality of their own thinking when they converge too quickly around an apparently reasonable narrative.

This is one of the reasons why recent discussions about AI and decision-making have become so interesting. A recent article in Harvard Business Review argues that consensus-based decision-making may become more problematic in the AI era precisely because AI systems generate fast, plausible and apparently balanced syntheses that can push groups toward convergence before genuine deliberation has fully taken place. The article’s underlying concern is not technological efficiency itself, but the possibility that organizations may begin mistaking rapid alignment for high-quality collective thinking.

The Risk Is Not Only Individual

And I believe artificial intelligence introduces an entirely new dimension here.

Because it is one thing for each director to use different tools individually in order to broaden their preparation, explore alternative perspectives or contrast public information. In that case, AI could actually enrich collective deliberation precisely because it enhances the individual critical capacity of each Board member.

But it is something very different when all directors rely on the same tool, trained on similar patterns, fed the same documentation and generating similarly structured syntheses.

At that point, I am no longer sure the issue remains purely individual.

Because even if each director maintains strong critical capabilities, something more subtle could begin to emerge: a certain homogenization of interpretative frameworks. Not necessarily because everyone initially thinks alike, but because everyone gradually starts observing reality through increasingly similar cognitive structures.

And this is particularly interesting because perhaps we would no longer be dealing exactly with the classic groupthink described by Janis. Janis focused primarily on human dynamics of conformity, cohesion and implicit social pressure within the group. What may now be emerging is something different: a form of cognitive convergence partially mediated by technology itself.

In other words, AI could end up becoming not only a support tool for thinking, but also a shared infrastructure for interpreting reality.

The deeper risk may not be that AI makes us less intelligent individually, but that it gradually makes us interpret reality through increasingly similar cognitive structures.

This concern is also beginning to appear in more recent academic research. A recent paper, “AI, Epistemic Independence and Collective Decision-Making,” argues that the growing use of homogeneous AI systems may inadvertently reduce cognitive diversity within groups, even when individuals themselves remain highly competent and well-intentioned. The paper’s core argument is particularly relevant for Boards: independence of judgment is not only a matter of personal capability, but also of preserving sufficiently diverse frameworks of interpretation within the group itself.

AI Could Also Expand Collective Intelligence

At the same time, however, the opposite could also happen.

AI could become an extraordinary tool for expanding collective intelligence if it is deliberately used to introduce intelligent contradiction, explore alternative scenarios, identify invisible assumptions or prevent conversations from closing prematurely.

Interestingly, another recent Harvard Business Review article on AI and collective decision-making makes almost the opposite argument: that artificial intelligence may significantly improve decision quality in complex environments if it is used to surface alternative perspectives, structure complexity and help groups process large amounts of information more thoughtfully. In other words, the technology itself is not necessarily the problem; the key issue is whether AI becomes a shortcut toward premature consensus or a mechanism for enriching deliberation.

Notably, some of the mechanisms originally proposed decades ago to reduce groupthink may become even more relevant in the AI era. Irving Janis himself suggested practices such as assigning a formal devil’s advocate, deliberately introducing dissenting perspectives or creating spaces in which dominant assumptions can be challenged before decisions become irreversible.

What becomes fascinating today is that artificial intelligence could potentially help operationalize some of these mechanisms in ways that were previously much harder to implement consistently.

A Board, for example, could deliberately instruct AI systems not only to summarize management recommendations, but also to construct the strongest possible counterargument, identify assumptions that may be invisible to the group, surface weak signals contradicting the dominant narrative or explore how different stakeholders might interpret the same decision from radically different perspectives.

Used in this way, AI would cease to function merely as a mechanism for accelerating convergence and could instead become a tool for institutionalizing constructive friction inside decision-making processes.

The Governance Question Beneath the Technology

And perhaps this is, ultimately, the most important question. Not whether Boards will use artificial intelligence, because they almost certainly will in one way or another, but what kind of deliberative architecture we will build around it.

Because the difference between augmented governance and impoverished governance will probably depend less on the sophistication of the technology itself than on our ability to continue cultivating those things that no artificial intelligence can fully replace: human judgment, the capacity to sustain difficult conversations, cognitive diversity and the courage to think independently even when convergence feels more comfortable, faster and apparently more efficient.
