What developments in AI tell us about the future of governance models

Commentary
21 February 2024

Misinformation and disinformation have become this year’s buzzwords as voters in over 40 countries head to the polls in 2024, with dozens more elections scheduled for 2025. Not everyone shares the concern or the outlook, but there is no doubt that the next two years will be decisive for the future of democracy and of societies around the world.

Meanwhile, artificial intelligence (AI) promises to exponentially increase the availability and production of information in all of its forms, while also complicating our interface with it. Just as it was propelled in the past by revolutionary technological advances (the printing press, the telegraph, the telephone, computers, the internet), the production, collection, and dissemination of information now seems to sit on the cusp of a possibly unprecedented leap forward.

Paradoxically, perhaps, as information grows in complexity and volume, the delivery mechanisms for it, whether text-, audio-, or video-based, continue to shrink, in step with decreasing attention spans and the ever-narrowing echo chambers of its users. If “the medium is the message,” as Marshall McLuhan was keen to point out, then one wonders what the message is. These are not new problems, of course, but the democratisation of information, the proliferation of new AI-supercharged media of production and delivery, and the erosion of trust across a variety of institutions and social norms make for an explosive combination.

Instead of producing better-informed and more discerning citizens, this abundance often yields mass hysteria, groupthink, the embrace of widespread conspiracy theories and atavistic ideologies, and enforced conformity. It turns out, however, that when dealing with crowd dynamics, a thin line separates Douglas Murray’s madness of crowds (groupthink and mob mentality overriding reason, leading to irrational behaviour and collective hysteria) from James Surowiecki’s wisdom of crowds (diverse groups of people collectively making better decisions than individual experts).

Also referred to as collective, swarm, or hive intelligence, the wisdom-of-crowds argument relies on the observed collective problem-solving ability or decision-making capacity of a group, where each individual contributes their own knowledge, perspectives, and experiences to arrive at a solution or decision. The phenomenon is often observed in social insects like bees or ants, where the collective behaviour of the group leads to complex, coordinated actions.
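A toy simulation makes the statistical intuition concrete. The sketch below, in Python, uses illustrative numbers loosely echoing Galton’s famous ox-weighing contest (the true value, crowd size, and noise level are assumptions, not data from this article) to show how the median of many noisy, independent guesses lands far closer to the truth than the typical individual guess:

```python
import random
import statistics

# Minimal "wisdom of the crowd" simulation: many noisy, independent
# guesses of an unknown quantity, aggregated by taking the median.
# All numbers are illustrative assumptions.
random.seed(42)

TRUE_VALUE = 1198   # e.g. the weight of an ox, in pounds (hypothetical)
CROWD_SIZE = 1000
NOISE = 150         # individual guesses are off by ~150 on average

guesses = [random.gauss(TRUE_VALUE, NOISE) for _ in range(CROWD_SIZE)]

crowd_estimate = statistics.median(guesses)
individual_errors = [abs(g - TRUE_VALUE) for g in guesses]

print(f"crowd estimate:          {crowd_estimate:,.0f}")
print(f"crowd error:             {abs(crowd_estimate - TRUE_VALUE):,.1f}")
print(f"median individual error: {statistics.median(individual_errors):,.1f}")
# The aggregate error is typically an order of magnitude smaller than the
# typical individual's, provided the individual errors are independent.
```

The caveat in the last comment matters: the effect depends on independence, which is precisely what echo chambers erode.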

AI can help enhance hive intelligence in several ways. First, AI algorithms can analyse vast amounts of data contributed by individuals within the group to identify patterns, trends, and correlations that might not be immediately apparent to humans. AI can also facilitate communication and collaboration among group members by providing platforms for sharing information, coordinating efforts, and synthesising diverse viewpoints. Finally, AI-powered prediction models and decision-making tools can assist the group in evaluating different options and selecting the optimal course of action based on the collective input. Overall, AI technologies have the potential to augment and amplify the collective intelligence of groups, enabling more effective problem-solving and decision-making.
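By way of illustration, here is a minimal sketch of that last capability: pooling a group’s probability forecasts while weighting each member by past accuracy. The names, numbers, and weighting scheme are hypothetical assumptions for illustration only, not a description of how any particular platform actually works:

```python
def weighted_forecast(forecasts, track_records):
    """Combine individual probability estimates into one group estimate.

    forecasts:     {member: probability in [0, 1]}
    track_records: {member: past accuracy in (0, 1]}, used as weights
    (Both the inputs and the weighting scheme are illustrative.)
    """
    total_weight = sum(track_records[m] for m in forecasts)
    return sum(p * track_records[m] for m, p in forecasts.items()) / total_weight

# Hypothetical example: five members forecast the probability of an event.
forecasts = {"ana": 0.70, "ben": 0.55, "chloe": 0.80, "dev": 0.30, "eli": 0.65}
track_records = {"ana": 0.9, "ben": 0.6, "chloe": 0.8, "dev": 0.4, "eli": 0.7}

print(f"group forecast: {weighted_forecast(forecasts, track_records):.2f}")
# Members with better track records pull the estimate toward their view,
# so the aggregate reflects collective insight without letting noisy
# contributors dominate.
```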

Swarm-based predictive models built by Unanimous AI or HivePoweredAI, for example, have contributed to a variety of fields, including medicine, UN decision-making, business resource allocation, marketing, and even voting. Commenting on the benefits of tapping into AI-enabled collective intelligence models, Louis Rosenberg, Unanimous AI’s CEO and Chief Scientist, noted that, by shifting away from traditional polling methods that further drive voter polarisation, “the biological principle of swarm intelligence can point us in the right direction, enabling us to make group decisions, big and small, that more accurately reflect our collective insights and aspirations.”

These examples serve as important reminders of why cooperative networks and varied governance models were created in the first place. While misinformation and disinformation, and therefore distrust, thrive in an atomised, polarised, and isolated society, the opposite is true in a society that ‘comes together.’ The consequences of the former, it turns out, are dire, for both institutions and individuals.

This insight is important, not least because over the past decade or so trust has eroded at alarming rates. Increased access to internet connectivity and the democratisation of information from the 1990s onward meant that traditional sources of authority were found wanting: too slow, too old, too opaque, too unaccountable, too vertical. Universities, government offices, international organisations, media outlets, and religious organisations, once considered the only legitimate sources of status and power, were increasingly forced to loosen their grip once their monopoly over knowledge and information, the only source of their legitimacy, was no longer secure.

The end result has been the creation of overlapping yet fragmented layers of ‘reality’, where each individual or group lives their own ‘truth’, apart from that of others, each empowered by instantaneous access to an almost infinite supply of information, both trivial and profound, long-lasting and ephemeral. Even under the best of circumstances, this fragmentation into competing versions of reality, driven by the erosion of trust in traditional sources of authority, has resulted in societal alienation and polarisation.

However, we may no longer be operating under a ‘best circumstances’ scenario. It is true that in the past a myriad of idiosyncratic and social factors, including cognitive biases and tribal affinities, whether political, economic, or social, stopped people from properly interpreting the information available to them; in the overwhelming majority of cases, however, there was never any doubt as to its provenance.

That may no longer be the case. AI-generated content, whether dependent on or independent of human prompting, will likely exceed human-created content in the coming years. While many have expressed optimism that the current limitations and failings of large language models like ChatGPT or Bard will be overcome, or at least minimised, in the future, some are less sanguine. Recent reports, for example, suggest that not only is ChatGPT’s performance plateauing, it may actually be getting worse. Evidence that these models suffer from hallucinations, fabrications, and even laziness continues to pile up.

The end result is the production, dissemination, and consumption of facts, news, analyses, and the like that are defective by default. And that is before we consider the pathologies resulting from algorithmic and set-up biases: the former the product of incomplete or biased data and erroneous design, the latter the outcome of biased individuals making decisions that lead to pathological outcomes.

Then there is the intentional and malicious use of technology to sow misinformation and disinformation. The seriousness of the threat is underlined by the fact that the United States, the European Union, and NATO have made combating Foreign Information Manipulation and Interference (FIMI) one of their top priorities for 2024-2025. Moreover, during the 2024 Munich Security Conference, some of the world’s largest social media and AI platforms signed a voluntary pledge to curb or outright stop the proliferation of some AI-produced election-related materials.

However, the problem with AI-curated and AI-disseminated misinformation and disinformation goes beyond concerns with elections and is likely to become one of the greatest challenges ahead, one made even more difficult by the onslaught on traditional sources of trust. Interestingly, this may create an opportunity for old and new institutions alike. As some have already suggested, whereas the democratisation of information in the 1990s favoured the sidestepping of institutions, the proliferation of AI-induced and AI-enabled misinformation and disinformation may provide a way back for them as it becomes increasingly difficult to distinguish fact from fiction, reality from science fiction.

The ‘comeback’ of traditional institutional frameworks and of trust is not a foregone conclusion, however. New institutional forms of signalling trust may make it difficult for ‘old’ institutions to regain their footing. Despite the historic usefulness of some institutions as ‘trust signifiers’, giving their seal of approval to some information and withholding it from others, citizens may not find highly rigid and hierarchical forms of governance appealing. ‘Old’ institutions, such as universities, national governments, international organisations, and regional bodies, must therefore avoid becoming what people around the world have already rejected. To regain their legitimacy, they must first recognise that horizontality, transparency, and openness are three non-negotiable principles upon which every attempt to regain trust must be built.

If you are interested in learning more about the impact of AI on politics, law, and society, check out our “IR in the Age of AI” podcast!
