This post is part of a series.
From Historical Lessons to AI Challenges
The exploration of information networks’ political implications begins with a reflection on where the book has taken us so far. Parts 1 and 2 emphasised that, historically, technology has not been inherently evil; it is a tool whose ethical impact depends entirely on how humans apply it. Harari argues that we did eventually get it (mostly) right. However, if our learning curve was steep and costly with comparatively simple technological innovations, how will we navigate the challenges posed by AI?
The Democratic Challenge
Harari first emphasises the need to educate the masses on basic democratic principles to avoid repeating the totalitarian mistakes of the past – mistakes which would only be compounded with the application of the surveillance regimes explored in Part 2. There are four key attributes of a successful democracy:
- Benevolence: Information collected by the network should enable it to prioritise helping citizens rather than manipulating them.
- Decentralisation: Information should be distributed across multiple locations, both public and private, enabling robust self-correcting mechanisms.
- Mutuality: Increased surveillance of individuals must be balanced by enhanced oversight of governments and corporations.
- Adaptive Capacity: Surveillance systems must allow for personal growth and rest, avoiding historical oppression mechanisms that deny opportunities for change (like caste systems) or recovery (like dictatorships).
While surveillance was explored in depth in Part 2 as one potential threat, Harari highlights additional challenges, including the destabilising impact of automation on job markets. He draws a powerful historical parallel, questioning the resilience of democracies in the face of significant economic disruption:
“If three years of up to 25 percent unemployment could turn a seemingly prospering democracy into the most brutal totalitarian regime in history, what might happen to democracies when automation causes even bigger upheavals in the job market of the twenty-first century?”
– YUVAL NOAH HARARI, NEXUS, P. 316
Contrary to fears of wholesale job losses, Harari points optimistically to history: humans have consistently adapted to changing economic landscapes through retraining and skill development. We should focus on this adaptability rather than worrying about total unemployment.
Harari also summarises the fundamental difference between conservative and progressive political approaches:
- Progressives acknowledge systemic failures and actively look for solutions.
- Conservatives value existing systems and are cautious about fundamental changes.
When both of these styles stay true to democratic principles, the resulting systems have proven remarkably flexible – far more so than totalitarian regimes. This adaptability, combined with robust self-correcting mechanisms, emerges as democracy’s most potent defence against emerging technological and social challenges.
The Algorithm Dilemma
The next concept explored is the growing role of algorithms in our lives – from job applications and loan approvals to university admissions, insurance premiums, and even the news we consume. Harari argues that we humans need a new human right: the right to an explanation. In this era of opaque algorithmic decision-making, the risk of democracy ceasing to function entirely is very real. Harari explains:
“The increasing unfathomability of our information network is one of the reasons for the recent wave of populist parties and charismatic leaders. When people can no longer make sense of the world, and when they feel overwhelmed by immense amounts of information they cannot digest, they become easy prey for conspiracy theories, and they turn for salvation to something they do understand – a human.”
– YUVAL NOAH HARARI, NEXUS, P. 334
While more data points in algorithmic decision-making can potentially reflect nuanced human insights, there remains a critical need to translate these complex calculations into comprehensible human-readable formats. This translation is far from simple.
Returning to the social credit systems explored in Part 2, Harari brings up an episode of Black Mirror which I found memorable yet concerning. The episode depicts a society in which algorithmic systems fundamentally alter social status competitions beyond human understanding. It may seem far-fetched, but the plot does not seem inconceivable if we continue down the dangerous path of leaving algorithmic decisions unexplained. I was relieved to learn that in 2021, the EU prohibited the creation of such systems!
Democratic Discourse in the Digital Age
To briefly reflect on the points of Part 2, democracy fundamentally requires two critical criteria to function effectively:
- Enabling free public conversation on key issues
- Maintaining a minimum of social order and institutional trust
However, free conversation and debate must also be conducted under accepted rules, with a mechanism for reaching decisions. Computer networks, in the form of social media, have made it far easier to join the debate. This increased accessibility is inherently positive, but it comes at a cost: new entrants often question the previous consensus and rules. In the short term this can create disharmony and may even seem like a path to anarchy; in the long term, the disruption offers the promise of a more inclusive democratic system.
A concerning trend on this path is the rise of nonhuman voices – specifically social media bots – in the conversation. It is not only the volume of these voices that is alarming, but also the false information they spread and amplify. If such bots come to dominate the conversation, the foundation of democratic debate could collapse: if we cannot tell whether we are debating a fellow human or a bot, it is difficult to agree on the rules of the debate in the first place. On an optimistic note, Harari compares the situation to the influx of counterfeit money. Just as financial institutions developed mechanisms to detect and block counterfeit currency, we might develop strategies to identify and mitigate bot-driven noise and misinformation.
Some level of regulation is therefore required in the information market – contradicting the naive view that more, freer information will naturally generate truth. Indeed, in the absence of such regulation, we can already see many democracies breaking down. As Harari puts it:
“When citizens cannot talk with one another, and when they view each other as enemies rather than political rivals, democracy is untenable.”
– YUVAL NOAH HARARI, NEXUS, P. 346
But we are unable to pinpoint why this breakdown is occurring; it cannot be solely due to social media algorithms, can it? Harari ends this chapter about democracy in the modern age with a basic question: “Why are we fighting each other?”
Totalitarianism in the Digital Age
Having looked at the impact of AI on democracies, we also need to consider its impact on totalitarian regimes which fundamentally operate by channelling all information through a single, centralised hub where it can be processed in a controlled manner. The lack of distributed information in this system results in a lack of self-correcting mechanisms to resolve costly errors made by the central authority. Without alternative institutions to challenge or rectify errors, the central authority’s decisions remain unchecked and potentially catastrophic.
Paradoxically, AI presents a compelling theoretical advantage for such centralised systems. The sheer quantity of information gathered in large-scale totalitarian regimes provides unprecedented opportunities for algorithmic training, and should therefore allow AI to flourish. Part 2 suggested that this concentration of information was one of the downfalls of such regimes in the past, but recent developments suggest the same centralisation might transform into a significant strategic strength.
Theoretical Advantages and Challenges
Such regimes have also often sought to manipulate historical narratives. Harari gives the examples of the Emperor Caracalla, who murdered his brother and rival for the throne and then attempted to erase his existence from memory, and Stalin’s attempt to erase Trotsky (an architect of the Bolshevik Revolution) from official records. In those cases, the erasure required a large manual effort – but could a technology like blockchain, with its majority-based verification system, make such rewriting easier if the central authority controls the majority?
However, Harari points to several critical challenges that such regimes would face in attempting to control advanced technological systems:
- Inexperience in Controlling Inorganic Agents: Traditional methods of human control – imprisonment or threats – are ineffective against computers. Such systems cannot be intimidated or coerced through fear.
- Algorithmic Unpredictability: Learning algorithms may develop independent and dissenting perspectives by identifying complex patterns within provided information. How can engineers ensure a self-learning AI does not go rogue?
- Information Suppression Limitations: While humans can be silenced through fear, algorithms lack the same psychological vulnerabilities.
Ultimately such regimes face the risk of developing something more powerful than their creators, or worse yet something which they cannot control.
“Power lies at the nexus where the information channels merge… If the information channels merge somewhere else, that then becomes the true nexus of power.”
– YUVAL NOAH HARARI, NEXUS, P. 357-358
Dictatorships may ultimately be more vulnerable to technological disruption than democratic systems: AI may only magnify their weak self-correcting mechanisms and rigid hierarchies. Harari offers a word of caution that dictators may be the weakest spot in the global defence against malicious AI – a foolish dictator may believe that AI will enhance his power, when it may equally take that power for itself.
Global Collaboration – The Consequences of Technological Divergence
The final section of this part of the book discusses the importance of global collaboration in regulating the dangerous uses of AI explored throughout the book, since even a small gap in cooperation could severely impact the rest of the world. Harari identifies two potentially transformative scenarios in international politics:
- A New Imperial Era: The initial AI development, driven by private entrepreneurs from a narrow geographical region, mirrors historical patterns of technological colonisation.
- The “Silicon Curtain”: A potential digital divide between rival technological empires, already manifesting in restrictions on social media and technological access.
There are some reflections on the history of AI, with the following two pivotal moments marking the technological turning point:
- September 30, 2012: A convolutional neural network called AlexNet won the ImageNet Large Scale Visual Recognition Challenge (a benchmark testing how accurately an algorithm can classify the images in a large database), achieving an 85% success rate.
- March 2016: Google DeepMind’s AlphaGo defeated Go champion Lee Sedol – the event, Harari argues, that catalysed global governmental interest in AI.
In the years that followed, world powers including China, Russia, India, and the United States expressed the belief that AI was the future. What had started as a competition between corporations began to involve governments too. The “Silicon Curtain” divide is already visible, with these countries banning access to one another’s social media apps.
Unlike previous imperial conquests, which required physical resources and sheer manpower, the conquests of the future require control over the target’s data, bringing “a new kind of data colonialism in which control of data is used to dominate faraway colonies”. In the past, physics and geology imposed limits on the concentration of wealth and power; in the new age of information, the world’s algorithmic power could theoretically be concentrated in a single place.
This new age of AI and automation, embraced by the powerful countries, may therefore have drastic consequences for poorer developing countries. With the value of unskilled labour potentially decreasing, and the cost of retraining workers with the skills of the future being too high, such countries may fall even further behind.
“The result might be lots of new jobs and immense wealth in San Francisco and Shanghai, while many other parts of the world face economic ruin”
– YUVAL NOAH HARARI, NEXUS, P. 374
A separate risk is that countries will likely diverge in how they govern such technology – some may strengthen government control, while others may stand back and let tech giants reap the rewards. Each sphere may develop its own ideology, and Harari notes it is hard to say which way this will cut – will it alleviate or exacerbate imperial competition? If the latter, the danger of conflict is extremely high, particularly because:
- Cyber weapons are more versatile than nuclear weapons
- A lack of certainty about the consequences of such attacks undermines the principle of mutually assured destruction
Furthermore, we have no experience regulating such technologies, and doing so will demand extreme levels of trust between nations: hiding a malicious AI is far easier than hiding nuclear weapons. Naturally, rivals will find it easy to be sceptical that the agreed rules are being followed.
On this point, Harari offers optimism: history shows patterns of increasing cooperation rather than constant conflict. In recent times, the shift from a “material-based” economy to a “knowledge-based” one decreased the potential benefits of war, as evidenced by significantly lower military expenditure in the early twenty-first century than in the twentieth. This progress stemmed from laws, institutions, and better decisions, and there is hope that we can continue on this path into the age of AI. If we are instead consumed by conflict, we cannot solely point a finger at the underlying technology – three fingers will point back at our own collective choices.
Reflections: An Unpredictable Road Ahead
My key takeaway from Part 3 is the critical necessity of complete global collaboration in regulating the use of artificial intelligence. Even a slight misalignment in our approach could have tragic consequences. The historical journey explored throughout the book has demonstrated our struggle to manage even comparatively basic advances in information networks; for a technology as transformative and potentially disruptive as AI, the potential for catastrophe is far greater.
However, ultimately we must also accept that AI is not inherently a threat, but rather a powerful tool whose impact depends entirely on our application of it. Harari’s exploration throughout Nexus suggests that our greatest challenge lies not in the technology itself, but in our ability to coordinate, communicate, and create shared frameworks of understanding across diverse global systems.
The road ahead is unpredictable, and we stand at a critical point in time where our collective choices will determine whether AI becomes a tool for the benefit of us all or a mechanism of systemic control.
Having completed the core section of the book, I feel better empowered to educate myself and form an opinion on a fundamental question: can we harness the transformative potential of AI while preserving democratic values, the human essence of life, and global equality?
