πŸ“– Nexus (Part 2 – The Inorganic Network)

This post is part of a series.

The New Power Shift: Algorithms as Active Agents

And now we come to Part 2, where Harari begins to delve into computers – which he defines as machines that can make decisions independently and create their own ideas. These machines introduce an unprecedented power shift in human history. Unlike the printing press and other past innovations, computers are becoming “active agents that escape our control and understanding and that can take initiatives in shaping society, culture, and history”.

This concept is illustrated through Facebook algorithms during the 2016-17 violence in Myanmar, which played a significant role in spreading hatred and weakening social cohesion, as fake news on the platform inspired waves of violence.

One might argue, as in the previous section, that technology itself isn’t the culprit but rather its application. However, this argument falls apart when we recognise that these algorithms were making independent decisions to maximise user engagement. Unsurprisingly, since outrage generates clicks and engagement, algorithms created to maximise engagement learned to exploit it. Harari warns about AI algorithms’ ability to learn independently and make decisions their creators never intended.
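
To make that incentive concrete, here is a minimal, purely illustrative Python sketch of a feed ranker whose only objective is predicted engagement. The Post fields, the weights, and the example posts are all invented – this is not any platform’s real system – but once outrage is the strongest predictor of engagement, outrage wins the ranking without anyone explicitly asking for that outcome.

```python
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    outrage: float   # 0..1, hypothetical "outrage" feature
    quality: float   # 0..1, hypothetical informational-quality feature

def predicted_engagement(post: Post) -> float:
    # In a real system these weights would be learned; hard-coding them here
    # makes the incentive visible: outrage counts far more than quality.
    return 0.8 * post.outrage + 0.2 * post.quality

def rank_feed(posts: list[Post]) -> list[Post]:
    # The "decision" about what people see next is simply a sort on the
    # engagement objective - no human reviews the outcome.
    return sorted(posts, key=predicted_engagement, reverse=True)

feed = rank_feed([
    Post("Calm fact-check of a rumour", outrage=0.1, quality=0.9),
    Post("Inflammatory fake story", outrage=0.9, quality=0.1),
])
print([p.title for p in feed])  # the inflammatory story ranks first
```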

An important distinction made in this section is the difference between intelligence and consciousness:

  • Intelligence: the ability to attain goals
  • Consciousness: the ability to experience subjective feelings

At this point in time, science still lacks a deep enough understanding of consciousness to determine whether it can emerge in inorganic entities such as computers.

Transforming Information Networks: From Human to Computer Dominance

The emergence of such computers fundamentally changes the structure of modern information networks through their ability to pursue goals and make decisions. Before computers, humans were essential links in every information chain, either in “human-to-human” or “human-to-document” connections. Now, a “document-to-document” chain emerges with computers, requiring no human intermediary.

With computers becoming active members of information networks, connections and decisions can be made between them directly. Previous sections showed how humans used their unique capacity for language to create inter-subjective realities, connect with one another, and in turn establish powerful structures. If we take power to be proportional to the number of members in a network, computers may come to hold significantly more of it.

Computers’ growing mastery of language could give them immense influence. Even amid the accelerating developments in AI over recent months, I found this passage particularly telling:

People may come to use a single computer adviser as a one-stop oracle. Why bother searching and processing information by myself when I can just ask the oracle? … Why read a newspaper when I can just ask the oracle what’s new? And what’s the purpose of advertisements when I can just ask the oracle what to buy?

Yuval Noah Harari, Nexus, p. 211

Until now, we’ve lived in a world created and designed by humans – but it seems increasingly likely that tomorrow’s world will be designed by computers. The new networks will consist of computer-to-human chains, where computers use their power to influence humans, and computer-to-computer chains, which are difficult to even imagine given the rapid acceleration in computer development.

Political Implications and Social Control

The implications for politics are significant. Earlier, Harari introduced the naive view of information and demonstrated that information isn’t directly coupled to truth, nor does it always help reveal truth. Rather, information assisted in creating different political structures, which themselves utilised information to achieve their own goals. These new systems will inevitably bring new political structures, making regulation essential. Unfortunately, there’s a significant disconnect between those leading technological developments and those responsible for regulations.

Harari explains that throughout history, information networks evolved primarily from humanity’s need to understand, influence, and sometimes control other people. Political regimes have consistently needed to gather and analyse data about their populations to maintain power and order. What’s changed dramatically is that today’s evolving computer networks have positioned themselves as society’s central hub, mediating transactions across social, financial, and political domains simultaneously.

This introduces a significant development: these computers are always available and can monitor and gather data about us non-stop. How this data and monitoring power is applied depends on those in charge – while it can help find missing people and tackle crime, it can also enforce conformity. Such surveillance, enhanced with AI technologies, could enable entirely new totalitarian schemes.

The monitoring extends beyond political regimes to what Harari calls surveillance capitalism, ranging from workaholic bosses to obsessive relationship partners and corporations monitoring customers. These systems shift social boundaries, pulling historically private domains into the public sphere.

Social Credit and Perpetual Networks

More alarming is the concept of “social credit,” which assigns points for a person’s actions and aggregates them into an overall score that significantly impacts their livelihood. What would society look like if people “did the right thing” to improve their score rather than out of innate goodness? This may become the only option if scores are applied to university or job applications. While such systems might counter corruption and increase trust between people and authorities, they risk becoming control systems that totalitarian regimes would thrive on.
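
As a purely hypothetical illustration of how blunt such a system can be, here is a toy scorer in which invented actions carry invented point values and a single aggregate number gates access to opportunities. Nothing here reflects any real scheme; the point is how much power sits with whoever chooses the weights and the threshold.

```python
# All action names and point values below are invented for illustration only.
ACTION_POINTS = {
    "volunteered": 5,
    "paid_bills_on_time": 2,
    "jaywalked": -3,
    "criticised_authority": -10,  # whoever runs the system chooses the weights
}

def social_credit_score(actions: list[str], base: int = 100) -> int:
    # Every observed action nudges a single lifelong number up or down.
    return base + sum(ACTION_POINTS.get(action, 0) for action in actions)

def eligible_for_university(score: int, threshold: int = 95) -> bool:
    # One aggregate score, applied everywhere, quietly decides life outcomes.
    return score >= threshold

score = social_credit_score(["volunteered", "jaywalked", "criticised_authority"])
print(score, eligible_for_university(score))  # 92 False
```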

The information networks of the past always had downtime, as controlling figures were humans who needed rest. Computer networks have no such limitations, making a continual state of both connectedness and monitoring seemingly inevitable.

Returning to the Soviet example of Part 1, Harari asks: if surveillance, punishment, and reward were already so effective back then, how much more effective will future information networks be, with zero downtime, continual surveillance, and control over our attention?

Corporate Responsibility and Defining Better Goals

Worse still, many social media platforms reward controversial and outrageous content, incentivising behaviour that maximises engagement, and this leads down a dangerous path.

Tech giants in these spaces often ignore their responsibilities and blame algorithms for “giving people what they want.” They reduce us to attention mines, converting “the multifaceted range of human emotions – hate, love, outrage, joy, confusion – into a single catchall category: engagement”. Their mistake is their faith in the naive view of information: the belief that more freedom of expression and more content will naturally lead to truth.

Harari returns to the value of self-correcting mechanisms, a rule of history that Silicon Valley thought it was exempt from. But all hope isn’t lost – these tech giants have demonstrated capabilities and strengths that could be redirected toward designing better algorithms if their intentions shifted. After all, they have connected people, given voice to the voiceless, and established new communities in ways that would have been hard to envision even a few generations ago.

This ultimately comes down to the goals we assign to tomorrow’s computers and algorithms. As computational power increases, so must our caution in defining these goals, since even slight misalignments could have tragic consequences. To return to the earlier example, the goal of social media algorithms shouldn’t be to maximise user engagement but to maximise social benefit.

I found Harari’s thought experiment particularly helpful in conveying how difficult it is to establish a goal. Consider the utilitarian suggestion that algorithm designers should care about all entities capable of suffering: the resulting computer network should minimise suffering – but how do we quantify suffering? A network built around this goal would need some way of assigning “suffering points” to particular events. How would it decide how many points to assign to a global lockdown like the one we experienced five years ago, in order to judge whether it is the correct course of action? This utilitarian approach, despite good intentions, can be extremely dangerous, as it can justify present suffering in the name of a better future – at which point it becomes, in effect, a mythological approach.
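
Here is a toy sketch of that quantification problem: to compare policies, a utilitarian network has to compress incommensurable harms into one currency of “suffering points”. The policy names, categories, and numbers below are all arbitrary inventions, which is exactly the difficulty Harari points to.

```python
# Every figure below is arbitrary - which is precisely the problem.
POLICIES = {
    "lockdown":    {"isolation_harm": 40, "economic_harm": 30, "future_benefit_credit": -90},
    "no_lockdown": {"isolation_harm": 0,  "economic_harm": 5,  "future_benefit_credit": 0},
}

def total_suffering_points(policy: str) -> int:
    # Summing points treats loneliness, bankruptcy and projected future gains
    # as if they were the same currency.
    return sum(POLICIES[policy].values())

scores = {policy: total_suffering_points(policy) for policy in POLICIES}
best = min(scores, key=scores.get)
print(scores, "->", best)
# Nudge any single weight and the "correct" decision flips; generous credit for
# speculative future benefits can justify large amounts of present suffering.
```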

The Dangers of Inter-Computer Realities

Inter-computer realities parallel the inter-subjective realities explored previously, and Harari discusses how such realities can significantly impact the world. Examples include Pokémon Go, which blurs the line between game and reality, and Google’s algorithm for ranking websites, which has real-world consequences for creators and the attention their work receives.
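
For a sense of how an inter-computer reality works, here is a simplified PageRank-style iteration – the idea behind Google’s original ranking, not its current system – over a tiny invented “web” of three pages. The scores exist only between computers, yet they determine which creators receive human attention.

```python
links = {  # hypothetical tiny web: page -> pages it links to
    "blog": ["wiki"],
    "wiki": ["blog", "shop"],
    "shop": ["wiki"],
}

def pagerank(links: dict[str, list[str]], damping: float = 0.85, iters: int = 50) -> dict[str, float]:
    pages = list(links)
    rank = {page: 1 / len(pages) for page in pages}
    for _ in range(iters):
        # Each page starts with a small baseline, then receives a share of the
        # rank of every page that links to it.
        new = {page: (1 - damping) / len(pages) for page in pages}
        for page, outlinks in links.items():
            share = rank[page] / len(outlinks)
            for target in outlinks:
                new[target] += damping * share
        rank = new
    return rank

print(pagerank(links))  # an inter-computer "reality" that steers human attention
```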

As explored in Part 1, inter-subjective realities have proven dangerous – Harari demonstrates how slavery, imperialism, and the Holocaust all stemmed from inter-subjective racial categories. In essence, these were labels attached to humans in order to impose order upon them. Viewed this way, computers could be extremely efficient at imposing whatever labels their inter-computer reality establishes and enforcing rules based on them.

Harari ends Part 2 by calling for human institutions that can carefully monitor emerging computer networks to avoid the dangers outlined in this section. The stage is set to explore his view that this is a political challenge, not merely a technical one.

Reflections: Looking Ahead to Political Implications

My key takeaway from Part 2 is the increasing power of evolving computer networks and their potential to impose order through continual surveillance. Though it’s easy to criticise the goals of current algorithms and networks, defining “good” goals is incredibly complex. If we continue accelerating these technologies, we must find a way to establish appropriate goals and get them right from the beginning.

Harari is setting us up for the final section of Nexus – exploring the future political implications of everything discussed so far. The questions raised aren’t just about what these systems can do, but who controls them, how they’re governed, and what happens when information networks are increasingly managed without human input.

As these networks continue to evolve at a seemingly unstoppable pace, we need to figure out how to ensure they actually serve us rather than turning into the perfect tools for manipulation or control. Part 3 promises to tackle these political questions that will shape whether these powerful tools enhance our lives or gradually diminish what it means to be human.