📚 Finished reading Nexus by Yuval Noah Harari.

This is a recent book from the author probably most famous for writing “Sapiens”. There is likely some overlap with regard to humanity and its development, but here the focus is very specifically on the past, present and future of “information networks”. The latter half of the book drills down further into his concerns about how AI may dramatically affect such networks for the worse, potentially at great cost to humanity.

I know there has been some criticism of Harari’s past work around overconfidence and a potential lack of factual veracity. Not being a historian, and not having followed up each reference, I can’t speak too much to that. But he remains, to me, an extremely skilled story-teller, and he introduces several concepts here that feel useful for at least trying to understand some of the fundamental dynamics of the world around us.

His starting point is the idea that the reason humans have come to dominate the planet and its various species is largely down to our ability to form information networks in a way that enables mass cooperation. At first this was realised in small tribal groups. More recently of course we have developed the ability to communicate and cooperate throughout a nation, a continent, or across the entire planet.

We do this by leveraging “intersubjective realities”. These are socially-shared fictions that exist beyond the classical dichotomy of objective vs subjective reality. Intersubjective realities have no basis in the physical world, but nonetheless a group all convince themselves, or at least act as if, they are true - whilst they would hold little sway with a rival group who set no store by them.

Examples include the idea of a nation (the borders on a map often do not reflect any necessary physical boundary), money (what would the intrinsic value of a piece of paper with “£5” written on it be in a society that doesn’t believe in your currency?) or, no doubt controversially to some, religion. Collective belief in these sorts of imaginary things allows a large group of people to cooperate - and can in fact change objective and subjective reality.

However, intersubjective realities exist only within specific information networks. The Bible or a £50 banknote would have little impact on a group of people who have never heard of Christianity or currency.

These realities, then, are mostly informational realities, not grounded in any necessary physical truth. They’re not the result of us finding out something empirical about the world. Which neatly brings us on to the topic of the purpose of information.

Naively, information is about finding truth. Scientists discover something. Their finding is a true representation of the world. Their facts become information. We all learn something. Life is all the better for it.

That sometimes happens of course, but it’s far from the whole story. Since time immemorial, information has been leveraged both in the pursuit of truth and to create order. In fact, the view of some populists is that the value of information is all about using it as a weapon to impose control. The truth of information matters little; what matters is whether you can wield it as a form of power. This is perhaps not a very surprising revelation when one looks at the recent words and actions of various famous populists currently spreading their falsehoods across the planet.

By Harari’s telling, neither view is right in isolation. In reality, the utility of information is a mix of the two, although over time humans have often chosen to prioritise creating order over truth. After all, for a society to work - at least for a while - its people do not need to know the truth about everything. But they do need to believe some of the same stories as each other in order to cooperate. And it is, beyond anything else, that cooperation which empowers humanity to do what it does.

Information doesn’t necessarily inform us about things. Rather, it puts things in formation.

This is why, in a modern world where each day humans gain an incredible amount of information - far too much for any individual to keep up with - we don’t necessarily learn a whole lot of uncontested objective “truth”. In fact, it’s often quite the opposite. This has led to a situation where it is not uncommon for many of us to simply disagree with each other on what should be the literal basic facts of the world - let alone what should be done in reaction to those facts.

Presently this is likely most visible in the political world, especially around topics that could be classified as “culture war”. No amount of refreshing your Facebook feed and slurping in all that “information” seems like it could possibly bring humanity together as one - not even in the sense of grounding us all in a single objective truth (or, in some cases, even the possibility of a single truth).

Harari, being a historian, cites several examples from days past. For instance, sure, the invention of the printing press was an enabler of the scientific revolution and the consequent discovery of a whole lot of truth. But it also enabled the mass distribution of nonsensical conspiracy theories with tremendously deleterious effects.

Consider for example the “Malleus Maleficarum”, the definitive 15th century book that claimed to describe how to categorise, identify and prosecute witches. These days most of us would realise that the witches described therein were a made-up category of people, not grounded in any reality. But enough people entered into the intersubjective reality that these wicked folk did exist, and that the instructions contained within the Malleus Maleficarum were the best way to deal with them.

This, of course, led to untold harm and cruelty to the at-least-thousands of people who were tortured and killed as a result of this shared delusion. There was nothing about truth revealed in this tome. But it sure imposed an unconscionable type of order and control over a certain set of people.

Skipping ahead, Harari greatly worries about the impact of current and future AI on the information networks that govern today’s world. In his view, AI should be thought of as an agent unto itself in the information network. This would mark the first time a non-human entity has gained that status.

Information networks originated as human-to-human things; people verbally told each other stories. This necessarily limited how far and fast they could spread. Then technologies such as documents and books were invented and utilised within the networks. This allowed information to spread far further and far faster. The internet only added to that. But at the end of the day, books were only a method of transmission - the flow only made sense and affected the world if it had humans at both ends: human to book to human.

Books didn’t make decisions. Books didn’t write themselves. Books were necessarily fully controlled by humans. Harari sees modern AI as something different - it’s not a tool in the conventional sense. It’s not a passive participant in the information network. It can make decisions. It can “write itself”. This makes it an active agent in the information network. And a mysterious, alien one at that. Even today, experts often can’t really understand the true specifics of how certain incarnations of it work, or why it “decides” what it does in any given case.

We already see harms from this in a world where whether you are sent to jail can be dependent on some unknown algorithm. How can anyone possibly challenge the accuracy of a decision when they have no sense of how it was made?

Right now the damage from these systems, as life-ruining as it might be for any given individual or group subject to it, is often limited because, at the end of the day, some human retains the ability to control or regulate it. We could ban black box algorithms from the court system and they’d be gone. But Harari worries that this might not always be the case.

At some point - I guess we are there already - no-one really “understands” the precise inner workings of AI (we know how it was trained, sure - but why did ChatGPT say the exact sentence it did?). So whenever we decide to give it control over something, we risk becoming subject to the decisions of an unprecedented kind of alien intelligence.

The most terrible results from this could come either from its leveraging by human dictators or, if we proceed without truly understanding how to make AI have goals that are fully in the interests of humanity (which right now we don’t know how to do), from AI acting as its own agent.

AI would be the first technology capable of surveilling us every minute of every day, annihilating our privacy in a way that even the worst totalitarian dictators of the past, for all their horrendous crimes against humanity, could not.

Stalin had to sleep. He could only recruit so many informants. He had to trust that they wouldn’t let him down or turn on him. Modern-day computerised surveillance has no such intrinsic limits. You may well right now be wearing a device that constantly uploads your biometric signals to a “cloud” beyond your control. You almost certainly are carrying a device that has the ability to upload your precise location to whosoever wants to know where you are. You probably share your thinking and opinions on someone else’s network, such that you could quite easily be marked in real time as a dissident or a supporter of whichever state happens to be in control of your country.

And if AI was to somehow get out of human control, there’s no reason to suspect it would have the intuition or the desire necessary to protect humans from disaster by default.

Even if it does not escape its confines, Harari worries that this technology could lead to a new kind of “digital colonialism”, wherein only a few rich nations or private organisations have the wealth and power to harvest the necessary training data from the rest of the world in order to train the most powerful AI models - which they could then hoard for themselves, or sell back in fragments to the less privileged folk whose data enabled them.

In the nineteenth and twentieth centuries, if you were a colony of an industrial power like Belgium or Britain, it usually meant that you provided raw materials, while the cutting-edge industries that made the biggest profits remained in the imperial hub… In a new imperial information economy, raw data will be harvested throughout the world and will flow to the imperial hub.

There are, after all, already a ton of products out there that rather depend on a tiny handful of very advanced generative AI models from providers such as OpenAI. If America and China decided to block other countries from using their AI models tomorrow, it’s not like there are all that many comparable alternatives. As it stands, the UK simply does not have its own ChatGPT.

This is rather a gloomy picture. What’s to be done about all this? Happily, the author does not think the situation is hopeless - although I didn’t get the sense that he thinks we are all that likely to solve the impending catastrophe.

Firstly, we have to understand AI as being an agent. Normal bombs do not decide which cities to destroy. AI bombs might well. Maybe they already do.

Secondly, we have to get over our naïve view that more information leads to more truth. Information connects us. It doesn’t need to represent the truth to do that. At best, it produces a balance of truth and order.

…Information sometimes represents reality, and sometimes doesn’t. But it always connects.

But whilst information connects us, it only connects those of us within a given network. Harari introduces the concept of the “silicon curtain”. Imagined as an evolution of the Cold War’s “iron curtain”, this refers to the situation where citizens divided by geography, ideology or some other such factor become entirely separated into distinct “information cocoons”. This will only exacerbate both inequality and the destruction of any sense of a shared reality. And, amongst other unfortunate consequences, without reasonably universal agreement on certain sets of basic facts we would stand little chance of figuring out and enacting the sort of collective action that is necessary to mitigate any number of pending planet-wide crises.

Thirdly, we must understand the importance of self-correcting mechanisms. Even without any AI doomerism, these are, and always have been, critical for democracies to function, and a source of strength when they do.

Human dictators usually have to pretend that they’re infallible. There are no independent courts or media to push back against their decisions. Information is used for order, not truth. The people around the dictator rarely have any personal incentive to do anything else. Eventually, the lack of truth-grounding this leads to is one reason why dictatorships are, thankfully, vulnerable to rapid destruction.

In a democracy though, as slow and annoying as the process can be, the assumption is that any individual human is fallible, and that it is only through some combination of numerous actors with differing motivations in communication with each other that the state remains somewhat tethered to truth - and hence more stable and increasingly capable over the long term.

Regulation will be key. Our laws aren’t yet designed for today’s technologies, let alone tomorrow’s. Harari notes that we fairly successfully outlawed counterfeit money. This was essential for the story of money as a trustworthy store of value to persist. So how about we consider outlawing counterfeit humans?

If democracies do collapse, it will likely result not from some kind of technological inevitability but from a human failure to regulate.

For regulation to work we will need to build and maintain a variety of strong and resilient institutions. Doing so is a critical part of actively promoting the well-functioning of those all-important societal self-correcting mechanisms. Weakening the courts, censoring the media, and banning the scientific study of certain topics all weaken these self-correcting mechanisms, and hence the future society we live in.

The author bids us to think about how the US Constitution was explicitly designed with the knowledge that humans - including those that wrote it - are fallible. Article 5 provides a clear and explicit mechanism by which the Constitution can be changed, as it invariably has been and will have to be over time. Contrast this with, for example, the Holy Bible, which, by claiming to originate from divine infallibility, has no built-in mechanism for change.

All in all, we must not forget that whilst AI has agency (by Harari’s definition at least), so do we. Harari rails against the idea of technological determinism. For now at least, our decisions matter. And so we must make good ones, so as to avoid a world where our decisions no longer mean anything. In his view, we can get a long way towards making the good decisions that avoid appalling outcomes by looking towards history, which is, by his telling, in reality the study of change.

[Image: the book cover of Nexus by Yuval Noah Harari, featuring a pigeon and a tagline about the history of information networks.]