Bluesky, decentralisation, and the distribution of power: How decentralised Bluesky is depends on what exactly you mean by decentralisation.
Recently I read:
On Bluesky: Regarding its vibes and its technology.
Fat cells have a ‘memory’ of obesity — hinting at why it’s hard to keep weight off: A potential epigenetic reason for why it’s so difficult for folk who’ve lost a lot of weight not to regain it.
The events leading up to the US election seem even more horrifying than usual
I’m wary of succumbing to my own biases by seeing the risks of the rise of the far right looming around every political event no matter how minor, but it’s hard not to feel a certain amount of hopeless despair when it comes to the forthcoming US election.
Perhaps I’ve overdosed on Timothy Snyder’s excellent books, but when one sees things like the Madison Square Garden Trump rally it’s hard not to hear several extremely loud metaphorical warning klaxons go off.
Trump vowed to win New York, saying it would be an “honor” to win his home state. But his remarks were overshadowed by the crude and offensive speakers that went before him, which included racist jokes about Puerto Ricans and Black people as well as prominent Democrats.
In the lead-up to Trump, comedian Tony Hinchcliffe, who goes by Kill Tony, referred to Puerto Rico as a “floating island of garbage,” made a crude joke about Hispanics and birth control, inferred that Jews are cheap and Palestinians are “rock-throwers” and made a racist comment about a Black man in the audience eating watermelon.
…
Other warm-up acts called Hillary Clinton a “sick son of a b***”, another referred to Vice President Kamala Harris as the “antichrist” and a third said Harris “and her pimp handlers will destroy the country.” Former New York City Mayor Rudy Giuliani said Harris is “on the side of the terrorists.”
For all their many harmful, dangerous and deadly faults, it’s hard to imagine the UK Conservatives PRing themselves like that. It feels qualitatively different.
Next up, I suppose there’s no need to spend time lying about vote fraud when you can just burn the votes
A ballot box in Portland and another in Vancouver were set on fire, potentially disenfranchising the wise folk who had already voted early by depositing their ballots into one of them. I don’t think we know much yet about the what and why of this, but it doesn’t seem exactly normal, let alone good. If it makes any difference, the states concerned are (usually) safely Democratic.
Finally for now, continuing the habit that US MAGA-style Republicans have of saying the quiet bit out loud, we have ProPublica’s reporting about the previously private speeches of Trump’s “key ally” and former director of the Office of Management and Budget, Russell Vought. He’s thought to be very likely to get a high-level governmental role should Trump get back into power.
A key ally to former President Donald Trump detailed plans to deploy the military in response to domestic unrest, defund the Environmental Protection Agency and put career civil servants “in trauma” in a series of previously unreported speeches that provide a sweeping vision for a second Trump term.
…
Other policies mentioned by Vought dovetail with Trump’s plans, such as embracing a wartime footing on the southern border and rolling back transgender rights…decrying the “transgender sewage that’s being pumped into our schools and institutions” and referring to gender-affirming care as “chemical castration.”
…
“We want the bureaucrats to be traumatically affected,” he said. “When they wake up in the morning, we want them to not want to go to work because they are increasingly viewed as the villains. We want their funding to be shut down so that the EPA can’t do all of the rules against our energy industry because they have no bandwidth financially to do so.
“We want to put them in trauma.”
Cruelty as praxis, once again.
Vox's take on "Is AI the new nuclear weapons?"
Last year, Vox laid out their view as to how true or useful the “artificial intelligence is the new nuclear weapons” analogy actually is. The high-level summary of their take is:
Similarities:
- The scientific progress on the technology has been very rapid.
- There is potential for mass harm, even if the mechanism is generally less obvious in the case of AI.
- Both require materials - uranium for nukes, certain types of microchips for AI - that are relatively scarce and potentially trackable.
- They have dynamics of an arms race.
Differences:
- Nuclear weapons are a wholly military technology, AI is a general-purpose one.
- It’s much easier to copy someone’s AI than their nuclear bomb.
Their takeaway is that the general “AI is like a nuclear weapon” analogy usually isn’t parallel enough to prove useful, but that certain specific processes involved are similar. And in any case, in general:
The best way to handle a new, powerful, dangerous technology is through broad international cooperation. The right approach isn’t to lie back and just let scientists and engineers transform our world without outside input.
'Oppenheimer' tells the story of the man behind the world's first nuclear bomb, and his later regrets
🎥 Watched Oppenheimer.
This is the story of J. Robert Oppenheimer, the scientist who led the Los Alamos Laboratory, assigned in 1942 to the task of developing the world’s first nuclear weapon. Whilst he appeared to have some reservations from the start, the race was on given his fear that the Nazis might beat them to it, which might then lead Germany to victory in the ongoing Second World War.
The nuclear bomb Oppenheimer et al. developed of course worked, even if, whilst testing it, they weren’t absolutely certain that an atomic detonation wouldn’t cause the literal end of the world. So quite a high-stakes workplace, at least compared to the average day of work in my job.
I’m sure some people might see modern-day parallels with the unconstrained development of AI, even if the mechanisms towards destruction are a lot less straightforward.
After the nuclear bomb was used against Japan in 1945 - in Hiroshima and Nagasaki - Oppenheimer became an advisor to the Atomic Energy Commission.
He became ridden with guilt about the sheer amount of destruction and huge loss of life that his life’s work to date had led to. His ethical concerns led him to argue for global control of nuclear power to avoid nuclear proliferation, particularly the risk of an escalation of the technology during the Cold War with the Soviet Union. This led him to oppose the development of the H-bomb, contrary to the President’s wishes.
This, along with his past associations with the Communist Party, meant that he was subject to accusations of disloyalty. He ended up in front of a private security hearing, after which his security clearance was revoked.
The film is three hours long, but it’s a big story to tell, and an important one. It very much captured my interest throughout, even if the constraints of modern life on my ability to find such a lengthy continuous timespan of focus meant I ended up having to watch it over two sittings.
Tell me how you measure me and I will tell you how I will behave. If you measure me in an illogical way don’t complain about illogical behaviour.
Eliyahu M. Goldratt, from The Haystack Syndrome.
The New York Times advises us to believe Trump when he tells us what he's going to do
We’re not far away from the next US presidential election, which is to be held this November 5th. Either Harris or Trump is destined to walk away the victor, with the polls being scarily ambiguous on which it will be. It’s a fairly scary time, even for those of us outside of the US.
Whilst campaigning, Trump often makes a series of claims about what he will do that at a glance seem too ludicrous or cruel to be true.
In the past it’s often been suggested, especially by some of his high-profile supporters, that we should take Trump seriously but not literally - to take him symbolically. But The New York Times implores us to take him at his word; to believe him when he tells us what he wants to do.
Donald Trump has described at length the dangerous and disturbing actions he says he will take if he wins the presidency.
…
These statements are so outrageous and outlandish, so openly in conflict with the norms and values of American democracy that many find them hard to regard as anything but empty bluster.
We have two words for American voters: Believe him.
The article goes through what he says and why you should believe him using the following categories.
Trump says he will:
- use the Justice Department to punish people he doesn’t like.
- round up and deport millions of immigrants.
- deploy the American military against U.S. citizens.
- allow vigilante violence to end crime.
- order the military to strike foreign civilian targets if the United States is attacked.
- punish blue states by withholding disaster relief.
- use ideological tests to decide which public schools get federal money.
- abandon U.S. allies.
The Pudding visualises the process of getting an abortion as a maze
The Pudding visualises the often laborious and damaging path someone who needs an abortion may have to take in the US as a series of mazes.
Even whilst Roe vs Wade was in force the process was often full of “twists, turns and roadblocks”. But now, since its overturning in 2022, states have the power to take away someone’s right to an abortion, even to ban abortion entirely. Thirteen did exactly that. Some other states made access harder, and a few put new legislation in place to protect this important right.
Whilst interacting with The Pudding’s visualisation you find yourself following the story of an individual, based on a real-life case, who is seeking such a service, whilst you try to make your way through a literal maze whose complexity reflects the real-life difficulty of obtaining this important medical care - whether that be, for instance, California’s relatively simple process or Tennessee’s fiendishly complicated one.
In no case are the individuals’ stories entirely uncomplicated. A maze is still a maze. But this very effectively, and very creatively, highlights the huge variation in the access our US friends have to this type of healthcare.
Two types of information overload: the situational vs the ambient
I have an information problem. There are 278 books on my “Want to read” list. There are 1,794 articles saved in my read-later app. There appear to be 2,241 episodes in my podcasts “to listen to” queue. The knowledge of hundreds of pending unread journal articles puts me off ever even opening my collection of them to check.
Then on the other side of the equation there’s the collection of hundreds of read items I want to blog about, and several half-written posts about a fraction of them.
I’m somewhat ashamed to say I even started a new RSS reader account on the basis that my collection of feeds felt somewhat unmanageable when all in one place. I guess I’m going to have to stop teasing my colleague for starting a new Google mail account because their previous one filled up.
It’s absurd. I’ve only got one lifetime, as far as I know. And it’s not like these numbers don’t grow every single day.
I know I’m not alone in this. Nicholas Carr writes about two forms of information overload, only one of which is solvable by the standard solution of improving the information filters, search and prioritisation algorithms that are available to us.
One can intuitively feel this to be true. We have so many more electronic facilities to aid us in sorting through and finding high quality material than we used to - even whilst acknowledging that some of the more famous ones are perhaps getting worse. But who feels like they have less information overload now than in the past?
In Carr’s view, that’s because these filtering systems only solve “situational overload”:
Situational overload is the needle-in-the-haystack problem: You need a particular piece of information — in order to answer a question of one sort or another — and that piece of information is buried in a bunch of other pieces of information.
Many information-sorting technologies, from the introduction of indexes, catalogues and the Dewey Decimal system onwards, have made inroads into this.
But these systems don’t help with “ambient overload”.
Ambient overload doesn’t involve needles in haystacks. It involves haystack-sized piles of needles.
We experience ambient overload when we’re surrounded by so much information that is of immediate interest to us that we feel overwhelmed by the neverending pressure of trying to keep up with it all.
This is exactly right. I’m not confused as to which handful of my 278 to-read books are actually going to be of interest to me. In principle they all are. I hand-selected them. This is a post-filtered list, full of likely needles; full of signal, not obscured by noise.
The world’s best search isn’t going to help me here. In fact, improving the filters we have available to us simply pushes ever-increasing amounts of ever-more interesting content in our direction.
There are people trying to solve this problem, but no solution that I’ve seen feels very satisfying. For example, there are multiple book summarisation services - Blinkist, Shortform and their ilk - which I’ve played with a bit in the past. Whilst they have their uses, I don’t find them to be an adequate substitute for the original material. The same goes for the modern AI-based solutions, whether the generic “Please, chatbot, summarise this book in 2 paragraphs” options or the dedicated summarisation services. Many people naturally have ethical concerns about the type of AI these typically use, alongside the ever-present risk that they in fact fail to summarise the content correctly.
If however we determine that these services do have a legitimate place, then again I feel like - at least for me - they’re addressing situational overload. I’d be using them as filters for “do I want to read the whole thing?” rather than as a substitute for “I have read the whole thing”.
Of course, if you’ve a need to get the basic gist of a book very quickly but have no desire to spend substantial time working through it then those services might potentially work well for that - I haven’t checked the efficacy studies! - but that lack of desire isn’t the problem I face.
So what is the solution? Carr doesn’t really present one. Perhaps there are none, other than somehow reconciling oneself to be able to happily live whilst giving up on the idea that we could possibly indulge in even a sizeable fraction of the things we’re interested in within this new(ish) world of information abundance. Or, as Keenan writes, accepting that “It’s okay if we don’t consume all of the world’s information before we die”.
Currently trying out a combo of FreshRSS and NetNewsWire to experience the joy (?) of aggregating, managing and perusing RSS feeds without relying on someone else’s cloud service.
FreshRSS is “A free, self-hostable feed aggregator”. Think Feedly, Google Reader (RIP) et al. but self-hostable, open source, private (if you want it to be) and subscription-free. You can access it via the web, desktop or mobile.
NetNewsWire is one of many client RSS apps out there that can sync with FreshRSS, if you prefer reading your feeds in an app. This one has the selling points of being free, open source and, again, not infused with features to sell your brain to the highest bidder.
There are of course several similar options if those two don’t float your boat.
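For anyone wondering what the self-hosting side involves in practice, a minimal Docker Compose file for FreshRSS might look something like the sketch below. This is an illustrative fragment rather than a definitive recipe: the image name and internal paths follow the official freshrss/freshrss Docker image, while the host port and timezone are arbitrary placeholders.

```yaml
services:
  freshrss:
    image: freshrss/freshrss        # official image from the FreshRSS project
    ports:
      - "8080:80"                   # serve the web UI on localhost:8080
    volumes:
      - freshrss_data:/var/www/FreshRSS/data             # feeds, users, settings
      - freshrss_extensions:/var/www/FreshRSS/extensions
    environment:
      TZ: Europe/London             # placeholder timezone
      CRON_MIN: "13,43"             # refresh feeds twice an hour
    restart: unless-stopped

volumes:
  freshrss_data:
  freshrss_extensions:
```

Once it’s running, enabling the API in FreshRSS’s settings exposes a Google Reader-compatible endpoint, which is what client apps like NetNewsWire sync against.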
Found a new creepy critter in the garden, just in time for the Halloween season.
My phone believes this to be an Araneus quadratus, also known as a “four-spot orb-weaver”.
The wrongness of 'If you've got nothing to hide then you've nothing to fear'
“If you’ve got nothing to hide then you’ve nothing to fear” is a common response to those of us who worry about the ever-increasing rise of systematic and universal surveillance and the peel-back of privacy.
It’s snappy and intuitive. But wrong. I prefer one of the following constructions.
From Edward Snowden’s autobiography - at least that’s where I think I read it? It’s certainly something he said:
Arguing that you don’t care about the right to privacy because you have nothing to hide is no different than saying you don’t care about free speech because you have nothing to say.
From Shoshana Zuboff’s pivotal book, “The Age of Surveillance Capitalism”:
The real psychological truth is this: If you’ve got nothing to hide, you are nothing.
Privacy is a right. The European Convention on Human Rights agrees, as does the UN Declaration of Human Rights.
That said, such legislation is often thought of as targeting the potential for governmental overreach. This may not be the primary concern for the majority of us in a world where it’s primarily capitalists that are capable of and desperate to know what we’re up to (largely so that they can change it). In any case, governments often rely on the technology capitalists for their surveillance work; there’s no clear dividing line.
Some people are only able to thrive - perhaps only stay alive - because they have privacy. This isn’t only theoretical. The LGBTQ community, particularly within countries with outrageously repressive laws and norms around the subject, provides an obvious example.
Via The Verge:
As LGBTQ Egyptians flock to apps like Grindr, Hornet, and Growlr, they face an unprecedented threat from police and blackmailers who use the same apps to find targets. The apps themselves have become both evidence of a crime and a means of resistance.
This might not be the case for more privileged folk for whom little immediate drama might result if our behaviour or implied thoughts were to be revealed. But when de-valuing the right to privacy on the basis that you don’t think that (right now) you have anything to hide from any institution that might wish to surveil you, you also implicitly imperil the right to privacy for other people, for those to whom it may be more acutely important.
Plus rules change. Norms alter. Over in the US, when the Roe vs Wade decision was cruelly overturned some folk who previously may have felt little concern about certain aspects of their data privacy suddenly had a potential new reason to care what happened to the data associated with period tracking apps.
There have been several cases of the data associated with people’s web search history or voice assistant usage being used in their legal prosecution. Over here in the UK, the Crown Prosecution Service makes it clear that this is a move both intentional and expanding, writing that:
Digital devices like smart doorbells, dashcam footage, car GPS systems and even Amazon Alexas are providing increasingly more evidence in criminal trials, the Director of Public Prosecutions (DPP) said in a speech today.
Some will argue that this is a good thing, making it easier to prove criminal behaviour, more likely that victims of crime will see justice. Others will fear that, even so, this may not be the only use your personal data will be put to.
New Scientist Live show 2024 books list
Inevitably after attending any type of show that features something even vaguely sciencey the list of books that I simply must read expands.
Here, as a service to anyone else similarly afflicted, is a list of books from as many of the speakers from the recent New Scientist Live show as I could quickly find.
The organisation of the show is such that you can’t possibly attend more than a few of the talks whilst you’re there, but that’s no reason not to let unread books from people you could in theory have listened to pile up on one’s shelf, of course.
Many of these distinguished folk have written several books. In those cases I picked a single one, based on whichever title it seemed they were implicitly or explicitly sales-pitching the most in their talk or, if that wasn’t applicable, then perhaps their most recent or just the one that caught my eye.
TIL: Thanks to the innate lack of privacy in the technology, whilst we might not know who “Satoshi Nakamoto”, inventor of Bitcoin, actually is, people do believe that he holds enough bitcoin - between 750k and 1.1 million coins - to be worth somewhere between 50 and 75 billion dollars today.
In 2021 that made him - if it is a single person and one that goes by ‘he’ - the 15th richest person in the world.
(h/t episode 616 of Core Intuition)
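As a quick back-of-envelope check on those numbers: the $50-75 billion range is consistent with a Bitcoin price of roughly $67,000 per coin. The price here is an assumed approximation, not a sourced figure.

```python
# Rough sanity check of the quoted valuation range.
# price_usd is an assumed approximate spot price, not a sourced figure.
price_usd = 67_000
low_coins, high_coins = 750_000, 1_100_000

low_worth = low_coins * price_usd    # roughly $50bn
high_worth = high_coins * price_usd  # roughly $74bn

print(f"${low_worth / 1e9:.1f}bn to ${high_worth / 1e9:.1f}bn")
```

So the quoted 50-75 billion dollar band lines up with the estimated coin holdings at that sort of price.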
Added a few more books to my absolutely unrealistic “Books about AI I want to read” list.
It is a battle evident in the psyche of many partnered people, and especially so in midlife. It shows itself, even within a happy long-term relationship, in the crystal-clear desire to be left alone.
To be allowed to work undisturbed, to perhaps write or paint or just sit at the computer, to think or nap or to choose to spend a day as one requires. To let things live where one has left them; to visit friends alone without guilt; to be who one is, without that being a daily source of irritation to someone else.
Derren Brown, from A Book of Secrets.
This quote doesn’t seem to resonate with absolutely everyone, but does in seemingly plenty of folk. There can surely be a unique kind of freedom, and certainly peace, in solitude.
Ken Follett's Fall of Giants tells fictional-but-realistic stories of family life in the early twentieth century
📚 Finished listening to Fall of Giants by Ken Follett.
This is the first book of Ken Follett’s “Century Trilogy”, which sets out to let the reader follow the stories of five families through the highs and lows of the twentieth century. This one covers the start of the century, up to and including the immediate aftermath of the First World War.
Amongst various other socio-cultural changes there’s big political change in the air, from the Russian revolution to demands for women’s rights - including the British campaign for the right to vote - workers’ rights and more. The “fallen giants” include the aristocracy, the royalty and perhaps even a couple of nations, at least in terms of how people thought about them at the start of the century.
Whilst primarily a work of fiction telling a made-up story of families that didn’t exist, we do see some historical figures pop their heads up. Several are from the political class, including Lloyd George, Woodrow Wilson, Vladimir Lenin and, inevitably, Adolf Hitler.
Supposedly Follett has made every effort, when it comes to the actually-real folk, to ensure that the events they’re involved in either did happen or could realistically have happened. What they say, how they behave and where they are are all intended to be plausible. Personally, this feels usefully educational given my limited knowledge of early twentieth-century history, but there’s also a risk of course that I’ll mix up fiction with reality in my mind, although the danger feels less acute than with The Crown.
The five families represent folk from across the class spectrum. One of the major themes is the struggle of the working class to achieve liberation, whether via politics or other methods. We start off with Billy Williams, a child who’s sent off to work down a Welsh coal mine. Next up we meet the aristocracy in the form of Earl Fitzherbert, who owns said mine amongst other lands. His sister inadvertently falls in love with a German man, which is to prove problematic given the looming First World War. Over in Russia, two orphans struggle against poverty in very different ways, orphans in part because the Russian aristocracy executed their father.
Throughout, we see how these families live, meet and mix in ways both predictable and not, as they thrive, struggle, rise and fall through a seemingly quite realistic replica of our twentieth century.
Sure, the written form of the book is apparently nearly 1000 pages long, which is something that I’d usually find tremendously off-putting. And I can sometimes find historical fiction somewhat dry. But not in this case; it proved fairly gripping as I happily listened to the 30-and-a-half hour long audio edition at a more rapid rate than I usually manage to listen to audiobooks. And in the end, the next two books in the trilogy, Winter of the World (apparently mostly covering the 1933-1949 period) and Edge of Eternity (1969-1989) have surely made it to my want-to-read list.
Ryan Broderick succinctly sums up why the contemporary mega-hype around AI doesn’t depend on the systems we have being all that good, let alone ‘superintelligent’.
…the so-called AI boom we’re in right now is really selling two things, neither of which have to be very good.
A way to automate work you don’t value enough to hire a human being to do or, at the bare minimum, a way to hide the human beings doing that work so you can feel better about how little you pay them.
After all, the only people that the hype merchants really need to convince are your bosses.
New Scientist Live talks 2024
Journeying back from the ever-fascinating New Scientist Live show, 2024 edition.
A list of the talks I managed to get to in person:
- Weird science: An introduction to anomalistic psychology, by Christopher French.
- A life of crime? By Anne Coxon.
- Our accidental universe, by Chris Lintott.
- Generation pup: The science behind the UK’s pooches, by Rachel Casey.
- The balanced brain: The science of mental health.
- The atomic human, by Neil Lawrence.
- How we break: Navigating the wear and tear of living, by Vincent Deary.
- The human-fungal symbiosis, by Nicholas Money.
More words coming on some of these in the future quite likely.
Unfortunately I was compelled to miss last year’s sci-fest, but here’s the equivalent post from 2022.
I’m curious about precisely what level of surveillance technology is powering this (admittedly useful) public toilet info sign.
Hoping Meta isn’t somehow linking me to my…biological requirements.
Busting ghosts for breakfast.
Well this is one of the more aggravating CAPTCHAs I’ve had to solve. Especially as 2/3 of the time it was the last of the twelve images of dice that was the correct one.
A disappointing selection for the next Conservative leader
Looks like the next UK Conservative leader stands a good chance of being a right-wing weirdo.
Following a surprise outcome of the leadership election, the two prospects MPs are putting to party members are Kemi Badenoch and Robert Jenrick. That’s a shame.
Cleverly’s desire to see his party become a bit less weird - perhaps in reference to a current attack line against the US Republicans - seems to have failed:
During the party’s conference in Birmingham last week, Cleverly had urged party members to be “more normal”
The same sentiment was echoed by Nadine Dorries in one of her rare good opinions:
Nadine Dorries, the Tory former cabinet minister, said on X: “MPs had one job. To be normal and vote for the person who is best placed to lead you. It really wasn’t hard.”
That said, some in Labour are apparently excited at the prospect:
In a jokey reference to the government’s donations row, a Labour MP asked: “Does Tory leadership result need to be declared as a gift?”
Google's creators recognised that advertising-funded search engines are a bad idea (but made one anyway)
May we never forget that back in the early days, before Google decided to not not be evil, its founders were very well aware of the perils of what later came to be the funding model behind their own search engine.
The classic citation for this is Brin and Page’s 1998 paper “The Anatomy of a Large-Scale Hypertextual Web Search Engine” wherein they write that:
…we expect that advertising funded search engines will be inherently biased towards the advertisers and away from the needs of the consumers. Since it is very difficult even for experts to evaluate search engines, search engine bias is particularly insidious.
…
Furthermore, advertising income often provides an incentive to provide poor quality search results.
…
In general, it could be argued from the consumer point of view that the better the search engine is, the fewer advertisements will be needed for the consumer to find what they want. This of course erodes the advertising supported business model of the existing search engines.
…
But we believe the issue of advertising causes enough mixed incentives that it is crucial to have a competitive search engine that is transparent and in the academic realm.
That paper’s authors, Sergey Brin and Larry Page, are of course the founders of Google.
Their written conclusions sound perfectly plausible to me. They’re certainly logical. But words are just words, and all these years later, does the product they describe as “crucial” exist? It’s certainly not their own product, Google Search, which today operates along precisely the opposite lines. If that crucial product is out there, it certainly isn’t very mainstream.
To be clear, there certainly do exist today search engines that aren’t funded by advertisers. But the ones I personally know of are still very much commercial, plus have pretty negligible shares of the search market.
Kagi is one such example. They make their money via users paying them between $5 and $25 per month for a subscription directly, rather than by taking payments for advertising bundled in with the revenue associated with subtle expropriation of each user’s behavioural data. This of course changes the incentives for the creators of the search engine, probably in a very beneficial way for the most part. But it also limits its audience to people who can both afford a subscription and are willing to pay for such a thing.
TIL: Until the 1948 Representation of the People Act, graduates of certain universities in the UK were permitted to vote twice in our general elections.
Once for wherever they lived, as everyone else was entitled to do - but they also got another vote in a “university constituency”, a non-geographic seat made up of the graduates of the given university.
'The Age of Surveillance Capitalism' explains why Big Tech does what it does, and why it's dangerous
📚 Finished reading The Age of Surveillance Capitalism by Shoshana Zuboff.
I’ve had this book on my top priority reading list ever since it came out in 2018. It still somehow took me five years to get to it. But I’m glad I did - I wasn’t at all disappointed with my belated reading of it. I’ve come away thinking that something along these lines should be essential reading for almost everyone who has to navigate today’s technological society.
This particular book is rather wordy and academic-sounding. Its style might not be accessible or interesting to everyone trying to make their way through today’s chaotic and busy world. There’s plenty of room for someone to bring out a shorter and simpler version. Or to communicate its message via a different medium. The Netflix documentary on social media, “The Social Dilemma”, contained a small subset of similar ideas for instance and was surely a lot more accessible to a lot more people. But also, necessarily, that much shallower. This book is for anyone who is interested in digging deeper into what’s happening, why, and why it matters.
In any case, I feel that the ideas within this book are very important to understand if you want to know what’s being done to you, and why, each time you use the internet. Or, for that matter, an increasing amount of the time when you aren’t aware that you’re anywhere near the internet.
To be honest, I think you’d get a decent summary from reading just the introduction if the full thing is too heavy going.
As a disclaimer, I’ll note that the content plays heavily into my pre-existing biases as to the perils of big tech, particularly the mainstream social media companies. So perhaps some of my enthusiasm comes from the comfortable feeling one gets when reading something you already heavily agree with in many ways. But whilst one might fairly accuse it of over-hyping the current impact of surveillance capitalism I do think many of the basic ideas, the theories behind the system, are fairly indisputable and critical to document and publicise. It’s important to know what the surveillance capitalists want to do to us and why - even if you don’t believe they’re actually all that effective at doing it just yet. They’re unlikely to get any worse at what they do over time, unless we intervene.
Anyway, the general premise is that, as we know, capitalism changes and adapts over time. The current version we’re heavily enmeshed within can be described as the titular “surveillance capitalism”.
In the era of surveillance capitalism, the digital realm has permeated very heavily into our lives. It has redefined how things work and what things are acceptable far faster than we’ve had a chance to think about whether these changes are desirable or not. Famously, the big tech companies like to “move fast and break things”.
This is of course a problem if the “things” they are breaking include you as an individual and society as a whole. Without a swift re-evaluation and consequent serious action, Zuboff claims that our emerging “information civilisation” poses a substantial threat to some of our most fundamental rights.
The mechanism by which this plays out can be swiftly summarised via this quote:
…our lives are unilaterally rendered as data, expropriated, and repurposed in new forms of social control, all of it in the service of others’ interests and in the absence of our awareness or means of combat.
Digging deeper here - but not nearly as deep as the book does - the underlying philosophy starts with the claim by surveillance capitalists that they have the right to surveil and ingest our human experience in service of converting it into behavioural data.
Some of this data might be used to improve the products they sell. This is perhaps fair enough, maybe even a net positive for us, and is how things used to work. But ever since Google discovered how profitable this data can be in the marketplace, the rest of our surveilled and datafied human experience is translated into a “behavioural surplus”, to be used exclusively in the interests of the company who snaffled it up in the first place.
The companies concerned then use machine intelligence processes to manufacture a “prediction product” from our behavioural data. The prediction product aims, on the basis of knowing what you’re doing, thinking or feeling right now, to be able to predict what you’ll do in the future.
These prediction products are traded in an entirely separate and new marketplace - the “behavioural futures market”. Selling this information about what we’ve done, what we’re doing and what we’ll do in the future is extremely profitable for the companies that extracted it.
The author reports that it was Google that first discovered the economic value in this. Soon afterwards Facebook, Amazon, Microsoft, and the rest of big tech followed. It’s now become the default way that companies offering online services are funded, everything from tiny start-ups to mega-corporations. Consequently, if we want to lead an effective life, we often have no reasonable alternative but to participate, to allow this to be done to us.
The accuracy of the predictions the companies make about us is what makes them valuable. Increased accuracy means increased profits, for which the drive is, naturally under capitalism, insatiable. Thus these companies feel compelled to acquire and infer more and more data about us, which now includes our voices, our emotions and our personalities.
Harrowing as that might be, it’s not where the danger ends. There was a realisation that the most accurate predictive behavioural data is that which is created when the organisations concerned directly alter our behaviour. If they intervene, nudging us to make decisions in line with their interests, then their predictions become more accurate and hence more profitable. Think of how the fun Pokémon Go game ended up sending us to locations that just happened to correspond to restaurants that paid them or how Facebook expropriates a huge amount of data from us and our friends in order to allow its advertisers to more effectively influence us to buy their products.
In doing this, these companies are exercising “instrumentarian power”; a power that knows and shapes human behaviour towards other people’s goals. The end goal is effectively to automate us. Whilst an earlier time’s industrial society was imagined as a well-functioning machine, instrumentarian society can be imagined as a human simulation of a machine learning system.
Instead of the typical assurances that machines can be designed to be more like human beings and therefore less threatening, Schmidt and Thrun argue just the opposite: it is necessary for people to become more machine-like.
The original digital dream - that being digitally connected is intrinsically pro-social, inclusive and tends towards the democratisation of knowledge - is dead. Capitalism no longer feeds solely on human labour, but has expanded to feed on every aspect of the human experience.
How was this allowed to happen? Basically because few people understood what was going on. “Cyberspace”, where it all began, was an uncharted and fairly lawless territory. Google et al. carried out deliberately secretive and misleading activities that were anyway so new, complex and illegible to the vast majority of us that even if we’d somehow known what they were up to we wouldn’t necessarily have realised its implications.
And all this was sold to us as a freedom, as an emancipation. Who wouldn’t want free access to the entire world’s information? Who doesn’t desire to connect to their friends upon demand at the click of a button? Who would hate having the ability to publicise their thoughts, worries and concerns to the world at large? Who doesn’t want a digital assistant or convenient online tools to help them get through the struggles and torments of modern life?
At last, we’re told, true personalisation is here to soothe us. Someone - well, something - that recognises us for who we are, acknowledges our interests, never says no, is always available.
But, inevitably, behind the scenes, solving these issues is not the primary purpose for which these products were created.
Surveillance capitalists quickly realized that they could do anything they wanted, and they did. They dressed in the fashions of advocacy and emancipation, appealing to and exploiting contemporary anxieties, while the real action was hidden offstage.
To Facebook et al. we are not customers. There’s no economic exchange, price or profit directly taking place between us and Google when we do a web search. Contrary to the commonly held idea, we are not even the product. We are merely the source of the raw material that the surveillance capitalists need, the behavioural surplus. The real customers of these companies are the organisations that buy our data from them in the behavioural futures markets.
…individuals are definitively cast as the means to others’ market ends.
These techniques also came to the fore in an era of neoliberal ideology that resisted the idea of putting controls on businesses. Furthermore, the 9/11 terrorist attacks in the US made the state and its population prioritise the benefits of surveillance over its drawbacks. After all, outside of specific examples like China, most governments do not have the resources or skills to carry out the kind of sophisticated mass surveillance operations done by the private surveillance capitalists. Instead they’re reliant on, and jealous of, big tech. The US government alternates its time between funding Google and begging for bits of their data.
There’s a tremendous asymmetry in knowledge and power here. Surveillance capitalists know everything about us but deliberately use methods that mean we cannot know very much about them. Our rights to privacy are not respected. But they’re not destroyed; just redistributed. Surveillance capitalism claims the right to privacy whilst depriving us of ours.
At the end of the day:
Two men at Google who do not enjoy the legitimacy of the vote, democratic oversight, or the demands of shareholder governance exercise control over the organization and presentation of the world’s information.
One man at Facebook who does not enjoy the legitimacy of the vote, democratic oversight, or the demands of shareholder governance exercises control over an increasingly universal means of social connection along with the information concealed in its networks.
The dynamics of power have shifted from those of past eras’ industrial capitalism.
…ownership of the new means of behavioral modification eclipses ownership of the means of production as the fountainhead of capitalist wealth and power in the twenty-first century.
As are the fundamental risks to our very existence as we know it:
Just as industrial civilization flourished at the expense of nature and now threatens to cost us the Earth, an information civilization shaped by surveillance capitalism and its new instrumentarian power will thrive at the expense of human nature and will threaten to cost us our humanity.
Not of course that surveillance capitalism doesn’t also flourish at the expense of nature. The incredible power consumption of, for instance, modern day artificial intelligence makes that clear.
This “seventh extinction” will not be of nature but of what has been held most precious in human nature: the will to will, the sanctity of the individual, the ties of intimacy, the sociality that binds us together in promises, and the trust they breed.
Nothing about this was inevitable. The technology didn’t create the system. The problem comes from the very logic of surveillance capitalism, which directs where the technology goes. The nefarious processes we’re talking about were deliberately created to promote the commercial ends of the companies that foist them upon us.
Technology can never be isolated from economics and society; they’re always economic means to an end, not ends in themselves.
Likewise these attempts to surveil and automate us are not to be explained by individual “bad people” or a conspiracy. Firing Zuckerberg, even if it wasn’t impossible, wouldn’t fix the system. The actions taken by surveillance capitalists and the resulting harms are an obvious and inevitable consequence of the logic of accumulation under surveillance capitalism.
It wasn’t so long ago that US society was appalled at the idea of mass behaviour modification techniques. Admittedly they were generally envisaged as being carried out by the state. There hasn’t been an equivalent reaction in more modern times to the same practices being used by private companies who simply seek to become ever richer, ever more powerful.
This remains true, despite the intent of these companies to deprive us of various rights we previously claimed as inviolable. These include essentially any that require our individual autonomy and agency to be respected. Zuboff terms these as being challenges to:
- the sanctity of the individual.
- the right to individual sovereignty.
- the right to the future tense.
- the right to sanctuary.
The loss of these rights means democracy is also under threat. Democratic engagement requires individuals to be able to exercise autonomous moral judgement and self-determination. If we lose the latter then we lose the former.
There are at least three fundamental differences between surveillance capitalism and the capitalisms that came before.
- Surveillance capitalists insist that they, and only they, should have unfettered freedom and knowledge. The theory of capitalism was built by its theorists on the back of two assumptions: firstly that markets are unknowable, and secondly that it’s this insurmountable ignorance that means market actors must be given freedom of action. This is Adam Smith’s invisible hand at play - the butcher cannot possibly know everything about every potential customer and their relationship to the market and society. But by having the freedom to work in their own interest they end up contributing, unknowingly, to the efficient allocation of resources; a positive result for society. This fundamental assumption breaks down as one side of any transaction - the Google side - gains ever more perfect knowledge but refuses to give up any of its freedoms.
- Traditionally capitalism has involved reciprocities. The author seems to me perhaps a little forgiving of prior power dynamics at times, although they mention the shareholder value movement as partly disrupting this. But it’s true that ideas around reciprocation were fundamental to the origin of capitalism, such as Adam Smith’s argument that price increases must be balanced with wage increases. However, today’s hyperscale surveillance capitalists don’t need or want to feel any obligation around reciprocities. As individuals, we are neither their customers, nor do they rely on very many of us as workers. These companies tend to have relatively small numbers of staff, all of whom are drawn from very exclusive strata of the population.
- Historically capitalism has been greatly in favour of an individualistic vision of the world, insisting on the sanctity of individual liberties - often to a problematic level in my view. Traditional capitalism’s vision of society is a set of individual agents freely transacting according to their will. In particular it loathed the idea of a collectivist society: one where planning and control are used to produce a result previously ordained by someone or something, which would necessarily come at the expense of an individual’s freedom to do exactly as they choose. However, surveillance capitalism’s desire is in fact for a collectivist society - one that they organise, that’s “radically indifferent” to our own personal interests and predilections. It’s a world where each of us is tuned and nudged to behave in line with previously ordained plans - plans that are solely in the interest of the surveillance capitalists.
The wealth of these companies means they can plough vast resources into recruiting the finest minds of our times and into infrastructure, at levels one would have thought more appropriate to solving world hunger or the in-progress environmental catastrophe. But no, in the real world, the most incredible rewards are given to the few people who have the privilege, education and ability to use this data to make us…click on more adverts.
So, for those of us who aren’t keen on being surveilled and subconsciously coerced to do things we otherwise wouldn’t have done, what can we do?
Opting out seems the obvious thing at a glance. Don’t use services that surveil and seek to modify your behaviour. However, in reality, that’s nigh on impossible to do without dramatically affecting our lives. The internet is truly rife with this technology and using the internet is fairly critical for leading an effective life these days. And that’s before we get to all the surveillance architecture that’s hidden in physical public spaces.
The prediction imperative transforms the things that we have into things that have us in order that it might render the range and richness of our world, our homes, and our bodies as behaving objects for its calculations and fabrications on the path to profit.
…
There was a time when you searched Google, but now Google searches you.
This tension between a sense of knowing surveillance capitalism should be resisted but finding it almost impossible to do so if we want to have a “normal” life tends to result in us becoming cynical, resigned, and just learning to put up with it.
We can of course attempt to hide. I myself use a wide range of tools that aim to preserve a certain degree of privacy on the internet - tracker blockers, ad blockers, VPNs, special browsers, encryption, all that kind of stuff. A surprisingly high (to me) number of people also take these precautions.
There also exist physical items of relevance, such as clothes that aim to interfere with the ability of outdoor cameras to apply facial recognition to track where you go, what you do.
But whilst these solutions, or “counter-declarations”, can help some individual people with their individual situations, they unfairly and impracticably put the onus to act on us as individuals. These efforts can sometimes feel as exhausting to do as surveillance capitalism is to endure. Instead we need action at a societal level.
…the individual alone cannot shoulder the burden of justice, any more than an individual worker in the first years of the twentieth century could bear the burden of fighting for fair wages and working conditions. Those twentieth-century challenges required collective action, and so do our own.
These actions would include changes in public opinion leading to legislation and jurisprudence.
There have already been attempts to use monopoly or privacy legislation to address some of the harms. Whilst acknowledging that these are important facets of the issue, the author feels these efforts miss the primary point. The real harm here originates from the practice of “rendering our lives as data” in order to increase the control others have over us in the first place. They argue for legislation too, but far more widely scoped and interpreted: something that’s a much more fundamental challenge to what surveillance capitalists do.
Zuboff refers to this as being a ‘synthetic declaration’, meaning something that imposes a new framework, redefining the facts of the matter, spelling out what we want our future to be. Likely components would include:
- strengthening our democratic institutions.
- constructing a double movement.
- harnessing digital technologies in ways that help us to live an effective life and maintain a democratic social order.
GDPR might one day be seen as the green shoots of a move in this direction, with its requirement that companies justify their data activities. But at the time the book was written it remained to be seen what its real impact would be. Notably, Zuboff argues that this would be dependent on how it’s interpreted by society and the law, which is more important than the specific text of the legislation.
Now it’s been in place for a few years and, hey, surveillance capitalism still does what it does, flourishing to an astonishing degree, albeit with slightly reduced rights (which it’s battling to regain) in Europe.
At the end of the day, Zuboff concludes that:
It is not OK to have to hide in your own life; it is not normal. … It is not OK for every move, emotion, utterance, and desire to be catalogued, manipulated, and then used to surreptitiously herd us through the future tense for the sake of someone else’s profit …
…it is up to us to use our knowledge, to regain our bearings, to stir others to do the same, and to found a new beginning
From Ars Technica:
Streaming services say they make more revenue per user on average if the subscriber uses an ad tier
I was surprised by that. I suppose it just means we’ll be exposed to ever more ads, of ever greater obnoxiousness, irrespective of what we’re prepared to pay, until such time as the breaking point of our patience is reached.
With the degrading quality of these services, it’s easy to see why digital piracy is getting more popular - although apparently Amazon didn’t see many people quit or upgrade the first time they unilaterally put ads in their streaming service.