The Braindump Blog

Latest posts:

My year in music, 2023

My most listened to albums, 2023.

(at least from the times when I remembered to use apps that can scrobble to last.fm)

Covers of most listened to albums

My year in books, 2023

Here’s the final list of books I managed to finish this year.

  • The Lost Cause
  • Being You: A New Science of Consciousness
  • Zero Days
  • Upgrade
  • The Extended Mind
  • The Three-Body Problem
  • The Data Warehouse Toolkit: The Definitive Guide to Dimensional Modeling, 3rd Edition
  • Crime and Punishment
  • We Are Bellingcat
  • Mexican Gothic
  • Q
  • The Complete Maus
  • 11.22.63
  • Red Team Blues
  • Divergent Mind
  • The Signal and the Noise
  • Unlawful Killings
  • We Do This 'Til We Free Us
  • Statistical Rethinking
  • All About Love
  • The Body Keeps the Score
  • Recursion: A Novel
  • Hood Feminism
  • The It Girl
  • The Dispatcher
  • Empire of Pain
  • The Monopolists

The Lost Cause - an oddly hopeful story of a climate-change-ravaged near future

📚 Finished reading The Lost Cause by Cory Doctorow.

This book is set in California, three decades into the future. Climate change has continued to ravage the world. Large parts of the US have been rendered uninhabitable by floods, fires, pandemics et al. That naturally created a lot of internal climate change refugees.

But it’s not entirely dystopian. Things got so bad that the US government and its citizenry finally managed to muster up a response that actually set out to mitigate the worst of the impact, although it required a fairly radical liberal president who brought in a Green New Deal (GND) and didn’t mind ignoring the courts to do so. That president was unfortunately replaced with a rather less effective one by the time the book is set.

Much of the citizenry is also heavily engaged in building back better where it’s still possible to do so: carbon-neutral factories, lots of clean energy, and an enthusiasm for building safe, dense housing wherever it can be done. A gig-like green jobs guarantee means that everyone who wants to work can do so, with plenty of training on hand, all in the cause of protecting humans and the environment. Refugees are generally welcomed and celebrated for the many benefits they bring to their new homes.

This, anyway, is the life of the protagonist, Brooks, who gets a sense of satisfaction and accomplishment from helping humanity preserve itself and its environment. Lovely, if a little simplistic, and his late-teen dialogue grates a little at times. Maybe I’m just old.

But not everyone is a Brooks. There’s still a mostly old-white-men vanguard who aren’t in line with these new vibes - the MAGA club. These folk hate the GND; they hate the idea of refugees streaming into their historic homes, of welfare programs and so on. Some are into strange conspiracy theories or the weird ideas around sovereign citizenship. At first they didn’t believe in climate change. Now they think it’s too late to do anything about it, so they simply want to preserve what’s left of their old way of life.

Separately there’s also the crypto posse, a group of mostly mega-rich tech-bro types who live on a giant boat. They’re only interested in big technological solutions to the catastrophe - seed the atmosphere, figure out how to go and live on Mars - in between living the Bitcoin lifestyle, where the freedom and well-being of money is more important than that of people, with the possible exception of the hallowed grindset innovator types.

All the stereotypes are there, although to be fair an overt effort is made now and then to not paint this as a simple battle between good and evil. In reality, people usually have reasons for their beliefs and actions that aren’t just ‘I want to be evil’.

…the Magas didn’t want to watch the world burn. They sincerely wanted to save it. They weren’t wrong because they were cruel.

They were cruel because they were wrong.

Doctorow’s politics certainly shine through. If you’re a reader of his blog you will be familiar with some of the arguments implicitly presented in this book, as is often the case with his novels. It follows in a long line of what might be called ‘activist fiction’ - if nothing else providing the uninitiated with a much more accessible entrance to the relevant ideas than a dry textbook would, and the already-sympathetic with a vision of what might be possible.

For earlier attempts on adjacent topics, the Guardian provides a list of another 10 books they term ‘eco-fiction’, going back as far as 1962 and JG Ballard’s ‘The Drowned World’.

It might be unusual to come away with positive feelings from a story whose central points include climate change ruining the world and people mistreating each other, but this one definitely gives a sense of hope: ideas of what could be done to change the current trajectory of humankind, visions of a kinder, more effective, competent and empathetic society, even whilst such a thing feels hard to imagine right now.

No doubt this is somewhat because my politics are quite similar to the author’s. It’s nice to imagine a world where the main arguments around climate change, refugees and the like have largely been settled in a progressive way. And furthermore one where their implications have been practically implemented in everyday life, giving people a sense of progress, of purpose, as they develop and use technology for motives other than rank financial profit.

Now it’s the remaining few disbelievers, those who don’t share certain ‘progressive’ values, who are the oddbods, albeit often ones who still have substantial access to power. Of course it’d be rather a shame if it takes the same catastrophic destruction of much of today’s world to get us to a similar place in reality.

Book cover for The Lost Cause

Unsurprisingly, ChatGPT and its ilk are finding their way into providing direct customer service for some businesses. Equally unsurprisingly, it’s easy enough to confuse them into providing responses that no human agent would.

There was the time Chris White got their local car dealer’s customer service agent to write a Python script, not a service typically offered on the garage forecourt. Or when another Chris, Chris Bakke, got it to agree to sell him a new Chevy Tahoe for $1, whilst confirming that it was a legally binding offer.


Some books about Artificial Intelligence I'd like to read - updated

(List last updated 2024-10-15, first written 2023-12-29)

Ever since the generative AI ChatGPT craze started - it’s been just over a year since it launched, if you can believe that - I’ve been looking for some nice meaty but layperson-adjacent books to help me understand how best to think about the contemporary variant of artificial intelligence hype.

Not necessarily the technical details, the code underlying the seeming magic - or at least not only those details. I’m interested too in the more philosophical, more political domains of thought. What will AI do to society? What might it improve, and what might it make worse? How best should we consider and handle it to ensure it does more of the former than the latter? Or is it all a tech-bro mirage, nothing more than a technological flash in the pan?

Whilst the current instantiation of large language model public demos is still new enough that there’s likely not been time for a huge number of reliable, comprehensive-but-approachable books to have been written, it’s not like people weren’t already thinking about AI from both technical and philosophical viewpoints well before OpenAI came to be.

The distinct field of what could be called “AI studies” has been around for at least 60 years, depending on how you count it. I was certainly exposed to it at university quite some years ago. Alan Turing was considering whether machines could think back in 1950. And people’s thoughts and dreams about aspects of future AIs have likely been around far longer - centuries longer - as cultural artifacts and thought experiments, even if they used very different terminology. Undoubtedly there’s plenty of very interesting and thoughtful work out there.

Anyway, here’s a few books that have piqued my interest, mostly culled from various articles or podcasts about AI I must have perused over the past few months. They’re not exactly recommendations, as I haven’t yet read them. But I hope to do so one day, and would welcome any other suggestions.

Before the most recent hype cycle I’d already read a handful of works on AI. Three of the ones that I felt I got something useful out of were:

Anyway, here’s my basic fantasy to-read list, in no particular order:


And now the same list again, but this time with some of the publisher blurb to help give a clue on what they’re actually about:

God, Human, Animal, Machine: Technology, Metaphor, and the Search for Meaning by Meghan O’Gieblyn

A strikingly original exploration of what it might mean to be authentically human in the age of artificial intelligence

For most of human history the world was a magical and enchanted place ruled by forces beyond our understanding. The rise of science and Descartes’s division of mind from world made materialism our ruling paradigm, in the process asking whether our own consciousness—i.e., souls—might be illusions. Now, with the inexorable rise of technology, with artificial intelligences that surpass our comprehension and control, and with the spread of digital metaphors for self-understanding, the core questions of existence—identity, knowledge, the very nature and purpose of life itself—urgently require rethinking.


Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence, by Kate Crawford:

The hidden costs of artificial intelligence, from natural resources and labor to privacy, equality, and freedom

What happens when artificial intelligence saturates political life and depletes the planet? How is AI shaping our understanding of ourselves and our societies? In this book Kate Crawford reveals how this planetary network is fueling a shift toward undemocratic governance and increased racial, gender, and economic inequality. Drawing on more than a decade of research, award-winning scholar Kate Crawford reveals how AI is a technology of extraction: from the energy and minerals needed to build and sustain its infrastructure, to the exploited workers behind “automated” services, to the data AI collects from us.

Rather than taking a narrow focus on code and algorithms, Crawford offers us a political and a material perspective on what it takes to make artificial intelligence and where it goes wrong. While technical systems present a veneer of objectivity, they are always systems of power. This is an urgent account of what is at stake as technology companies use artificial intelligence to reshape the world.


Why Machines Will Never Rule the World: Artificial Intelligence without Fear, by Jobst Landgrebe, Barry Smith:

The book’s core argument is that an artificial intelligence that could equal or exceed human intelligence—sometimes called artificial general intelligence (AGI)—is for mathematical reasons impossible…

Landgrebe and Smith show how a widespread fear about AI’s potential to bring about radical changes in the nature of human beings and in the human social order is founded on an error. There is still, as they demonstrate in a final chapter, a great deal that AI can achieve which will benefit humanity. But these benefits will be achieved without the aid of systems that are more powerful than humans, which are as impossible as AI systems that are intrinsically “evil” or able to “will” a takeover of human society.


Rebooting AI: Building Artificial Intelligence We Can Trust, by Gary F. Marcus and Ernest Davis:

Two leaders in the field offer a compelling analysis of the current state of the art and reveal the steps we must take to achieve a truly robust AI.

Despite the hype surrounding AI, creating an intelligence that rivals or exceeds human levels is far more complicated than we are led to believe…The world we live in is wildly complex and open-ended. How can we bridge this gap? What will the consequences be when we do? Marcus and Davis show us what we need to first accomplish before we get there and argue that if we are wise along the way, we won’t need to worry about a future of machine overlords. If we heed their advice, humanity can create an AI that we can trust in our homes, our cars, and our doctor’s offices.

Rebooting AI provides a lucid, clear-eyed assessment of the current science and offers an inspiring vision of what we can achieve and how AI can make our lives better.


The Coming Wave: Technology, Power, and the Twenty-first Century’s Greatest Dilemma, by Mustafa Suleyman.

An urgent warning of the unprecedented risks that AI and other fast-developing technologies pose to global order, and how we might contain them while we have the chance…

In The Coming Wave, Suleyman shows how these forces will create immense prosperity but also threaten the nation-state, the foundation of global order. As our fragile governments sleepwalk into disaster, we face an existential dilemma: unprecedented harms on one side, the threat of overbearing surveillance on the other.

This groundbreaking book from the ultimate AI insider establishes “the containment problem”—the task of maintaining control over powerful technologies—as the essential challenge of our age.


The Eye of the Master: A Social History of Artificial Intelligence by Matteo Pasquinelli.

What is AI? A dominant view describes it as the quest “to solve intelligence,” a solution supposedly to be found in the secret logic of the mind or in the deep physiology of the brain, such as in its complex neural networks.

The Eye of the Master argues, to the contrary, that the inner code of AI is shaped not by the imitation of biological intelligence, but the intelligence of labour and social relations, as it is found in Babbage’s “calculating engines” of the industrial age as well as in the recent algorithms for image recognition and surveillance.

The Eye of the Master urges a new literacy on AI for scientists, journalists and new generations of activists, who should recognise that the “mystery” of AI is just the automation of labour at the highest degree, not intelligence per se.


Impromptu: Amplifying Our Humanity Through AI, by Reid Hoffman

Impromptu: Amplifying Our Humanity Through AI, written by Reid Hoffman with GPT-4, offers readers a travelog of the future – exploring how AI, and especially Large Language Models like GPT-4, can elevate humanity across key areas like education, business, and creativity.

His conversation with AI takes us on a journey to the future, where AI is not a threat, but a partner….How might humanity use GPT-4 to continue our long-standing quest to make life more meaningful and prosperous? How can we use it to help solve some of the hardest challenges we face? To expand opportunities for self-determination and self-expression? Along with solutions and opportunities, GPT-4 will also create its own challenges and uncertainties. Impromptu explores how we might address risk as we continue to develop AI technologies that can boost human progress at a time when the need for rapid solutions at scale has never been greater.


Ways of Being: Animals, Plants, Machines: The Search for a Planetary Intelligence, by James Bridle.

Recent years have seen rapid advances in ‘artificial’ intelligence, which increasingly appears to be something stranger than we ever imagined. At the same time, we are becoming more aware of the other intelligences which have been with us all along, unrecognized. These other beings are the animals, plants, and natural systems that surround us, and are slowly revealing their complexity and knowledge - just as the new technologies we’ve built are threatening to cause their extinction, and ours.

From Greek oracles to octopuses, forests to satellites, Bridle tells a radical new story about ecology, technology and intelligence. We must, they argue, expand our definition of these terms to build a meaningful and free relationship with the non-human, one based on solidarity and cognitive diversity. We have so much to learn, and many worlds to gain.


The Atomic Human: Understanding Ourselves in the Age of AI, by Neil D. Lawrence.

What does Artificial Intelligence mean for our identity? Our fascination with AI stems from the perceived uniqueness of human intelligence. We believe it’s what differentiates us. Fears of AI not only concern how it invades our digital lives, but also the implied threat of an intelligence that displaces us from our position at the centre of the world. Neil D. Lawrence’s visionary book shows why these fears may be misplaced.

By contrasting our capabilities with machine intelligence, The Atomic Human reveals the technical origins, capabilities and limitations of AI systems, and how they should be wielded. Not just by the experts, but ordinary people. Understanding this will enable readers to choose the future we want – either one where AI is a tool for us, or where we become a tool of AI – and how to counteract the digital oligarchy to maintain the fabric of an open, fair and democratic society.


AI Needs You: How We Can Change AI’s Future and Save Our Own, by Verity Harding.

Artificial intelligence may be the most transformative technology of our time. As AI’s power grows, so does the need to figure out what—and who—this technology is really for. AI Needs You argues that it is critical for society to take the lead in answering this urgent question and ensuring that AI fulfills its promise.

History points the way to an achievable future in which democratically determined values guide AI to be peaceful in its intent; to embrace limitations; to serve purpose, not profit; and to be firmly rooted in societal trust.


The Skill Code: How to Save Human Ability in an Age of Intelligent Machines, by Matt Beane

From one of the world’s top researchers on work and technology comes an insightful and surprising guide to protecting your skill in a world filling with AI and robots. Think of your most valuable skill, the thing you can reliably do under pressure to deliver results. How did you learn it? Whatever your job – plumber, attorney, teacher, surgeon – decades of research show that you achieved mastery by working with someone who knew more than you did. Formal learning—school and books—gave you conceptual knowledge, but you developed your skill by working with an expert. Today, this essential bond is under threat.

Whether you’re an expert or a novice, this book will show you how to build skill more effectively – and how to make intelligent technologies part of the solution, not the problem.


Deep Utopia: Life and Meaning in a Solved World, by Nick Bostrom

Bostrom’s previous book, Superintelligence: Paths, Dangers, Strategies, changed the global conversation on AI and became a New York Times bestseller. It focused on what might happen if AI development goes wrong. But what if things go right?

Suppose that we develop superintelligence safely, govern it well, and make good use of the cornucopian wealth and near magical technological powers that this technology can unlock. If this transition to the machine intelligence era goes well, human labor becomes obsolete. We would thus enter a condition of “post-instrumentality”, in which our efforts are not needed for any practical purpose. Furthermore, at technological maturity, human nature becomes entirely malleable.

Here we confront a challenge that is not technological but philosophical and spiritual. In such a solved world, what is the point of human existence? What gives meaning to life? What do we do all day?


At last, the most important ‘top X things of 2023’ post is out. Here’s Rolling Stone’s take on the 21 most defining memes of 2023.

Shamefully, I had only even vaguely heard of around 14 of them. Not sure whether I can blame that on the great fragmentation of the Internet, my grumpy refusal to participate in most of the big social media networks, or simply old age.

In case you want to test yourself with just the titles, here’s the list along with a tick for the ones I knew of.

  • Angela Bassett Did the Thing ❌
  • Skibidi Toilet ✅
  • Boston Cop Slide ✅
  • Timothée Chalamet as Wonka ✅
  • Tube Girl ✅
  • Congress’s Vote for Speaker of the House ✅
  • Kevin James ❌
  • Big Red Boots ✅
  • Chinese Spy Balloon ✅
  • Nepo Babies ✅
  • Dupes ❌
  • M3GAN ✅
  • The Roman Empire ✅
  • Girl Dinner ✅
  • Orca Attacks ✅
  • One Margarita ❌
  • Babygirl ❌
  • Serving C*nt ❌
  • Planet of the Bass ❌
  • Grimace Shakes ✅
  • Barbie ✅

(Nearly) new year, new URL. I’ve kept this blog up long enough that it’s time to give it its own big-boy domain name.

From here on in it’s thebraindumpblog.com.

In theory going to the old address should redirect you to the shiny new one.


Busy wasting time on The Great Scrollback of Alexandria.

This is a preservation effort, attempting to capture the funniest, weirdest, and most memorable posts before Twitter completely burns down.

The Verge does the world a service after the tragic decline of the service formerly known as Twitter by preserving some of its much loved bangers.

One more reminder that nothing you put on other people’s sites necessarily lasts forever, no matter how iconic. Nor, most likely, will The Verge itself, for that matter.


Over 10,000 research papers were retracted this year

It’s been a record-setting year in terms of how many research articles have had to be retracted from scientific journals for being wrong in one way or another.

As Nature reports:

The number of retractions issued for research articles in 2023 has passed 10,000 — smashing annual records — as publishers struggle to clean up a slew of sham papers and peer-review fraud.

Volume of retractions over time

Good news if it means the scientific record is being cleaned up; bad news if it means that the rates of fraud and serious error are increasing.

To be fair, an absolutely mind-boggling number of papers get published each year these days. Those 10k papers are only equivalent to around 0.2% of papers that were published this year.

Retracted papers as % of published
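
A quick back-of-the-envelope check of what that percentage implies (my arithmetic, not Nature’s):

```python
# Rough back-of-the-envelope check: if ~10,000 retractions are ~0.2% of
# this year's output, that implies roughly 5 million papers published.
retractions = 10_000
retraction_rate = 0.002  # ~0.2%

implied_papers_published = retractions / retraction_rate
print(f"Implied papers published this year: {implied_papers_published:,.0f}")
# Implied papers published this year: 5,000,000
```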

8,000 of the papers were all from the same publishing house, Hindawi - an unwelcome $35-40 million surprise for its recent owners, Wiley.


Being You: Anil Seth's new theory of consciousness

📚 Finished reading Being You: A New Science of Consciousness by Anil Seth.

Book cover of Being You

This feels like a book that’s going to be hard to summarise. It doesn’t help that I read most of it a year and a half ago whilst on a sun lounger. But I also found it a fairly dense and complicated book. Although a tome on the subject of “what is consciousness and how does it work?” could hardly be anything else. A key aspect of the topic is after all colloquially known as “the hard problem” - although Seth believes that particular aspect isn’t the one that we should be trying to solve in the first place.

The famous “hard problem of consciousness” is to figure out why the presumably physical inputs we receive from the external world lead to our inner phenomenological experiences. Why do we experience certain vibrations, in the form of sound waves, as music? Why does that experience of “music” make us feel things? Sad, happy, warm, awe-struck, or anything else on the wide-ranging human palette of feelings.

Why does this particular wavelength of light generate the experience of an intense red, and how come it makes us feel relaxed, or on edge? In summary: why does being exposed to physical phenomena give us an “experience”? Both as a general question and also, for a given single input, why that experience in particular?

Seth thinks that’s the wrong line of inquiry. He wants us to think about the “real problem” of consciousness. By his telling, that means that the goal of science should be to explain, predict and control the phenomenological properties of consciousness. Or as he writes in an article for Aeon:

…how to account for the various properties of consciousness in terms of biological mechanisms; without pretending it doesn’t exist (easy problem) and without worrying too much about explaining its existence in the first place (hard problem).

First up, his definition of consciousness. Here it’s basically “any kind of subjective experience”. But, importantly, the experiences are indeed subjective.

We do not perceive things 1:1 as they are. This doesn’t take too much introspection to figure out even without the aid of a complicated book. Seth describes our perception of reality as a “controlled hallucination”. There are other types of hallucinations - LSD inspired ones for instance. But the shared hallucination that (almost) all of us agree on at a given point in time is the one that we choose to call reality.

We experience colours when what is actually out there is merely photons. We experience music when what is out there is merely vibrations. There surely is a “truth” out there, but we have no direct access to it. Rather, our brains create our conscious experience from its inputs. This explains why we can be predictably mistaken about things. Think for instance of visual illusions. The white & gold / black & blue dress that broke the Internet a few years ago is an obvious example.

…our perceptual experiences of the world are internal constructions, shaped by the idiosyncrasies of our personal biology and history

Why have we evolved to do this presumably computationally intense and certainly extremely unintuitive hard mental work? Because, of course, it helps us get through life.

The discomfort we feel when hungry is useful from a survival, and hence evolutionary, point of view. It motivates us to seek out food. Particularly energy-dense food that in times gone past would have given us the most survival bang for our buck. The enticing taste of chocolate thus isn’t some random weird intrinsic property of its constituent molecules. It serves a function. Our conscious experience of its deliciousness motivates us to seek it out and consume it. In that way we can replenish our energy stores and stay alive.

Colours provide another useful example. The properties of the light reflected from the same object in dark surroundings compared to well-lit surroundings can be dramatically different. The actual stuff that hits our eyeballs - the sensory information - is thus very dependent on our environment. But it suits the way we work in the world for us to understand that it’s the same object. We still feel that the blue shirt we previously saw in the midday sun is blue, even when the actual sensory information we receive looking at it in the dusky evening is very different.

We perceive the world not as it is, but as it is useful to us

The brain can be thought of as a prediction machine that takes the inputs it receives from our senses and produces a guess of what caused those inputs, of what is out there. This is necessary because knowing what is out there in our environment is a fundamental requirement for us to survive. With no way to access raw truth, our brain has evolved the ability to make educated guesses, or, to put it more formally, predictions. Our conscious sense of reality is essentially those predictions.

Our impression of reality is constantly updated as the brain receives further sensory input. The aim is to reduce prediction errors, likely via Bayesian reasoning principles. Our inner “reality” thus shifts as we subconsciously receive input that is incompatible with our present perception of reality.
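
To make that concrete, here’s a toy sketch of the idea in code - my own illustration, not a model from the book - where a prior “best guess” gets nudged towards each noisy sensory reading in proportion to how much the senses are trusted:

```python
# Toy sketch of perception as Bayesian prediction-error correction
# (my illustration, not Seth's actual model): a prior "best guess" is
# nudged towards each noisy sensory reading, weighted by reliability.

def update_belief(prior_mean, prior_var, observation, sensory_var):
    """One-dimensional Gaussian update: combine prediction and observation."""
    prediction_error = observation - prior_mean
    gain = prior_var / (prior_var + sensory_var)  # how much to trust the senses
    posterior_mean = prior_mean + gain * prediction_error
    posterior_var = (1 - gain) * prior_var
    return posterior_mean, posterior_var

# Prior: "that shirt is mid-blue" (arbitrary units); dim-light glimpses say darker.
mean, var = 0.7, 0.1
for reading in [0.4, 0.45, 0.5]:
    mean, var = update_belief(mean, var, observation=reading, sensory_var=0.3)
    print(f"belief: {mean:.2f} (uncertainty {var:.3f})")
```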

That impression increases in confidence when the stuff coming in through our sense organs is congruent with our prior beliefs. But, be honest, we’re all familiar with having seen or heard something that later turned out to be entirely different from how we perceived it at the time. That’s because what we perceive is nothing more than our best guess from incomplete information. It’s impossible to guess right every time when you’re not omniscient.

This mechanism can also lead to our many famous biases. The prediction machine side of things is why we’re more likely to perceive things that we expect in advance to perceive; yet another phenomenon that makes clear the lack of objectivity in our conscious experience.

For those who follow the active inference theory it might be that the actions we “choose” to take in the world are those that will minimise our predictive errors. We take actions to confirm that our predictions are true, and when it turns out that they are not then we update our predictions.

There are plenty of potential implications of this telling.

  • Our sense of “self” is another of these controlled hallucinations, which, amongst other things, explains why it’s so hard for philosophers to pin down. The self is a mix of our brain’s predictions, beliefs and memories. This explains why people who suffer issues with memory can have their sense of self affected.
  • The same with free will. Even the most ardent free will skeptic might admit to having a sense that they have free will even if they objectively do not believe that can be the case. But it’s another hallucination, “designed” to provoke us into behaviour that is to our evolutionary benefit. Even if in reality there was no real possibility that we would have done things differently in the past, the feeling that we could have done so provides a feeling that we’re able to do things differently the next time if it suits our survival to do so. We can learn.
  • Our emotions don’t cause our physical reactions. Rather it’s the other way around. Our heart doesn’t beat faster because we’re scared. We consciously feel fear because our heart is beating faster, and the feeling of fear is a way to motivate us to actions that’ll help us survive, such as running away.

This leads us to Seth’s “beast machine theory”. For us to survive and thrive, it’s important not only to be within a safe external environment, but also that certain interoceptive signals - those received based on our body temperature or heart rate for example - are maintained within certain value ranges.

If these inner indicators stray into more dangerous territory, the brain provides phenomenological hallucinations - more commonly described as emotions and moods - that provoke us to take actions that return the signals it’s receiving to normal, safe values. For us beast machines, regulating our inner state and integrity is of a higher priority than accurately perceiving things.

There’s a social side of things too. We alter our behaviour based on our perception of what other people might be perceiving about us. Our identities often reflect what other people think of us.

We’re also capable of learning from and about other entities. And not only those from our own species; most people intuit that at least some animals might be conscious. And in the current AI-enthusiast times, some people wonder whether at some point machines might be too. Seth is skeptical of the latter, although doesn’t entirely rule it out.

In any case, it’s important that we do try to figure the parameters of consciousness out - whether the potential experiencer is made of meat, silicon or something else - because as soon as something has consciousness then it gains a moral status such that we should take care to minimise its suffering.

…the entirety of human experience and mental life arises because of, and not in spite of, our nature as self-sustaining biological organisms that care about their own persistence.

For those who prefer videos to books, the author has given a TED talk on the subject too.


The Word of the Year 2023 winners are in

Looking at the various dictionaries' Word Of The Year 2023 winners - let’s hope no good words come out in the next 10 days hey? - I see they’ve been almost as distracted as I have by the robot brains.

Dictionary.com focuses on a key limiting factor of today’s automated chatters: their ability to answer you with fiction, indistinguishable from fact in its presentation. They go with ‘hallucinate’. Which isn’t a new word of course, but OK, it was really only this year that journalists, commentators, and everyone of a certain type who you follow on social media started to write en masse about computers having them:

(of artificial intelligence) to produce false information contrary to the intent of the user and present it as if true and factual.

Cambridge goes for something similar. In fact something exactly the same: ‘hallucinate’. Its definition now includes the following:

When an artificial intelligence hallucinates, it produces false information

I don’t know that I love the way either definition is phrased, but they’re the word experts I suppose.

Collins is on the same theme, but a bit more general. Their word of the year is ‘AI’. Which is actually an abbreviation for Artificial Intelligence, but no need to be picky.

the modelling of human mental functions by computer programs

We have certainly successfully modelled the human mental function of not telling the truth, as confirmed twice above.

Oxford goes with something totally different, something for the youngsters: ‘rizz’. This word was new to me at least in 2023. Perhaps an abbreviation of charisma, it’s a noun:

style, charm, or attractiveness, the ability to attract a romantic or sexual partner.

Avid followers of bad things might remember a crossover technology featuring both rizz and the risk of hallucinations in the guise of rizzGPT, which was to be a weird thing you put on your glasses that offers “real-time Charisma as a Service (CaaS)”.

Merriam-Webster goes with “authentic”. It has 5 possible meanings in their telling, none of which feel particularly new to this year. The first is as follows:

not false or imitation: real, actual.

Although this choice feels a bit dated to me - didn’t we go through a bout of authenticity a few years ago? - maybe it’s not so unrelated to the ones above; an antithesis to the other dictionaries' theses.

Merriam-Webster thinks that the increase in interest in the definition of the word authentic is “driven by stories and conversations about AI, celebrity culture, identity, and social media” - also known as conversations about hallucinations, AI and people trying to demonstrate their rizz.


Good news: it looks like federating is still in fashion. Flipboard has started federating a few of its publishers' accounts via ActivityPub, and Threads seems to be actually following through on its initial promise - I’m actually a little surprised - and is at least testing the feature.

This is a controversial move to some folk, especially with regards to Threads. Personally I think it’s absolutely a good thing, at least in theory. But it’s surely not risk free. Undoubtedly one needs to do what one can to avoid any new federating instance successfully following the infamous path of the embrace, extend, extinguish strategy.


🎶 Listening to Stumpwork by Dry Cleaning.

I didn’t really know I needed someone sardonically reading out what appears to be their unfiltered thoughtstream deadpan over a background of various guitar-and-rhythm styles in my life but it turns out I do.

It’s oddly soothing. I guess knowing that other people have the same incessant, random, sort of dull monologue in their heads as they go about their day is reassuring. Kudos to these folks for making it into a real art form.

Everyone calls it post-punk, which I’m not sure I fully understand the meaning of yet, but it’s very good whatever it is. Think unrefined poetry being read out by someone who doesn’t really care about it, but again, good.

Their first album, New Long Leg, also recommended, at times brought to my mind the idea of someone reading out their Facebook timeline, unfiltered, end-to-end. This one feels more about over-sharing the constant array of often meaningless things that pop into one’s mind in between a bit of tedious small-talk with neighbours. But again, in a very good way.

Stumpwork album cover

📚 Finished reading Zero Days by Ruth Ware.

The book opens with Jacintha Cross sneaking around a corporate office, seemingly conducting some kind of heist. It turns out she’s a penetration tester so even if things did go wrong the consequences wouldn’t be all that bad. Except today, when Gabe - her partner in both life and business - ends up in some real dire straits. And most people, the police included, seem to think she’s to blame.

This felt like a fairly standard thriller, but its themes of hacking, social engineering and the like appeal to me, so I was fairly into it. There were some twists and turns to get excited about, although, despite not normally being great at doing so, I felt like I could see many of them coming substantially in advance.

Nonetheless it was compulsive enough to definitely need reading right to the end. Plus I felt like I learned a couple of techniques for confusing people into letting me into places I shouldn’t really be in that, who knows, might come in handy one day.

Cover of Zero Days book

Elon Musk tried to create a ‘politically neutral’ - also known as ‘politically conservative’ once we translate from Elon-speak - AI called Grok, now available to folk who pay for Twitter.

But it turns out that even by training it on Twitter data he couldn’t make it horrible enough to do anything but frustrate his fanboys, who seem desperate to train hate into it. It’s simply not ‘based’ enough to satisfy their unfortunate desires.

From Forbes:

Grok has said it would vote for Biden over Trump because of his views on social justice, climate change and healthcare. Grok has spoken eloquently about the need for diversity and inclusion in society

What does Grok think about one of Musk’s favourite stupid turns of phrase - the Woke Mind Virus? Via Ed Zitron, the computer brain had this to say:

But let’s be real here, the “woke mind virus” is a load of BS. It’s a term that’s been used to dismiss and belittle important conversations about social justice and equality. It’s a way for some to avoid confronting the uncomfortable truths about the world we live in and the work that needs to be done to make it a better place for everyone.


Maryam Moshiri falls victim to the same self-inflicted disaster that I’m sure all of us remote office workers have suffered at one time or another, except in this case her Zoom-style misdemeanours were being broadcast to…the entire world.

From my favourite edition of BBC News last week:

I mean, the first story she went on to talk about apparently involved Boris Johnson at the Covid inquiry so it’s perfectly understandable.

The Guardian takes the opportunity to re-hash some other BBC classics. Here’s weather-person Tomasz Schafernaker reacting to some light jibing from his colleagues with similarly bad timing:

There was the time when Guy Goma went for his job interview at the BBC but was somehow mistaken for IT expert Guy Kewney and ended up being interviewed on air. Give him maximum credit for bluffing his way through it:

And who could forget the accidental mixup between footage of Scotland’s First Minister Nicola Sturgeon and a representative of an entirely different species?


Currently on consecutive day 9 of wearing my Christmas jumper.


Quiver Quantitative tracks and shares a lot of stock trading information, enabling both reporting on and the use of trading strategies that might previously have been relatively hard for mere retail investors to put together.

For example here’s a dashboard showing which US politicians are buying which stocks. Apparently following what they do has been quite profitable in recent times.

I built a trading bot that buys stocks that are being bought by politicians. It is up 20% since it launched in May 2022. The market has been flat during the same time period.

Other datasets include spending on lobbying, which companies the folk at r/wallstreetbets are talking about, or Google Trends search interest in various companies.
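
To give a flavour of how simple the mechanics of that kind of bot could be, here’s a minimal sketch of the idea. It isn’t the author’s actual bot; the Quiver endpoint, the response fields and the broker object are assumptions for illustration only.

```python
# Minimal sketch of a "copy the politicians" strategy. The Quiver endpoint,
# response fields and the `broker` object are assumptions for illustration,
# not a tested integration.
import requests

QUIVER_URL = "https://api.quiverquant.com/beta/live/congresstrading"  # assumed endpoint
API_TOKEN = "YOUR_QUIVER_TOKEN"

def recent_politician_buys() -> set[str]:
    """Return tickers that politicians have recently disclosed buying."""
    resp = requests.get(QUIVER_URL, headers={"Authorization": f"Bearer {API_TOKEN}"}, timeout=30)
    resp.raise_for_status()
    # Assumed response shape: a list of dicts with "Ticker" and "Transaction" keys.
    return {row["Ticker"] for row in resp.json() if row.get("Transaction") == "Purchase"}

def rebalance(broker, budget_per_ticker: float = 100.0) -> None:
    """Place a small buy for each ticker (hypothetical broker API)."""
    for ticker in sorted(recent_politician_buys()):
        broker.buy(ticker, notional=budget_per_ticker)
```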

(None of this is a personal recommendation to adopt any of these strategies of course!)


Governments spying on Apple, Google users through push notifications - US senator

I don’t think this is surprising, but it’s now “official”: because push notifications on iOS and Android all have to go via Apple and Google, those companies are in a position to know which users are getting which notifications. And thus also to provide that info to people who ask them for it.

…the records that governments can obtain from Apple and Google include metadata that reveals which apps a person has used, when they’ve received notifications, and the phone associated with a particular Google or Apple account.

Several government agencies, both inside and outside of the US, have successfully requested this info. Apple claims that until Senator Wyden brought this topic up they were forbidden from sharing any information about their ability to do this, let alone how often it happened.


TIL: When playing fruit machines, legally it’s the person who presses the button that gets the winnings rather than the person that puts the money in, at least in the US.

So I guess be careful who you let have the fun of spinning the reels, no matter how deep into a gambling session you are. Otherwise you risk the fate of Jan Flato:

Jan Flato put $50 into a video poker machine at Florida’s Seminole Hard Rock Casino, and had his lady friend push the button for good luck.

Flato’s money and Marina Navarro’s hand won $100,000, but Flato didn’t get a dime.

They are no longer friends.


The last of the big tech giants recently released its version of a large language model generative AI assistant - Amazon’s somewhat dystopian-sounding “Q”. Not to be confused with OpenAI’s supposed-to-be-secret, super-powerful AI model codenamed Q*.

Amazon Q is aimed at businesses, especially those that already use Amazon technology as part of their operations, rather than entertaining your desires to hear robot-generated fan-fic. Perhaps more of a rival to Microsoft Copilot than ChatGPT.

By connecting it up to your existing data sources:

Business users—like marketers, project and program managers, and sales representatives, among others—can have tailored conversations, solve problems, generate content, take actions, and more.

Somewhat predictably, just 3 days later it hit the news due to leaked documents suggesting that it is “experiencing severe hallucinations and leaking confidential data”.


Being a data analyst type of person I am a fervent believer in the below concept, but until now I didn’t realise it had a name.

Per Wikipedia, Twyman’s law states that:

Any figure that looks interesting or different is usually wrong.

That is to say if you think you found something mind-blowingly interesting or revolutionary in your data, the sad truth is that most often you just made some kind of mistake. Whilst we should of course approach each situation with an open mind, the same default principle might be adopted when reviewing the work of others.
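
Here’s a toy example of the sort of thing Twyman’s law warns about - my own illustration, not from the article - where the “interesting” number turns out to be nothing more than a duplicated-row bug:

```python
# Toy illustration of Twyman's law in practice (my example): a suspiciously
# interesting revenue figure turns out to be a duplicated-row artefact.
import pandas as pd

orders = pd.DataFrame({
    "order_id": [1, 2, 2, 3],            # order 2 duplicated by a bad join upstream
    "revenue":  [10.0, 20.0, 20.0, 15.0],
})

headline = orders["revenue"].sum()                                     # 65.0 - "revenue is up!"
checked = orders.drop_duplicates(subset="order_id")["revenue"].sum()   # 45.0 - the boring truth

if headline != checked:
    print(f"Interesting figure ({headline}) was a data bug; the real value is {checked}.")
```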


Henry Kissinger, War Criminal Beloved by America’s Ruling Class, Finally Dies

Rolling Stone isn’t holding back in their obituary of Henry Kissinger, who died last week at the age of 100.

Rolling Stone tells a story which situates responsibility for the deaths of millions of people on Kissinger, in his roles as US national security advisor and secretary of state to Presidents Richard Nixon and Gerald Ford.

It’s always valuable to hear the reverent tones with which American elites speak of their monsters.

One major part of this was his role in deliberately undermining the potential for an earlier agreement to end the catastrophic Vietnam War, seemingly on the basis that it might make it harder for his preferred candidate to win the US presidency.

Every single person who died in Vietnam between autumn 1968 and the Fall of Saigon — and all who died in Laos and Cambodia, where Nixon and Kissinger secretly expanded the war within months of taking office, as well as all who died in the aftermath, like the Cambodian genocide their destabilization set into motion — died because of Henry Kissinger.

For it seems like power was Kissinger’s primary motivation. Power for himself, for his President, and for America, at any cost.

The point was American geopolitical dominance, something measured in impunity and achieved by any means necessary.


The world is on track for a “hellish” 3C of global heating, the UN has warned.

Apparently our current efforts to combat global warming have us on track for a 3-degree increase in global temperatures by the end of the century.

I feel like the news is always the same, only the number of degrees increases point by point, in a very unreassuring manner. Which is fair.

I’m just going to assume next year’s report lets us know that we’re en route to 3.5 degrees, and finds an adjective somehow even more unnerving than “hellish” to sum up our future lives. Even achieving net zero by 2050 would still result in a 2-degree increase, a scenario that was described in rather doomy terms just a few years ago.


📚 Finished reading Upgrade by Blake Crouch.

I read and enjoyed “Recursion” by the same author earlier this year, so was enthusiastic to try this, his latest novel, out.

It’s set in a somewhat dystopian but very recognisable world of the presumably near future. We haven’t solved our environmental problems - in fact parts of Manhattan, amongst other places, are unusably flooded. But technology has advanced a bit, in particular our ability to edit genes.

Although gene editing was outlawed following a misguided attempt by top scientist Miriam Ramsay to enhance the resistance of rice to a particular blight. Best of intentions maybe, but there were of course unforeseen consequences, which led to mass starvation, hundreds of millions of deaths, and a ban on genetic engineering. Miriam killed herself.

The ban is enforced by the Genetic Protection Agency, where we find our protagonist, Logan Ramsay, working. Logan is Miriam’s son, who seems to be working there more out of a sense of guilt for the impact his mother had on the world - he himself was involved enough to go to prison for a while - than a love for the job.

One day, a raid goes wrong and he’s exposed to an unknown virus. The symptoms are agonising at first, but he recovers his health soon enough. And more besides. Suddenly he feels stronger, more intelligent, more sensitive, with a better memory. He can even beat his daughter at chess. Until, imprisoned for genetic self-engineering, he no longer has the opportunity to.

Then a figure from his past life turns up, also stronger, fitter and cleverer than either of them had suspected. The problem is that they strongly disagree about what they should do about it. The potential consequences of the decision could hardly be higher.

To get to the bottom of that requires resolving several deep ethical problems. What risks do we have the right to take in the name of a potentially better future? And even: what does it mean to be human? Not that you’ll need an ethics PhD to understand the situation - honestly it’s mostly an action thriller, substantially less intellectually demanding for me than the last work of fiction I read. But the conundrum is real, and adjacent to one that humanity is already facing.

I’m not yet sure how I felt about the end of the epilogue, but was fully engrossed throughout the main story.

Book cover for Upgrade