The Braindump Blog

The IEA's ten steps to mitigate the sky-rocketing price of oil

Here’s the full list of recommendations from the International Energy Agency (IEA) advising its member countries on how to reduce their demand for scarce and expensive oil following the impact of Trump’s presumably illegal war against Iran.

  1. Work from home where possible to save petrol.
  2. Reduce highway speed limits by at least 10km/h to reduce fuel usage.
  3. Encourage public transport to reduce oil demand.
  4. Limit car access to roads in large cities through a number-plate rotation scheme.
  5. Increase car sharing.
  6. Encourage efficient driving for commercial vehicles through load optimisation and vehicle maintenance.
  7. Divert LPG use from transport to preserve it for essential needs like cooking.
  8. Avoid air travel where possible.
  9. Encourage electric cooking and other options to reduce reliance on LPG.
  10. Help industrial facilities switch between different petrochemical feedstocks to free up LPG.

Basically: travel less and find alternatives to fossil fuel use. Obvious stuff, but it’s not at all obvious to me that we’ll do it. Most of those feel like good advice we should all be following anyway given the ever-worsening climate disaster we’re stuck in. After all, the world’s response to the Covid-19 pandemic did slightly decrease damaging carbon emissions - but humanity soon went back to its planet-destroying ways.


The 'OSINT Techniques' book is surely the bible for all things Open Source Intelligence

📚 Finished reading OSINT Techniques: Resources for Uncovering Online Information by Michael Bazzell and Jason Edison.

This is surely the absolute bible of approaches and methods for anyone interested in pursuing the art and science of Open Source Intelligence, aka OSINT, as a hobby or a career, written by a true expert in the field. Note that it’s very much a “how to do” rather than “interesting stories about” type of manual, although a few short case summaries are presented as examples.

I think I read somewhere that some university courses on the topic use it as a textbook, and I can see why. It’s not necessarily the cheapest book to purchase but if you’ve an interest in the topic then reading its nearly 600 pages has got to be worth it.

Part of the book is all about setting up your computer as a virtual environment suited to safe and effective OSINT work. This involves running a virtual Linux machine.

Once you’ve got that up and running, he advises on how to install and use countless OSINT-adjacent tools. It must be said that, if you want to follow this section, a certain amount of nerd computer skill (or willingness to learn) is going to be useful. The author advises against blindly copying and pasting the commands he suggests - which is in any case necessary, because many of these tools are updated, changed, or deleted every day, as are the information sources some of them rely on. Even with some experience of the technologies involved it took me some time to figure out what to change to get everything installed.

But it was all perfectly doable. And once you’ve done that, well, you’ve presumably completed a good amount of the preparation you need to succeed - and possibly improved your computer skills in general. Which is important, because tomorrow new tools will appear, old tools will stop working, new data will surface, and old data will become more restricted. It is an ever-evolving field, to say the least.

Some of the data sources it mentions are a bit US-focussed, especially perhaps the people search sites. I’m guessing that’s a combination of where the author lives and the fact that the USA has fewer data protections than my own country, so there’s more supposedly “legitimate” data floating about breaching each citizen’s privacy. But we live in a global online world - after all, we all use the same social networks, for better or worse - so there’s more than enough to get on with no matter where you live or where you’re investigating.

And besides, part of the point of the book is to make you self-sufficient, to teach you the flexible skills you need to work with tools and data far beyond the many presented directly in the book.

It also became quickly apparent to me that even performing a pale imitation of the sort of rigorous investigations the author’s company conducts is at least a full-time job. But even for those of us who are not in a position to follow that route there’s a lot of good stuff to learn here, in terms of how to approach the more minor investigations that you might like to pursue (or even in gauging how much interest you have in the field itself). And also, conversely, how much data there probably is floating around about you - unless you have gone extremely out of your way to protect yourself (he also sells a book about “extreme privacy”).

In fact, why not use yourself as a consenting first OSINT test subject? The results may not please you.

Buying the book also gives you access to some special tools and scripts he developed to make your life easier. The author’s website actually has a ton of great content and tools available for free that you can use directly for your investigations, or at least take a look at to see if this book feels right for you. I can’t imagine a better intro to the actual practice of the topic than this book though - as long as its length and detail don’t prove too intimidating to you. But “detail” is surely the name of the game in OSINT work.

Glancing over the section names included in the book might provide some insight as to what’s included.

100% recommended for anyone interested in the practical side of the OSINT field.

A book cover titled OSINT Techniques: Resources for Uncovering Online Information features redacted text bars and a minimalist black and white design.

Timothy Snyder on why the No Kings protests are important

Justly famed author Timothy Snyder on his participation in the upcoming US No Kings protest.

Everything is at stake.

  • Prosperity. The wealth of workers is handed to oligarchs.

  • The Constitution. One person seeks unrestrained power.

  • Justice. The innocent are punished while the guilty believe in impunity.

  • Peace. Americans kill and die in a war whose purpose is to keep us down.

  • Democracy. Those in power seek to eliminate the right to vote.

It’s worth turning up. Even if each protest doesn’t immediately cause the policy it’s protesting to reverse, there are more subtle ways in which it can have a positive effect on the world.

To take the headers from his list:

  • Protest changes the atmosphere.
  • Protest summons more protest.
  • Protest keeps us organized.
  • Protest affirms freedom.
  • Protest wins elections.
  • Protest brings joy.
  • Protest changes us.

Much more data on “does protest work?”, and what circumstances make it more likely to, can be found here


📚 Finished reading Cult of the Dead Cow: How the Original Hacking Supergroup Might Just Save the World by Joseph Menn.

Although I’m not certain how famous they are these days, the hacker group “Cult of the Dead Cow” was greatly influential within hacker culture, and the online world in general, at one point. Including on me. I certainly knew and admired the group - and the tools they shared with the world - when I was more enmeshed in that world, back in the day.

Starting off as a miscellaneous group of pranksters with various technical and non-technical skills, exploring and defining online life in the early days of the internet (and before), they grew in size and influence, more recently turning to campaigning and building tools that preserve the freedoms of the internet and beyond - freedoms we should all treasure - against constant big-tech and state-driven attacks. In the meantime, despite their occasionally legally questionable origins, several of them have risen to positions of power within various technology or government entities that value their skills - a controversial move to some, of course.

The members of the CDC themselves have come and gone over the years. But the group does still exist today, still releasing their text files, well, on and off. In fact their current website looks very much like the t-files of the BBS era. But it now seemingly operates more as a campaigning activist group seeking to defend human rights online and beyond than as a continuation of their antics of yesteryear, at least publicly.

In an increasingly technological world, hacking, both in a technical and cultural sense, can and surely should be used as a powerful force for good.

A book cover for Cult of the Dead Cow by Joseph Menn, featuring a CDC logo and praise from The New York Times.

‘Record-shattering March temperatures in Western North America virtually impossible without climate change’ confirm scientists regarding the heatwave that affected the US this week.

The consequences are extraordinarily severe:

Heatwaves are the deadliest type of extreme weather, with hundreds of thousands of people dying from heat-related causes each year. Extreme heat is most deadly earlier in the year, when people have not acclimated to the heat, and vulnerable people are exposed to high temperatures for the first time.


An interesting proposal from the government to help Britons reduce fuel consumption and hence costs given the massive war-driven price increases on it:

They have already begun contingency planning in case the conflict is protracted, including considering lowering speed limits to minimise fuel consumption

They’re not alone - the IEA has also suggested similar:

In the face of oil supply shortages caused by the closure of the strait of Hormuz, the International Energy Agency (IEA) suggested the world should use ovens less and cut back on car usage to increase resilience


A short visual history of US interference with Iran

Ted Rall’s cartoon summarises just a few reasons why some of the Iranian population might on occasion express some negative sentiment towards America. And all this is of course long before the current war, which of course was not in any way instigated by Iran.

But hey, as illustrated here, it’s not like the US didn’t already have a history of violent interference against the country.

This is of course not to suggest Iran has had no agency in the relationship throughout history, or that their current political regime was anything other than truly hideous to its own citizens. But, as always, two things can be true at once.

Auto-generated description: A political cartoon depicts the United States' historical involvement in Iran and Iraq, highlighting events such as the 1953 coup, funding of SAVAK, and the Iran-Iraq War, culminating in military actions in the 1990s.

📚 Finished reading The Cut Throat Trial by S J Fleet.

Three boys are on trial for committing the brutal murder of a seemingly random older man. There’s no denying that they’re involved, but each of them claims that it was the other two who actually did the act. This is what is known in the real world as the titular “cut-throat” trial.

A compelling courtroom drama written by someone who, being a barrister in real life, really knows how it all works - the Secret Barrister, who has written a whole lot of interesting non-fiction too.

Auto-generated description: A shadowy figure walks purposefully on a rain-soaked street, illuminated by blue lights, with bold text highlighting The Secret Barrister and The Cut Throat Trial.

🎶 Listening to The Wild, the Innocent & the Raging by Iluka.

This one came to my attention after hearing a particularly resonant set of lyrics blare out from the short-video social networks around me.

While the planet burns for the billionaires
Takin' all they please, as if the world was theirs!
When the last tree’s cut and the streams run bare
Let ‘em eat their money, what’ll they do then?

Well, quite. That’s a verse from Woman Gone Mad.

Many of the other songs also tackle a ton of social issues, especially those most often faced by women - so, by definition, ones that should be understood and taken far more seriously by all of us. A masterful combination of the personal and political.


🎶 Listening to The Clearing by Wolf Alice.

Checked this one out after hearing one of its tracks, White Horses, played all over the place. But my favourite has to be the one that I and much of my social group identifies with to varying, only occasionally problematic, extents: Sofa.


Noted 6 decades ago: '...extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people'

We can’t say we weren’t warned.

From Joseph Weizenbaum, writing in 1976 (I think):

I had not realized…that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people. Indeed, ELIZA’s code had not been designed to evoke this reaction in the first place. Upon observation, researchers discovered users unconsciously assuming ELIZA’s questions implied interest and emotional involvement in the topics discussed, even when they consciously knew that ELIZA did not simulate emotion

Weizenbaum was writing about one of the world’s first chatbots that he created in 1966, Eliza. It had the persona of a doctor and would correspond in text format with humans in the following manner:

Human: Well, my boyfriend made me come here.

ELIZA: Your boyfriend made you come here?

Human: He says I’m depressed much of the time.

ELIZA: I am sorry to hear you are depressed.

Human: It’s true. I’m unhappy.

ELIZA: Do you think coming here will help you not to be unhappy?

Basically rephrasing whatever the user said back at them as a question.
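As a toy illustration of that rephrasing trick, here is a minimal sketch in Python - my own reconstruction of the general pattern-matching idea, not Weizenbaum's actual program (which was written in MAD-SLIP with far richer scripts):

```python
import re

# Swap first-person words for second-person ones before echoing them back.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

# Pattern -> response-template rules, checked in order; the last is a fallback.
RULES = [
    (re.compile(r"i'?m (.*)", re.I), "I am sorry to hear you are {0}."),
    (re.compile(r"(.*) made me (.*)", re.I), "{0} made you {1}?"),
    (re.compile(r"(.*)", re.I), "Please tell me more about {0}."),
]

def reflect(fragment):
    """Flip pronouns so the echoed fragment reads back naturally."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(statement):
    """Return an ELIZA-style response: match a rule, echo the input as a question."""
    statement = statement.strip().rstrip(".!")
    for pattern, template in RULES:
        match = pattern.match(statement)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
```

With just those three rules, `respond("Well, my boyfriend made me come here.")` gives back “Well, your boyfriend made you come here?” - essentially the first exchange of the transcript above.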

Nonetheless, he noticed it to be “surprisingly successful in eliciting emotional responses from users who, in the course of interacting with the program, began to ascribe understanding and motivation to the program’s output”. You can give reproductions of it a go online now on sites like this and this.

And now in 2026…


📚 Finished listening to V for Vendetta by Steve Moore.

This V for Vendetta is the novelisation of the film V for Vendetta, which is the film of the graphic novel V for Vendetta. Which I listened to in audio form. So there’s a lot of format switching going on in that journey. Nonetheless, I still very much enjoyed it, and plan to give it a whirl in its other available forms at some point.

It must be said that it probably wasn’t the greatest idea for me to be multitasking this book with Julia, given the similarities in plot and starring characters. But they are distinct enough that any genre fan should give both a go.

Both famously feature a female citizen’s growth and struggle against a totalitarian regime. However the regimes concerned are coded as opposing ends of the left-right political spectrum (remember that?). The totalitarian world of 1984 gives off Communist vibes. Here, the totalitarian Norsefire regime has arisen in a crisis-laden Britain by promoting neo-fascist, racist, sexist, Christian fundamentalist, and homophobic policies, using the propagandising politics of fear to corruptly gain power and wealth whilst attempting to wholly eradicate immigrants and other disfavoured minorities from the country.

I know, I know, it’s impossible to imagine anything like that happening here now (umm, eergh, oh no). Well, let us hope, if we do continue on this trajectory, that as a population we too remember, remember at least the metaphorical occurrences of V & Evey’s November 5th.

Auto-generated description: A movie poster for V for Vendetta features a masked figure, a woman, and a large crowd with bold text reading FREEDOM! FOREVER!

One reason LLMs hallucinate is because we incentivise them to give any answer over saying they don't know

A paper by Kalai et al “Why Language Models Hallucinate” gives one explanation as to why large language models such as ChatGPT, no matter how advanced, tend to “hallucinate” aka confidently tell you things are true that are not true.

A pretty basic reason at that. It seems that the way the models are trained and evaluated tends to be on a “did it answer this question correctly - yes or no?” basis. So when a model tries to optimise its score, it does exactly what we tell school kids to do on exam questions they don’t feel confident they know the answer to. It makes a best guess. Or simply any guess. When taking a multiple-choice exam you’re always better off ticking something at random rather than nothing, right?

The solution to this particular issue is of course to develop a way of penalising incorrect answers over “don’t know”, which the authors go into. Question answering isn’t really a binary matter. Some “not 100% correct answers” are better than others.
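The incentive at work is just expected-value arithmetic. Here’s a back-of-envelope illustration of my own (not a calculation from the paper):

```python
def expected_score(p_correct, right=1.0, wrong=0.0):
    """Expected score of hazarding a guess, given probability p_correct of being right.
    Abstaining ("I don't know") always scores 0 for comparison."""
    return p_correct * right + (1 - p_correct) * wrong

# Binary grading: a wrong answer costs nothing, so even a wild 25%-confidence
# guess has positive expected score and beats abstaining (which scores 0).
assert expected_score(0.25) > 0

# Negative marking (e.g. -1/3 per wrong answer on a four-option question):
# now guessing only pays off above a 25% confidence threshold.
assert expected_score(0.20, wrong=-1/3) < 0   # better to abstain
assert expected_score(0.30, wrong=-1/3) > 0   # confident enough to guess
```

Under the binary scheme a system tuned to maximise its score learns never to say “I don’t know”; only once wrong answers carry a penalty does abstention become the rational choice at low confidence.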

This isn’t the only reason that LLMs may promote falsehoods of course. Another handful the authors mention include:


A GUI for the Substack archiving process

We recently covered how to archive Substack newsletters to your local computer, as well as how to convert the resulting collection into an epub file you can read on your ereader if you want to.

That involved the use of a couple of command line tools, sbstck-dl and pandoc.

If you aren’t a fan of remembering commands to type in then here is a GUI that lets you point and click to get the same effect. Behind the scenes it uses exactly the same process and same commands as the textual workflow already described - you can see it building up the same commands you’d otherwise be typing as you fill in the boxes - but it saves you remembering exactly what those commands are.
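For reference, the textual workflow the GUI wraps looks something like the below. The URL is a placeholder and the exact flags are from memory - these tools change often, so check `sbstck-dl --help` and the pandoc manual before copying:

```shell
# Download every post from a newsletter as markdown files
# (hypothetical URL; verify flag names against the tool's current help output).
sbstck-dl download --url https://example.substack.com --format md --output ./posts

# Bundle the resulting markdown files into a single epub with pandoc.
pandoc ./posts/*.md -o newsletter.epub --metadata title="Newsletter Archive"
```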

It’s 100% vibe coded so of course buyer beware, but it is at least open source and hopefully simple enough to not be too dangerous.

Auto-generated description: A software interface of Substack Archiver is displayed, showing options for downloading and converting content, as well as settings for advanced options and output logs.

🎶 Listening to Bleeds by Wednesday.

This album seemed fairly universally acclaimed in many “best of 2025” lists. It also came up accidentally when I was searching for Wednesday the TV show. Very fortuitous, as it is as atmospherically good as claimed. I can’t say I’ve ever lived the small-town lifestyle in a deprived southern USA state with its associated ups and downs and everyday struggles - but I’m prepared to believe this genre-hopping album reflects the experience.

It also contains this absolutely unbeatable set of lyrics.

“We watched a Phish concert and Human Centipede. Two things I now wish I had never seen.”

No notes.


The Greens win the Gorton and Denton by-election by a whopping 12 percentage points

Well this is an unexpectedly pleasing result from yesterday’s Gorton and Denton by-election.

Auto-generated description: A by-election result chart for Gorton and Denton shows vote shares for five parties and their change since 2024.

Maybe there are fragments of hope to be had, even in 2026.

The Reform candidate, Goodwin, was a particularly loathsome man, even by the standards of the greedy, power-crazed, working-class-hating, obsequious friends of authoritarian billionaires that make up the average high-up in Reform PLC.

I’m grateful to the 71% of Gorton and Denton voters who opposed him for keeping him as far away from the levers of power as possible.


📚 Finished reading Julia by Sandra Newman.

This is a retelling of George Orwell’s extremely famous book 1984, but from the perspective of one of the main female characters of the original, Julia. She works as a mechanic in the Ministry of Truth, finding one way or another to get by under the outstandingly horrendous regime. She navigates the system pragmatically, if cynically, at least until such time as she becomes seemingly somewhat infatuated with the original’s rebellious protagonist, Winston Smith.

Readers of the original will know many of the events that this precipitates - but the shift in perspective that comes from the foregrounding of a woman, elucidating the distinctly gendered experience of the totalitarian regime concerned, makes it a compelling read.

A book cover titled Julia: A Retelling of George Orwell's 1984 by Sandra Newman features large black and red text.

The inventor of the world wide web implores us to remember that 'This Is For Everyone'

📚 Finished reading This Is for Everyone by Tim Berners-Lee.

This is partly a biography of possibly the only both famous and decent “guy in tech” that I’m aware of - Tim Berners-Lee, the inventor of the world wide web - and partly a manifesto for his original vision for the web and what we need to do to save it from the actions of the extremely indecent “guys in tech” who are doing their level best to destroy the very intent of the web.

Berners-Lee basically invented the web as a somewhat random project whilst working at CERN. No-one seemed greatly interested at first, but, as we all know by the very existence of this website, people eventually became extremely interested.

He was, and is, a very idealistic gentleman and insisted on giving the technology away freely. No patents, no commercialisation, no restrictions on at least the underlying protocol.

Fundamentally, amongst other huge differences strictly in favour of the web, Facebook can ban me from participating in their ecosystem. No-one can ban me from participating on the web. Facebook of course greatly leverages the web itself, without paying Berners-Lee a cent.

By layering (relatively) easy-to-write hypertext pages on the already-existent internet, his dream was to unleash a new era of creativity and collaboration. Information wants to be free! You don’t need a degree in computer science to make a webpage. You don’t need a PhD to read one.

His dream came true. Well, at least for a while, and where it’s latterly failed at all it sure wasn’t his fault.

What went wrong? Well, it could be summarised as corporate capture by the surveillance capitalists. Contrary to the originating vision of huge numbers of equal peers communicating and collaborating across this license-free protocol, a few huge companies - Google, Meta, Amazon et al - have grown to almost entirely dominate the space.

Each of these maleficent players exploits their users' data for their own financial gain, each locks away valuable information in pay-only vaults, each designs their products to explicitly track you, advertise to you, exploit and manipulate you. Facebook of course provides the arch-example of this. But they’re all at it, all optimising for some form of “engagement” for private profit over any consideration of well-being. Algorithmic social media keeps important information in private silos, incentivises clickbait, deliberately enrages its users, spreads disinformation, and surveils you to extract and sell your personal data - all because this makes you stay within their private walls longer, hand over more information, and spend time, money and attention contrary to your own interests, all of which is more profitable for them. This could not be further from Berners-Lee’s vision of a hypertext paradise-for-all. And he’s here to explain the problem, and hopefully help us sort it out.

Especially as the era of AI has already clearly dawned. He is perhaps less doomy than some other folk as to the likely impact of AI on the web, but fully understands that there is a necessary fight to be had if we are to be left with any semblance of his vision, let alone much chance of AI improving the lives of the everyday person rather than just the lives of a few weird tech billionaires. The author understands that technology is not self-determinate; rather the social conditions under which it emerges and evolves govern whether it will be used for good or ill.

His favoured solution to the mess seems to address primarily the topic of data sovereignty and interoperability. You should own your data. You should control your data. It should be easy for you to share the bits of it you want to share with specific organisations.

To this end he has helped create a new free-to-everyone protocol called Solid. I had only very vaguely heard of it before, and I’m still not sure if it’s all that likely to solve every one of the web’s current woes, but I’m certainly going to dig a bit deeper into it given its esteemed provenance. You can certainly see the influence of his original vision for the web in how the Open Data Institute, an organisation he co-founded, talks about it:

It’s a bit like carrying all your data in a rucksack with lots of pockets. To access the data, different apps can only open the pocket you allow them to open, rather than taking the whole rucksack. The rest stays private.

Solid lets people take control of their data and combine it to achieve new results. It gives creators new collaborative tools while passing power back to users. It’s technology that returns the web to its original vision of serving people.

Read the book, join the fight!

Although I have to say, if I’d chanced upon his present company’s website - Inrupt - out of context I wouldn’t give it much of a glance. Phrases like “Agentic Wallets add AI for innovative, compliant, and hyper-personal experiences” rather set off tech-bro alarms in my head. But Berners-Lee’s original vision for the web was so beautiful - and if this book is to be believed remains his primary motivation - and his invention and gift to humanity so great, that I feel like it would be wrong not to give it every chance.

Auto-generated description: A book cover featuring This Is For Everyone by Tim Berners-Lee with a colorful circular design and references to him as the inventor of the World Wide Web.

From NPR:

The Federal Communications Commission (FCC) is urging broadcasters to air more “patriotic, pro-America” content in honor of the 250th anniversary of the signing of the Declaration of Independence.

This definitely doesn’t sound anything like the fictional world of Orwell’s 1984, or the non-fictional world of Russian state broadcasting of course…

Carr’s suggestions for today’s broadcasters also include starting each day with the “Star Spangled Banner” or the Pledge of Allegiance; introducing segments that highlight “local sites of significance” to national and regional history such as National Park Service locations; and airing works by canonical U.S. composers such as John Philip Sousa, George Gershwin, Duke Ellington and Aaron Copland.

Of course there’s a way in which more educational history and civics programming would in general be a good thing. But here I suspect the intent is more ‘we are at war with Eurasia so we have always been at war with Eurasia’.


Our American friends continue with their gratifying propensity to name their legislative bills with acronymic labels.

This time, from Democratic lawmakers in New Jersey with this important bill:

Named the Fight Unlawful Conduct and Keep Individuals and Communities Empowered act, the legislation, known by its blunt acronym, would expand residents’ rights under state law to sue immigration officials for unconstitutional conduct.


Companies are trying to force their employees to use generative AI

When a friend told me that her employer was monitoring its staff for generative AI usage I had originally assumed they were checking that their employees weren’t using it too much. Or something more similar to my own employer which had a policy such that they would check that we weren’t using “unauthorised” chatbots.

But no, they were being monitored to ensure that they were using it enough.

It seems like they are far from an isolated case. The huge consulting firm Accenture is now apparently tying decisions such as “do you get promoted?” to “are you using AI enough?”.

Accenture has begun monitoring staff use of its AI tools as part of how it decides top-level promotions, as consultancies push reluctant employees to adopt the technology.

Even if you personally find it entirely useless for your job:

One person familiar with the change at Accenture who was not directly affected said they would “quit immediately” if the change affected them. They and a second person both criticised the usefulness of the tools Accenture wants employees to use, claiming some of the tools were “broken slop generators”.

They will also consider firing you if you don’t embrace this brave new world:

Accenture has reduced its global workforce by more than 11,000 in the past three months and warned staff that more would be asked to leave if they cannot be retrained for the age of artificial intelligence.

But hey, you do get a new title!

The firm has since dubbed its employees “reinventors” in an effort to emphasise their ability to advise clients on AI.

Of course, if it is really the case that you can’t do your job effectively without this technology then sure, it makes some sort of sense. I’m sure you’d be looked upon poorly if, for instance, you refused to use tools such as a computer entirely. Although I would dearly hope these companies are doing everything in their power to train and assist their employees to develop the necessary expertise to use these tools as reliably as possible.

But if any sizable proportion of the staff themselves are telling you that the AIs provided simply don’t work, well, that seems entirely counterproductive and potentially dangerous - and more like an excuse to cut costs.

As noted previously:

AI doesn’t actually have to be better than us at our jobs in order to threaten our current livelihoods. It’s only necessary that our managers come to believe that an AI can do a just-about-adequate version of something akin to our work whilst costing less.


Researchers find that using X's algorithm shifts people's political views to the right

There are some very valuable insights in a paper recently published in Nature - “The political effects of X’s feed algorithm”.

The researchers carried out an experiment where users were either exposed to the standard algorithmic feed on the X social network, or a chronological one for 7 weeks. Their political views over time were measured, alongside their engagement level with the platform. Both changed for users exposed to the algorithmic feed.

Switching from a chronological to an algorithmic feed increased engagement and shifted political opinion towards more conservative positions, particularly regarding policy priorities, perceptions of criminal investigations into Donald Trump and views on the war in Ukraine. In contrast, switching from the algorithmic to the chronological feed had no comparable effects.

In numbers:

Among these participants, those assigned to switch to the algorithmic feed were 5.2 percentage points less likely to reduce their X usage than those who remained on the chronological feed (95% confidence interval (CI): 0.7, 9.7; P = 0.024).

They were 4.7 percentage points more likely to prioritize policy issues considered important by Republicans, such as inflation, immigration and crime (95% CI: 0.7, 8.7; P = 0.023).

They were also 5.5 percentage points more likely to believe that the investigations into Trump are unacceptable, describing them as contrary to the rule of law, undermining democracy, an attempt to stop the campaign and an attack on people like themselves (95% CI: 0.8, 10.2; P = 0.022).

They were 7.4 percentage points less likely to hold a positive view of Ukrainian President Volodymyr Zelensky (95% CI: 1.8, 13.0; P = 0.009).

Finally, they were 3.7 and 2.3 percentage points more likely to follow any conservative account (95% CI: 0.5, 7.0; P = 0.025) and any political activist account (95% CI: −0.1, 4.8; P = 0.061) on X, respectively.

Some reasons why:

First, we confirmed that the algorithmic feed is more engaging.

Second, the algorithm promotes political content and, within that category, prioritizes conservative content.

Third, the algorithm demotes accounts of traditional news media and promotes those of political activists.

Turning off the algorithmic feed for those originally exposed to it didn’t undo the effect. Why?

We found that the algorithm promotes conservative content and demotes posts by traditional media. Exposure to algorithmic content leads users to follow conservative political activist accounts, which they continue to follow even after switching off the algorithm, helping explain the asymmetry in effects.

Which leads to the conclusion that:

….initial exposure to X’s algorithm has persistent effects on users’ current political attitudes and account-following behaviour, even in the absence of a detectable effect on partisanship.

Once again, the supposed fears of a part of the alt-right that social media companies are “brainwashing” everyone with “dangerous” left-wing liberal views prove to be the exact opposite of the truth. Users of at least the X algorithm are having their viewpoints shifted towards those of the current extremely right-wing US administration.

As Ben Werdmuller writes:

This should be a wakeup call for politically-engaged funders and anyone who cares about civil society. It’s not that we need to have less conservative algorithms; it’s that whoever controls the algorithms has a disproportionate say over the electorate’s view of the world.


The Big Short retells and explains what led to the devastating 2007-2008 financial crisis

🎥 Watched The Big Short.

This is a surprisingly gripping film about a few renegade bankers back in 2007 who realised that the housing boom in the US was built on a set of extremely risky “subprime” mortgages to consumers - and that the derivative products the banks were creating and trading on the back of those same loans were in fact very risky, contrary to the common “wisdom” of every bank and ratings agency.

People familiar with the 2008 financial crisis - in many cases because they lived through it - will be very aware of what happened next. Whilst the bankers who identified this issue were generally laughed out of the room by their peers, they were proven correct.

The aforementioned bankers bet on the crisis induced by these irresponsible loans and derivative products and made vast profits for themselves and their institutions. But, as the film highlights towards the end, at great, often tragic, cost to the vast swathe of the population who were not rich and privileged bankers.

It’s based on the book “The Big Short - Inside the Doomsday Machine” by Michael Lewis. Based on other books of his I’ve read, I’m sure it’s equally comprehensible and engaging.

The film itself is also rather educational for anyone who would like to understand certain financial concepts but doesn’t feel like picking up a dry textbook. I’ve never seen a more engaging explanation of “what is a synthetic collateralized debt obligation” - one of the financial products instrumental to the crisis - than the one from the film, featured below.


$15 million of USAID's former budget seized to provide protection for 1 Trump official

From The New Republic:

Office of Budget and Management Director Russell Vought killed USAID, causing hundreds of thousands of deaths, and is now using the money he “saved” to bankroll his security detail.

A Reuters report Friday states that the Project 2025 architect who promised to put federal employees “in trauma” is spending $15 million of former USAID funding–money that would have gone toward fighting HIV, polio, malaria, and other diseases–to pay his U.S. Marshal security through the end of the year.

One source told Reuters that Vought has more than a dozen U.S. marshals in his security detail

Spending vast sums of public money on a far less legitimate cause than it was originally destined for does not mean you saved it.

And it’s such a painful irony that funds that were ‘protecting’ a huge number of very vulnerable people have been seized to protect one very powerful and not particularly vulnerable person. Not surprising given the cruel corruption of the current Trump administration, but still distressing.

Also, I can’t imagine US marshals - folk not usually tasked with protecting people in positions like Vought’s - earn anywhere near a million dollars a year each? ‘Security’ must be expensive.


How to archive even paid Substacks - converting them into an eBook

In recent years, a sizeable number of great writers, professional or otherwise, have started sharing their wares via Substack. Substack is billed as a platform that lets you write and monetise email newsletters, although any content is generally also available via the web and, sadly for the free stuff only, RSS.

Firstly, I wanted to caveat that there are some good arguments against promoting the use of Substack as a platform.

Anil Dash begs writers not to refer to their body of work as “a Substack” and to be wary of the platform’s attempts to control their audience and distribution. Then there’s the fact that by giving your attention and money to Substack you are inadvertently helping fund a platform that appears happy to profit from hosting literal Nazi newsletters. This has caused some popular outlets to eschew Substack for other options. But the sad truth is that if you want to read the work of certain authors it’s currently Substack or nothing, so you’ll have to decide for yourself.

Assuming one does want to get involved, or at least read a given outlet’s back catalogue before quitting the platform oneself, it would be nice to be able to both 1) search and archive the content you care about forever - the internet is, after all, notoriously ephemeral - and 2) read it in the manner you prefer, even if that manner is “not connected to the internet and not logged into the potentially intrusive substack.com website experience”.

Personally, for longer-form content I prefer to read on an eReader, something that I’m not aware is easily possible with Substack by default. I would basically like an eBook of the Substack writer’s content, one chapter per post, in chronological order, to work my way through.

Whilst this is definitely not functionality native to Substack, thanks to some generous open-source software writers I found a way to do this, even for paid Substacks. Note that this method won’t get you the paid content from Substacks that you don’t pay for! It’s not a hack - nor should it be. But it will let you extract and reformat the content you have already paid for as you wish.

There are two phases. First, downloading all the relevant content of a Substack publication into an archive. Second, converting that archive into an eBook, by which I here basically mean an epub file, given that format is easy to use with all the common eReaders I’m aware of. Of course, if you just want a local copy of the Substack for archival or offline purposes then you can stop after step 1.

Here’s what worked for me:

Step 1: Download a local copy of all the relevant Substack posts.

For this we can use Alex Ferrari’s free and open-source Substack Downloader. This is a command-line tool developed precisely for this purpose. It works on Windows, Linux and the Mac. I used the Windows version, simply downloading the latest .exe file from the repo’s releases page.

Once you’ve got the sbstck-dl-win-amd64.exe saved you’re ready to go. I renamed my copy to the shorter name sbstck-dl for convenience as that’s what the documentation refers to. But you don’t have to do that if you don’t mind typing out the longer exe name each time.

The documentation is on the repo’s home page. You can use it to download either a single Substack post or a set of up to all the posts from a particular Substack publication. I wanted the latter.

Open a command prompt in the folder that the sbstck-dl file is in so you can get to work using it.

First, just to test it was working, I used the list parameter to list, rather than download, all the posts. In my examples below I’ll use “The Lead”, url https://national.thelead.uk/, as my example Substack, but of course you can use anything you know to be Substack-based.

sbstck-dl list --url https://national.thelead.uk/

Once you’re ready to download, the most basic use of the command is:

sbstck-dl download  --url https://national.thelead.uk/

That will simply download local HTML copies of all the free posts in the publication. It’s as simple as that! But there are various parameters you can use to make the output more appropriate to your specific needs, as well as get the paid posts of any Substack that you already have a paid subscription for.

Controlling what exactly is downloaded

By default the above will retrieve all the text of the posts. When you open them you’ll still see the pictures and attachments, but they’re being pulled from their original online location. If Substack were to delete them, they’d be gone.

You can use parameters --download-images and --download-files to grab a local copy, as well as have the local HTML files containing the text of the Substack on your computer rewritten to refer to the downloaded images.

sbstck-dl download --url https://national.thelead.uk/ --download-images --download-files

You can control the size of the images downloaded with the --image-quality parameter, which takes values of high, medium or low. Personally, given I’m mostly reading articles on a black-and-white eReader where the graphics are largely incidental, I find low is usually fine.

sbstck-dl download --url https://national.thelead.uk/ --download-images --image-quality "medium"

Perhaps you don’t want every post ever written! Especially for a news site like my example, that might be 1) a lot of posts, and 2) some of them less relevant. You can download only posts from after a certain date like this:

sbstck-dl download  --url https://national.thelead.uk/ --after "2026-01-05"

There’s an equivalent --before parameter you can use in the same way if you want.

By default everything downloads to the folder you are in when you run the command, with subfolders set up for images and attachments. If you prefer the output to end up somewhere else you can use the -o flag. Here’s how to have things end up in the subfolder “downloads” for instance.

sbstck-dl download  --url https://national.thelead.uk/ -o "./downloads"

One particularly great feature is the --create-archive parameter. If you use that then as well as all the individual posts downloading, the software will create a nice index page that provides a lovely list of posts and links to them. You can then open that in your web browser to navigate through the stories rather than open each one individually.

sbstck-dl download  --url https://national.thelead.uk/ --create-archive --download-images

I also like the --add-source-url parameter, which adds the original URL of each post to the bottom of the post’s contents. If, like me, you are converting posts to read out of context, it’s nice to have a way to remember where they originally came from.

sbstck-dl download  --url https://national.thelead.uk/  --add-source-url 

Finally you can choose the output format. As we’ve seen, by default you end up with HTML files that you can open in any web browser. That’s great if you’re on your computer. But if you want to fulfil my actual goal of making them into an epub for reading on an ereader then probably downloading them in markdown format is a better bet. Less potential HTML cruft, and potentially a slightly more straightforward conversion to epub later on.

For that, use the format parameter.

sbstck-dl download  --url https://national.thelead.uk/  --format "md"

Plain text is another --format option, which you can specify by replacing md with txt.

There are no doubt other parameters, but those were the only ones I’ve ever needed. You can use all these parameters in conjunction with each other to get your perfect output.

Downloading paid Substack articles

This is really the only potentially tricky part. Well, if you know how to locate the actual values of cookies in your web browser then it’s simple. If not, it differs per browser - here’s a guide that covers a few.

Log into Substack on your normal web browser. Then find the contents of either the cookie called substack.sid or connect.sid, whichever is there on your computer. I had substack.sid.

Now copy its value.

You then add the --cookie_name and --cookie_val parameters to the above command. The name one should be the name of the cookie (substack.sid in my case) and the value should just be the nonsense-looking contents of the cookie.

You shouldn’t be sharing that value with anyone else, so the one in the below example is fake. The real one was actually a lot longer.

sbstck-dl download  --url https://national.thelead.uk/ --cookie_name substack.sid --cookie_val sasfasfsafasfsaf924971204$210493AYx4u$nTd4mEGG.asfaspj0bgc9IU

Do this correctly though and the command will log in and download the full version of all the paid posts, in addition to the free ones.

My usual command

This, then, is the sort of command I’ve been using to get everything I need all together, ready for ebookification:

sbstck-dl download  --url https://national.thelead.uk/ -o "./downloads"   --download-images --image-quality low --format md  --add-source-url --cookie_name substack.sid --cookie_val abc123321xyzxxxxxxxxxx

OK, that’s the archiving complete! Enjoy.

If, however, you want to convert your collection of downloaded Substack posts (ideally in markdown format) into an epub, then here is:

Step 2: Convert the archive into an epub ebook.

For this, we can use pandoc, another great free and open-source tool, which is the definitive command-line tool for converting between various markup formats.

Download and install the version relevant to your computer from here. Or if, like me, you’re a winget fan you can install it from your Windows computer’s command line via:

winget install --source winget --exact --id JohnMacFarlane.Pandoc

Pandoc is an incredibly comprehensive tool, so there’s no way we can go through all its potential command-line options here - for that, see its documentation. Instead I’ll just share what worked for me when converting the output of sbstck-dl into epub form.

What you need to do is feed the list of files you want converting into a command of the following format. My examples assume you downloaded the posts in markdown format, as recommended, but they should work well enough even for HTML:


pandoc FILE_YOU_DOWNLOADED.md -o "YOUR_FILENAME.epub" --metadata title="TITLE_TO_GIVE_EBOOK" --metadata author="AUTHOR_TO_GIVE_EBOOK" --toc --split-level=1

You should replace the parts in capital letters with the actual file containing the post and the values you want your epub to end up with. The toc and split-level parts are all about generating a table of contents, one entry per post, which I like, but they’re entirely optional.

Note, however, that if you downloaded more than one post you probably have several files to convert, and most likely you want them all to end up in the same book. If so, you can just list them all in the command in the order you want them to appear, like this:

pandoc FILE1.md FILE2.md FILE3.md -o "YOUR_FILENAME.epub" --metadata title="TITLE_TO_GIVE_EBOOK" --metadata author="AUTHOR_TO_GIVE_EBOOK" --toc --split-level=1

or, more realistically for a larger Substack, specify “all markdown files in the folder” like this:

pandoc *.md -o "YOUR_FILENAME.epub" --metadata title="TITLE_TO_GIVE_EBOOK" --metadata author="AUTHOR_TO_GIVE_EBOOK" --toc --split-level=1

The *.md simply means all files ending in .md. It doesn’t necessarily imply a particular order. So if you want to make absolutely sure that the files end up ordered alphanumerically in the book - which, given the file names sbstck-dl produces by default, is equivalent to “in chronological order of post date” - you can instead have the computer first assemble an ordered list of the files in a variable and pass that variable to pandoc.

Here’s one way to do that which works in Windows PowerShell. If you are using a different operating system or command prompt, the first part won’t work, but there will be an equivalent that does if you can find it!


# First part creates a list of all .md files in the folder, ordered by filename into a variable called $articles

$articles = Get-ChildItem -Filter "*.md" | Where-Object { $_.Name -ne "index.md" } | Sort-Object Name | Select-Object -ExpandProperty FullName

# Second part is the pandoc command, passing in your ordered list

pandoc $articles -o "YOUR_FILENAME.epub" --metadata title="TITLE_TO_GIVE_EBOOK" --metadata author="AUTHOR_TO_GIVE_EBOOK" --toc --split-level=1
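
If you’re not on Windows, the same idea can be sketched in Python, which runs anywhere. This is my own sketch, not something from either tool’s documentation: the ordered_articles helper name is mine, “book.epub” and the metadata values are placeholders, and it assumes posts were downloaded as .md files with sbstck-dl’s default date-prefixed names.

```python
# Cross-platform sketch: build the ordered article list in Python,
# then hand it to pandoc. Assumes .md files with date-prefixed names.
import shutil
import subprocess
from pathlib import Path

def ordered_articles(folder="."):
    """All .md files except index.md, sorted by name - which, given
    sbstck-dl's default filenames, means chronological post order."""
    return sorted(
        str(p) for p in Path(folder).glob("*.md") if p.name != "index.md"
    )

articles = ordered_articles("./downloads")
if articles and shutil.which("pandoc"):  # only run if there's work to do
    subprocess.run(
        ["pandoc", *articles, "-o", "book.epub",
         "--metadata", "title=TITLE_TO_GIVE_EBOOK",
         "--metadata", "author=AUTHOR_TO_GIVE_EBOOK",
         "--toc", "--split-level=1"],
        check=True,
    )
```

The sort is on filename, so it only gives chronological order because of the default date prefixes; if you’ve renamed your files, you’d need a different sort key.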

Once you’ve done all the above you should be left with a nice epub file, formatted well for your eReader! Simply upload it and you’re good to go.
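
For repeat use, both steps above can be wrapped into one small script. This is only a sketch under some assumptions of mine: sbstck-dl and pandoc are on your PATH, the SUBSTACK_SID environment variable is my own convention (not something either tool defines) for keeping the cookie value out of your shell history, and the URL, output names and metadata are placeholders.

```python
# Sketch of the full pipeline: download a Substack with sbstck-dl,
# then convert the markdown posts to an epub with pandoc.
import os
import shutil
import subprocess
from pathlib import Path

URL = "https://national.thelead.uk/"  # swap in any Substack URL
OUT = Path("./downloads")

# Flags as covered in the post: markdown output, low-quality local
# images, and the source URL appended to each post.
download = [
    "sbstck-dl", "download", "--url", URL, "-o", str(OUT),
    "--format", "md", "--download-images", "--image-quality", "low",
    "--add-source-url",
]

# Only pass the cookie flags when a paid-subscription cookie is set,
# so the secret never appears in the command itself.
sid = os.environ.get("SUBSTACK_SID")
if sid:
    download += ["--cookie_name", "substack.sid", "--cookie_val", sid]

if shutil.which("sbstck-dl") and shutil.which("pandoc"):
    subprocess.run(download, check=True)
    articles = sorted(str(p) for p in OUT.glob("*.md") if p.name != "index.md")
    subprocess.run(
        ["pandoc", *articles, "-o", "book.epub",
         "--metadata", "title=TITLE_TO_GIVE_EBOOK",
         "--metadata", "author=AUTHOR_TO_GIVE_EBOOK",
         "--toc", "--split-level=1"],
        check=True,
    )
```

You’d set the environment variable once (e.g. `$env:SUBSTACK_SID = "..."` in PowerShell) and then rerun the script whenever you want a refreshed ebook.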


Update 2026-04-03: If you prefer to point and click, there’s now a GUI application available to facilitate the above process.