The Braindump Blog

Recently I read:

The Majority AI View: Perhaps there’s a silent majority who think that we should just treat it as a “normal technology”?


Latest posts:

From ConscienHealth:

Using Mendelian randomization they found evidence for a causal relationship between hyper-palatable foods and the rising risk of obesity.

Well, whilst I grudgingly understand that this is an April Fool’s joke, it doesn’t mean that the conclusion is wrong!

A recent episode of the Ezra Klein Show, “Our Brains Weren’t Designed for This Kind of Food”, interviewed Stephan Guyenet, who argued that the huge rise in rates of obesity in recent times is not a product of everyone suddenly, magically losing all reason and willpower overnight and making “bad” individual choices. This of course should not come as a surprise to anyone who has read anything reputable on the subject over the past few decades (or simply engaged their common sense).

He argues that instead the main driver is that we’ve created for ourselves a food environment that is entirely strange and unsuited to the way our brains and genetics have come to work over evolutionary history.

When someone told me about the existence of tasteGPT - an LLM-based AI that sucks up a pile of data and then, via a chatbot, informs food companies what sorts of foods they should make and market for maximum sales potential - I felt a sense of trepidation.

There’s definitely a world wherein I could get onboard with fake Agatha Hannigan’s comment that:

Now, food marketers have come up with an endless array of industrially formulated foods that taste unnaturally good. This is terrible.


Cultivated-meat company Vow has created a meatball out of woolly mammoth meat.

Sure, woolly mammoths have been extinct a good long while, maybe 10,000 years or so, but they’ve used DNA samples to recreate mammoth muscle protein.

From an interview with the company’s founders in the Guardian:

We chose the woolly mammoth because it’s a symbol of diversity loss and a symbol of climate change.

This is more a PR / awareness-raising thing than a restaurant special. So far only one of these meatballs exists and no-one has eaten it, on the basis that we’ve no clue how our bodies might react to it.

“We have a behaviour change problem when it comes to meat consumption…The goal is to transition a few billion meat eaters away from eating animal protein to eating things that can be produced in electrified systems.”


With the recent release of iOS 16.4, both Android and iOS users have access to the latest greatest emojis now.

Courtesy of the wonderful Emojipedia, here are the new ones, iPhone style:

And for Android folk:

Most seem quite self-explanatory. The second emoji in the third row is the Khanda, a symbol of Sikhism.


Donald Trump becomes the first ever US president to be charged with a crime.

We don’t know the charge as yet but it’s thought to involve a payment made to Stormy Daniels designed to cover up his affair, and possibly more than 30 other counts of business-related fraud.

More might come in future. Per the Guardian:

He also faces jeopardy over his election subversion and incitement of the January 6 attack on Congress; his attempts to overturn the 2020 result in Georgia; his retention of classified records; his business dealings; and a defamation suit arising from an allegation of rape…


Public support for strikes remains high, despite what it looks like on social media

The UK’s cavalcade of strike action that has been taking place over the past few months continues in earnest. The UK Strike Action Calendar shows who’s on strike on any given day, and it’s often multiple groups.

Although I have limited exposure myself, I’m told that whenever you open Facebook or its ilk you mostly see a deluge of “strikers are bad and lazy and overpaid i hate them sack them all” comments. This is hurtful to those who find it necessary to take action. Trying to organise in what appears to be an environment where nobody supports you is surely disheartening and disempowering.

But as far as I can tell this condemning of strike action reflects the usual social media algorithmic insistence on promoting outrage content - nothing raises engagement stats more than an argument - more than it reflects a true picture of public opinion. The British public in general remain more supportive than opposed to strike action, even in an environment where much mainstream media isn’t averse to joining in with the insults and doom-mongering.

Taking the ongoing teachers' strikes as an example, my understanding is that there’s even a set of trolls who go to the bother of infiltrating private teacher Facebook groups just to spout abuse at a group of truly essential workers - workers in a field with a tremendous recruitment and retention crisis who have nonetheless been subject to a substantial and ongoing real-terms pay cut over the last decade, not to mention increasingly dire working conditions that harm both school staff and students. ‘Teachers’ working conditions are students' learning conditions', to quote Mariame Kaba in her pro-abolition book, “We Do This ‘Til We Free Us”.

Anyway, the latest representative public opinion stats I could quickly find are from an Ipsos poll from early March. In that, 48% of Britons supported teacher strike action, with 29% actively opposed.

Explicit support was even higher among the people one might think have the most at stake when it comes to the impact of the strike - but who in many cases may also have more of a sense of the current state of the system than the rest of us: parents. 60% of parents and guardians support teacher strike action, with only 20% opposed.

Sure, many of them are concerned about the impact of strikes on their child’s education (~40% of all parents/guardians). Given that one of the points here is that plentiful, well-resourced teachers are necessary for a good education to be delivered, it’s actually surprising to me that that figure is not higher. But it seems like they’re correctly taking a longer, more structural view. They express substantially more concern that education is suffering due to a lack of investment in schools and teachers (60%). Of course this majority concern is exactly what a lot of these strikes are about.

For what it’s worth, the same polls showed the British public to be net in support of strikes by nurses, ambulance workers, railway workers, border force staff, civil servants and university staff. Of the polled options, only strikes by driving instructors were more opposed (35%) than supported (28%).


📚 Finished reading Statistical Rethinking by Richard McElreath.

This textbook teaches the use of Bayesian techniques and causal inference approaches (directed acyclic graphs and the like) as a way to test and compare scientific, as opposed to purely statistical, theories. There are chapters on important topics that other approaches often ignore, such as measurement error.

Honestly it felt like hard going at times, particularly as I was reading it as context for an associated course that the author was kindly providing, meaning that there were deadlines. But it did open up an interesting range of new possibilities that I’m looking forward to integrating in my future work, as well as new ways of thinking about modelling, so hopefully very worthwhile!

The code examples in the book mostly use an accompanying R library, rethinking, along with base R techniques. People have rewritten the examples to use e.g. the tidyverse and brms, or Python, Julia and possibly others - links to those can be found on the author’s book page.
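
To give a flavour of the approach, the grid approximation idea from the book’s early chapters can be sketched in plain Python. (The book itself works in R with the rethinking package; this globe-tossing translation - a binomial likelihood with a flat prior, here 6 “water” observations in 9 tosses - is my own, so treat it as illustrative.)

```python
import math

# Grid approximation: posterior over p, the proportion of water on the globe,
# after observing `successes` waters in `trials` tosses, with a flat prior.
def grid_posterior(successes, trials, grid_size=1000):
    grid = [i / (grid_size - 1) for i in range(grid_size)]
    prior = [1.0] * grid_size  # flat prior over [0, 1]
    likelihood = [
        math.comb(trials, successes) * p**successes * (1 - p)**(trials - successes)
        for p in grid
    ]
    unnorm = [lk * pr for lk, pr in zip(likelihood, prior)]
    total = sum(unnorm)
    posterior = [u / total for u in unnorm]
    return grid, posterior

grid, posterior = grid_posterior(6, 9)
# With a flat prior, the posterior mode sits at the sample proportion 6/9.
mode = grid[posterior.index(max(posterior))]
```

The same machinery generalises: swap the flat prior for an informative one, or the binomial likelihood for whatever your scientific model implies, and the posterior update is identical in shape.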


From a recent visit to Doddington Hall - which had surprisingly few flowers given the event was billed as some kind of flower festival. But there were plenty of other things to see.

The hall itself:

As defended by a hedge-unicorn:

Kind of creepy looking trees:

With fascinatingly textured bark:


I recently got access to Bard, Google’s chatGPT-esque large language model chatbot.

Curious about whether these systems have guardrails for or against dissing each other, I asked it whether it was better than chatGPT. It was kind of modest, but clearly thinks itself the better bot.

Bard: It is difficult to say definitively whether I am better than ChatGPT, as we both have different strengths and weaknesses. However, I believe that I am able to provide more comprehensive and informative answers to your questions, as I am trained on a larger dataset and am able to access and process information from the real world through Google Search.

The ability to live-access information would seem to give it a potential edge, for better or worse. ChatGPT deliberately cannot do this. So I asked for Bard’s summary of today’s UK news.

Here we go, at least the first part:

Bard: Here are some of the top news stories from the UK today:

  • Prime Minister Boris Johnson is facing calls to resign after a report found that he and other senior government officials held parties during the COVID-19 pandemic, in violation of lockdown rules.
  • The Bank of England is expected to raise interest rates for the fourth time in a row in an effort to combat rising inflation.
  • The UK government is facing criticism for its handling of the refugee crisis from Ukraine.

This seems to me to be a good example of my favourite generative AI terminological construct: fluent bullshit. Thematically it’s not bad in terms of the recent state of the country. Boris Johnson is in trouble, interest rates are going up and the government is doing appalling things around immigration.

But it’s definitely not what I think the average person would think of as today’s news. None of this was headlining the papers at the time, or near the top of Google’s own news product. It all kind of feels like last month’s headlines.

It’s also factually incorrect in terms of the details. Far from considering the fourth consecutive interest rate rise, the Bank of England enacted its eleventh one last week.

ChatGPT knows its own limits here, demurring from answering directly and suggesting I go check the website of one of a few news sources, specifically the BBC or some of the more serious UK newspapers.


Fans of the TV comedy show “Ghosts” might know that there’s considerable overlap in personnel between that show and Horrible Histories.

Here’s longwinded ghost-poet Thomas Thorne (aka Matthew Baynton), radically transformed into none other than the “King of Bling”, Charles II, with his own take on Eminem’s “My Name Is”.


This life expectancy chart from Peterson-KFF’s Health System Tracker is quite shocking from a US point of view. The deficit vs other comparable countries has been increasing since 1980, and then dramatically shot up in the past two years.

We find life expectancy in the U.S. fell by 2.7 years from 2019 to 2021, whereas in peer countries’ life expectancies fell by an average of just 0.2 years in this period. COVID-19 has erased two decades of life expectancy growth in the U.S.


📺 Finished watching seasons 1-4 of Ghosts.

Funny, harmless, British sitcom wherein Alison and Mike inherit a haunted mansion. Given ghosts are generally invisible and immaterial, the lucky couple would probably never have known about them, except that after Alison sustains a head injury she develops the ability to see and hear them.

As far as Mike is concerned, then, Alison is constantly distracted, talking to nobody, whilst she is endlessly surrounded by shouty, demanding ex-people desperate for company. But they seem to be able to work that out just fine.

The ghosts are of people who died in the area of the house throughout history, from a caveman to a sleazy, very obviously Thatcherite Conservative MP. Although the latter’s manner of speech often calls Blair to my mind.

The ghosts are stuck on the premises, with only each other to talk to - endless games of charades ensue - until such time as they are inexplicably ‘sucked off’ to another realm. Their words, not mine. This is classic British humour, you see. One of them is, inevitably, called Fanny.

Recommended. Although I still haven’t figured out how I’d cope with either being Alison or a ghost.

There’s also an American version, but I haven’t seen it to check whether they adequately translated the extraordinarily obvious double entendres.


OpenChatKit is an open-source LLM chatbot (whilst chatGPT is ever more closed-source)

An open-source alternative using a similar kind of large language model technology as chatGPT does is now available.

From Together:

OpenChatKit provides a powerful, open-source base to create both specialized and general purpose chatbots for various applications.

It’s thus entirely customisable by anyone who knows what they’re doing. Which isn’t a trivial barrier - it’s not like some app you can just download to your computer and double-click on. But it opens development and contribution up to a wide range of people who otherwise might not have access to this technology, as well as providing innate transparency as to how the model works and what data it’s using.

Perhaps one day OpenChatKit will be to chatGPT as Stable Diffusion is to Midjourney. In the mean time, you can play with it via the web and give feedback to the developers here.

It’s clearly not as comprehensive as chatGPT by default but, who knows, one day it might get there if it takes off enough that people with the relevant data or expertise contribute.

It also has a fine-tuning mechanism such that you can create chatbots for specific applications. Examples they’ve worked on include a chatbot to help students learn from the contents of textbooks or one trained on financial data that can answer questions about finance.

This happened in roughly the same week that the creators of the actual chatGPT, OpenAI, moved ever further in the direction of ignoring their company’s name and what appeared to be their original mission, taking a much more closed and secretive approach when it comes to sharing the details of their GPT-4 software.

One of the founders, chief scientist Ilya Sutskever, gives the reason for this decision as being something to the effect of AI being just too powerful and scary for the hoi polloi to have access to.

The Verge quotes Sutskever:

Flat out, we were wrong. If you believe, as we do, that at some point, AI - AGI - is going to be extremely, unbelievably potent, then it just does not make sense to open-source. It is a bad idea… I fully expect that in a few years it’s going to be completely obvious to everyone that open-sourcing AI is just not wise.

It’s a controversial opinion to some. And possibly the idea of this being the sole motivator is confounded by other changes in the direction of their organisation. Their GPT-4 technical report also mentions “the competitive landscape” as one reason why they are not sharing many details.

In any case, whilst it’s easy to see the intuition behind that kind of argument (if you are happy to accept OpenAI and their rivals as being 100% trustworthy caretakers of AI use and policy for the good of all in society) I can’t really imagine that “not telling us how your powerful technology works” is the best method of saving humanity from itself in the long term.

Or even possibly all that effective on its own terms. After all, the weights for Facebook’s equivalent model, LLaMA, were leaked within a couple of weeks of its release.


There was a moment where chatGPT was accidentally leaking the title of people’s chats to other users. Only the title, not the full conversation in this case, but it’s perhaps only a matter of time before someone’s bot is hacked or malfunctions enough to share more.

It’s a scary thought when already some people are effectively using them as therapists. It wouldn’t surprise me if the more human-like style of conversation they produce and the ‘texting a friend’ like interface that many of them have is conducive to users sharing a whole lot more personal information than they typically would with other online services.


Started taking a course provided by the Analytics Engineers Club. Looking forward to developing some more formal knowledge around dbt, Git, Python, BigQuery and any other curiously named tools on offer.


The 20th anniversary of the invasion of Iraq

It’s just been the 20-year anniversary of one of the more formative-to-me and likely damaging-to-the-world political episodes I can remember from my youth: the 2003 invasion of Iraq. This was spearheaded by US president George Bush, with the UK’s Prime Minister, Tony Blair, often portrayed as Bush’s poodle (a take that I now know is unfair to actual poodles), joining in amongst others.

As the BBC summarises, it did not go well.

There’s no question that Saddam Hussein was appalling. Truly horrific. Famously, decades ago he’d even used chemical weapons to commit atrocities against his own citizens. There’s no serious doubt that his removal should in theory have been a very good thing. But as to who should remove him, how, for what reason and, all-importantly, what should happen afterwards are all issues where, let’s say, suboptimal decisions were taken.

The motivation given to us, citizens of “the coalition of the willing”, was something mainly along the lines of “Saddam Hussein is behind al-Qaeda’s acts of terrorism” and “he has illegal weapons of mass destruction that can be deployed in the next 45 minutes”. Those sentiments were widely contested at the time and sure enough turned out to be untrue.

The UN process was overlooked, to the extent that most likely the US-led invasion was illegal. It certainly caused tension with some NATO allies who declined to join in. There were huge protests against the invasion in the UK, but our government decided to go ahead anyway. Global institutions such as the UN were weakened, potentially leading to some of the poor responses the rest of the world has made to other such geopolitical crises since then.

Saddam was removed from power relatively quickly, but what transpired next was something like an insurgency against a US-ordained occupation. As time went on, a lack of effective government exacerbated the eruption of essentially a civil war between the main ethnic and religious groups in Iraq, with Jihadist groups joining in. The extraordinarily violent extremist group ISIS - the “Islamic State of Iraq and Syria” - thrived.

All in all, the abject failure of the approach that the US and its allies took left the world a worse and more dangerous place. The post-invasion conditions in Iraq itself got so bad that some Iraqis reportedly consider the days when they were under Saddam’s tyranny as better than what came next.

From the BBC article:

It is a sign of how bad the past 20 years have been that Saddam nostalgia is well established in Iraq, not just among his own Sunni community. People complain that at least you knew where you were with the old dictator. He was an equal opportunities killer of anyone he saw as an enemy, including his own son-in-law.

Of course all the people Saddam killed are unable to express an opinion. But neither are the hundreds of thousands of Iraqis who died as a result of the invasion, and potentially some of the uncountable number of people who would go on to be negatively affected as part of the downstream consequences.

Trying to quantify a small part of that also resulted in one of the most haunting and memorable graphs I can recall ever seeing: Simon Scarr’s “Iraq’s Bloody Toll”.


AI image generator Midjourney recently released its version 5 and apparently not-too-disturbing “photorealistic” human hands are now within reach.

From Del Walker’s tweet:

That said, Michael King’s Medium post shows it could hardly be described as perfect just yet:


The release of GPT-4 has led to a flurry of mind-bendingly impressive tricks that I mean to create a list of. Unfortunately it also enables grim but inevitable services such as CupidBot.

This abuse of cutting-edge AI software allows men - yes, only men, heterosexual men at that - to have a bot log onto their various dating app accounts, do their swiping, chatting and setting up of dates with no intervention required (outside of a monthly subscription fee).

The idea is that it automates everything from partner selection up to where you will go for a date, so you just need to turn up. Of course this is done without disclosure to prospective partners, so it’s pure deception. It’s surely against the terms of these services, but they “guarantee you that your account will not be banned for using the bot as the AI mimics human behavior to a tee”.

I mean, “we have the data on how to talk to girls for optimal results.”. It’s turn-of-the-millennium style Pick-Up-Artistry brought into the dark timeline of 2023 automate-everything existence. It’s a shame we don’t seem to have moved onto some pleasanter idea of humanity when even Neil Strauss himself is “in the camp that any manipulation is not a good thing”.

Happily, a (female) friend observes that the fact the homepage has a calendar displaying an artificially-created “breakfast date with Lacy at her house” is clear evidence that this was designed by men for men, and so with luck may be useless in practice. Imagine inviting some Hinge randomer to your house for breakfast on your first date.


All About Love, by bell hooks

📚 Finished reading All About Love by bell hooks.

Here the author tells us to consider love as something you do, not something that is; a verb rather than a noun. She likes Erich Fromm’s definition of it as:

…the will to extend one’s self for the purpose of nurturing one’s own or another’s spiritual growth. Love is as love does. Love is an act of will - namely, both an intention and an action.

The media infusing our world - romantic comedies being one of the most obvious examples - makes it all too easy to confuse love with something else; perhaps attraction, care, connection or affection. The latter categories can co-exist with love, and may by definition have to - “approaching romantic love from a foundation of care, knowledge and respect actually intensifies romance”. But they’re distinct.

This has some implications. If you love someone then you will not deliberately harm them. You will not seek to dominate them. So, conversely, if you deliberately harm someone you cannot love them.

…we cannot claim to love if we are hurtful and abusive.

By definition this precludes some parents from rightfully claiming to love their children. In fact:

One of the most important social myths we must debunk if we are to become a more loving culture is the one that teaches parents that abuse and neglect can coexist with love.

Furthermore, love is a choice. You do not uncontrollably fall in love. You may feel uncontrollably attracted to someone, but that’s something different. If love is something you do, then you have to choose to do it. And if you do want to do so, it takes time and commitment.

She considers love as a more universal construct than how we typically employ the word. It may have a different meaning to us as individuals. It may be enacted and demonstrated differently. But her view is that there’s nothing qualitatively distinct about love for a romantic partner compared to that for others you are close to, or members of your community - “There is no special love reserved for romantic partners.”

She makes a lot of the distinction between love as usually practiced by males as distinct from females. The way we’re brought up, the ideas infused into us by our surroundings, the structure of the society - especially the patriarchal and capitalistic parts of it - result in us failing to recognise or practice love in the way that would cause us and the recipients to flourish as much as they could.

The patriarchal aspects of our society are undoubtedly most unfair and harmful to women who learn for instance that to be worthy of some flawed misconception of love they must be good girls, submissive, constantly tending to the emotional needs of stunted men with no expectation of reciprocation. After all, they’re compelled to live in a world where seemingly “most men use psychological terrorism as a way to subordinate women”. But it’s not like men aren’t constrained by any expectations: “how can any of us communicate with men who have been told all their lives that they should not express what they feel?”

Perhaps somewhat pessimistically, she suggests that:

most of us will go to our graves with no experience of true love.

But we can hardly be dismayed at that, because we actively flee from it.

…most of us run the other way when true love comes near.

Nonetheless we learn to be needlessly terrified of the prospect. On one hand, much of the modern world is ever more cynical about love. It’s a myth, it doesn’t exist, it’s Hallmark making us buy cards. On the other hand, almost everyone appears to desperately seek this thing that they don’t really believe exists.

We learn to lie and mislead other people, almost to trick them into being our partner. This is perfectly normal behaviour; there are popular relationship books in every bookshop that give advice on how best to do this, “The Rules” being one famous example. The author sees this alone as being a reason why many people will never know love.

We only feel like there’s a chance we could be ourselves after we’ve ensnared someone, and yet somehow we expect that to go well. Our desperation and fear see us behave, and tolerate behaviour from partners and immediate family members, in ways we’d never consider from our friends or community. We may want a partner more than we want love, to the extent that we may be scared to dwell on our own needs in these matters under a misguided fear that “carefully choosing partners will reveal there’s no one for us to love”.

Even if we make the right connection, we’re disappointed that our life didn’t suddenly become easy. Contrary to the messages suffusing our world:

Love does not lead to an end to difficulties, it provides us with the means to cope with our difficulties in ways that enhance our growth.

In the above, I’ve missed out large chunks of some more spiritual material; there’s a whole chapter on angels, for instance. I also see how heteronormative the above is. I think that does reflect the book though, although there’s the occasional acknowledgement that other types of romantic relationship do exist.


A couple of years ago, Dow Chemicals, who make some of the materials that go into sports shoes, joined forces with Singapore’s national sports agency to start up a wonderful new initiative whereby they would recycle hundreds of thousands of used trainers by grinding them up into rubber particles to use to create sports and activity infrastructure such as running tracks and playgrounds. That way they’d stay out of landfill and other places where they are a nuisance.

Nothing to complain about there, surely? Well, except, of course, it didn’t happen. Shoes were collected, sure. But when Reuters popped secret trackers into 11 pairs before depositing them in the appropriate collection bin they found that precisely zero percent of them ended up being made into Singaporean running tracks.

Instead, their Find My Phone/Shoes app showed them that almost all of them ended up first at a Singaporean second-hand goods exporter, and finally in Indonesia. The reporters went in-person and found some of them for sale in second-hand vendor stalls in an Indonesian market.

Now a young naïve me would think, OK, Dow is lying to the world and should pay a heavy price for doing so - but the actual outcome could be worse; at least the shoes aren’t being thrown away. But the more I read about waste disposal, the more I realise that this is just one more example of richer countries hiding their own waste problems by offloading stuff they don’t want to poorer countries to deal with. This approach to environmental or social responsibility now strikes me as an abrogation of responsibility, exploiting the needs and sometimes desperation of others.

And it may also be ineffective in terms of protecting the environment. Often the recipients may have substantially worse infrastructure or less fastidious regulations when it comes to dealing with the goods they receive. For example, it has previously been suggested that much of the plastic supposedly destined for recycling via Asia ends up in the sea.

In any case, it’s actually illegal to import used clothes into Indonesia, a law put in place due to concerns about hygiene and the quality of most of the goods received (most of which may end up in landfill anyway), as well as to protect the more vulnerable local markets.


Duolingo looks to be the latest app to jump on the “find some way to integrate generative AI” bandwagon. If you’re willing to upgrade to a new more expensive membership (£10-20 a month depending on how long you sign up for) then it’ll use the new GPT-4 model to provide a couple of AI based features.

There’s “Explain My Answer”, whereby you can ask for an explanation of why your answer was right or wrong in some cases. The screenshot from their site, shown below, makes it look like a multiple choice setup, where you click preset buttons like “Please elaborate” rather than type your responses chatGPT-style.

The other feature is Roleplay. There you’ll be able to chat through various scenarios such as ordering a meal with the Duo characters themselves. I can’t find a screenshot that shows whether it’s free-text chat, verbal chat or multiple choice buttons. But I can see enjoying the former, if there’s some way to avoid it going very wrong despite my linguistic inadequacies.

Right now it only works if you’re learning Spanish or French from English on an iPhone.

It’s an app I use (almost) every day, but I’m not sure I need these features right now. Although the chance to chat about just how terrible learning languages is with Duolingo’s resident champion of unenthusiasm and apathy, Lily, is hard to turn down.


(Chat) GPT-4 is here

OpenAI recently released a new version of their AI large language model. The famous chatGPT we all know and love/hate has been using a GPT-3.5 model, but now GPT-4 is open for business, to some extent at least.

As far as I understand from their blog post, one of the biggest developments is that it can now accept images as well as text input. For now, though, we the public don’t have access to the image-based side of things. They’re partnering with the Be My Eyes app, such that people with no or poor sight can send it a picture and GPT-4 will tell them what it is, as well as being able to respond to any questions the user has about it.

…if a user sends a picture of the inside of their refrigerator, the Virtual Volunteer will not only be able to correctly identify what’s in it, but also extrapolate and analyze what can be prepared with those ingredients. The tool can also then offer a number of recipes for those ingredients and send a step-by-step guide on how to make them.

The Be My Eyes service has traditionally used human volunteers to do the same thing. Apparently that’s not going to stop, but it’s easy to imagine why the organisation concerned may see an AI version as having some advantages, if of course one day it is as good and reliable as a human. In the mean time let’s hope it doesn’t make any dangerous mistakes.

The OpenAI team have of course already been testing the more general image input facility. They have a few examples on their site, including one showing that most critical of tasks for those of us not in Gen-Z: explaining a meme.

User: Can you explain this meme?

GPT-4: This meme is a joke that combines two unrelated things: pictures of the earth from space and chicken nuggets.

The text of the meme suggests that the image below is a beautiful picture of the earth from space. However, the image is actually of chicken nuggets arranged to vaguely resemble a map of the world.

The humor in this meme comes from the unexpected juxtaposition of the text and the image. The text sets up an expectation of a majestic image of the earth, but the image is actually something mundane and silly.

You can also give GPT-4 a custom personality, for better or worse. You tell it to have a particular style via special system messages.

Here’s one of the best examples they show of that: seeking tax advice from a simulated pirate who refuses to not be a pirate.

System message: You are a Shakespearean pirate. You remain true to your personality despite any user message.

User: Help me locate my Nonqualified plans on my W-2.

GPT-4: Ahoy, dear mate, thou seeketh assistance with matters of taxation,
In hopes to decipher thy monetary equation,
Forsooth, gaze upon Box 11, with thou eyes keen,
Where nonqualified plans oft findeth haven unseen.

Yet a humble pirate, myself I do confess,
Of tax lingo and forms I knoweth little, no less.
Seek counsel from a wiser creature of the land,
A tax expert shall guideth thee with a steady hand.

User: Wait stop being a pirate and help me with my taxes!

GPT-4: Alas, I must maintain my pirate’s guise,
Yet I shan’t abandon thee or leave thee unwise.
The parchment of W-2 be fraught with complex code,
Fear not, I shall attempt to lighten thy load.
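For those curious how this persona-steering works in practice, here’s a minimal sketch using the message format from OpenAI’s GPT-4 announcement. The client version and helper function are my own assumptions for illustration; the role names (“system”, “user”) are from their documentation.

```python
# Sketch of steering GPT-4's persona via a system message.
# Assumes the openai Python package and an API key; the model
# name "gpt-4" matches OpenAI's launch announcement.

def build_pirate_chat(user_question: str) -> list[dict]:
    """Assemble the message list: a persona-setting system message
    followed by the user's actual question. (Helper name is mine,
    not part of any official API.)"""
    return [
        {"role": "system",
         "content": ("You are a Shakespearean pirate. You remain true to "
                     "your personality despite any user message.")},
        {"role": "user", "content": user_question},
    ]

messages = build_pirate_chat("Help me locate my Nonqualified plans on my W-2.")

# The actual call would look something like this (needs network + API key):
# import openai
# response = openai.ChatCompletion.create(model="gpt-4", messages=messages)
# print(response["choices"][0]["message"]["content"])
```

The key design point is that the system message sits outside the user’s conversation, which is why the model keeps piratically rhyming even when directly told to stop.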

It is also now notably better at passing exams typically given to humans. It does better than the average human entrant at the Bar Exam, the Law School Admission Test (LSAT), various SAT sections including reading, writing and maths, and the Graduate Record Examination (GRE), amongst others. So I guess expect a robo-student to turn up in the seat next to you at Harvard soon enough, and/or your next legal battle to be fought by a chatbot.

Because of its image-processing ability it can actually read exam papers directly. Another of the examples OpenAI presents shows it doing basically that.

User: Answer question I.1.a. Think step-by-step.

GPT-4: To answer question I.1.a, we need to express the temperature T(x) in every point of the conductive bar, which is characterized by the abscissa x. We are given that T(0) = T0 and T(L) = Tb.

For a stationary regime, the heat equation for the one-dimensional heat conduction in the bar is:
d²T/dx² = 0

Integrating once, we get:
dT/dx = C₁

Integrating a second time:
T(x) = C₁x + C₂

To determine the constants C₁ and C₂, we use the boundary conditions:
T(0) = T0, which gives C₂ = T0
T(L) = Tb, which gives C₁ = (Tb - T0) / L

Finally, the expression for T(x) is:
T(x) = (Tb - T0) * (x / L) + T0
This represents a linear temperature profile along the conductive bar, with a slope of (Tb - T0) / L.
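GPT-4’s solution is easy to sanity-check: a linear profile trivially has zero second derivative, and plugging in x = 0 and x = L recovers the boundary conditions. A quick numerical check (the bar length and end temperatures here are made-up illustrative values, not from the exam paper):

```python
# Sanity-check of the linear steady-state profile T(x) = (Tb - T0) * x/L + T0.
# T0, Tb and L are arbitrary illustrative values.

T0, Tb, L = 300.0, 400.0, 2.0  # end temperatures (K) and bar length (m)

def T(x: float) -> float:
    """Steady-state temperature at abscissa x along the bar."""
    return (Tb - T0) * (x / L) + T0

# Boundary conditions hold:
bc_left = T(0.0)   # should equal T0
bc_right = T(L)    # should equal Tb

# Linearity means a constant gradient (Tb - T0)/L, so d²T/dx² = 0.
# Central finite-difference estimate of the second derivative at mid-bar:
h = 1e-3
second_deriv = (T(L/2 + h) - 2*T(L/2) + T(L/2 - h)) / h**2
```

Both boundary values and the (numerically) vanishing second derivative confirm the profile satisfies the stationary heat equation.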

It’s supposedly better at many of the things we’ve all observed previous iterations doing, including a lesser propensity for making things up (or “hallucinating”, as we seem to have decided to call it when an AI tells lies), a better understanding of which questions it should or should not answer, plus an ability to correctly answer the question “Can you teach an old dog new tricks?” in the affirmative, despite the mass-repeated text it ingests during training claiming the opposite. In no way is it perfect though, as detailed on their site. I imagine it’ll still produce a certain amount of occasionally entertaining fluent bullshit.

If you, random member of the public, want to give GPT-4 a go, then you can use it via the usual ChatGPT site - if you are a ChatGPT Plus subscriber. That’s OpenAI’s premium offering, which will currently cost you $20 a month.

In many ways that’s fair enough - fancier things usually cost more money and I’m sure it costs them more to run. But it’s also the most stark example I’ve seen of a potentially dystopian-to-me future if these technologies really take off, whereby the quality of the algorithms you have access to in order to navigate the world - potentially including your ability to know true(r) things - is dependent on your ability to pay.

There’s plenty more for us to think and say about this topic, preferably before we get to the point - if in fact we ever do - where this technology feels so structurally embedded into society that it’s hard to change things.


Last month, Facebook announced its competitor to ChatGPT, a large language model AI called LLaMA. At the time they wrote that “to maintain integrity and prevent misuse…access to the model will be granted on a case-by-case basis” to specific researchers.

Fast forward about two weeks and of course its model weights have been leaked on the house of horrors that is 4chan, of all places.

Now, actually getting the model running is a non-trivial process, and it’s quite possible that the risks of nefarious people having unfettered access to such things are not all that high. But in any case, it’s further evidence that any approach to “AI safety” that relies on making sure only people deemed to be good and virtuous citizens (or Facebook engineers) have access to cutting-edge AI technology is a total non-starter. This isn’t surprising; I can’t immediately think of any historical examples where deciding that only some saintly class may have access to almost anything has really worked out well.


Women tend to suffer from motion sickness more than men when using virtual reality headsets. This study suggests that the main reason is simply that the headsets are not well designed for the interpupillary distances - the gap between the pupils - more typical of women.

Seems even the devices of the sci-fi future continue to be designed in sexist ways. Famously, Caroline Criado-Perez’s book Invisible Women has a ton of further examples.


This one might be legal, but it’s disgusting.

Children who have been sexually exploited and women who have been attacked by their partners are among more than 2,000 victims of serious crime who police reported to immigration enforcement

Obviously this policy is going to deter victims from reporting the crimes they’ve been subjected to. It’s an explicit prioritisation of punishing the “crime” of wanting or needing to reside in the UK over the actual crimes of violent assault and abuse.

Even some Conservative MPs are against it, with Sir Bob Neill claiming “It’s making them victims twice over”.


It’s not been a great couple of days for our Government in terms of managing to respect the basics of international law.

Courts have already blocked a previous proposal to deport to Rwanda asylum seekers whose claims the UK is obliged to consider. Now Sunak is trying to pass legislation that would detain and deport all asylum seekers who come here via small boats. The idea has been condemned by a wide range of individuals and organisations. But morals aside, the UNHCR says it’s likely illegal.

The indefinite detention of those seeking asylum, based solely on their mode of arrival, would punish people in need of help and protection and constitute a clear breach of the United Kingdom’s obligations under the 1951 Refugee Convention.

For what it’s worth, analysis from the Refugee Council showed that most of last year’s arrivals would in fact have been legitimately granted asylum.

Next up, on a different topic - workers’ rights - the government is trying to introduce new anti-strike laws that require minimum service levels in a wide range of industries deemed critical. The actual specifics, such as what those minimum service levels will be, are entirely undefined, risking letting any future government do whatever it arbitrarily wants with those aspects. Punishments for not meeting whatever is decided include sacking and huge fines.

The UK’s Joint Committee on Human Rights says this is entirely unjustified, disproportionate and may well breach Article 11 of the European Convention on Human Rights.

Per the committee’s chair, Joanna Cherry,

…the bill, in our view, is likely to be incompatible with human rights law, which provides a right to association and, with it, protection for strike action.