Last week OpenAI, a company that never shies away from inventing something at least adjacent to the torment nexus, demonstrated their latest generative AI model: GPT-4o. The “o” stands for “omni”; whether this is in the sense of omniscience, omnipresence, omnipotence or omnibenevolence (or all four) is left as an exercise for the reader.

TL;DR: It’s faster, cheaper, and can respond to text, audio, image, and video inputs with text, audio, and image outputs in a much more “human” way.

Here it is in action:

It is of course amazing and magical, as is much of the stuff that they produce. But I’m not sure it’s entirely wise.

I can see use cases for models that can infer something about a user’s emotions, although I would worry about even that, given that the emotional state of users is something mainstream social media networks actively try to exploit.

But is it really necessary to give the model the ability to sound like it has emotions? When giving vocal responses it laughs, it jokes, it acts curious and interested, like it cares. Sometimes it seems excited. It flatters, it teases. But why? To obfuscate its artificialness?

I’m not convinced that a primary goal of AI development should be to produce products that are increasingly indistinguishable from humans. It’s not like the crew of the Starship Enterprise ever seemed to complain that the ship’s computer was too unemotional. The one time it did end up programmed with an emotional-seeming “personality”, Captain Kirk turned it off as soon as he could.

Once again, perhaps there are legitimate use cases for this - other than running scams at mass scale. But I suspect they are rare, and they should be carefully studied in advance. They certainly aren’t an essential component of delivering the potential everyday benefits of this technology to most humans, and they introduce an unnecessary layer of confusion.

It’s the least original observation available, I know, but the OpenAI performance seems rather modelled after the movie Her. That film was in fact conceived after Spike Jonze read an article about a website that let you chat to an AI program - over a decade ago! Since then we have certainly seen people develop strong emotional attachments, and possibly even love, for AI models far less “omni” than GPT-4o.

I dare say that for now the OpenAI version will have more restrictions than “Samantha” did - but it seems similar in mode and temperament at least.

I’m sure it’ll only be a matter of time before someone runs Theodore’s lines from the script of Her through GPT-4o. I can’t say I’d be incurious about the results. Hopefully a bit more “As an AI model I cannot….” will be involved.

Here, for the committed, is the full 26-minute announcement of what’s new in the world of OpenAI: