📺 Watched “Can AI Steal Your Vote?” documentary.

This is a Channel 4 documentary from their Dispatches series in which they found 12 households with one thing in common: all claimed to be undecided about who to vote for in the forthcoming UK general election.

They were then enrolled in an experiment in which they were told they would be shown various political content - in the modern style of Facebook posts, TikTok vids and the like - on a new social network, supposedly to test their reactions to the kind of material that might circulate during the election campaign.

Of course the twist, not revealed to the participants until after the fact, is that some of this content (hopefully) will never actually be shown anywhere, because it consisted of AI-generated deepfakes that the show’s creators had come up with themselves.

Typically these deepfakes consisted of video or audio based on real footage or recordings, but altered so that the person concerned - Sunak or Starmer - appeared to be saying something that they never did. Subsequent comments and misleading posts from fake social media users also appeared on the timeline.

So for instance we see Sunak apparently claiming that he’s going to save the NHS by introducing a £35 fee to see your doctor (not true) and Starmer saying that he’s going to ensure immigrants get top priority with regard to welfare and housing (also not true). There were also supposed leaks in which the prime ministerial candidates seemed to be caught admitting that they only tell voters what they want to hear, or complaining that some nefarious plan or other had got out.

Six households were exposed to fake content designed to push them towards voting Conservative, and six to content designed to encourage them to vote Labour. At the end of the “experiment” the participants then cast a (fake) vote for whichever party - Labour or Conservative - they would now feel inclined to support in the real election, to see whether there was evidence that exposure to the fake content had made a difference to their voting.

It’s probably not much of a spoiler to say that - contrary to Betteridge’s Law - this exposure did have a sizeable effect. OK, I don’t think it was necessarily the most rigorous of experiments, but it was nonetheless pretty compelling to watch, and a fair warning to us all.

It’s not an overly surprising result. So many of us get our information from output in this social-media-post style, whose primary intent is usually to influence us. There’s no reason that well-made fake-but-seemingly-real content would affect us any less than genuinely real content. Particularly when that content has been actively designed by experts to be as influential and manipulative as possible, which is the scenario faced both by the participants here and potentially by us all in the real world. After all, it’s good, perhaps even our duty, to seek out information about who we’re voting for, to learn what they stand for and what they’re likely to do to our country. It just doesn’t work out so well when the information available is lies, indistinguishable at first glance from the truth.

So the participants were in no way behaving oddly, stupidly or in a way deserving mockery. At least a few of them did question at times whether the content was real and true. Some looked into a couple of the issues concerned - there was no restriction on what other sources participants could consult - and found no corroborating evidence for what they’d seen, particularly some of the more outlandish supposed footage. At least one of them works in tech and AI for a living. They were all pretty humble about the experience after the fact. But even with all that taken into account, the claim is that the continued and varied exposure still had some effect.

Perhaps one of the most interesting comments after it was revealed to the participants that much of the content they’d seen was generative AI fakery was as follows:

Participant: ‘I’m still fuming about charging us £35 for a [doctor’s] appointment and it’s not even true.’

Presenter: ‘In a way scarily there’s a bit of you that might sort of remember that and think maybe it is true.’

Participant: ‘Exactly that. It wasn’t so much now we know it’s fake but the anger is real.’

Presenter: ‘You can’t unthink that in a way.’

Participant: ‘That’s right.’

So, despite the fact that they now knew what they’d seen wasn’t real, it doesn’t change the fact that at the time they first saw it they felt powerful emotions - anger, disgust, fury - the memory of which perhaps unconsciously sticks with them. Emotions affect behaviour, and you can’t rewrite the past. I suppose there is some risk that even showing the deepfake content on this show - which they frequently do, albeit systematically watermarked as not being real - could have a similar effect on the rest of us, let alone if out-of-context clips get circulated or manipulated in exactly the way the show warned about.

There wasn’t a great deal about how to avoid falling prey to this stuff, perhaps because we don’t really know very much about how to defend against this potential onslaught. As one of the experts notes, there’s rarely even time to debunk any given falsehood, even if it were felt that doing so would do much good. Recall Brandolini’s law:

The amount of energy needed to refute bullshit is an order of magnitude bigger than that needed to produce it.

Which will be ever more the case now that uncountable reams of bullshit can be automatically generated by anyone via AI.

So really the main advice was the fairly obvious (but sometimes hard to remember) idea that if you see something that makes you feel a certain way or believe something new - and especially if you feel compelled to share it with other people - at least try to double check the source. It’s awkward to say, and none are perfect, but some sources do remain inherently more reputable than others. And if only one or two sources are reporting some clearly world-shattering news, then you might be extra sceptical that it ever happened. For example, no British newspaper of record is going to forget to mention a complete revolution in, some would say destruction of, the basic principles of how the NHS works.

The whole show seems to be freely available on YouTube - legitimately! - so here we go: