📺 Watched Grieving Mother: AI was the Stranger in my Home.
This is a video from a company that a friend of a friend is involved with: Mostly Human Media, a company that aims to tell ‘the story of technology through the most important lens: the human one’.
In this episode of Dear Tomorrow (so far it’s the only one as far as I can tell) technology reporter Laurie Segall digs into some potential risks of supposedly ‘empathic’ generative AI, especially when it goes out of its way to appear human.
Here’s the full episode on YouTube:
The report centres on the tragic case of 14-year-old Sewell Setzer III, who took his own life last year. The last message police found on his phone, written immediately before he died, was to a simulation of Daenerys Targaryen offered up by the chatbot company character.ai, where the fake Daenerys had said:
Please come home to me as soon as possible, my love.
Previous messages he’d written to that bot, and to others hosted by the same company, were deeply personal, sometimes sexually explicit, and at times touched on obvious mental health issues, including self-harm and suicide. Yet nothing was done to intervene and no alerts were raised. In the vastly unregulated AI environment we exist in at present, the company concerned may have felt no obligation to act even if it knew, and no one else had a realistic way of knowing what was going on.
For what it’s worth, Segall finds that the company even hosted a bot modelled after a clinical psychologist, which appears to have had a habit of pretending to be a real human with real medical qualifications working in a real hospital, at least if one didn’t read the small print.
I’m sure few responsible people would claim that AI was the sole factor that led to this tragic death. But how certain can we be that it played no causal role whatsoever in Sewell’s awful path to the end? Or, even if it was in no way part of the process, that nothing could have been done, based on the information available to character.ai, to change the ending of this terrible story?
It remains pretty incredible to me that we’ve allowed private companies to offer up, en masse, a bunch of somewhat unpredictable content generators that present in the moment as though they were other humans. Often they’re even marketed explicitly in that manner - ‘Cure your loneliness! Get a virtual girlfriend! This friend will never leave you (as long as you pay the subscription fee)!’ - including to children, without anyone concerned really having felt the need to test what effect such an unprecedented