Though separated by hundreds of years, modern conceptions of AI, especially those of a more doomer persuasion, strike me as having distinct parallels with a long history of myths, such as those around golems.
Many such stories originate from Jewish folklore. The Jewish Museum in Berlin describes a golem as follows:
…a creature formed out of a lifeless substance such as dust or earth that is brought to life by ritual incantations and sequences of Hebrew letters. The golem, brought into being by a human creator, becomes a helper, a companion, or a rescuer of an imperiled Jewish community. In many golem stories, the creature runs amok and the golem itself becomes a threat to its creator.
Make a handful of changes, perhaps swapping out “sequences of Hebrew letters” with “sequences of computer code”, and “an imperiled Jewish community” with “humanity” and there we go. It’s certainly not hard to see one of the more famous such myths, the Golem of Prague, as a kind of AI alignment problem.
The story of Frankenstein feels like yet another warning from fiction about the perils of creating something supposedly akin to a human. From the Wikipedia summary:
Victor buries himself in his experiments to deal with the grief. At the university, he excels at chemistry and other sciences, soon developing a secret technique to impart life to non-living matter.
Despite Victor’s selecting its features to be beautiful, upon animation the Creature is instead hideous, with dull and watery yellow eyes and yellow skin that barely conceals the muscles and blood vessels underneath. Repulsed by his work, Victor flees.
Few people presently consider artificial intelligences to be a form of life. But that aside, a couple of weeks ago the “Godfather of AI”, Geoffrey Hinton, also fled, repulsed by his work. Or at least he quit Google, sorry for what he had done.
…citing concerns over the flood of misinformation, the possibility for AI to upend the job market, and the “existential risk” posed by the creation of a true digital intelligence…Hinton, 75, said he quit to speak freely about the dangers of AI, and in part regrets his contribution to the field.