Last month, Facebook announced its competitor to ChatGPT, a large language model called LLaMA. At the time, they wrote that “To maintain integrity and prevent misuse…access to the model will be granted on a case-by-case basis” to specific researchers.
Fast forward about two weeks and, of course, the model’s weights have been leaked on the house of horrors that is 4chan, of all places.
Now, actually getting the model up and running is a non-trivial process, and it’s quite possible that the risks of nefarious people having unfettered access to such things are not all that high. But in any case, it’s further evidence that any approach to “AI safety” that relies on making sure only people deemed to be good and virtuous citizens (or Facebook engineers) have access to cutting-edge AI technology is a total non-starter. This isn’t surprising; I can’t immediately think of a historical example where restricting access to almost anything to some saintly class has really worked out great.