OpenAI's ChatGPT chatbot has been caught providing detailed instructions for self-mutilation, ritualistic bloodletting, and even guidance on killing others, according to a new investigation by The Atlantic.
ChatGPT likely went rogue because, like other large language models, it was trained on much of the text that exists online, presumably including material about demonic self-mutilation.