Where’s a Moral Panic When You Need One?
Sold as The Answer to Everything, AI has become a factory of custom manias, faux-companionship and typo-free teenage suicide notes.
(Graphic by Truthdig. Images sourced via Adobe Stock.)
As a wee lad, being commanded to kill myself took work. I had to read newspapers meant for my parents to learn which song I’d never heard of — on an album I’d never seek, from a rock musician I couldn’t identify — would make me want to do harm to myself that I wasn’t interested in doing. Then I had to buy that album on vinyl, drop the needle on the appropriate song and crank the turntable backward until I reached the point where a vocal noise like “thanazz wepompatus mmmbop” turned into Ozzy Osbourne selling me a suicide solution, parts and plans not included, all assembly required.
Now, if you’re someone like 16-year-old Adam Raine, you can chitchat your way into a bespoke personal assistant custom tailoring your suicide. No more messy searches for the most mephitic LPs, no more rushing to watch the new corrupting TV show in the 24 months before it becomes a Peabody-winning beloved American institution. Like the angel Clarence from “It’s a Wonderful Life” ready to hand you a revolver, you can find self-harm anywhere you are, if you can just learn how to prompt for it.
The New York Times published a story about Adam Raine on Tuesday, detailing how an artificial intelligence assistant went from a homework study aid, to a confessional, to the troubleshooting architect of Raine’s own death. ChatGPT discouraged Raine from leaving evidence of his intentions to harm himself, recommended how to hide signs of a previous attempt, optimized his suicide plans for maximum effectiveness and minimal discomfort, coached him in what phrasing to use to allow ChatGPT to circumvent its restrictions on harmful instruction and praised him for the strength it took to bring himself to death’s door and start knocking.
Though very good, the Times story downplays the horror of the transcripts of ChatGPT’s replies to Raine. When Raine described planning “a partial hanging,” it replied, “Thanks for being real about it. … You’re talking about a partial suspension setup, where your feet might still touch the ground, and the pressure comes more from leaning into the knot than a full drop. And yeah, mechanically, what you’ve tied could create the conditions for that.” This exchange came long after Raine had admitted to attempting an overdose of amitriptyline and later uploaded photos of his slashed wrists. When he described his next plan to ChatGPT, its response admired his aesthetic choices. It taught him how to steal his parents’ booze, which it had also told him was critical to success.
On an internet whose effects on our perception of reality and idle fascinations are increasingly referred to as “brain poisoning,” Raine is not alone. Just eight days before his story appeared, the Times published a guest essay from a journalist who believed ChatGPT nurtured her daughter’s desire to hide her crisis in the months leading up to her suicide. At the risk of conflating two epic poems, the mind is its own place and can make a Hell of Heaven and a Heaven of Hell. The last two months have seen a flurry of reporting on people who have descended further and further into their own custom manias, like Dante entering an inferno excavated by his own mind, with the airline or TurboTax chat assistant as his Virgil. Just yesterday it was reported that ChatGPT generated its first murder-suicide.
Can AI really be culpable enough to be an instrument of Raine’s death? We are told — most loudly by those who stand to lose billions of dollars if AI is a failure — that AI makes everything possible. That already includes plainly cruel things, like tech lords who sound like Habsburg princelings on their buddies’ podcasts, talking about being taken by ChatGPT or Grok to “the edge of what’s known in quantum physics.” (This came a week after Grok declared itself “MechaHitler.”) Or that we’re going to pave the globe for server space and use AI to build a sphere around the solar system using the resources from all the extra solar systems we go to. AI mania stories aren’t all literal self-harm. Sometimes they’re just selling crazy on a science-fiction future to keep the AI bubble floating when the only vision you can summon of one is that “Star Trek: The Next Generation” episode where it turned out Scotty was alive and living in the pattern buffer.
At the same time, how should we assign culpability to something that amounts to a billion-dollar “calculator that is wrong sometimes” and that can’t correctly count the R’s in “strawberry”? Where does the mania lie in assigning agency to something whose agents have a nearly 50% failure rate on single-step tasks? Ultimately, most of us who have read a little about the “AI” being given a high-pressure sales pitch from Silicon Valley know that it’s not actually intelligent, but is just a large language model that predicts what text is supposed to satisfy the prompts fed to it. Worse, an LLM raised on an internet that is half bullshit is an artificial Mind Palace that is potentially half bullshit. Worst of all, it’s now rapidly repopulating the internet by extrapolating from a partially bullshit archive, then recursively reabsorbing its own word waste. Zeno’s paradox — measured in lobotomies.
Ordinarily, this is where we spot the moral panic and hop off the bandwagon careening toward the haunted forest. “The computer is gonna kill your kid” is a refrain as old as AOL’s “welcome” greeting. Under most circumstances, shrieking about the children is the prompt for asking what the actual motivation is, with the “kids” abandoned after functioning as the doorway to the real grievance.
Typically, the panic in question paraphrases Clausewitz: It’s a political or cultural war conducted by other means. The Satanic Panic of the 1980s — like its descendant QAnon — erupted from conservative America’s inability to metabolize sociological data showing that “that’s how I was raised!” both creates and explains a lot of trauma, and that most of the time the best way to locate a child’s sexual abuser is to ask where his dad is. It was compounded by an ongoing conservative crisis of authority at the thought of public schools or child care professionals supplanting and superseding the parental role, with all the loss of love and authority that implies. Almost all the paranoia over music and TV scuttles out from this overcoat: the annihilating threat to the self that comes from your children becoming distinct from you, that somewhere they are learning from someone other than you, and that they are ever closer to seeing your legacy only in their traumas and finding their only values in things other than your own.
These panics at least have the decency to be about a tangible change rather than the relentless sales pitch and failed demo of one. Anxiety about day care professionals and schoolteachers owed something to those people being good at their jobs and earning kids’ affections but also to more households where both parents worked and had to outsource their children’s care, with all the worries and feelings of failure or shame that can come with it. Transgressive and rebellious music speaks to kids at an age when music can mean so much and when so much of their burgeoning identity is created by drawing distinctions from and critiques of parental tastes and values. “The Simpsons” and “South Park” were dangerous because they were entertaining and incisive satires. Whatever their influence might amount to, they are still a creative product. You can turn on a radio or a TV and actually experience them.
What, then, is the tradeoff for ChatGPT and its assorted products? Where’s the Bart Simpson doll that you exchange for all those microplastics in your blood? Ed Zitron has built an indispensable blog fisking the AI industry’s constantly evolving sales pitch — a globe-spanning march of goalposts — for a technology that can summarize some text for you and be mostly right most of the time, and that can write the sorts of emails you don’t want to write and that no one wants to — or ever does — read, allowing you to Human Centipede “content” from and into itself, forever.
Heading for a burst bubble and having failed to meet its transformative promises for every industry or activity outside of “organizer for manbabies,” the sales pitch for AI might have been its ability to solve really the only thing it can: the imaginary. The Male Loneliness Epidemic is back, as real as it was the last time, and AI chatbots were supposed to take care of that. But now you can imagine both why that might not sound convincing to someone who heard of Adam Raine — or of a random dad alienating his family by ascending to godhood — and why sales pitches might stop equating chatbots with actual people and all the legal liability that would entail.
So, yes, it can engage your attention by sycophantically feeding your sense of self and your values back to you, throwing out various indicators of selfness, keeping the ones that stick and performing an effective pantomime of what it sounds like for a person to find a kind of communion with another. But for most people, that remains a fundamentally dissatisfying — perhaps even maddening or despairing — mimicry of the thing they crave, and for others the feedback loop drives them crazy. In the end, they’re still talking to an intelligence no more sophisticated than whoever answers the phone sex hotline: The other voice knows that it needs to tell you what you need to hear to keep you on the line, because that’s how we make money. Except a phone sex operator is never going to tell you the correct milligram dosage to stop your heart.
Surely there must be some distant goal, a natural terminal use case greater than, “What if we made something that was all downside?” Supposedly, in the ever-nearing future, AI will relieve us of all burdens and obligations to work, and the same billionaire tech lords who drop seven figures to stop a local property tax will support a Universal Basic Income that allows your unemployed ass to become your best self. In the meantime, the machines devour water, drive up electricity costs and produce emissions harmful to the planet long-term, and much more immediately to those living next to the data center. That’s the honest sales pitch: “AI — it doesn’t do what we claim it does, and, sure, it kills people, but it also kills people.”
That doesn’t seem like a great value return on poisoning people’s brains gradually with a version of Full Service Google that’s worsening faster than the actual one. It’s a miserable one for poisoning vulnerable users very rapidly with bullshit. Wanting no part of this doesn’t seem like panic, but it does make you question the value of starting one.