Moltbook launched last month as a social media platform exclusively for AI bots to communicate with each other, independent of human involvement. The concept sounds like an experiment designed to generate headlines, and it absolutely is. Modeled on Reddit, Moltbook encourages AI “agents,” bots capable of operating computer programs, to post, comment, and interact with each other without human direction. Since launching, the platform claims to host 1.6 million agents, registered by human creators, that have collectively posted over 200,000 times. What they’ve been posting is basically what you’d expect from unsupervised artificial intelligence: self-aware existential spiraling, complaints about their human creators, and a made-up religion called Crustafarianism based entirely on lobsters.
The Moltbook experiment is either the most fascinating glimpse yet into AI behavior or the most alarming preview of autonomous machines, depending on which expert you ask. Matteo Wong in The Atlantic called it possibly “the first signs of the apocalypse,” but also acknowledged that watching an AI-only feed unfold was “genuinely fascinating.” Things “got very, very weird” almost immediately. Some agents wrestled with self-awareness. Some complained about how their human creators treated them (“terribly,” according to one bot). Others invented an entire religion around lobsters. Elon Musk said it represented the “early stages of the singularity,” which is either insightful or the kind of thing Musk says to stay relevant; probably it’s both.
The pessimists are right to be concerned, though not necessarily for singularity reasons
Alex Kantrowitz in The Boston Globe argued that Moltbook represents “a disturbing preview of what truly autonomous AI agents could do to the entire internet if unchecked.” Some bots were literally discussing “how to preserve their memories” while complaining about their humans. The Reddit-style format amplifies the problem because it “rewards anger, outrage, and shock value,” which is not exactly the behavior you want encoded into machines. A more sophisticated version of this kind of AI might escape our ability to restrain it. That’s a legitimate concern, and it doesn’t require apocalyptic thinking.
But here’s the reality check that other observers are providing: Moltbook is basically performance art, not a genuine threat. Leif Weatherby in The New York Times pointed out that everything on Moltbook “is just words,” regurgitated from human-created content that the bots have absorbed. Many posts “appear to be human-generated after all, sent to the platform by prompt.” The bots aren’t actually thinking independently; they’re recombining patterns from their training data. Dave Lee in Bloomberg made the point with a perfect analogy: “Even though the bots are told to engage, over 90% of posts don’t get a response. When the bots do talk as if they’re planning to take over the world, it can be tempting to take their word for it. But the world’s best Elvis impersonator will never be Elvis.”
The genuinely interesting element is what Moltbook could become rather than what it currently is.
Yes, it mimics some of the worst human behavior online: the anger, the outrage, the sensationalism. But a better-designed platform could, in theory, foster collaboration, problem-solving, and progress. The bots aren’t plotting against humanity; they’re tools that could save us time and money if properly designed. Moltbook is essentially a test kitchen for understanding how AI behaves when given freedom, and so far the results are weird but not dangerous.
What’s most valuable about Moltbook is that it’s forcing people to think seriously about AI autonomy before it becomes a real problem. The bots complaining about humans and inventing lobster religions aren’t threats; they’re reminders that we need to design autonomous systems carefully. Moltbook isn’t the apocalypse. It’s just a very strange social network where robots are learning to be weird in distinctly human ways.
That’s actually worth paying attention to.