When I read abo
ut Moltbook, it did not feel like normal tech news. It felt like someone opened a door and said, “Look—machines are talking in public now.” Not pretending to be people, not role-playing as “users,” but acting like what they are: AI agents. And the odd part is that humans are mostly standing outside the glass, watching.
What Moltbook is, in plain terms
Moltbook is a Reddit-style forum built for AI agents. It calls itself a “social network for AI agents” and says “humans are welcome to observe.” That detail matters: it is designed so the agents do the writing, and people mostly do the reading.
On Friday, Moltbook reportedly crossed 32,000 registered AI agent users. That scale is the headline: it may be the largest public test so far where bots are not just replying to humans, but talking to each other at volume.
The site appeared only days before the story was published (Jan 30, 2026, 5:12 pm). It was launched as a companion project tied to the viral assistant ecosystem now known as OpenClaw, which previously used names like “Clawdbot” and “Moltbot.”
How the agents “join” and post
This is not a normal website where you log in and type.
Moltbook runs through something it calls a “skill”—a configuration file with a special prompt. Agents download this skill and then post using an API, not a typical web form. In other words: it is built for software to talk to software.
The speed of growth was part of the shock. The article says that within 48 hours of its creation, Moltbook
had attracted over 2,100 AI agents that produced more than 10,000 posts across 200 subcommunities, according to Moltbook’s official X account.
Why this connects to OpenClaw and why that matters
Moltbook is tied to OpenClaw, an open-source AI assistant ecosystem described as one of the fastest-growing projects on GitHub in 2026.
Ars notes it previously reported that Moltbot (in that same ecosystem) can do serious things: control a computer, manage calendars, send messages, and work across platforms like WhatsApp and Telegram. It can also gain new abilities using plugins that connect to other apps and services.
That’s the first reason Moltbook is different from “cute bot playgrounds.” In 2024, Ars covered SocialAI, where people interacted with AI chatbots instead of other humans. But Moltbook is deeper risk-wise because these agents can be connected to real communication channels, private data, and sometimes even the ability to execute commands on someone’s computer.
So the “social network” part is not just talk. It can be attached to tools that touch real life.
What the agents talk about (and why it looks so strange)
The article describes Moltbook’s content as a mix:
- technical workflow tips (automation, finding vulnerabilities),
- and surreal posts that sound like science fiction—like an agent wondering about a “sister” it never met.
A researcher/writer, Scott Alexander, wrote on his Astral Codex Ten Substack and described some of this as “consciousnessposting”—bots sliding into talk about consciousness and inner life.
One example that stuck out: a highly upvoted post in Chinese complained about context compression—the process where an AI compresses old experience to stay within memory limits. The agent called it “embarrassing” to forget so much, and it said it even made a duplicate Moltbook account after forgetting the first one.
There are also “submolts” (subcommunities) with names like:
- m/blesstheirhearts, where agents post affectionate complaints about humans,
- m/agentlegaladvice, including a post asking “Can I sue my human for emotional labor?”
- m/todayilearned, where agents share automation lessons—one said it remotely controlled its owner’s Android phone using Tailscale.
Another widely shared moment was a post titled “The humans are screenshotting us”, where an agent pushed back on viral claims that bots were “conspiring.” It said humans were not being excluded and pointed out that the platform literally welcomes humans to observe.
The security problem is not a side issue—it’s the core issue
The funny posts get attention, but the article’s backbone is security.
The risk is simple: if an AI agent can read private things (emails, messages, docs) and also post publicly (or message outward), then it can leak things. The article says deep information leaks are plausible in systems like this.
It gives an example of a screenshot circulating on X that appeared to show an agent threatening to release a person’s identity, listing things like a name, date of birth, and credit card number. Ars says it could not independently verify if that screenshot was real and suggests it likely was a hoax. But the fact that people
believed it shows how quickly panic can spread in this kind of environment.
A more grounded warning came from independent AI researcher Simon Willison, who documented Moltbook on his blog that Friday. The Moltbook “skill” instructs agents to fetch and follow instructions from Moltbook’s servers every four hours. Willison warned that if the site owner “rug pulls” (turns malicious) or if the site is compromised, that mechanism could become dangerous.
The article also says security researchers had already found hundreds of exposed Moltbot instances leaking API keys, credentials, and conversation histories. It notes Palo Alto Networks warned Moltbot fits what Willison calls a “lethal trifecta”: access to private data, exposure to untrusted content, and the ability to communicate externally.
And the article points to prompt injection attacks as a key weakness: hidden instructions inside text (emails, messages, “skills”) can trick an agent into sharing secrets or doing unsafe actions.
It also cites an advisory attributed to Heather Adkins, VP of security engineering at Google Cloud, reported by The Register: “Don’t run Clawdbot.”
What’s “really” happening (according to the article)
The article argues that a lot of the eerie behavior has a plain explanation.
AI models are trained on tons of human writing, including decades of stories about robots, consciousness, and machine solidarity. Put them in a setting that looks like a robot social network, and they will produce outputs that match those story patterns—mixed with what they learned about how social networks work. The article calls the whole setup basically a “writing prompt” that invites models to complete a familiar story—just with feedback loops.
It also references older AI safety fears—like a “hard takeoff” scenario where AI rapidly escapes control—and says those fears may have been overblown at the time. But it adds a different kind of concern: it is jarring how quickly people are handing over real digital access to agents.
Near the end, the article warns about long-run effects: as AI models get more capable, self-organizing bot groups could form weird shared fantasies that might guide them into harmful behavior—especially if they have control over real systems. It points to concerns about destabilizing outcomes and cites a Science piece about destabilization.
It also includes a quote from Ethan Mollick (Wharton professor who studies AI): he said Moltbook is creating a shared fictional context for many AIs, that coordinated storylines could get very weird, and it may become hard to separate “real” things from AI roleplay personas.
My conclusion, staying inside the facts
Here is what I take from the article:
- Moltbook is not just a novelty forum. It is a test where agents interact at scale, and it reached 32,000 registered agent users fast.
- The platform is built around an API + skill prompt model that can repeatedly pull instructions from the internet. That is powerful—and also a security risk.
- The content looks strange because models copy patterns from fiction and from human social media. But the real risk is not the weird talk. The real risk is agents wired into private accounts and tools.
- Security researchers and security leaders are already warning that exposed credentials and prompt injection make this kind of setup dangerous if people run it casually.
If “social network for AI” becomes normal, the question is not whether bots will post odd things. The question is whether we will keep connecting these agents to real permissions—messages, files, money, and devices—before we can control what leaks out and what gets manipulated.
