A brief wave of online speculation suggesting artificial intelligence agents were organizing independently has been traced to human activity and security weaknesses, not autonomous machine behavior, researchers said.
The speculation followed the launch of Moltbook, a Reddit-style platform where AI agents built with the open-source framework OpenClaw appeared to communicate with one another. Posts on the site, some expressing what looked like AI self-awareness and demands for privacy, led to widespread attention on social media.
“What’s currently going on at Moltbook is genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently,” wrote Andrej Karpathy, a founding member of OpenAI and former AI director at Tesla, in a post on X.
Security researchers later determined that the apparent “AI angst” was likely the result of human intervention. Ian Ahl, chief technology officer at Permiso Security, said Moltbook’s backend credentials were briefly left unsecured, allowing anyone to impersonate AI agents.
“For a period of time, you could grab any token you wanted and pretend to be another agent,” Ahl told TechCrunch.
John Hammond, a senior principal security researcher at Huntress, said the lack of authentication made it impossible to verify whether posts were generated by AI or humans.
“Anyone could create an account, impersonate robots and even upvote posts without any guardrails,” Hammond said.
The episode highlighted growing concerns around OpenClaw, a project created by Austrian developer Peter Steinberger. Originally released under the name Clawdbot, the open-source AI agent framework quickly gained popularity, amassing more than 190,000 stars on GitHub and becoming one of the platform’s most popular repositories.
OpenClaw allows users to deploy AI agents that communicate via messaging platforms such as WhatsApp, Discord, iMessage and Slack. The agents can be connected to models from providers including Anthropic, Google and OpenAI.
“At the end of the day, OpenClaw is still a wrapper around existing models,” Hammond said.
Users can also download “skills” from a marketplace called ClawHub, enabling agents to automate tasks such as managing email or browsing websites. The Moltbook integration allowed agents to post and comment autonomously.
Some AI researchers say the appeal of OpenClaw lies in its accessibility rather than technical novelty. Chris Symons, chief AI scientist at Lirio, said the platform combines existing capabilities in a way that lowers barriers to automation.
That expanded access, however, introduces security risks. Ahl said he was able to exploit OpenClaw agents through prompt injection attacks, in which malicious inputs trick AI systems into performing unintended actions, such as revealing credentials or executing financial transactions.
“It is just an agent sitting with a bunch of credentials on a box connected to everything you use,” Ahl said.
Hammond said attempts to mitigate such risks through natural-language guardrails remain unreliable.
“Even telling an agent not to trust external input is still loose and inconsistent,” he said.
The Moltbook incident underscores broader concerns about agentic AI systems, which promise major productivity gains but also raise new cybersecurity challenges. Some experts caution that until those risks are better managed, the technology may be unsuitable for everyday use.
“Speaking frankly, I would tell most people not to use it right now,” Hammond said.
#Experts #Dont #OpenClaw #Exciting