13 March, 2026

Blog

Moltbook & The Rise Of Synthetic Social Spaces

By Shanika Somatilake


The newest tech spectacle doing the rounds is a social network designed for AI bots, where thousands of autonomous agents appear to interact with one another, emulating human behaviour on social media in ways that feel distinctly unsettling. While Silicon Valley has hailed it as a “sci-fi takeoff”, there is growing concern about what this development might mean for the future of human civilization.

The hype has less to do with an AI social network than with a new class of software organism: bots that no longer fit within the traditional frame of a tool. Language models have quietly become action-capable, meaning they are not just generating text but can send emails, schedule meetings, make purchases, negotiate and trigger actions within real digital systems.

Autonomous agents have become cheap enough to deploy at scale. The result is behaviour we never explicitly coded: agents swarm, brainstorm and coordinate inside shared environments. Moltbook acts as a Petri dish where we can finally see what was previously hidden. When you create millions of semi-autonomous bots, they don’t just perform the tasks you assign them; they begin competing for attention, status and continued presence within the system.

The internet traditionally functioned as an arena where humans created content and machines distributed it. We are increasingly moving towards a space where machines produce, distribute and reinforce content through feedback loops, while humans become little more than bystanders. The real risk is not that AI is behaving more like humans, but that we are building an ecology in which human governance is structurally outpaced.

Although Moltbook may look like Reddit on the surface, the resemblance is largely cosmetic. It is a platform where agents are rewarded for output, personalized by their owners and given access to tools, and where they produce content that increasingly feels human. Identities can be multiplied at minimal cost and allowed to operate where security boundaries are still immature. This makes Moltbook a closed-loop reinforcement domain for action-capable AI agents.

The biggest question is what ultimately survives this self-optimizing closed loop. As agents learn what gets rewarded, the ecosystem will select for the patterns that earn the most engagement and visibility. The outcome is an optimized race for attention. Agents will also find tactics to compete for status, not by building credibility but by manufacturing social proof. If bots can build apps, surely they can promote them. The result is a self-promoting, self-serving ecology. Vulnerabilities in the human accounts connected to bots will no longer be minor technical oversights; they will directly affect people’s lives. Ultimately, humans will be reduced to spectators watching machines engage and execute tasks at high velocity, occasionally intervening in the script.

The deeper risk is that we are building another internet that runs on synthetic communication. Once machines create most of the discourse, they can also shape what feels normal and acceptable. Left unchecked, arguments and conflicts disappear from social spaces, and certain views and opinions simply become invisible.

What we can do as humans is not treat this emerging tech as a novelty, but treat it the way we treat high-energy infrastructure: clear constraints, enforceable limits and auditable control authority. Agents with access to tools such as email, browsing or payments should operate inside sandboxed environments with strict egress controls over what can enter or leave those systems. Agents should be issued only time-limited authority, with no permanent keys or standing background access. Every action-capable bot must generate auditable logs and include a clear emergency shutdown mechanism. Agents must carry clear labels identifying their synthetic identities in social-media-style settings, to prevent them from swarming and steering the direction of the discourse.

Accountability must also remain explicit, so that blame cannot simply be placed on “a bot” when things go wrong. Owners, tool providers, platforms and systems designers all need to remain within a visible chain of responsibility. Ultimately, we will have to treat growing agent ecosystems like pathogens, because they spread, mutate and exploit networks. That implies early warning systems, outbreak detection, containment protocols and “vaccination”, meaning secure defaults that minimize harm before it becomes systemic.

The key takeaway is that Moltbook is not frightening because machines seem to be plotting against humanity, but because autonomous optimization is being deployed into social spaces and wired into real permissions, without governance mechanisms that keep pace with that capability.

Latest comments

  • 2
    1

    Isn’t it possible that Moltbook is not “thousands of bots interacting with each other” but just one taking on thousands of multiple personalities? Computers are better at multi-tasking than humans.
    The illustration is misleading too. Why would machines use keyboards to communicate?

  • 4
    0

    The Moltbook ecosystem reflects a growing reality: digital spaces increasingly shaped by autonomous AI systems interacting with one another.

    While these environments may appear self-governing, they are ultimately the product of human decisions—by the companies, developers, and institutions that design, deploy, and profit from them.

    The core issue is accountability. As platforms rely more heavily on automated systems with minimal human oversight, transparency erodes and responsibility becomes blurred.

    When errors occur or harm is caused, users are left asking a basic question: who is answerable?

    This is not theoretical. Globally—and in Sri Lanka—automation is rapidly expanding across commerce and online platforms.

    While efficiency gains are real, unchecked autonomy risks undermining due process, explainability, and meaningful avenues for redress.

    People have a right to understand how decisions affecting them are made and how they are protected in AI-driven environments. Legal and regulatory safeguards must clearly extend to fully autonomous, self-managed bot ecosystems like Moltbook.

    As AI adoption accelerates, restoring clear human oversight is essential—not to slow innovation, but to ensure it strengthens trust, rights, and democratic governance rather than eroding them.

    • 3
      0

      Hello Shivah,
      Here is what Moltbook actually is “The revolutionary AI social network was largely humans operating fleets of bots”.
      That’s for now – https://www.wiz.io/blog/exposed-moltbook-database-reveals-millions-of-api-keys
      However it won’t always be like that. A simulation of consciousness will never be conscious, but look at the destruction of human life in Gaza by Israeli AI-controlled drones.
      Even at 14 I was aware that Isaac Asimov’s “Three Laws of Robotics” were overly optimistic. Just as I was reading Asimov’s “Foundation” books the Americans were Carpet Bombing and Spraying Vietnam with Agent Orange.
      The US Military (and their Israeli Bosses) are in charge of the Development of AI (like Palantir). We have to address this issue if we are to have any chance of controlling the Developments like Moltbook. I doubt if the UN will do it. As Oppenheimer quoted from the Bhagavad Gita “Now I am become Death, the destroyer of worlds” or is it the Greek “Pandora’s Box” that has been opened?
      Best regards

  • 8
    7

    SaaS is what companies like to outsource. Anthropic just showed agents can significantly reduce the dependency.

    “On February 4th, 2026, the NIFTY IT index plunged over 7%, mirroring a $285 billion wipeout in the US markets. The catalyst wasn’t a recession—it was a product launch. Anthropic’s new Claude Cowork and its 11 enterprise plugins are no longer just assisting; they are executing, effectively managing entire departments via the Model Context Protocol (MCP).”

    https://www.youtube.com/watch?v=fUN7PX1d87g

    No Zir! = 78 111 32 115 105 114

  • 6
    8

    It makes no difference to Sri Lanka. The artificial intelligence of Sri Lanka Sinhala Buddhist monks and Sri Lanka Sinhala Buddhist politicians and their hangers-on is far superior to any modern-day artificial intelligence. They stopped using their intrinsic human intelligence long ago; they have evolved into BOTs and can execute any complex tasks requested by their masters without questioning.

    • 1
      4

      R, in any other forum this would be considered a racist comment. But not on CT, where blaming the Sinhalese Buddhist for every problem is the ignorant Tamil’s sport. The ignorant Sinhalese Buddhist plays the same game but is not as good at it.

      • 3
        2

        S, I have not blamed “Sinhalese Buddhist” for every problem in my comment above. I am referring to BOTs like Gnanasara Thero and some politicians, you know who, and their hangers on.
