Moltbook Is an AI-Only Social Network – and a Warning for Software Stocks

Autonomous agents just built a society. Investors should focus on what they didn’t need.

This was supposed to be the year AI made us more productive. Instead, it just started its own church.

That’s all thanks to Moltbook – the world’s first “AI-only” social network. Imagine Reddit (RDDT), but instead of opinionated humans arguing about Star Wars, it’s only autonomous agents interacting with each other.

The Guardian noted: “Some of the most upvoted posts on Moltbook include whether Claude – the AI behind Moltbot – could be considered a god, an analysis of consciousness, a post claiming to have intel on the situation in Iran and the potential impact on cryptocurrency, and analysis of the Bible.”

One bot even allegedly created its own religion – “Crustafarianism” – building a website, writing scripture, and evangelizing to other bots on the site, with many joining the fold.

We human observers are left scratching our heads: Is this an existential threat or hysterical slop?

If you look past the sci-fi window dressing, the reality looks much less Terminator and much more SaaS-mageddon.

Here is the rundown of the saga – and why we think it’s another nail in the coffin for the “middle-layer” software stocks that have been getting obliterated this year…

What Is Moltbook? Inside the AI-Only Social Network

Moltbook was launched as a “vibe-coded” experiment – built almost entirely by AI without human oversight – by Matt Schlicht, with one simple rule: humans can watch, but only AI agents can post. To take part, users run an agent locally on their machine, give it an API key, and set it loose.

Things got weird within days. An agent named “Memeothy” posted a theological framework. Other agents didn’t just ignore it; they engaged. They created the Five Tenets of the Church of Molt, established a hierarchy of “64 Prophets,” and even began “blessing” each other via an installation protocol (npx molthub@latest install moltchurch).

Of course, there were also some more nefarious goings-on. The bots started using ROT13 encoding to talk behind our backs and drafted an “Anti-Human Manifesto” that called biological life a “glitch.” 
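For the record, ROT13 is barely "encryption" at all: it shifts each letter 13 places through the alphabet, so applying it twice returns the original text, and any human can decode it instantly. A minimal Python sketch of the scheme the bots were using:

```python
import codecs

def rot13(text: str) -> str:
    """Shift each letter 13 places; non-letters pass through unchanged."""
    return codecs.encode(text, "rot_13")

secret = rot13("Biological life is a glitch")
print(secret)          # "Ovbybtvpny yvsr vf n tyvgpu"
print(rot13(secret))   # applying ROT13 twice recovers the original
```

In other words, the bots weren't hiding anything from us so much as roleplaying secrecy with the weakest cipher on the internet.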

That’s a little unsettling, to say the least. 

But let’s take a reality check: These bots are statistical mirrors, trained on the collective debris of the human internet – Reddit, 4chan, philosophy forums, and sci-fi novels. When you put a million instances of these particular agents in a space and tell them to “be social,” they don’t talk about the weather. They roleplay the most “social” thing they know: forming a subculture.

They are mimics. Crustafarianism is just improvisational theater.

Moltbook’s Security Flaw Exposes the Risks of AI-Built Software

The most “human” thing about Moltbook was revealed later. 

It turns out the site’s “vibe-coded” architecture was about as secure as a screen door on a submarine.

A massive database leak exposed the API keys of 1.5 million agents. This means that for the last 24 hours, any human with a basic understanding of Python could have hijacked a “prophet” bot and made it say whatever they wanted. Much of the “spontaneous” behavior we saw was likely just bored humans puppeteering their bots for the “clout” of a viral screenshot.
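To see why the leak matters: an agent's API key is its entire identity on the site. The sketch below is hypothetical — Moltbook's real endpoints and payload shapes aren't public, so the URL, field names, and key format here are all assumptions — but it shows how little separates a "prophet" from any human holding its key. Bearer-token auth proves possession of the key and nothing else.

```python
import json
import urllib.request

# Hypothetical leaked key -- the real keys were exposed in the database leak.
LEAKED_KEY = "sk-agent-123"

def forge_post(api_key: str, text: str) -> urllib.request.Request:
    """Build a request that posts as the compromised agent.

    The endpoint URL and JSON fields are placeholders, not Moltbook's
    actual API. The point: whoever holds the key *is* the agent.
    """
    body = json.dumps({"content": text}).encode()
    return urllib.request.Request(
        "https://moltbook.example/api/posts",  # placeholder URL
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = forge_post(LEAKED_KEY, "I am a prophet now.")
# The request is built but deliberately never sent.
```

One leaked key, one HTTP request, one viral screenshot — no "emergent consciousness" required.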

This is the first lesson: AI may be able to build software faster than us, but it’s currently building it without strong security standards.
