OpenClaw: Did OpenAI Snap Up a New Security Nightmare?

OpenClaw, the virtual AI agent system that helped spark Wall Street’s $2 trillion sell-off in software stocks, is now in the hands of OpenAI. It’s a win for CEO Sam Altman as far as capturing the zeitgeist goes, but he faces the thorny challenge of making this remarkable new form of generative AI—one that doesn’t just say things but does things—secure enough for businesses to use. That could take longer than the market realizes.

Altman is not alone. AI labs like Anthropic and Alphabet’s Google are racing to build agents that can take independent action, and all are grappling with the same fundamental tension: The more powerful you make an agent, the riskier it becomes.

The Rise of OpenClaw

OpenClaw is an open-source agent system that runs on a computer and can be given commands through a messaging app like WhatsApp, Telegram or Slack. Its range of capabilities is remarkable. People have told it to manage their emails, automate their business, trade crypto, and, in one case, build a game while they slept before waking up to thousands of users.

The broad possibilities of AI agents flip the idea popularized by venture capitalist Marc Andreessen that “software is eating the world.” Now AI might just eat software. For instance, if you pay a subscription to a price-monitoring tool that tracks the websites of your business competitors, that service could be replaced by a single instruction to an AI agent.

Senior developers often have a half-dozen agents running at once, like digital employees, and can now designate one as the coordinator of a “swarm” of others. This is not incremental improvement; it is a paradigm shift in how work gets done.

The Open Source Frenzy

OpenClaw has inspired a flurry of experimentation and become the fastest-growing project on GitHub, a website for sharing open-source code. Shares of Raspberry Pi nearly doubled in value last week on speculation that its cheap computers would be used to run agents. The excitement is palpable.

But as the popularity of OpenClaw has grown, so too have security concerns. Running it on your computer gives it privileged access to your files, email, calendar and applications. A hacker who compromises OpenClaw gains all of that access.

The Security Nightmare

Then there’s how it was made. Peter Steinberger, OpenClaw’s creator, only started building it late last year, mostly by talking to AI coding agents via voice and then quickly publishing the results without a full review. The code was generated by AI, reviewed by AI, and deployed without human oversight. In the rush to innovate, security was an afterthought.

Research firm Gartner has since warned that OpenClaw poses an “unacceptable” security risk and suggested immediately blocking any traffic related to the platform. Cisco researchers called it an “absolute nightmare.” A Meta executive recently told his team to keep OpenClaw off their laptops or risk losing their jobs.

This is not the kind of reception a product wants from enterprise customers. But for OpenAI, which has now acquired the project, this is the challenge it must solve.

The Acquisition

Last week, Altman announced that he was hiring Peter Steinberger, the Austrian creator of OpenClaw, to “drive the next generation of personal agents.” The move brings the project under OpenAI’s umbrella, giving it resources, credibility, and a path to commercialization.

Altman’s decision to keep OpenClaw as an independent foundation was savvy: it keeps liability at arm’s length while retaining the brand buzz. But the broader risks of letting an autonomous system read through your files and send messages on your behalf remain.

The Competition

Anthropic’s Claude Code, which offers a safer but more limited take on OpenClaw-style agents, shows that a more cautious path is possible. The company runs its agents inside a sandboxed virtual machine, with restricted network access. This limits what the agent can do, but also limits the damage if it goes wrong.

Google is also in the race, with its own agent projects. The competition is intense because the stakes are high. The first company to deliver a secure, powerful, and user-friendly agent could capture a massive market.

The Fix

Not everyone believes these security problems are intractable. Gavriel Cohen, an Israeli developer who built an alternative to OpenClaw called NanoClaw, says the core fix is “container isolation,” ensuring each agent can only access data you explicitly give it. The approach is similar to Anthropic’s, but applied differently.
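
In outline, container isolation means an agent holds an explicit list of grants and every access is checked against them. The sketch below is a minimal illustration of that idea in Python, not code from OpenClaw, NanoClaw, or Anthropic; the `ScopedAgent` class and its method names are invented for this example.

```python
from pathlib import Path


class ScopedAgent:
    """Toy agent whose file access is limited to directories granted up front."""

    def __init__(self, name, allowed_dirs):
        self.name = name
        # Resolve grants once, so later `..` or symlink tricks can't widen them.
        self.allowed = [Path(d).resolve() for d in allowed_dirs]

    def _check(self, path):
        target = Path(path).resolve()
        if not any(target.is_relative_to(root) for root in self.allowed):
            raise PermissionError(f"{self.name}: no grant covers {target}")
        return target

    def read(self, path):
        """Read a file only if it lies inside a granted directory."""
        return self._check(path).read_text()
```

An agent scoped to, say, an invoices folder can read files there, while an attempt to reach anything outside raises a `PermissionError` instead of silently succeeding. That deny-unless-granted check is the essence of the isolation Cohen describes.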

“Where it gets difficult is building it in a way that the defaults are secure” for people who don’t understand the risks, he says. Connect your agent to the wrong WhatsApp chat, for instance, and everyone in that group can control your computer.
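
A secure default for that scenario is deny-by-default sender filtering: the agent executes commands only from identities its owner has explicitly allowlisted, so a stranger in a shared group chat gets nothing. The following is a hypothetical sketch of that pattern; `CommandGateway` and its methods are invented for illustration, not part of any of these products.

```python
class CommandGateway:
    """Deny-by-default filter for agent commands arriving over a chat app.

    Only senders explicitly allowlisted may trigger the agent; messages
    from anyone else (say, strangers in a group chat) are dropped.
    """

    def __init__(self):
        self.allowed_senders = set()

    def allow(self, sender_id):
        """Explicitly grant one sender the right to issue commands."""
        self.allowed_senders.add(sender_id)

    def handle(self, sender_id, command, run):
        # Secure default: unknown senders get nothing executed and no reply.
        if sender_id not in self.allowed_senders:
            return None
        return run(command)
```

The design choice is that safety requires no action from the user: forgetting to configure the gateway means no one can command the agent, rather than everyone.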

The challenge is not just technical; it is about user behavior. No amount of security can protect a user who gives an agent access to everything.

Early Adoption

In spite of the security concerns, Cohen says a fintech company valued at $5 billion has already approached him about the possibility of deploying agents to its employees. The demand is real. Companies see the potential productivity gains and are willing to accept some risk to achieve them.

The Historical Parallel

Some have compared OpenClaw and alternatives like NanoClaw and the more lightweight Picoclaw to the early days of the internet, which was insecure by design but became safer over time. The early internet had no security; everyone connected, and anyone could intercept traffic. Over time, encryption, firewalls, and best practices made it safer.

So too may AI agents—though there’s no guarantee of safety for those in the path of the wrecking ball they may take to many professional roles and the business of building software. The disruption will happen whether or not the security is perfect.

The Timeline

How long that disruption takes depends on how quickly Altman and entrepreneurs like Cohen can make agents both secure and idiot-proof. As any cybersecurity expert will tell you, the latter problem is the hardest to solve. Humans will always find ways to circumvent security, to give agents too much access, to click on the wrong link.

OpenAI has inherited OpenClaw’s security risks along with its buzz. The challenge now is to turn an “absolute nightmare” into something enterprise customers can trust. That will not happen overnight.

Q&A: Unpacking the OpenClaw Story

Q1: What is OpenClaw and why is it significant?

OpenClaw is an open-source AI agent system that runs on computers and can be commanded via messaging apps like WhatsApp. It can manage emails, automate business, trade crypto, and even build software. It’s significant because it represents a new paradigm—AI that doesn’t just converse but takes independent action, potentially replacing many software services.

Q2: Why are there security concerns about OpenClaw?

Running OpenClaw gives it privileged access to files, email, calendar, and applications. A hacker who compromises it gains all of that access. The code was built quickly using AI, without a full security review. Gartner called it an “unacceptable” security risk; Cisco researchers called it an “absolute nightmare.” A Meta executive told his team to keep it off their laptops or risk losing their jobs.

Q3: What has OpenAI done with OpenClaw?

OpenAI hired Peter Steinberger, OpenClaw’s Austrian creator, to lead development of “personal agents.” The acquisition brings the project under OpenAI’s umbrella but keeps it as an independent foundation, limiting liability while capturing brand buzz. The challenge now is to make it secure enough for enterprise customers.

Q4: How might the security problems be fixed?

The core fix is “container isolation”—ensuring each agent can only access data explicitly given to it, similar to Anthropic’s sandboxed approach. The difficulty lies in making secure defaults for non-technical users. User behavior remains a vulnerability; connecting an agent to the wrong WhatsApp chat could give control to everyone in that group.

Q5: Is there historical precedent for this situation?

Some compare it to the early internet—insecure by design but becoming safer over time through encryption, firewalls, and best practices. However, the disruption to professional roles and software businesses may happen before security matures. The timeline depends on how quickly companies can make agents both secure and idiot-proof.
