Why the OpenClaw AI agent is a ‘privacy nightmare’
An AI agent embedded in your computer? What could go wrong?

A new AI agent that can run locally on computers is making waves inside and outside Silicon Valley, performing everything from writing emails and updating calendars to automating workflows and building custom applications.
What sets OpenClaw — the recently updated name of the platform — apart is its ability to directly interface with a user’s apps and files for greater access and control, according to AI and cybersecurity experts.
That level of access allows OpenClaw to perform tasks that standard large language models cannot accomplish alone, the experts said. There are already more than 3,000 community-built skill extensions on ClawHub, OpenClaw’s marketplace.
But it doesn’t come without major risks.
“I think it’s a privacy nightmare,” said Aanjhan Ranganathan, a Northeastern University cybersecurity professor in the Khoury College of Computer Sciences.
Not only are you letting an AI agent look at sensitive information like your passwords and documents, but you also have limited insights into how it’s processing your information and where it’s sending it, he said.

“From a technology perspective, it’s absolutely interesting,” Ranganathan said. “But what I would do is set up my own virtual machine, set up a separate laptop, new email account, new calendars without giving it any real access.”
For his part, OpenClaw developer Peter Steinberger recently shared in a blog post that he is working to make the software platform more secure, announcing a series of updates.
Any user who wants to upload a skill to ClawHub must now have had an account on GitHub, the online platform where the software is hosted, for at least a week. ClawHub also added a feature that allows users to flag “malicious” skills.
While OpenClaw runs natively on a device, users can also connect it to other large language models, including OpenAI’s ChatGPT and Anthropic’s Claude. Because of that, there are potentially thousands of ways users can interact with it.
In many ways, it’s the exact type of assistant people have wanted out of artificial intelligence since ChatGPT launched four years ago, explained Christoph Riedl, a business professor at Northeastern University who studies the intersection of AI and business.
“When ChatGPT came out, people really loved it because it was the first time people could interact with a real AI system, and they were impressed by its capacity,” he said.
However, they quickly realized that the interaction was limited, he said. Sure, they could ask the chatbot to write an email for them, but it was then up to the end user to copy and paste that message into their email client.
“People realized they are nice, but they can’t really do anything for you,” he said. “In fact, the more you use the chatbot, the more you realize that your own capacity to do things becomes the bottleneck.”
That frustration led to the rise of agentic AI models that have the capacity to perform tasks independently.
OpenClaw is the latest example of that type of technology at work, he said, but it is far from a perfect system.
In addition to giving the AI agent access to sensitive information like your passwords or any proprietary information you might have saved on your computer, you are also letting it take actions on your behalf.
“The problem is once you give an agent agency, suddenly doing things wrong really matters,” he said. “It’s booking a flight. It’s sending an email on your behalf. Now your capacity to review it and change it or approve it. You don’t have that anymore.”