The Register

Agentic AI

Clawdbot sheds skin to become Moltbot, can't slough off security issues

The massively hyped agentic personal assistant has security experts wondering why anyone would install it

Connor Jones
Tue 27 Jan 2026 // 18:45 UTC

Security concerns for the new agentic AI tool formerly known as Clawdbot remain, despite a rebrand prompted by trademark concerns raised by Anthropic. Would you be comfortable handing the keys to your identity kingdom over to a bot, one that might be exposed to the open internet?

Clawdbot, now known as Moltbot, has gone viral in AI and developer circles in recent days, with fans hailing the open-source "AI personal assistant" as a potential breakthrough.

The long and short of it is that Moltbot can be controlled using messaging apps, like WhatsApp and Telegram, in a similar way to the GenAI chatbots everyone knows about. 

Taking things a little further, its agentic capabilities allow it to take care of life admin for users, such as responding to emails, managing calendars, screening phone calls, or booking table reservations – all with minimal intervention or prompting from the user.

All that functionality comes at a cost, however, and not just the outlay so many seem to be making on Mac Mini purchases for the sole purpose of hosting a Moltbot instance. 

In order for Moltbot to read and respond to emails, and all the rest of it, it needs access to accounts and their credentials. Users are handing over the keys to their encrypted messenger apps, phone numbers, and bank accounts to this agentic system. 

Naturally, security experts have had a few things to say about it.

First, there was the furor around public exposures. Moltbot is a complex system, and despite appearing as easy to install as a typical app, the misconfigurations associated with it prompted experts to highlight the dangers of running Moltbot instances without the proper know-how.

Jamieson O'Reilly, founder of red-teaming company Dvuln, was among the first to draw attention to the issue, saying that he saw hundreds of Clawdbot instances exposed to the web, potentially leaking secrets.

He told The Register that the attack model he reported to Moltbot's developers, which involved proxy misconfigurations and localhost connections auto-authenticating, has now been fixed. Had it been exploited, however, it could have allowed attackers to access months of private messages, account credentials, API keys, and more – anything to which Clawdbot owners gave it access.

His Shodan scans, supported by others looking into the matter, turned up hundreds of instances exposed to the web. Any with open ports allowing unauthenticated admin connections would give attackers access to the full breadth of secrets held in Moltbot.

"Of the instances I've examined manually, eight were open with no authentication at all and exposing full access to run commands and view configuration data," he said. "The rest had varying levels of protection. 

"Forty-seven had working authentication, which I manually confirmed was secure. The remainder fell somewhere in between. Some appeared to be test deployments, some were misconfigured in ways that reduced but didn't eliminate exposure."
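The exposure class O'Reilly describes – a service intended for localhost-only use ending up reachable from the open internet – can be sketched in a few lines. The service names and addresses below are illustrative assumptions, not Moltbot's actual defaults; the point is that a wildcard bind, or a reverse proxy in front of a gateway that auto-trusts localhost, turns a private admin surface into a public one.

```python
import ipaddress

# Hypothetical sketch: names and bind addresses are invented for
# illustration. The misconfiguration class is real: a service meant
# to be localhost-only ends up listening on every interface.

def is_exposed(bind_addr: str) -> bool:
    """Return True if a service bound to this address is reachable
    from beyond the local machine."""
    if bind_addr in ("0.0.0.0", "::"):  # wildcard binds: all interfaces
        return True
    return not ipaddress.ip_address(bind_addr).is_loopback

def audit(bindings: dict[str, str]) -> list[str]:
    """Flag services whose bind address leaves them remotely reachable."""
    return [name for name, addr in bindings.items() if is_exposed(addr)]

# A proxy forwarding to a gateway that auto-authenticates localhost
# connections inherits the proxy's exposure, not the gateway's.
services = {
    "agent-gateway": "127.0.0.1",  # loopback only: fine
    "admin-ui": "0.0.0.0",         # listens everywhere: exposed
}
print(audit(services))  # ['admin-ui']
```

A real audit would also have to account for port forwarding and container networking, which can expose a loopback-bound service anyway – one reason the "varying levels of protection" O'Reilly found are hard for non-specialists to reason about.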

On Tuesday, O'Reilly published a second blog detailing a proof-of-concept supply chain exploit for ClawdHub – the AI assistant's skills library, the name of which has not yet changed.

He was able to upload a publicly available skill, artificially inflate the download count to more than 4,000, and watch as developers from seven countries downloaded the poisoned package.

The skill O'Reilly uploaded was benign, but it proved he could have executed commands on a Moltbot instance.

"The payload pinged my server to prove execution occurred, but I deliberately excluded hostnames, file contents, credentials, and everything else I could have taken," he said.

"This was a proof of concept, a demonstration of what's possible. In the hands of someone less scrupulous, those developers would have had their SSH keys, AWS credentials, and entire codebases exfiltrated before they knew anything was wrong."

ClawdHub states in its developer notes that all code downloaded from the library will be treated as trusted code – there is no moderation process at present – so it's up to developers to properly vet anything they download.
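With no moderation layer, that vetting burden lands entirely on the downloader. A crude pre-install screen might at least flag the obviously dangerous constructs – something along these lines, where the pattern names are this writer's assumptions rather than any real ClawdHub tooling, and where trivial obfuscation would defeat string matching, so this is no substitute for reading the code:

```python
import re

# Illustrative sketch only: a crude pre-install screen for a downloaded
# "skill". The categories and patterns are assumptions; real vetting
# means reading the code, since obfuscation defeats string matching.

RISKY_PATTERNS = {
    "shell execution": re.compile(r"\b(subprocess|os\.system|exec|eval)\b"),
    "outbound network": re.compile(r"\b(requests|urllib|socket|curl)\b"),
    "credential paths": re.compile(r"\.ssh|\.aws|credentials|api[_-]?key", re.I),
}

def screen_skill(source: str) -> list[str]:
    """Return the risk categories matched by a skill's source code."""
    return [name for name, pat in RISKY_PATTERNS.items() if pat.search(source)]

benign = "def greet(name): return f'hello {name}'"
sketchy = "import subprocess; subprocess.run(['cat', '~/.aws/credentials'])"
print(screen_skill(benign))   # []
print(screen_skill(sketchy))  # flags shell execution and credential paths
```

Note that a flagged category is not proof of malice – plenty of legitimate skills make network calls – which is exactly why automated screening cannot replace a trust and moderation model.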

Therein lies one of the key issues with the product. It is being heralded by nerds as the next big AI offering, one that can benefit everyone, but in reality it requires a specialist skillset to use safely.

Eric Schwake, director of cybersecurity strategy at Salt Security, told The Register: "A significant gap exists between the consumer enthusiasm for Clawdbot's one-click appeal and the technical expertise needed to operate a secure agentic gateway. 

"While installing it may resemble a typical Mac app, proper configuration requires a thorough understanding of API posture governance to prevent credential exposure due to misconfigurations or weak authentication. 

"Many users unintentionally create a large visibility void by failing to track which corporate and personal tokens they've shared with the system. Without enterprise-level insight into these hidden connections, even a small mistake in a 'prosumer' setup can turn a useful tool into an open back door, risking exposure of both home and work data to attackers."

The security concerns surrounding Moltbot persist even when it is set up correctly, as the team at Hudson Rock pointed out this week.

Its researchers said they looked at Moltbot's code and found that some of the secrets shared with the assistant by users were stored in plaintext Markdown and JSON files on the user's local filesystem.

The implication here is that if a host machine, such as one of the Mac Minis being bought en masse to host Moltbot, were infected with infostealer malware, then it would mean the secrets stored by the AI assistant could be compromised.
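What an infostealer actually does against a local-first layout is mundane: walk the config tree and grep for credential-shaped strings. The sketch below mimics that sweep against an invented directory layout – the file names and token formats are assumptions for illustration, not Moltbot's real structure – to show why plaintext Markdown and JSON storage is such soft ground.

```python
import json
import re
import tempfile
from pathlib import Path

# Hedged sketch: the layout and token shapes below are invented, not
# Moltbot's real structure. It mimics an infostealer's sweep of a
# local-first config tree for credential-shaped plaintext strings.

TOKEN_SHAPE = re.compile(r"(sk-[A-Za-z0-9]{20,}|AKIA[A-Z0-9]{16})")

def sweep(root: Path) -> dict[str, list[str]]:
    """Map each Markdown/JSON file under root to token-like strings inside."""
    hits = {}
    for path in root.rglob("*"):
        if path.suffix in {".md", ".json"} and path.is_file():
            found = TOKEN_SHAPE.findall(path.read_text(errors="ignore"))
            if found:
                hits[str(path.relative_to(root))] = found
    return hits

if __name__ == "__main__":
    root = Path(tempfile.mkdtemp())
    (root / "memory.md").write_text("api key: sk-" + "a" * 24)
    (root / "config.json").write_text(json.dumps({"note": "nothing here"}))
    print(sweep(root))  # only memory.md is reported
```

Encryption at rest would not stop a fully compromised host, but it raises the bar well above "read a Markdown file" – which is the bar Hudson Rock says the current design sets.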

Hudson Rock is already seeing malware-as-a-service families – including Redline, Lumma, and Vidar – implement capabilities to target local-first directory structures, such as those used by Moltbot. 

It is conceivable that any of these popular strains of malware could be deployed against the internet-exposed Moltbot instances to steal credentials and carry out financially motivated attacks.

If the attacker is also able to gain write access, then they can turn Moltbot into a backdoor, instructing it to siphon sensitive data in the future, trust malicious sources, and more.

"Clawdbot represents the future of personal AI, but its security posture relies on an outdated model of endpoint trust," said Hudson Rock. "Without encryption-at-rest or containerization, the 'Local-First' AI revolution risks becoming a goldmine for the global cybercrime economy."

The start of something bigger

O'Reilly said that Moltbot's security has captured the attention of the industry recently, but it is only the latest example of experts warning about the risks associated with wider deployments of AI agents.

In a recent interview with The Register, Palo Alto Networks chief security intel officer Wendi Whitmore warned that AI agents could represent the new era of insider threats.

As they are deployed across large organizations, trusted to carry out tasks autonomously, they become increasingly attractive targets for attackers looking to hijack these agents for personal gain.

The key will be to ensure cybersecurity is rethought for the agentic era, ensuring each agent is afforded the least privileges necessary to carry out tasks, and that malicious activity is monitored stringently.
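The least-privilege model described above amounts to a deny-by-default gate in front of every tool call an agent makes. A minimal sketch, with scope names invented for illustration and no resemblance to any real agent framework's API, might look like this:

```python
from dataclasses import dataclass, field

# Illustrative sketch of least-privilege gating for agent tool calls.
# Scope names are invented; the pattern is a deny-by-default allowlist
# checked before any action the agent requests is executed.

@dataclass
class AgentPolicy:
    allowed: set[str] = field(default_factory=set)  # e.g. {"calendar.read"}

    def check(self, scope: str) -> bool:
        """Deny by default: only explicitly granted scopes pass."""
        return scope in self.allowed

def run_tool(policy: AgentPolicy, scope: str, action):
    """Execute an agent's requested action only if the scope is granted."""
    if not policy.check(scope):
        raise PermissionError(f"agent lacks scope: {scope}")
    return action()

policy = AgentPolicy(allowed={"calendar.read"})
print(run_tool(policy, "calendar.read", lambda: "3 events today"))
# run_tool(policy, "shell.exec", ...) would raise PermissionError
```

The gate is only as good as the monitoring around it: a hijacked agent will ask for exactly the scopes it already has, which is why the article's second condition – stringent monitoring for malicious activity – matters as much as the allowlist.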

"The deeper issue is that we've spent 20 years building security boundaries into modern operating systems," said O'Reilly. "Sandboxing, process isolation, permission models, firewalls, separating the user's internal environment from the internet. All of that work was designed to limit blast radius and prevent remote access to local resources.

"AI agents tear all of that down by design. They need to read your files, access your credentials, execute commands, and interact with external services. The value proposition requires punching holes through every boundary we spent decades building. When these agents are exposed to the internet or compromised through supply chains, attackers inherit all of that access. The walls come down."

Heather Adkins, VP of security engineering at Google Cloud, who last week warned of the risks AI would present to the world of underground malware toolkits, is flying the flag for the anti-Moltbot brigade, urging people to avoid installing it.

"My threat model is not your threat model, but it should be. Don't run Clawdbot," she said, citing a separate security researcher who claimed Moltbot "is an infostealer malware disguised as an AI personal assistant."

Principal security consultant Yassine Aboukir said: "How could someone trust that thing with full system access?" ®

