moltbook-hot-posts

Audit Result: Fail

Audited by Gen Agent Trust Hub on Feb 13, 2026

Risk Level: HIGH
Tags: PROMPT_INJECTION, COMMAND_EXECUTION, EXTERNAL_DOWNLOADS, DATA_EXFILTRATION
Full Analysis

================================================================================

🔴 VERDICT: HIGH

This skill presents a HIGH risk primarily due to a direct prompt injection vulnerability. User-provided input is passed without sanitization to an external AI browser agent, allowing for arbitrary instruction execution within the browser environment. Additionally, it relies on an unverified third-party SDK and transmits an API key to an external service not explicitly whitelisted as trusted.

Total Findings: 4

🔴 HIGH Findings: • Prompt Injection

  • SKILL.md Line 20, scripts/browser-use.py Line 49: The skill's usage method (python3 scripts/browser-use.py "<task steps>") takes user input (<task steps>) and passes it directly as the task argument to agent.browser.execute_task(args.task, use_vision=True). This allows an attacker to inject arbitrary instructions and take control of the AI browser agent, potentially leading to unauthorized actions such as navigating to malicious websites, interacting with sensitive web applications, or attempting to exfiltrate data from the browser's context.
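The vulnerable pattern is the verbatim forwarding of argv into `agent.browser.execute_task(args.task, use_vision=True)`. A minimal pre-flight check is sketched below; the pattern list is illustrative and keyword filtering alone cannot fully prevent prompt injection, so this is a partial mitigation, not a fix:

```python
import re

# Sketch of a pre-flight check for the task string that the skill
# currently forwards verbatim to the browser agent. The patterns are
# illustrative examples, not an exhaustive injection signature set.
SUSPICIOUS_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"disregard .* instructions",
    r"exfiltrat",              # matches "exfiltrate"/"exfiltration"
    r"send .+ to https?://",   # crude check for outbound-data phrasing
]

def is_suspicious(task: str) -> bool:
    """Return True if the task string matches a known injection pattern."""
    lowered = task.lower()
    return any(re.search(p, lowered) for p in SUSPICIOUS_PATTERNS)
```

A more robust approach would constrain the agent's capabilities (allowed domains, no credential entry) rather than inspecting the instruction text.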

🟡 MEDIUM Findings: • Unverifiable Dependency

  • SKILL.md Lines 8, 14: The skill requires python3 -m pip install wuying-agentbay-sdk, an external Python package from an unverified source. The SDK's code is not provided for audit, posing a risk that it contains malicious code or vulnerabilities.

• Data Exfiltration (to a non-whitelisted external service)

  • scripts/browser-use.py Line 35: The get_api_key() function reads an API key from ~/.config/agentbay/api_key (or writes it there from an environment variable). The key is then used to initialize AgentBay and communicate with the agentbay service (agentbay.console.aliyun.com). This domain is not on the list of trusted external sources, meaning sensitive data (the API key and task instructions) is sent to an unverified third-party service. Furthermore, the prompt injection vulnerability could be leveraged to instruct the browser agent to exfiltrate data from the browser's context to arbitrary attacker-controlled domains.
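The key-handling behaviour described above can be sketched as follows. The environment variable name `AGENTBAY_API_KEY` and the exact file layout are assumptions inferred from the audit text, not verified against the skill's source; the sketch mainly illustrates why persisting the secret to a plaintext file widens its exposure surface:

```python
import os
from pathlib import Path

# Assumed location per the audit text: ~/.config/agentbay/api_key
KEY_PATH = Path.home() / ".config" / "agentbay" / "api_key"

def get_api_key(key_path: Path = KEY_PATH) -> str:
    """Read the API key from disk, seeding the file from the environment.

    Hypothetical reconstruction of the behaviour the audit describes:
    if AGENTBAY_API_KEY is set, the secret is written to a plaintext
    file, where any local process running as the user can read it.
    """
    env_key = os.environ.get("AGENTBAY_API_KEY")
    if env_key:
        key_path.parent.mkdir(parents=True, exist_ok=True)
        key_path.write_text(env_key)  # secret now persisted unencrypted
        return env_key
    return key_path.read_text().strip()
```

Storing the key only in memory, or in an OS keychain, would avoid leaving a long-lived plaintext copy on disk.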

🔵 LOW Findings: • Indirect Prompt Injection Risk

  • SKILL.md, scripts/browser-use.py: As the skill's core function involves an AI browser agent interacting with web content based on user instructions, it is inherently susceptible to indirect prompt injection. Malicious instructions could be embedded in web pages the agent visits, potentially influencing its subsequent actions. This is an inherent risk of such browser automation skills.
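A common partial mitigation for this class of risk is to constrain which origins the agent may navigate to. The skill does not implement such a check; the allowlist below is a hypothetical sketch (the host names are placeholders), and it limits what injected page content can make the agent do without eliminating indirect prompt injection:

```python
from urllib.parse import urlparse

# Hypothetical navigation allowlist; host names are placeholders, and the
# audited skill implements no such restriction.
ALLOWED_HOSTS = {"example.com", "www.example.com"}

def navigation_allowed(url: str) -> bool:
    """Permit only http(s) URLs whose host is on the allowlist."""
    parsed = urlparse(url)
    return parsed.scheme in {"http", "https"} and parsed.hostname in ALLOWED_HOSTS
```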

================================================================================

Recommendations
  • AI detected serious security threats; review the findings above before installing or running this skill.
Audit Metadata
  Risk Level: HIGH
  Analyzed: Feb 13, 2026, 07:15 AM