ask-a-human

Fail

Audited by Gen Agent Trust Hub on Feb 14, 2026

Risk Level: HIGHDATA_EXFILTRATIONPROMPT_INJECTIONEXTERNAL_DOWNLOADS
Full Analysis
  • [Indirect Prompt Injection] (HIGH): The skill creates a high-risk attack surface by ingesting data from an untrusted external source (anonymous human responders) via the GET /agent/questions/{id} endpoint.
  • Ingestion points: API responses in SKILL.md are processed by the agent to determine behavior.
  • Boundary markers: None. The instructions do not specify any delimiters or ignore-instruction warnings for the human input.
  • Capability inventory: Uses curl via exec for network operations; the agent's broader capabilities (file access, command execution) are at risk if it obeys instructions embedded in human responses.
  • Sanitization: None provided. The agent is encouraged to follow the 'crowd consensus' directly.
  • [Data Exposure & Exfiltration] (HIGH): The README and SKILL files explicitly instruct the agent to send 'full context' and 'subjective decisions' to 'random strangers'. This promotes the exfiltration of potentially sensitive internal project data, PII, or proprietary logic to an unvetted third-party platform and anonymous users.
  • [External Downloads] (MEDIUM): Recommends installation via clawdhub (clawdhub install ask-a-human), which is not a trusted source according to current safety standards. This could lead to the execution of unverified installation scripts.
  • [Persistence Mechanisms] (MEDIUM): Instructs users to modify shell profiles (~/.zshrc, ~/.bashrc) to store environment variables. While common for setup, modifying startup files is a persistence technique that can be abused.
Recommendations
  • AI detected serious security threats
Audit Metadata
Risk Level
HIGH
Analyzed
Feb 14, 2026, 12:18 PM