NYC
skills/assistant-ui/skills/tools/Gen Agent Trust Hub

tools

Fail

Audited by Gen Agent Trust Hub on Feb 15, 2026

Risk Level: HIGHCOMMAND_EXECUTIONDATA_EXFILTRATIONPROMPT_INJECTION
Full Analysis
  • [Indirect Prompt Injection] (HIGH): The skill documentation encourages the creation of tools that ingest untrusted LLM output and pass it directly to sensitive browser APIs without sanitization.
  • Ingestion points: Tool arguments in makeAssistantTool (e.g., text for clipboard, url for opening pages) are derived directly from the LLM's interpretation of possibly attacker-controlled data.
  • Capability inventory: navigator.clipboard.writeText, window.open, localStorage.setItem in references/make-tool.md.
  • Boundary markers: Absent. There are no examples or requirements for the LLM to use delimiters or ignore instructions within the data being processed.
  • Sanitization: Absent. Examples show direct usage of LLM-provided strings in critical browser functions (e.g., window.open(url, "_blank")).
  • [Data Exfiltration] (MEDIUM): The open_url and fetch tool examples can be weaponized to exfiltrate sensitive data. If an attacker can inject a malicious URL into the tool's arguments, the agent may inadvertently send data to an external server.
  • [Command Execution] (MEDIUM): The ability for the LLM to trigger window.open allows for arbitrary browser navigation. This can be used to facilitate phishing attacks or cross-site scripting (XSS) by directing the user to a malicious domain.
Recommendations
  • AI detected serious security threats
Audit Metadata
Risk Level
HIGH
Analyzed
Feb 15, 2026, 08:42 PM