skills/langwatch/skills/scenarios/Gen Agent Trust Hub

scenarios

Pass

Audited by Gen Agent Trust Hub on Apr 25, 2026

Risk Level: SAFEPROMPT_INJECTIONCOMMAND_EXECUTIONEXTERNAL_DOWNLOADS
Full Analysis
  • [PROMPT_INJECTION]: The skill is susceptible to indirect prompt injection because it reads untrusted data from the local codebase and git history to guide agent behavior.\n
  • Ingestion points: Codebase files (package.json, pyproject.toml), git history (commit messages), and documentation files (READMEs, comments).\n
  • Boundary markers: Absent; there are no specific instructions or delimiters provided to the agent to ignore instructions embedded in the ingested codebase data.\n
  • Capability inventory: File system access for reading and writing scripts, package installation via pip and npm, and shell command execution for running tests with pytest and vitest.\n
  • Sanitization: Absent; no sanitization or validation of the ingested content is specified before it is used in the test generation and execution pipeline.\n- [COMMAND_EXECUTION]: The skill instructs the agent to perform command-line operations to install necessary dependencies and execute the generated test suites. This involves running package managers (pip, npm) and test runners (pytest, vitest).\n- [EXTERNAL_DOWNLOADS]: The skill fetches documentation and guides from the official langwatch.ai website to assist the agent in its tasks. These downloads originate from the vendor's own infrastructure.
Audit Metadata
Risk Level
SAFE
Analyzed
Apr 25, 2026, 06:42 PM