langsmith-fetch
Fail
Audited by Gen Agent Trust Hub on Mar 13, 2026
Risk Level: HIGHCREDENTIALS_UNSAFEEXTERNAL_DOWNLOADSCOMMAND_EXECUTIONDATA_EXFILTRATIONPROMPT_INJECTION
Full Analysis
- [CREDENTIALS_UNSAFE]: The skill explicitly instructs the agent to execute
echo $LANGSMITH_API_KEY, which prints sensitive authentication secrets to the standard output. This makes credentials visible in agent logs, session transcripts, and shell history. - [COMMAND_EXECUTION]: The troubleshooting section provides commands to append export statements directly to shell profile files like
~/.bashrcand~/.zshrc. Modifying startup scripts is a persistence mechanism that ensures specific code runs in every new session, which is a high-risk behavior for an AI skill. - [EXTERNAL_DOWNLOADS]: The skill requires the installation of a Python package
langsmith-fetch. This package is not part of the official LangChain library (which uses thelangsmithname) and refers to a repository that does not exist under the claimed organization, indicating a potential typosquatting or dependency confusion risk. - [PROMPT_INJECTION]: The skill ingests untrusted data from LangSmith traces (Workflow 2 and 4) and interpolates this content into its reasoning process without clear boundary markers or sanitization, creating a surface for indirect prompt injection.
- [DATA_EXFILTRATION]: The skill automates the collection and local storage of execution traces and memory logs (Workflow 3), which aggregates sensitive internal agent state into predictable directory structures that could be targeted for exfiltration.
Recommendations
- AI detected serious security threats
Audit Metadata