langchain-security-basics
Pass
Audited by Gen Agent Trust Hub on Mar 13, 2026
Risk Level: SAFE
Full Analysis
- [SAFE]: No malicious patterns or security risks were detected. The skill is designed to educate users on implementing security measures within LangChain applications.
- [CREDENTIALS_UNSAFE]: The skill provides defensive guidance against hardcoding credentials. It illustrates the danger of hardcoded API keys and provides safe alternatives using environment variables (python-dotenv) and cloud secret managers (Google Cloud Secret Manager).
- [PROMPT_INJECTION]: The skill includes code snippets for preventing prompt injection attacks. It demonstrates input sanitization using regular expressions to redact common injection payloads and recommends using structured message templates to isolate user input from system instructions.
- [COMMAND_EXECUTION]: The skill demonstrates safe command execution practices. It provides a safe_shell tool example that uses a strict whitelist of allowed commands (ls, cat, etc.) and utilizes shlex.split and subprocess.run with a timeout and restricted working directory to mitigate risks.
- [DATA_EXFILTRATION]: The skill provides an example of output validation using Pydantic models to ensure that LLM responses do not inadvertently contain sensitive patterns like API keys or Social Security Numbers.
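The credential-handling finding above can be sketched in a few lines; the variable name `OPENAI_API_KEY` and the error message are illustrative assumptions, not taken from the audited skill:

```python
import os

def get_api_key(name: str = "OPENAI_API_KEY") -> str:
    """Read an API key from the environment instead of hardcoding it.

    In practice the environment would be populated from a .env file
    (python-dotenv's load_dotenv) or a cloud secrets manager.
    """
    key = os.environ.get(name)
    if key is None:
        # Fail loudly rather than falling back to a hardcoded default.
        raise RuntimeError(f"Set the {name} environment variable before running.")
    return key
```

Failing fast on a missing variable avoids the common anti-pattern of shipping a placeholder key that silently ends up in production.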
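The prompt-injection finding describes regex-based redaction of common payloads. A minimal sketch, with a deliberately short deny-list (a real one would be broader and paired with structured message templates):

```python
import re

# Illustrative deny-list of common injection phrases; not exhaustive.
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"disregard (the )?system prompt",
    r"you are now [a-z ]+",
]

def sanitize_input(text: str) -> str:
    """Redact known injection payloads before the text reaches the LLM."""
    for pattern in INJECTION_PATTERNS:
        text = re.sub(pattern, "[REDACTED]", text, flags=re.IGNORECASE)
    return text
```

Redaction alone is a weak defense; the audit's second recommendation, keeping user input in a separate message role from system instructions, is what actually limits the blast radius.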
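The command-execution finding names the exact ingredients (whitelist, shlex.split, subprocess.run with a timeout and a restricted working directory), so a sketch follows directly; the specific whitelist contents and 5-second timeout are assumptions:

```python
import shlex
import subprocess
import tempfile

# Strict whitelist of permitted binaries (illustrative selection).
ALLOWED_COMMANDS = {"ls", "cat", "pwd", "echo"}

def safe_shell(command: str) -> str:
    """Run a whitelisted command with a timeout in a restricted directory."""
    parts = shlex.split(command)  # never pass shell=True
    if not parts or parts[0] not in ALLOWED_COMMANDS:
        return f"Error: command '{parts[0] if parts else ''}' is not allowed"
    result = subprocess.run(
        parts,
        capture_output=True,
        text=True,
        timeout=5,                     # kill runaway commands
        cwd=tempfile.gettempdir(),     # restricted working directory
    )
    return result.stdout or result.stderr
```

Using `shlex.split` plus an argument list (rather than `shell=True`) means metacharacters like `;` or `&&` are passed as literal arguments instead of spawning extra commands.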
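The data-exfiltration finding describes output validation against sensitive patterns. The skill reportedly wraps this in Pydantic models; the sketch below shows the underlying check with only the standard library, and the two regexes are illustrative assumptions:

```python
import re

# Illustrative patterns for the sensitive data the audit mentions.
SENSITIVE_PATTERNS = {
    "API key": re.compile(r"sk-[A-Za-z0-9]{20,}"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def validate_output(text: str) -> str:
    """Reject LLM output that appears to contain sensitive data."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(text):
            raise ValueError(f"LLM output contains a possible {label}")
    return text
```

In the Pydantic form, the same loop would live in a field validator so that every parsed response is screened before it leaves the application.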
Audit Metadata