LangChain Fundamentals
Audited by Socket on Mar 3, 2026
1 alert found:
Malware: The examples and guidance are useful for building LangChain agents, but they include a high-risk insecure example: the 'calculate' tool calls eval() on untrusted input in both the Python and TypeScript examples. If copied into production, this pattern is a direct remote code execution and command injection vector. Secondary risks include persisting conversation state without operational-security guidance, and middleware interception points that could leak sensitive data. Recommended remediation: remove or replace the eval() examples with a safe evaluator or expression parser; add explicit warnings and secure patterns (sandboxing, input validation, least-privilege tools and middleware, encryption and retention policies for persisted data); and call out that agent data is sent to third-party model providers, so developers should avoid sending secrets.
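To illustrate the recommended remediation, here is a minimal sketch of a safe replacement for the flagged eval() pattern. It is an assumption about what a fixed 'calculate' tool body might look like, not the audited project's actual code: it walks the expression's AST with Python's standard `ast` module and only permits numeric literals and a whitelist of arithmetic operators, rejecting everything else (names, calls, attribute access).

```python
import ast
import operator

# Whitelisted operators; anything outside this table is rejected.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Mod: operator.mod,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}

def safe_calculate(expression: str) -> float:
    """Evaluate a plain arithmetic expression without eval().

    Raises ValueError for any construct outside number literals
    and the whitelisted operators (no names, calls, or attributes).
    """
    def _eval(node: ast.AST) -> float:
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.operand))
        raise ValueError(f"Disallowed expression element: {type(node).__name__}")

    return _eval(ast.parse(expression, mode="eval"))

print(safe_calculate("2 + 3 * 4"))  # 14
```

With this approach, an injection attempt such as `safe_calculate("__import__('os').system('rm -rf /')")` raises ValueError instead of executing, because the parsed tree contains Call and Name nodes that are not on the whitelist.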