langsmith-evaluator

Warn

Audited by Socket on Mar 3, 2026

1 alert found:

Security · MEDIUM
SKILL.md

This skill/documentation provides legitimate examples for writing and uploading LangSmith evaluators, demonstrating both LLM-as-judge and custom code evaluators. There is no evidence of obfuscated or explicitly malicious code. The primary security considerations are standard for integrations that require API keys and that send evaluator inputs (expected responses and agent outputs) to external services (OpenAI and LangSmith). Additional caution is warranted around:

(1) protecting API keys and avoiding sending sensitive data in prompts;
(2) reviewing any evaluator code before uploading or executing it via the provided CLI, to avoid transitive code-execution risks;
(3) understanding that LLM-judge evaluations will be influenced by the contents of their inputs.

Overall, the artifact appears functionally consistent with its stated purpose, but it carries the medium-risk supply-chain and credential-exposure characteristics inherent to evaluation tooling that relies on third-party LLM services.
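The first caution above (protecting API keys and keeping sensitive data out of prompts) can be sketched in Python. This is an illustrative pattern, not code from the skill itself: the function and variable names, and the secret-matching regexes, are assumptions chosen for the example.

```python
import os
import re

def load_required_env(names):
    """Read credentials from the environment, failing fast if any are
    missing, so keys are never hardcoded into evaluator source that
    gets uploaded to a third-party service."""
    missing = [n for n in names if not os.environ.get(n)]
    if missing:
        raise RuntimeError(f"missing environment variables: {missing}")
    return {n: os.environ[n] for n in names}

# Illustrative patterns for secret-shaped strings that should not be
# forwarded to an external LLM judge; a real deployment would tune these.
_SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),    # OpenAI-style API keys
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # US-SSN-shaped identifiers
]

def redact(text):
    """Mask secret-shaped substrings before including text in a prompt."""
    for pat in _SECRET_PATTERNS:
        text = pat.sub("[REDACTED]", text)
    return text
```

Running evaluator inputs through a filter like `redact` before they reach the judge model limits what a prompt can leak, while `load_required_env` keeps credentials out of the uploaded artifact entirely.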

Confidence: 85%
Severity: 75%
Audit Metadata
Analyzed At
Mar 3, 2026, 05:44 PM
Package URL
pkg:socket/skills-sh/langchain-ai%2Flangchain-skills%2Flangsmith-evaluator%2F@9039700b2d28989226c824fbdce100caf4864a07