langchain-observability

Pass

Audited by Gen Agent Trust Hub on Mar 12, 2026

Risk Level: SAFEPROMPT_INJECTIONEXTERNAL_DOWNLOADS
Full Analysis
  • [PROMPT_INJECTION]: The skill implementation creates a surface for indirect prompt injection by processing untrusted data through various callback handlers. * Ingestion points: External data enters the context through on_chain_start (inputs), on_llm_start (prompts), and on_llm_error (error messages) within the Python code in SKILL.md. * Boundary markers: The code does not utilize delimiters or specific instructions to isolate or label untrusted content. * Capability inventory: The skill is configured with Read, Write, and Edit tool permissions. * Sanitization: No validation, escaping, or sanitization logic is present to filter malicious instructions within the captured prompts or inputs before they are recorded in logs, traces, or metrics.
  • [EXTERNAL_DOWNLOADS]: The skill requires several standard third-party libraries for its functionality, specifically langchain-openai, prometheus-client, langchain-core, opentelemetry-api, opentelemetry-sdk, opentelemetry-exporter-otlp, opentelemetry-instrumentation-httpx, and structlog. These are widely recognized packages within the AI and monitoring ecosystems.
Audit Metadata
Risk Level
SAFE
Analyzed
Mar 12, 2026, 01:12 AM