sentry-setup-ai-monitoring

Pass

Audited by Gen Agent Trust Hub on Feb 22, 2026

Risk Level: SAFE
Full Analysis
  • Prompt Injection (SAFE): No malicious instructions or bypass attempts detected.
  • Data Exposure & Exfiltration (SAFE): The skill configures monitoring, contains no hardcoded secrets, and includes warnings about capturing PII (Personally Identifiable Information) in prompts.
  • Command Execution (SAFE): Uses simple grep commands to check for installed dependencies in package.json and requirements.txt.
  • External Downloads (SAFE): Refers only to standard official packages from PyPI and npm; no unverified third-party sources.
  • Indirect Prompt Injection (SAFE): The skill handles LLM message data for logging purposes only. Ingestion points include gen_ai.request.messages. No dangerous capabilities such as eval or file writing are applied to this data, and the skill mitigates risk by requiring explicit user opt-in for prompt recording.
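The grep-based dependency check noted under Command Execution could be sketched as below. This is an illustrative sketch, not the skill's actual commands: the SDK names (`@sentry/node`, `sentry-sdk`) and the `check_dep` helper are assumptions.

```shell
# Hedged sketch: detect whether Sentry SDKs are already listed as dependencies.
# Assumed package names for illustration only.
check_dep() {
  file="$1"; pattern="$2"
  if [ -f "$file" ] && grep -q "$pattern" "$file"; then
    echo "$file: found $pattern"
  else
    echo "$file: not found"
  fi
}
check_dep package.json '"@sentry/node"'   # npm dependency manifest
check_dep requirements.txt 'sentry-sdk'   # PyPI dependency manifest
```

Read-only checks like this carry no execution risk beyond reading the two manifest files, which is consistent with the SAFE rating above.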
Audit Metadata
  • Risk Level: SAFE
  • Analyzed: Feb 22, 2026, 02:52 AM