sentry-setup-ai-monitoring
Fail
Audited by Socket on Feb 15, 2026
1 alert found:
Alert: Obfuscated File
Severity: HIGH
File: SKILL.md
SKILL.md
This documentation/skill is legitimate and describes how to instrument AI/LLM activity with Sentry. There is no evidence of obfuscated or malicious code in the provided content. The primary security concern is privacy and data-exfiltration risk: the recommended examples enable full sampling and prompt/output capture, which are sensitive operations that should be opt-in rather than defaults. Operators must verify compliance, sanitize or redact prompts, and restrict telemetry capture to non-production environments or those with explicit consent.
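The recommended mitigation (sanitize/redact prompts before they leave the process) can be sketched as a scrubber in the style of a Sentry `before_send` hook. The attribute names below (`ai.input_messages`, `ai.responses`, `gen_ai.prompt`) are illustrative assumptions, not a confirmed Sentry event schema; adapt the key set to whatever fields your SDK version actually emits.

```python
# Sketch of a before_send-style scrubber for Sentry events.
# Key names in SENSITIVE_KEYS are assumptions for illustration only.
SENSITIVE_KEYS = {"ai.input_messages", "ai.responses", "gen_ai.prompt"}

def scrub_ai_fields(event, hint=None):
    """Return a copy of the event dict with prompt/output fields redacted."""
    def scrub(obj):
        if isinstance(obj, dict):
            return {
                key: "[redacted]" if key in SENSITIVE_KEYS else scrub(value)
                for key, value in obj.items()
            }
        if isinstance(obj, list):
            return [scrub(item) for item in obj]
        return obj
    return scrub(event)
```

If the Sentry SDK is in use, a hook like this would typically be registered via the `before_send` option of `sentry_sdk.init`, so redaction happens client-side before any telemetry is transmitted.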
Confidence: 98%
Audit Metadata