ruminate
Audited by Socket on Mar 3, 2026
1 alert found:
Malware
This skill's stated purpose (mining a user's past assistant conversations and brain to extract recurring patterns and suggestions) legitimately requires reading sensitive local data: conversation archives and the user's brain files. That access is coherent with the purpose, but it creates high-risk data-exposure and exfiltration paths, because the workflow sends extracted content to spawned analysis agents (model: opus) and executes local scripts. The main risks are:
(1) exposure of private conversations, and any credentials embedded in them, to external model runtimes or agents;
(2) execution of local shell scripts, which could run arbitrary commands if tampered with;
(3) unauthorized writes to the user's brain/skill files if approval enforcement is weak.
There are no obvious hardcoded credentials, no obfuscation, no explicit network endpoints, and no clear download-then-execute from remote sources in the provided text. Overall this is not deterministically malicious, but it is high-sensitivity and should be treated as SUSPICIOUS: require explicit user consent, redact secrets in-process, offer a local-only analysis option, and strictly vet and audit any spawned agent runtimes before use.
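The "in-process redaction of secrets" mitigation above could be sketched as a scrubbing pass run on conversation text before it is handed to any spawned analysis agent. This is an illustrative sketch only: the patterns, function names, and placeholder string are assumptions, not part of the audited skill.

```python
import re

# Hypothetical secret-scrubbing pass: replace likely credentials with a
# placeholder before the text leaves the local process. Patterns are
# illustrative examples of common credential shapes, not an exhaustive set.
SECRET_PATTERNS = [
    # key=value style assignments for common credential names
    re.compile(r"(?i)(api[_-]?key|token|secret|password)\s*[:=]\s*\S+"),
    # AWS access key IDs
    re.compile(r"AKIA[0-9A-Z]{16}"),
    # PEM private key blocks
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----[\s\S]*?-----END [A-Z ]*PRIVATE KEY-----"),
]

def redact(text: str, placeholder: str = "[REDACTED]") -> str:
    """Replace anything matching a known secret pattern with a placeholder."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

conversation = "my key is api_key=abc123 and region=us-east-1"
print(redact(conversation))  # the api_key value is scrubbed, the region is kept
```

Pattern-based redaction like this is best-effort, which is why the alert also recommends a local-only analysis option: redaction reduces exposure but cannot guarantee no secret survives in free-form text.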