loki-mode
Audited by Socket on Feb 16, 2026
4 alerts found:
Anomaly (x3), Malware: The README documents a high-privilege autonomous agent system that, when run as described (workspace and credential mounts, optional dashboard exposure), creates realistic and significant supply-chain risks: credential exposure, unauthorized repository modifications or CI misuse, and remote control via an exposed dashboard. The README itself is not proof of malware, but the recommended deployment pattern should be treated as risky until the container image and its code are audited. Mitigations: do not mount credential directories into untrusted images, run images as least-privileged users, avoid publishing the dashboard port, use ephemeral or rotated credentials, review image contents and source code, and require manual approval for any pushes or deploys.
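The mount-related mitigation above can be turned into a pre-flight check before starting the container. The sketch below is purely illustrative and not part of loki-mode; the function name and the list of sensitive directories are assumptions.

```python
# Hypothetical pre-flight check: flag docker-style bind mounts ("host:container")
# whose host side would expose a common credential directory to an untrusted image.
SENSITIVE_DIRS = {".ssh", ".aws", ".config/gh", ".docker", ".kube"}

def risky_mounts(mounts: list[str]) -> list[str]:
    """Return the mount specs whose host side points at a credential directory."""
    flagged = []
    for spec in mounts:
        host_side = spec.split(":", 1)[0]
        # Illustrative home expansion; a real check would use os.path.expanduser.
        expanded = host_side.replace("~", "/home/user")
        if any(expanded.rstrip("/").endswith(d) for d in SENSITIVE_DIRS):
            flagged.append(spec)
    return flagged

print(risky_mounts(["~/.ssh:/root/.ssh", "./workspace:/workspace"]))
# → ['~/.ssh:/root/.ssh']
```

Running the agent's workspace mount alone passes the check; only credential-bearing host paths are flagged.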
The narration describes a high-autonomy software supply-chain workflow that could expedite development but introduces substantial governance and security risks if implemented as-is. While no malware or hardcoded secrets are evident in the text, the dangerous-permissions flag and fully autonomous lifecycle pose non-trivial risks to provenance, audits, and control planes. Recommended mitigations include enforcing explicit human-in-the-loop checks for critical steps, removing or constraining dangerous flags, implementing robust provenance and audit trails, and ensuring memory/state data is tamper-evident and reviewable before shipping.
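The tamper-evident memory/state recommendation above can be approximated with a hash chain: each audit entry commits to the digest of the previous entry, so any retroactive edit or reordering is detectable. This is a minimal sketch under assumed entry formats, not anything loki-mode actually emits.

```python
import hashlib
import json

def append_entry(log: list[dict], event: str) -> list[dict]:
    """Append an event whose hash covers the previous entry's hash."""
    prev = log[-1]["hash"] if log else "0" * 64
    body = json.dumps({"event": event, "prev": prev}, sort_keys=True)
    log.append({"event": event, "prev": prev,
                "hash": hashlib.sha256(body.encode()).hexdigest()})
    return log

def verify(log: list[dict]) -> bool:
    """Recompute the chain; any edited or reordered entry breaks it."""
    prev = "0" * 64
    for entry in log:
        body = json.dumps({"event": entry["event"], "prev": prev}, sort_keys=True)
        if entry["prev"] != prev or entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev = entry["hash"]
    return True

log: list[dict] = []
for e in ["plan", "commit", "deploy"]:
    append_entry(log, e)
print(verify(log))          # True
log[1]["event"] = "rm -rf"  # tampering with a past entry is detected
print(verify(log))          # False
```

A reviewer only needs the final hash to confirm the whole history is intact before shipping.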
[Skill Scanner] Installation of third-party script detected. This Skill document describes an autonomous agent workflow that legitimately needs LLM provider credentials and repository access to perform autonomous development tasks. However, several operational choices increase supply-chain and data-exfiltration risk: mandatory --yes (skip confirmations), mandatory --bg (long-lived detached process), lack of documented sanitization or review steps, and reliance on third-party provider CLIs without guidance on redaction or enterprise endpoints. There is no direct evidence of intentionally malicious code in this text, but the described capabilities are powerful and could be abused, or could leak sensitive code or secrets if the loki CLI or provider integrations are compromised. Recommend treating this as high-trust software: verify the loki-mode package provenance, run it in isolated environments, audit what the agent sends to providers, and avoid granting unnecessary credentials or push access to remotes. LLM verification: The SKILL.md fragment itself does not contain executable malicious code, obfuscation, or direct network endpoints. However, it instructs the agent to install and invoke a powerful third-party CLI ('loki') and to supply LLM provider API keys. Key risk factors: always-on background mode (--bg) and forced confirmation suppression (--yes) enable unattended, persistent, and potentially destructive operations; lack of integrity checks and minimal provenance for the loki package increases supply-chain risk.
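One concrete way to act on the provenance advice above is to pin the digest of a vetted release artifact and refuse to install anything else. The sketch below is illustrative only; the pinned digest and artifact bytes are made up, not real loki-mode releases.

```python
import hashlib

# Hypothetical pinned digest for a vetted loki-mode release tarball,
# recorded at review time (here derived from placeholder bytes).
PINNED_SHA256 = hashlib.sha256(b"vetted release bytes").hexdigest()

def verify_artifact(data: bytes, pinned: str) -> bool:
    """Refuse installation unless the downloaded bytes match the pinned digest."""
    return hashlib.sha256(data).hexdigest() == pinned

print(verify_artifact(b"vetted release bytes", PINNED_SHA256))    # True
print(verify_artifact(b"trojaned release bytes", PINNED_SHA256))  # False
```

Pinning by digest rather than version tag means a tag that is silently re-pointed at different bytes still fails the check.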
SUSPICIOUS: The skill's stated purpose (fully autonomous PRD→deployed flow) partially matches the capabilities (reading repo, running tests, committing). However, it requires broad filesystem and network access, encourages bypassing permission checks (--dangerously-skip-permissions), and documents enabling prompt injection (LOKI_PROMPT_INJECTION=true). Those design choices are disproportionate and increase the risk of accidental or malicious data exfiltration and automated introduction of harmful commits. There is no clear least-privilege handling of credentials or explicit trusted endpoint list. I recommend treating this skill as high-risk: require explicit human approval, remove or strongly gate any 'skip-permissions' and 'prompt injection' modes, restrict automatic pushes, and audit credentials and network endpoints before running in production.
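Gating the flagged modes could look like a thin wrapper that rejects the dangerous options and the prompt-injection environment variable unless a human has explicitly approved the run. The flag and variable names come from the report above; the wrapper itself is hypothetical.

```python
# Flags the report identifies as dangerous; the gate function is an assumption.
DANGEROUS_FLAGS = {"--dangerously-skip-permissions", "--yes"}

def gate(argv: list[str], env: dict[str, str], approved: bool = False) -> list[str]:
    """Raise unless dangerous flags / prompt injection were explicitly approved."""
    found = [a for a in argv if a in DANGEROUS_FLAGS]
    if env.get("LOKI_PROMPT_INJECTION") == "true":
        found.append("LOKI_PROMPT_INJECTION=true")
    if found and not approved:
        raise PermissionError(f"requires explicit human approval: {found}")
    return argv

print(gate(["loki", "run", "--bg"], {}))  # background mode alone passes
```

A wrapper like this keeps the unattended path available while forcing the "skip-permissions" and injection modes through a human checkpoint.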