autonomous-agent-patterns

Audit result: Fail

Audited by Socket on Feb 21, 2026

4 alerts found: Obfuscated File ×3, Security ×1

Obfuscated File — HIGH
sub-skills/42-visual-agent-pattern.md

This code snippet is not obviously malicious (no hardcoded credentials, no obfuscated payloads, no network exfiltration or process spawning), but it presents a moderate security risk because it uses untrusted LLM output to drive a local actuator (mouse click) without validation, clamping, error handling, or authorization. An adversarial or compromised LLM can cause unwanted or destructive UI interactions. Recommended mitigations: validate and strictly schema-check LLM JSON, enforce numeric and viewport bounds (and clamp), require explicit confirmation/authorization for clicks in sensitive regions, implement robust exception handling and logging, treat LLM outputs as advisory (human-in-the-loop) for any high-impact actions, and fix sync/async API misuse and missing imports.

Confidence: 98%
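The mitigation for this finding can be sketched as a validation layer between the model and the actuator. The schema, field names, and viewport size below are illustrative assumptions, not taken from the audited file: the LLM's JSON is parsed, strictly schema-checked, and its coordinates clamped to the viewport before any click would be dispatched.

```python
import json

# Assumed viewport bounds; a real agent would query the display.
VIEWPORT_W, VIEWPORT_H = 1920, 1080

def parse_click_action(raw: str) -> tuple[int, int]:
    """Strictly validate LLM output before it drives an actuator."""
    data = json.loads(raw)  # raises ValueError on malformed JSON
    if not isinstance(data, dict) or data.get("action") != "click":
        raise ValueError("unexpected action schema")
    x, y = data.get("x"), data.get("y")
    if not (isinstance(x, int) and isinstance(y, int)):
        raise ValueError("coordinates must be integers")
    # Clamp into the viewport rather than trusting the model blindly.
    return (max(0, min(x, VIEWPORT_W - 1)),
            max(0, min(y, VIEWPORT_H - 1)))
```

Clicks in sensitive screen regions would additionally require explicit human confirmation, per the finding's human-in-the-loop recommendation.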
Security — MEDIUM
sub-skills/61-mcp-server-pattern.md

The code fragment implements a high-risk primitive: it converts untrusted natural-language descriptions into Python code via an LLM, writes that code to disk, and hot-loads it via a Server abstraction without any shown validation, sandboxing, or approval. While the file contains no explicit malicious payloads, this pattern enables arbitrary code execution and is therefore a serious supply-chain and runtime security risk. Immediate mitigations should include sanitizing the name used for paths, preventing path traversal, requiring review or automated checks of generated code, and executing generated servers in isolated, least-privilege environments.

Confidence: 75% · Severity: 82%
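One way to close the path-traversal hole named in this finding is to restrict the server name to a conservative character set and verify that the resolved path stays under the generated-servers root. The function name, naming rules, and root layout here are assumptions for illustration:

```python
import re
from pathlib import Path

def safe_server_path(name: str, root: Path) -> Path:
    """Map an untrusted server name to a file path without traversal."""
    # Conservative allowlist: identifiers only, no separators or dots.
    if not re.fullmatch(r"[A-Za-z0-9_-]{1,64}", name):
        raise ValueError(f"invalid server name: {name!r}")
    path = (root / f"{name}.py").resolve()
    # Defense in depth: the resolved path must remain under the root.
    if root.resolve() not in path.parents:
        raise ValueError("path escapes the server root")
    return path
```

Sanitizing the path only addresses where generated code lands; the finding's larger point stands that the code itself still needs review or automated checks and least-privilege, isolated execution.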
Obfuscated File — HIGH
sub-skills/33-sandboxing.md

The code attempts to provide a sandbox but contains multiple critical security weaknesses: use of shell=True with only a base-command allowlist (enabling shell operator injection), unused path-blocking logic, and full environment propagation to child processes. These issues make arbitrary command execution, secret leakage, and data exfiltration feasible if an attacker can control the command string or workspace contents. This is a high-risk design for production or supply-chain contexts. Recommendations: avoid shell=True and call subprocess with args (shell=False), validate and restrict all arguments (not just the first token), enforce path restrictions (use validate_path and blocked_paths), drop unnecessary environment variables before executing, and adopt stronger OS-level isolation (containers, namespaces, seccomp) or execute commands with a least-privilege service.

Confidence: 98%
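The shell=True issue can be made concrete: tokenize the command string with shlex and pass subprocess an argument list, so shell operators like `;` and `&&` are never interpreted. The allowlist, argument checks, and minimal environment below are assumed examples, not the audited sandbox's actual configuration:

```python
import os
import shlex
import subprocess

ALLOWED = {"ls", "cat", "grep"}  # assumed base-command allowlist

def run_sandboxed(cmd: str, workdir: str) -> subprocess.CompletedProcess:
    argv = shlex.split(cmd)  # no shell ever parses the string
    if not argv or argv[0] not in ALLOWED:
        raise PermissionError(f"command not allowed: {argv[:1]}")
    # Check every argument, not just the first token.
    for arg in argv[1:]:
        if os.path.isabs(arg) or ".." in arg:
            raise PermissionError(f"path argument rejected: {arg}")
    minimal_env = {"PATH": "/usr/bin:/bin"}  # drop secrets from the env
    return subprocess.run(argv, cwd=workdir, env=minimal_env, shell=False,
                          capture_output=True, text=True, timeout=10)
```

With this shape, an injection attempt such as `"ls; rm -rf /"` tokenizes to `["ls;", "rm", "-rf", "/"]` and fails the allowlist check, because `ls;` is just an ordinary token rather than a command plus a shell operator. This is still process-level filtering only; the finding's call for OS-level isolation (containers, namespaces, seccomp) applies on top of it.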
Obfuscated File — HIGH
sub-skills/23-edit-tool-design.md

The code is a harmless-looking utility for exact text replacement but constitutes a potentially dangerous primitive when called with untrusted inputs. It lacks path validation, atomic writes, backups, symlink protections, TOCTOU mitigations, and error/authorization handling. It does not show signs of direct malicious behaviour (no network I/O, no credential exfiltration, no eval/exec), but it can be abused to overwrite arbitrary writable files by an attacker who can control the 'path' argument or the filesystem (symlink / traversal). Recommended mitigations: restrict and canonicalize path to an allowed directory root, perform safe atomic writes (write tmp file + os.replace), create backups or versioned copies, add exception handling and audit logging, check and mitigate symlinks (os.open with O_NOFOLLOW where available), and ensure callers are authorized to perform edits.

Confidence: 98%
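Two of the recommended mitigations, path confinement and atomic writes via a temp file plus os.replace, can be sketched together. The function name, the `root` parameter, and the exactly-once replacement rule are assumptions for illustration, not the audited tool's API:

```python
import os
import tempfile
from pathlib import Path

def safe_replace(path: str, old: str, new: str, root: Path) -> None:
    """Exact-text replacement with path confinement and an atomic write."""
    target = Path(path).resolve()  # canonicalize (follows symlinks) first
    if root.resolve() not in target.parents:
        raise PermissionError(f"{target} is outside the allowed root")
    text = target.read_text()
    if text.count(old) != 1:
        raise ValueError("old text must occur exactly once")
    # Write to a temp file in the same directory, then atomically swap,
    # so readers never observe a partially written file.
    fd, tmp = tempfile.mkstemp(dir=target.parent)
    try:
        with os.fdopen(fd, "w") as f:
            f.write(text.replace(old, new))
        os.replace(tmp, target)  # atomic on POSIX
    except BaseException:
        if os.path.exists(tmp):
            os.unlink(tmp)
        raise
```

Resolving the path before the containment check closes the simple symlink-escape case; the TOCTOU and O_NOFOLLOW hardening the finding mentions would still need to be layered on, along with backups and audit logging.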
Audit Metadata

Analyzed At: Feb 21, 2026, 10:31 AM
Package URL: pkg:socket/skills-sh/Dokhacgiakhoa%2Fantigravity-ide%2Fautonomous-agent-patterns%2F@172184566e3a432dbf4b7f76f1c31d71009824a4