fork-terminal
Audited by Socket on Feb 16, 2026
2 alerts found:
[Obfuscated File] Security

This document is an instruction/README that normalizes and demonstrates use of a flag (--dangerously-skip-permissions) which disables interactive permission prompts and grants an agent broad ability to read and write files and execute commands. There is no evidence of embedded malicious code, obfuscation, or hard-coded credentials in the provided text. However, the operational guidance to bypass permission prompts creates a significant security risk: if the agent or its dependencies are compromised or buggy, running it with prompts suppressed enables data theft, arbitrary command execution, and system compromise. Do not use the dangerous flag in production or untrusted environments; instead apply least privilege, sandboxing, and review/validation controls.
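As an illustration of what such a flag bypasses, the sketch below shows a minimal permission gate in Python. This is a hypothetical example, not code from the skill: `run_gated` and its `approve` parameter are invented names, and the real agent's prompt logic may differ.

```python
import shlex
import subprocess

def run_gated(command: str, approve=None) -> int:
    """Run `command`, but require approval first.

    `approve` defaults to an interactive prompt. A flag like
    --dangerously-skip-permissions amounts to replacing this human
    check with an unconditional "yes".
    """
    if approve is None:
        approve = lambda c: input(f"Allow `{c}`? [y/N] ").strip().lower() == "y"
    if not approve(command):
        return 1  # denied: the command never runs
    # List-form argv (no shell), so backticks and $() are not expanded.
    return subprocess.run(shlex.split(command)).returncode
```

Keeping the gate as an injectable callable also makes the policy testable, which is one way to apply the review/validation controls recommended above.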
[Skill Scanner] Backtick command substitution detected

All findings:
- [HIGH] command_injection: Backtick command substitution detected (CI003) [AITech 9.1.4]
- [HIGH] command_injection: Backtick command substitution detected (CI003) [AITech 9.1.4]

The manifest implements a powerful automation that intentionally runs arbitrary CLI commands and orchestrates parallel agent workflows with repository writes. These capabilities are reasonable for the stated developer-productivity goals but carry moderate-to-high operational risk: documented bypass flags, arbitrary command execution, persistent local storage of outputs, and invocation of opaque external AI CLIs create realistic pathways for accidental or intentional data exfiltration and repository tampering. There are no definite indicators of malware in the manifest itself, but the combination of features warrants careful review of the referenced Python runner implementations and of the external CLIs invoked before adopting this skill in sensitive environments.

Recommended actions:
- Audit fork_terminal.py, spawn_session.py, tournament.py, and visual_tournament.py for argument escaping, file-access scope, and network calls.
- Restrict use of '--dangerously-*' flags.
- Avoid automatic commits/combines without manual approval.
- Apply redaction or allowlists before sending repository contents to external services.

LLM verification: The skill's claimed purpose (forking terminals and creating git worktrees for parallel AI-assisted development) is consistent with the documented capabilities. However, it exposes powerful local execution and git-manipulation abilities and documents explicit flags that bypass safety and approvals. Key risks: arbitrary command execution based on user input, possible exfiltration of repository contents or secrets to external AI services (the skill does not document or limit endpoints), and lack of
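To make the CI003 finding concrete, the following sketch contrasts the unsafe pattern the scanner flags with a safe alternative. The function names are illustrative only and are not taken from the skill's code; the point is the difference between a shell string and a list-form argv.

```python
import subprocess

def run_unsafe(user_input: str) -> str:
    # Unsafe: the string reaches a shell, so backticks or $(...)
    # inside user_input are executed as commands (CI003).
    return subprocess.run(f"echo {user_input}", shell=True,
                          capture_output=True, text=True).stdout

def run_safe(user_input: str) -> str:
    # Safer: list-form argv with no shell, so substitution
    # syntax is passed through as inert text.
    return subprocess.run(["echo", user_input],
                          capture_output=True, text=True).stdout
```

With input such as `$(echo injected)`, the unsafe variant executes the inner command while the safe variant echoes the literal string, which is the property an audit of the Python runners should verify.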