github-workflow-automation
Audited by Socket on Feb 16, 2026
1 alert found:
Security [Skill Scanner] Backtick command substitution detected

All findings:
- [HIGH] command_injection: Backtick command substitution detected (CI003) [AITech 9.1.4] — reported 10 times

Functionally coherent skill: capabilities align with the stated purpose (GitHub automation + AI). No direct signs of malware or backdoors in the provided text.

Primary security concerns:
(1) Sending diffs and full file contents to external AI services (Anthropic or unspecified ai.*) can leak sensitive code, secrets, or PII unless redaction policies and provider contracts are considered.
(2) Automation of high-impact git operations (force-push, branch deletion, rollback, deploy) increases the blast radius if tokens/permissions are overbroad or if triggers are not sufficiently restricted.
(3) Some AI helper calls are undefined/opaque; their endpoints should be verified to ensure they are not proxying data through untrusted third parties.

Recommended mitigations: restrict token scopes, gate destructive actions behind manual approvals or environment protection, sanitize/redact inputs before sending them to AI, and ensure AI provider endpoints are trusted and documented.
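The class of pattern CI003 flags, and one common mitigation (allowlist-validate, then quote), can be sketched as follows. This is illustrative only — `validate_ref` is a hypothetical helper, not code taken from the audited skill:

```shell
# Illustrative sketch, not code from the audited skill.

# VULNERABLE shape: untrusted input inside backtick command substitution.
# If BRANCH contains backticks or $(...), the embedded command executes
# with the script's privileges:
#   git push origin `echo $BRANCH`

# Safer: validate the value against an allowlist pattern, then pass it
# quoted -- never through command substitution.
validate_ref() {
  case "$1" in
    *[!A-Za-z0-9._/-]*|"") return 1 ;;  # reject empty names and shell metacharacters
    *)                     return 0 ;;
  esac
}
```

A caller would then do `validate_ref "$BRANCH" && git push origin "$BRANCH"`, so the value never passes through a shell-interpreted context.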
Overall: not malicious, but carries moderate security risk due to potential data exfiltration and repo-mutation capabilities.

LLM verification: Functional code for AI-driven automation is present and matches the stated purpose. No explicit malicious payloads, hard-coded credentials, obfuscated code constructs, or reverse-shell behavior were found in the supplied fragment. The primary security concern is sensitive-data exfiltration: unredacted diffs and file contents are sent to an external AI provider (Anthropic) with no shown sanitization or allowlist/denylist. Additional risks stem from broad permissions (write access to PRs/issues, full
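The recommended sanitize/redact step can be sketched as a filter run over any diff or file content before it leaves the machine. The pattern list below is illustrative (common GitHub token prefixes, AWS access key IDs, PEM key headers), not taken from the skill, and a real deployment would use a dedicated secret scanner rather than a handful of regexes:

```shell
# Hypothetical redaction pass -- pattern list is illustrative only.
redact() {
  sed -E \
    -e 's/(ghp|gho|ghs)_[A-Za-z0-9]+/[REDACTED_GITHUB_TOKEN]/g' \
    -e 's/github_pat_[A-Za-z0-9_]+/[REDACTED_GITHUB_TOKEN]/g' \
    -e 's/AKIA[A-Z0-9]{16}/[REDACTED_AWS_KEY]/g' \
    -e 's/-----BEGIN [A-Z ]*PRIVATE KEY-----/[REDACTED_KEY_BLOCK]/g'
}
```

Usage would be along the lines of `git diff | redact | <call to AI provider>`, so raw secrets never reach the external endpoint.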