agentic-actions-auditor
Audited by Socket on Feb 26, 2026
6 alerts found:
Obfuscated Filex3AnomalySecurityx2This is a credible, high-impact supply-chain configuration vulnerability: attacker-controlled GitHub event fields can be placed into environment variables and then referenced by AI prompts without visible '${{ }}' interpolation, enabling silent prompt injection and downstream misuse. The artifact is not intrinsic malware but facilitates attacker influence over AI context and CI actions. Projects using AI actions and workflows should audit env: assignments and prompt texts, block or sanitize any github.event.* values sent to AI agents, and enforce stricter controls for workflows triggered by external input.
This document describes a real and credible supply-chain attack vector (Vector E) that enables prompt injection via attacker-controlled CI/build/test logs passed into AI-driven workflow steps. The artifact itself is not malicious code, but it identifies a vulnerability pattern that can be exploited when workflows interpolate raw CI output into AI prompts (e.g., via ${{ github.event.inputs.error_logs }} or ${{ steps.*.outputs.* }}). Projects that feed full build/test logs into AI actions to 'fix' failures are at significant risk and should sanitize, limit, or avoid passing untrusted logs into prompts and should restrict what automated AI actions can commit or run.
Allowlisting by command name (e.g., permitting 'echo' only) is unsafe if the runtime invokes that command through a shell that performs subshell/backtick/process-substitution expansion. Confirmed PoCs against Gemini CLI demonstrate practical RCE and secret exfiltration. Mitigations: avoid shell execution of allowlisted commands (use exec with argv arrays), perform strict argument validation or structured command APIs, and minimize secrets in execution environments. Platforms with direct exec semantics are not vulnerable; confirm runtime behavior before assuming safety.
This is a security advisory describing a misconfiguration: wildcard user/bot allowlists ("*") in AI-related GitHub Action 'with:' fields. It is not executable malware, but the configuration is dangerous: it allows any external user to trigger AI agents with attacker-controlled input, enabling prompt injection and potentially unsafe automated changes or secret exposure (especially when combined with pull_request_target or steps that use secrets). Remediation: remove wildcard allowlists, restrict to explicit users/bots, avoid interpolating untrusted event fields directly into AI prompts, and avoid running risky steps in contexts that expose secrets.
The provided text is a clear, accurate description of a high-risk GitHub Actions misconfiguration: embedding `${{ github.event.* }}` directly into AI prompt fields leads to YAML-time injection of attacker-controlled text into prompts. The artifact itself contains no executable malicious code, credentials, or obfuscation, but highlights a real supply-chain risk that can enable prompt injection, data leakage, or unauthorized downstream actions when present in workflows. Workflows should be audited for `${{ github.event.* }}` occurrences in AI `with:` fields and remediated by sanitization, runtime retrieval with validation, or privilege minimization.
BENIGN: The fragment is a structured methodology for auditing GitHub Actions workflows involving AI agents. It describes analysis steps, data flow considerations, and vector checks without containing executable payloads, hardcoded secrets, download/execute commands, or autonomous real-world actions. The guidance is coherent with its stated purpose and does not introduce harmful behavior by itself.