fast-meeting
Warn
Audited by Gen Agent Trust Hub on May 1, 2026
Risk Level: MEDIUMCOMMAND_EXECUTIONPROMPT_INJECTION
Full Analysis
- [PROMPT_INJECTION]: The skill is susceptible to indirect prompt injection via external data.
- Ingestion points: In Step 1, the skill fetches issue descriptions, labels, and comments from GitLab or GitHub based on issue references (e.g., '#123').
- Boundary markers: The sub-agent prompt template in Step 3 interpolates the
{context}(containing the fetched issue data) directly into the instructions without any delimiters or 'ignore embedded instructions' warnings. - Capability inventory: The skill has high-impact capabilities including writing code to the filesystem (Step 6), executing shell commands for tests (Step 7), pushing branches to remote repositories (Step 7), and creating Merge/Pull Requests (Step 8).
- Sanitization: No sanitization or validation of the fetched issue content is performed before it is used to drive the autonomous decision-making and implementation process.
- [COMMAND_EXECUTION]: The skill performs several potentially dangerous command executions:
- In Step 7, it auto-detects and runs test scripts from the repository (e.g.,
package.jsonscripts,Makefile,pytest). If the repository contains malicious configuration files, this provides an arbitrary command execution vector. - It uses
fuserto check for active processes andgit worktree remove --forceto manage the filesystem, which involves process inspection and file deletion outside the immediate repository root. - [DATA_EXFILTRATION]: While the skill primarily pushes code to the project's own remote, the autonomous nature of the pipeline (Step 6 and 7) means that if the implementation logic is compromised by a malicious issue description, it could be instructed to include code that exfiltrates environment variables or secrets during the 'Implementation' or 'Test' phases.
Audit Metadata