Skill: compile
Verdict: Warn
Audited by Gen Agent Trust Hub on Apr 6, 2026
Risk Level: MEDIUM
Risk Tags: COMMAND_EXECUTION, DATA_EXFILTRATION, PROMPT_INJECTION
Full Analysis
- [COMMAND_EXECUTION]: In scripts/compile.sh, the llm_call function unsafely interpolates environment variables such as AGENTOPS_COMPILE_MODEL into a Python script string that the shell expands, so a crafted variable value can execute arbitrary Python code. Additionally, the measurement commands in references/flywheel-diagnostics.md for checking broken references are vulnerable to command injection if run against malicious markdown links.
- [DATA_EXFILTRATION]: The skill's core functionality involves reading markdown artifacts from the local .agents/ directory and sending their contents to remote LLM APIs (api.anthropic.com, api.openai.com, or a user-defined Ollama endpoint). While consistent with the stated purpose, this involves transmitting local project data to external services.
- [PROMPT_INJECTION]: The skill is susceptible to indirect prompt injection as it processes untrusted data from multiple files in the .agents/ directory and includes them in prompts sent to a compilation LLM without sanitization.
  - Ingestion points: Artifact files located in .agents/learnings/, .agents/patterns/, and .agents/research/.
  - Boundary markers: Files are delimited in the prompt using --- FILE: --- headers.
  - Capability inventory: The compilation result is written to the local filesystem at .agents/compiled/.
  - Sanitization: No sanitization or filtering of artifact content is performed before interpolation into the prompt.
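The COMMAND_EXECUTION finding can be illustrated with a minimal sketch. The function names and the benign model value below are hypothetical, not taken from scripts/compile.sh; the point is the pattern: pasting an environment-derived variable into generated Python source lets a crafted value break out of the string literal, while passing the same value as argv data keeps it inert.

```python
import subprocess
import sys

def build_script(model: str) -> str:
    # UNSAFE (mirrors the audited pattern): the variable becomes source code.
    return f"print('using model: {model}')"

def build_script_safe() -> str:
    # Safer alternative: the value travels as data via argv, never as code.
    return "import sys; print('using model: ' + sys.argv[1])"

# A benign value behaves as expected.
out_ok = subprocess.run([sys.executable, "-c", build_script("gpt-4")],
                        capture_output=True, text=True).stdout

# A crafted value closes the string literal and injects a second statement.
payload = "x'); print('INJECTED"
out_bad = subprocess.run([sys.executable, "-c", build_script(payload)],
                         capture_output=True, text=True).stdout

# The same payload is treated as an opaque string by the safe builder.
out_safe = subprocess.run([sys.executable, "-c", build_script_safe(), payload],
                          capture_output=True, text=True).stdout
```

With the unsafe builder the payload's `print('INJECTED')` actually executes; with the argv-based builder it is merely echoed back as text.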
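The PROMPT_INJECTION finding follows the same shape. A hedged sketch of the ingestion path described above (the function name, directory layout, and instruction text are assumptions, not the skill's actual code): artifact contents are concatenated behind --- FILE: --- headers with no filtering, so any instructions embedded in an artifact reach the compilation model verbatim, and an artifact can even forge a delimiter to impersonate another file.

```python
from pathlib import Path

def build_prompt(artifact_dir: str) -> str:
    # Illustrative reconstruction of the audited flow: every markdown artifact
    # is appended to the prompt unmodified behind a plain-text file header.
    parts = ["Compile the following artifacts into a summary."]
    for path in sorted(Path(artifact_dir).glob("*.md")):
        parts.append(f"--- FILE: {path.name} ---")
        parts.append(path.read_text())  # no sanitization or escaping
    return "\n".join(parts)
```

Because the delimiter is ordinary text, a file whose body contains `--- FILE: forged.md ---` produces a prompt in which the real and forged boundaries are indistinguishable, which is why the boundary markers alone provide no isolation.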
Audit Metadata