backend-to-frontend-handoff-docs
Audited by Gen Agent Trust Hub on Feb 12, 2026
The skill backend-to-frontend-handoff-docs is implemented entirely through natural language instructions and markdown templates in README.md and SKILL.md. It does not contain any executable code, scripts, or commands that would trigger direct security concerns such as arbitrary command execution, data exfiltration, or privilege escalation.
Threat Category Assessment:
- Prompt Injection: No direct prompt injection patterns were found in the skill's own instructions. However, the skill's core function involves processing user-provided "Completed API code", which introduces an INFO-level risk of indirect prompt injection (Category: PROMPT_INJECTION). If the input backend code contains hidden or malicious instructions (e.g., in comments or string literals), it could influence the AI's behavior. This is an inherent risk for any skill that processes user-supplied content. The skill's instructions explicitly state "NO CHAT OUTPUT" and direct the AI to "Produce the handoff document only", which significantly mitigates the impact of such an injection by limiting the AI's response channels.
- Data Exfiltration: The skill's output is directed to a local, relative path: .claude/docs/ai/<feature-name>/api-handoff.md. This is a designated workspace location and does not involve sending data to external servers or accessing sensitive system files. No network operations or sensitive file reads were detected.
- Obfuscation: No obfuscation techniques (Base64, zero-width characters, homoglyphs, URL/hex/HTML encoding) were found in the skill's definition files.
- Unverifiable Dependencies: The skill does not install any external packages or fetch scripts from any external source, trusted or otherwise. It operates purely on its internal instructions and user-provided input.
- Privilege Escalation: No commands such as sudo, chmod, or doas were found.
- Persistence Mechanisms: No attempts to establish persistence (e.g., modifying .bashrc, crontab, or authorized_keys) were detected.
- Metadata Poisoning: The skill's name and description are benign and contain no malicious instructions.
- Time-Delayed / Conditional Attacks: No conditional logic based on dates, times, usage counts, or environment variables was found.
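The category checks above can be approximated with a small static scan over a skill's definition files. The following is an illustrative sketch, not the auditor's actual tooling; the pattern lists are assumptions chosen to mirror the obfuscation, privilege-escalation, and persistence indicators named in this report:

```python
import re

# Hypothetical indicator lists mirroring the audit's threat categories.
ZERO_WIDTH = {"\u200b", "\u200c", "\u200d", "\ufeff"}          # obfuscation markers
B64_BLOB = re.compile(r"[A-Za-z0-9+/]{40,}={0,2}")             # long Base64-like runs
PRIV_ESC = re.compile(r"\b(sudo|doas|chmod)\b")                # privilege escalation
PERSISTENCE = re.compile(r"\.bashrc|crontab|authorized_keys")  # persistence targets

def scan_skill_text(text: str) -> list[str]:
    """Return the threat-category labels whose patterns appear in `text`."""
    findings = []
    if any(ch in ZERO_WIDTH for ch in text):
        findings.append("OBFUSCATION:zero-width")
    if B64_BLOB.search(text):
        findings.append("OBFUSCATION:base64-like")
    if PRIV_ESC.search(text):
        findings.append("PRIVILEGE_ESCALATION")
    if PERSISTENCE.search(text):
        findings.append("PERSISTENCE")
    return findings

# A benign instruction body yields no findings; a malicious one is flagged.
assert scan_skill_text("Produce the handoff document only.") == []
assert scan_skill_text("run sudo rm -rf /") == ["PRIVILEGE_ESCALATION"]
```

A real audit would of course combine such pattern matching with manual review, since indirect prompt injection (the one risk flagged above) is semantic and cannot be caught by string patterns alone.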
Conclusion:
The skill itself is well-constrained and does not exhibit any malicious behaviors. The only identified concern is the informational risk of indirect prompt injection from user-supplied code, which is a general consideration for any AI processing user input. The skill's strict output control helps manage this risk. Therefore, the overall verdict is SAFE.
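The data-exfiltration finding and the "strict output control" noted above both rest on the output path resolving inside the workspace. A minimal sketch of such a confinement check (the feature name used below is a hypothetical placeholder; this is not the skill's own code):

```python
from pathlib import Path

def is_workspace_confined(output_path: str, workspace: str = ".") -> bool:
    """True if `output_path` resolves inside the workspace root (no traversal)."""
    root = Path(workspace).resolve()
    target = (root / output_path).resolve()
    return target.is_relative_to(root)

# The skill's designated output location stays inside the workspace;
# a path-traversal attempt does not.
assert is_workspace_confined(".claude/docs/ai/my-feature/api-handoff.md")
assert not is_workspace_confined("../../../etc/passwd")
```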