user-feedback
Pass
Audited by Gen Agent Trust Hub on Mar 10, 2026
Risk Level: SAFEPROMPT_INJECTION
Full Analysis
- [PROMPT_INJECTION]: The skill contains surfaces for indirect prompt injection where untrusted user data is interpolated directly into LLM prompts.
- Ingestion points: The
turnvariable inextract_from_textand thefeedbacksJSON string incategorize(SKILL.md) are derived from user input. - Boundary markers: Absent. The prompts do not use delimiters (like triple quotes or XML tags) or system instructions to tell the model to ignore instructions embedded within the user data.
- Capability inventory: The skill interacts with a database (
self.db), calls LLM generation methods (model.generate,self.llm.generate), and includes methods for deployment (self.deploy()) and fine-tuning (self.finetune()). - Sanitization: Absent. There is no evidence of input validation or escaping before the data is placed into the prompt templates.
Audit Metadata