faion-llm-integration

Pass

Audited by Gen Agent Trust Hub on Feb 17, 2026

Risk Level: SAFEPROMPT_INJECTIONCOMMAND_EXECUTION
Full Analysis
  • Prompt Injection (LOW): The skill utilizes prompt templates (e.g., in prompt-basics/README.md) that interpolate user-provided data into instructions, creating a surface for indirect prompt injection attacks.\n
  • Evidence Chain (Category 8):\n
  • Ingestion points: User data is interpolated into prompt templates in prompt-basics/README.md and openai-api-integration/README.md.\n
  • Boundary markers: The use of delimiters (e.g., ---) is suggested in the prompt-basics/README.md documentation but not strictly enforced across all examples.\n
  • Capability inventory: The skill is configured with powerful tools including Bash, Write, and Edit as specified in SKILL.md.\n
  • Sanitization: Robust mitigation patterns and code for content moderation and injection detection are provided as core methodology in guardrails-basics/README.md.\n- Command Execution (LOW): The skill is permitted to use the Bash and Task tools as configured in SKILL.md. While these are high-risk capabilities, they are consistent with the primary purpose of a Machine Learning engineering skill and are accompanied by safety best practices and guardrail documentation.
Audit Metadata
Risk Level
SAFE
Analyzed
Feb 17, 2026, 06:27 PM