llm-ops
Pass
Audited by Gen Agent Trust Hub on Apr 17, 2026
Risk Level: SAFE
Full Analysis
- [SAFE]: The skill contains standard Python implementations for LLM interactions. No malicious code, obfuscation, or persistence mechanisms were found.
- [PROMPT_INJECTION]: The skill provides a Retrieval-Augmented Generation (RAG) implementation that ingests external data. 1. Ingestion points: content_text and query arguments in SKILL.md. 2. Boundary markers: Basic text separators (e.g., 'Contexto:'). 3. Capability inventory: Remote LLM invocation via the anthropic client. 4. Sanitization: None present in the provided code snippets.
Audit Metadata