llm-ops
Pass
Audited by Gen Agent Trust Hub on Mar 10, 2026
Risk Level: SAFE
Full Analysis
- [SAFE]: The skill serves as a legitimate technical guide for LLM-Ops. It includes boilerplate code for interacting with the Anthropic API and ChromaDB, following standard implementation patterns.
- [PROMPT_INJECTION]: The skill implements a RAG (Retrieval-Augmented Generation) pattern which is susceptible to indirect prompt injection if the source documents contain malicious instructions.
- Ingestion points: Functions
rag_queryandevaluate_responseinSKILL.mdinterpolate external data and model outputs directly into the final prompt. - Boundary markers: The prompts use simple textual labels (e.g., 'Contexto:', 'PERGUNTA:', 'RESPOSTA ATUAL:') to separate data but do not employ robust XML-style delimiters or explicit instructions to ignore embedded commands.
- Capability inventory: The skill utilizes the
anthropicPython client to execute LLM queries based on the constructed prompts. - Sanitization: There is no evidence of input sanitization or validation in the provided code snippets.
Audit Metadata