AI Engineer

Pass

Audited by Gen Agent Trust Hub on May 5, 2026

Risk Level: SAFEEXTERNAL_DOWNLOADSCOMMAND_EXECUTIONPROMPT_INJECTION
Full Analysis
  • [SAFE]: The skill provides documentation and tutorial code for LLM application development. No malicious patterns, obfuscation, or unauthorized access attempts were identified.
  • [EXTERNAL_DOWNLOADS]: The skill references a wide range of standard, reputable libraries and cloud services used in AI engineering, including LangChain, LangGraph, Pinecone, Weaviate, and Anthropic. These are well-known industry resources.
  • [COMMAND_EXECUTION]: Provides examples for defining agent tools, such as a mathematical calculator. The implementation correctly uses Python's ast module to parse and safely evaluate expressions, which is a recommended security practice to avoid the risks associated with arbitrary code execution via eval().
  • [PROMPT_INJECTION]: The skill covers patterns for agents and RAG systems that ingest external data, which is an inherent surface for indirect prompt injection. The documentation provides examples using clear boundary markers (e.g., 'Context:', 'Question:') and structured output schemas, which help mitigate these risks.
Audit Metadata
Risk Level
SAFE
Analyzed
May 5, 2026, 06:02 AM