local-ai-models

Pass

Audited by Gen Agent Trust Hub on Feb 17, 2026

Risk Level: SAFENO_CODEEXTERNAL_DOWNLOADS
Full Analysis
  • Indirect Prompt Injection (SAFE): The skill provides Swift templates in references/mlx-swift/advanced-patterns.md that process text inputs for structured extraction and tool calling. These code snippets interpolate text directly into prompts without boundary markers or sanitization. While this represents a vulnerability surface, it is a standard characteristic of LLM application development and is addressed within the skill's primary educational purpose. Ingestion points: StructuredGenerationService.extractStructuredInfo(from:). Boundary markers: Not present in code snippets. Capability inventory: Local model inference and tool execution. Sanitization: Not present.
  • Unverifiable Dependencies & Remote Code Execution (SAFE): The setup and quantization guides recommend installing standard Python packages (mlx-lm) and adding Swift packages from the MLX project. While the ml-explore GitHub organization is not on the explicit whitelist provided, it is the official repository for Apple's MLX framework and is a community-standard resource. Downloads from huggingface are to a trusted organization. Sources: github.com/ml-explore (GitHub), huggingface.co/mlx-community (Hugging Face).
Audit Metadata
Risk Level
SAFE
Analyzed
Feb 17, 2026, 06:25 PM