foundation-models

Pass

Audited by Gen Agent Trust Hub on Feb 17, 2026

Risk Level: SAFE
Full Analysis
  • [Prompt Injection] (SAFE): No malicious injection or bypass patterns detected. The documentation explicitly advises against interpolating user input into system instructions to prevent security risks.
  • [Data Exposure & Exfiltration] (SAFE): No hardcoded credentials, sensitive file paths, or unauthorized network operations were found. The skill focuses on on-device AI processing which minimizes data exposure.
  • [Unverifiable Dependencies] (SAFE): All code snippets use standard Apple system frameworks and do not trigger external package installations or remote script executions.
  • [Indirect Prompt Injection] (LOW): As a framework for processing natural language, it inherently creates a surface for indirect prompt injection via user-supplied prompts and tool inputs. Evidence:
      1. Ingestion points: LanguageModelSession.respond(to:) in getting-started.md
      2. Boundary markers: advice against interpolating user input into instructions in getting-started.md
      3. Capability inventory: tool calling enables local code execution (e.g., WeatherService in tool-calling.md)
      4. Sanitization: the framework handles guardrail violations, as noted in troubleshooting.md
  • [Dynamic Execution] (SAFE): The skill uses the standard Swift @Generable macro and protocol-based tool calling, both compile-time, type-safe patterns for extensible behavior.
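The two patterns the audit credits as safe can be sketched together: static instructions with untrusted input confined to the prompt, and protocol-based tool calling via @Generable. This is a hedged sketch, not the audited skill's code: WeatherTool and its argument shape are illustrative stand-ins for the WeatherService the audit mentions, and the API shape follows Apple's FoundationModels framework (earlier SDK betas return ToolOutput from call(arguments:) rather than a String).

```swift
import FoundationModels

// Illustrative tool; the audited skill's WeatherService may differ.
struct WeatherTool: Tool {
    let name = "getWeather"
    let description = "Returns current conditions for a city."

    @Generable
    struct Arguments {
        @Guide(description: "The city to look up")
        var city: String
    }

    func call(arguments: Arguments) async throws -> String {
        // A real implementation would query a weather service here.
        "Sunny, 22°C in \(arguments.city)"
    }
}

func answer(_ userInput: String) async throws -> String {
    // Instructions stay static: no user input is interpolated here,
    // which is the anti-injection guidance the audit highlights.
    let session = LanguageModelSession(
        tools: [WeatherTool()],
        instructions: "You are a concise weather assistant."
    )
    // Untrusted text goes only into the prompt, where the framework
    // treats it as data rather than privileged instructions.
    let response = try await session.respond(to: userInput)
    return response.content
}
```

Keeping the instructions free of interpolated user text is what separates a LOW indirect-injection surface from a direct one: the model still sees untrusted content, but only in the position the framework treats as data.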
Audit Metadata
  • Risk Level: SAFE
  • Analyzed: Feb 17, 2026, 05:32 PM