lm-studio-subagents

Pass

Audited by Gen Agent Trust Hub on Mar 13, 2026

Risk Level: SAFE
Full Analysis
  • [SAFE]: The skill uses the official OpenAI Python client to communicate with a local LM Studio server at http://localhost:1234. This is a standard development pattern for local LLM usage.
  • [EXTERNAL_DOWNLOADS]: The skill references the official LM Studio website (lmstudio.ai) and well-known model identifiers from the LM Studio community. These are trusted sources for the skill's specific purpose of local inference.
  • [CREDENTIALS_UNSAFE]: While an 'api_key' is present in code snippets, it uses a dummy value ('lm-studio') required for compatibility with the OpenAI client when connecting to local servers. No real secrets or sensitive keys are exposed.
  • [INDIRECT_PROMPT_INJECTION]: The skill processes untrusted external data (emails, support tickets, contracts) via LLM prompts.
  • Ingestion points: Found in Task D (process_batch) and Examples 1-3 in SKILL.md where external file/variable data is passed to the LLM.
  • Boundary markers: Absent. The skill does not use specific delimiters to separate user data from instructions.
  • Capability inventory: The skill only returns LLM-generated text (summaries, classifications). It does not contain capabilities for subprocess execution, file writing, or network requests triggered by LLM output.
  • Sanitization: Absent. The skill assumes raw data is safe for local model processing. Risk is minimal as output stays local.
Audit Metadata
Risk Level
SAFE
Analyzed
Mar 13, 2026, 09:15 PM