sglang-deepseek-v31-optimization

Pass

Audited by Gen Agent Trust Hub on Apr 23, 2026

Risk Level: SAFE
Full Analysis
  • [SAFE]: The skill content is entirely documentation-based, providing technical guidance for optimizing DeepSeek V3.1 in the SGLang framework.
  • [SAFE]: External links and references point to official project repositories on GitHub (sgl-project/sglang), which are well-known and reputable sources in the AI infrastructure community.
  • [SAFE]: No instances of prompt injection, data exfiltration, obfuscation, or malicious command execution were identified in the instructions or reference materials.
  • [PROMPT_INJECTION]: The skill documents the handling of untrusted data (model-generated tool calls) by the DeepSeekV31Detector, which constitutes a surface for indirect prompt injection. This is addressed through documented sanitization and boundary marker practices.
  • Ingestion points: Model outputs processed by python/sglang/srt/function_call/deepseekv31_detector.py and related streaming parsers.
  • Boundary markers: Explicit use of structural tags such as <|tool▁calls▁begin|> and <|tool▁sep|> to isolate untrusted model output.
  • Capability inventory: The skill's focus is limited to serving logic and parser validation, minimizing potential impact.
  • Sanitization: The documentation describes fixes for JSON serialization issues and hardening of structural tag triggers to ensure robust parsing of model-supplied arguments.
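To make the boundary-marker pattern concrete, here is a minimal sketch of extracting tool calls from untrusted model output. Only the tags <|tool▁calls▁begin|> and <|tool▁sep|> appear in the audit notes; the companion end/per-call tags, the `name<|tool▁sep|>json` layout, and the function below are hypothetical illustrations, not the actual DeepSeekV31Detector implementation in sglang.

```python
import json

# Documented structural tags (from the audit notes above).
CALLS_BEGIN = "<|tool▁calls▁begin|>"
SEP = "<|tool▁sep|>"
# Hypothetical companion tags, assumed for this sketch.
CALLS_END = "<|tool▁calls▁end|>"
CALL_BEGIN = "<|tool▁call▁begin|>"
CALL_END = "<|tool▁call▁end|>"


def extract_tool_calls(model_output: str) -> list[dict]:
    """Treat everything outside the boundary markers as plain text and
    parse only the fenced region as (name, JSON-arguments) tool calls.
    Malformed or non-JSON calls are dropped rather than guessed at."""
    start = model_output.find(CALLS_BEGIN)
    if start == -1:
        return []  # no tool-call block: output is ordinary text
    end = model_output.find(CALLS_END, start)
    block = model_output[start + len(CALLS_BEGIN): end if end != -1 else None]

    calls = []
    for chunk in block.split(CALL_BEGIN)[1:]:
        body = chunk.split(CALL_END)[0]
        if SEP not in body:
            continue  # missing separator: reject the call
        name, raw_args = body.split(SEP, 1)
        try:
            args = json.loads(raw_args)  # arguments must be valid JSON
        except json.JSONDecodeError:
            continue
        calls.append({"name": name.strip(), "arguments": args})
    return calls
```

The key property the audit highlights is that model text never reaches the tool layer unless it sits inside the explicit structural tags, which is what keeps the indirect prompt-injection surface narrow.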
Audit Metadata
Risk Level
SAFE
Analyzed
Apr 23, 2026, 07:50 AM