
uv-tensorrt-llm

Pass

Audited by Gen Agent Trust Hub on Feb 27, 2026

Risk Level: SAFE
Findings: PROMPT_INJECTION, COMMAND_EXECUTION, EXTERNAL_DOWNLOADS, REMOTE_CODE_EXECUTION
Full Analysis
  • [PROMPT_INJECTION]: The skill documentation describes processing user-supplied prompts through the LLM.generate method and the trtllm-serve API. This processing is an indirect prompt-injection surface: externally sourced data reaching those inputs could carry malicious instructions.
  • Ingestion points: Prompt input strings in SKILL.md and the chat completions API request bodies in references/serving.md.
  • Boundary markers: Not specified in the provided examples.
  • Capability inventory: The skill is limited to performing model inference and network serving; it does not demonstrate broad system access or arbitrary code execution capabilities based on user input.
  • Sanitization: No specific input validation or sanitization routines are documented.
  • [COMMAND_EXECUTION]: The documentation includes standard operational commands for environment setup, such as pip install for dependency management and docker pull for retrieving official NVIDIA images. It also details the use of the trtllm-serve command-line utility for production serving.
  • [EXTERNAL_DOWNLOADS]: The skill facilitates the download of pre-trained model weights from HuggingFace and infrastructure components from the official NVIDIA Docker registry. These are well-known technology services and are documented neutrally as part of the standard deployment workflow.
  • [REMOTE_CODE_EXECUTION]: The skill involves the dynamic compilation of optimized inference engines from model definitions at runtime via the trtllm-serve utility. This runtime compilation is a core architectural requirement of the TensorRT-LLM library to achieve target performance on specific GPU hardware.
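To make the ingestion point concrete: trtllm-serve exposes an OpenAI-compatible Chat Completions API, and the user-role `content` field of each request is where external text enters the model. The sketch below builds such a request body; the endpoint URL and model name are assumptions for illustration, not values from the audited skill.

```python
import json

# Hypothetical local endpoint; trtllm-serve exposes an OpenAI-compatible
# Chat Completions API (host, port, and model name here are assumptions).
ENDPOINT = "http://localhost:8000/v1/chat/completions"

def build_chat_request(model: str, untrusted_user_text: str) -> str:
    """Build a chat completions request body as a JSON string.

    The user-role `content` field carries externally sourced text and is
    the indirect prompt-injection surface described in this audit.
    """
    body = {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": untrusted_user_text},
        ],
        "max_tokens": 128,
    }
    return json.dumps(body)
```

A client would POST this body to the endpoint with `Content-Type: application/json`. Note that nothing in the request format itself distinguishes trusted instructions from untrusted data, which is why the absence of documented boundary markers and sanitization is flagged above.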
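Since the audit finds no boundary markers or sanitization documented, the following is a minimal illustrative sketch of what delimiting untrusted input could look like. The marker names and scheme are hypothetical assumptions, not part of TensorRT-LLM or the audited skill.

```python
# Hypothetical boundary markers; the audited skill documents none, so the
# names and scheme below are illustrative assumptions, not library features.
UNTRUSTED_BEGIN = "<<<UNTRUSTED_INPUT>>>"
UNTRUSTED_END = "<<<END_UNTRUSTED_INPUT>>>"

def wrap_untrusted(text: str) -> str:
    """Delimit externally sourced text before it reaches the model.

    Any marker strings embedded in the payload are stripped first, so
    untrusted content cannot spoof the boundary and escape its region.
    """
    cleaned = text.replace(UNTRUSTED_BEGIN, "").replace(UNTRUSTED_END, "")
    return f"{UNTRUSTED_BEGIN}\n{cleaned}\n{UNTRUSTED_END}"
```

Delimiting alone does not neutralize injected instructions; it only gives downstream filters and the system prompt a reliable way to refer to the untrusted region.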
Audit Metadata
Risk Level: SAFE
Analyzed: Feb 27, 2026, 12:41 PM