hf-model-inference
Pass
Audited by Gen Agent Trust Hub on Feb 17, 2026
Risk Level: SAFE
Tags: EXTERNAL_DOWNLOADS, COMMAND_EXECUTION
Full Analysis
- [Indirect Prompt Injection] (LOW): The skill defines an API surface that ingests untrusted external data (JSON payloads) to be processed by an LLM or ML model.
  - Ingestion points: The `/predict` endpoint in Phase 3 accepts user-provided JSON.
  - Boundary markers: None specified to delimit user input from model instructions within the inference pipeline.
  - Capability inventory: The skill executes model inference via the `transformers` library based on input data.
  - Sanitization: The skill explicitly recommends input validation (checking fields, types, and empty strings) in Phase 3, Step 2, which mitigates basic malformed-data attacks.
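The recommended validation from Phase 3, Step 2 (checking fields, types, and empty strings) can be sketched as a small payload checker. This is a minimal stdlib sketch; the field name `text` and the `validate_payload` helper are illustrative assumptions, not the skill's actual schema.

```python
# Hypothetical validator for the /predict JSON payload.
# The field name "text" is an assumption for illustration.
def validate_payload(payload):
    """Return a list of validation errors; an empty list means the payload is usable."""
    errors = []
    if not isinstance(payload, dict):
        return ["payload must be a JSON object"]
    text = payload.get("text")
    if text is None:
        errors.append("missing required field: text")
    elif not isinstance(text, str):
        errors.append("field 'text' must be a string")
    elif not text.strip():
        errors.append("field 'text' must not be empty")
    return errors

print(validate_payload({"text": "great product"}))  # []
print(validate_payload({"text": "   "}))            # ["field 'text' must not be empty"]
```

Note that this addresses malformed data only; it does not delimit user input from model instructions, which is why the boundary-marker finding above remains open.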
- [External Downloads] (LOW): The workflow involves downloading significant external dependencies (`transformers`, `torch`, `flask`) and model weights from HuggingFace.
  - Evidence: Phase 1 and Phase 2 describe installing packages and fetching models via the `pipeline` API.
  - Trust Status: Packages and models are sourced from reputable repositories (PyPI, HuggingFace), qualifying for a severity downgrade per [TRUST-SCOPE-RULE].
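One common way to reduce the residual supply-chain risk of these downloads is to pin the dependency versions so installs are reproducible. A sketch of such a pin file follows; the version numbers are illustrative assumptions, not the skill's actual pins.

```
# requirements.txt (illustrative pins; check PyPI for current versions)
transformers==4.44.0
torch==2.4.0
flask==3.0.3
```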
- [Network Operations] (LOW): The skill instructs the agent to bind the Flask service to `0.0.0.0`.
  - Evidence: Phase 4, Step 1: `app.run(host='0.0.0.0', port=5000)`.
  - Risk: This makes the service accessible on all network interfaces, which may expose the inference endpoint to the local network or public internet depending on the environment configuration.
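The difference between the audited `0.0.0.0` bind and a loopback-only bind can be shown with a stdlib socket sketch; binding to `127.0.0.1` instead would keep the endpoint reachable only from the local machine. The `bind_addr` helper is illustrative, not part of the skill.

```python
import socket

def bind_addr(host):
    """Bind a throwaway TCP socket and report the interface it listens on."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind((host, 0))  # port 0: let the OS pick a free port
    addr = s.getsockname()[0]
    s.close()
    return addr

# '0.0.0.0' accepts connections on every interface;
# '127.0.0.1' restricts the service to loopback.
print(bind_addr("0.0.0.0"))    # 0.0.0.0
print(bind_addr("127.0.0.1"))  # 127.0.0.1
```

For Flask, the analogous change would be `app.run(host='127.0.0.1', port=5000)` when the service does not need to be reachable from other hosts.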
- [Command Execution] (LOW): The skill utilizes standard package managers and testing tools.
  - Evidence: Usage of `pip`, `uv`, and `curl` for setup and verification.
  - Risk: These are standard administrative actions for the stated purpose of the skill.
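The `curl`-based verification step amounts to POSTing JSON to `/predict` and checking the response. A self-contained sketch of that round trip follows, using a stdlib HTTP server as a stand-in for the skill's Flask service; the route, field name, and dummy `POSITIVE` label are assumptions for illustration.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical stand-in for the skill's Flask service, so the
# curl-style verification can be shown end to end without Flask.
class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/predict":
            self.send_error(404)
            return
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        payload = json.loads(body)
        # Echo a dummy "prediction" in place of a real model call.
        reply = json.dumps({"label": "POSITIVE", "input": payload.get("text", "")})
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(reply.encode())

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), PredictHandler)  # port 0: OS-assigned
threading.Thread(target=server.serve_forever, daemon=True).start()

# Equivalent of: curl -X POST http://127.0.0.1:PORT/predict -d '{"text": "..."}'
req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}/predict",
    data=json.dumps({"text": "great product"}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    result = json.loads(resp.read())
server.shutdown()
print(result["label"])  # POSITIVE
```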
Audit Metadata