skills/near/agent-skills/near-ai/Gen Agent Trust Hub

near-ai

Pass

Audited by Gen Agent Trust Hub on Feb 17, 2026

Risk Level: SAFE
Finding categories: PROMPT_INJECTION, EXTERNAL_DOWNLOADS
Full Analysis
  • [PROMPT_INJECTION] (LOW): The WeatherAgent implementation in rules/arch-agent-structure.md is susceptible to Indirect Prompt Injection (Category 8).
      ◦ Ingestion points: The user_input passed to the run method is interpolated directly into the prompt the LLM uses to parse intent.
      ◦ Boundary markers: The prompt template (Parse this user request: "{user_input}") lacks delimiters or clear instructions to isolate untrusted user data from the intent-parsing logic.
      ◦ Capability inventory: The agent can execute local methods such as get_current_weather with arguments derived from the LLM's output.
      ◦ Sanitization: The code calls json.loads and unpacks the result directly (**intent["parameters"]) to invoke tools, without validating the schema, keys, or data types of the AI-generated parameters.
  • [EXTERNAL_DOWNLOADS] (LOW): The skill documentation requires the nearai Python package and uses near.ai endpoints for AI inference. These are necessary for the skill's primary purpose, but they are not on the predefined trusted repository or domain lists.
  • [CREDENTIALS_UNSAFE] (SAFE): The code in rules/ai-inference-endpoints.md correctly reads sensitive credentials such as NEAR_PRIVATE_KEY via os.getenv, which is the standard practice for avoiding hardcoded secrets.
Audit Metadata
Risk Level: SAFE
Analyzed: Feb 17, 2026, 06:16 PM