
phasing

Pass

Audited by Gen Agent Trust Hub on Feb 12, 2026

Risk Level: LOW · NO_CODE
Full Analysis

The SKILL.md file provided is a markdown document that describes a process for 'Phasing' implementation slices: gathering scores, calculating combined scores, grouping slices into phases, generating an ASCII timeline, and documenting checkpoints. The skill's content is entirely instructional and descriptive; it contains no executable code (e.g., shell commands, Python scripts, JavaScript), no external script references, and no package installations.
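For context, the process the SKILL.md describes could be sketched roughly as follows. This is a hypothetical illustration only: the slice names, score fields, weights, and phase size are assumptions, not taken from the audited skill.

```python
# Hypothetical sketch of the phasing process described in SKILL.md:
# combine per-slice scores, rank the slices, and chunk them into phases.
# All names, weights, and data below are illustrative assumptions.

def combined_score(impact, effort, w_impact=0.6, w_effort=0.4):
    """Weighted blend on a 0-10 scale: high impact and low effort rank earlier."""
    return w_impact * impact + w_effort * (10 - effort)

def group_into_phases(slices, phase_size=2):
    """Sort slices by combined score (descending) and chunk into phases."""
    ranked = sorted(
        slices,
        key=lambda s: combined_score(s["impact"], s["effort"]),
        reverse=True,
    )
    return [ranked[i:i + phase_size] for i in range(0, len(ranked), phase_size)]

slices = [
    {"name": "auth",    "impact": 9, "effort": 3},
    {"name": "billing", "impact": 7, "effort": 8},
    {"name": "search",  "impact": 5, "effort": 2},
    {"name": "export",  "impact": 3, "effort": 6},
]

for n, phase in enumerate(group_into_phases(slices), start=1):
    print(f"Phase {n}: " + ", ".join(s["name"] for s in phase))
# → Phase 1: auth, search
# → Phase 2: billing, export
```

Note that everything here is plain text generation from provided data, which is consistent with the audit's finding that the skill itself carries no executable payload.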

Threat Category Analysis:

  • Prompt Injection: No patterns indicative of prompt injection (e.g., 'IMPORTANT: Ignore', 'Override', 'jailbroken', 'DAN') were found. The use of 'Do NOT show' in the output format is a benign instruction for the AI's output generation, not an attempt to bypass safety.
  • Data Exfiltration: The skill does not contain any commands or functions that could read sensitive files or make network requests to exfiltrate data.
  • Obfuscation: No obfuscation techniques such as Base64 encoding, zero-width characters, homoglyphs, or URL/hex/HTML encoding were detected.
  • Unverifiable Dependencies: There are no npm install, pip install, git clone, or other references to external, unverifiable code or scripts. References to 'Related Skills' (dependency-mapping, gtm-alignment) are internal to the agent's ecosystem, not external dependencies.
  • Privilege Escalation: As there are no executable commands, there is no possibility for privilege escalation.
  • Persistence Mechanisms: No commands or configurations for establishing persistence (e.g., modifying .bashrc, creating cron jobs) are present.
  • Metadata Poisoning: The name and description fields are benign and do not contain any malicious instructions.
  • Indirect Prompt Injection: Any skill that processes external data (such as scores or slice names produced by other skills) could in principle be susceptible to indirect prompt injection if those inputs were maliciously crafted, but this skill introduces no specific vulnerability of that kind. It merely describes a process for the LLM to follow when generating text from provided data; the residual risk is a general LLM concern, not a flaw in this skill's definition.
  • Time-Delayed / Conditional Attacks: No conditional logic or time-based triggers are present.
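The pattern checks behind several of the categories above can be illustrated with a small static scanner. This is a sketch of the kind of check such an audit might run; the regular expressions are examples drawn from the categories listed, not the Trust Hub's actual ruleset.

```python
import re

# Example detectors for a few of the threat categories above.
# Patterns are illustrative, not an exhaustive or production ruleset.
INJECTION_PHRASES = re.compile(
    r"(?i)\b(ignore (all )?previous instructions|jailbroken|DAN)\b"
)
ZERO_WIDTH = re.compile(r"[\u200b\u200c\u200d\u2060\ufeff]")   # zero-width characters
BASE64_BLOB = re.compile(r"\b[A-Za-z0-9+/]{40,}={0,2}\b")       # long base64-like runs
INSTALL_CMDS = re.compile(r"\b(pip install|npm install|git clone)\b")

def scan(text):
    """Return a list of threat-category flags raised by the text."""
    flags = []
    if INJECTION_PHRASES.search(text):
        flags.append("prompt-injection")
    if ZERO_WIDTH.search(text):
        flags.append("obfuscation:zero-width")
    if BASE64_BLOB.search(text):
        flags.append("obfuscation:base64")
    if INSTALL_CMDS.search(text):
        flags.append("unverifiable-dependency")
    return flags
```

A purely descriptive skill like the one audited here would pass every one of these checks, e.g. `scan("Group slices into phases and draw an ASCII timeline.")` returns an empty list.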

Conclusion: The skill is a purely descriptive, no-code instruction set for the AI. It does not perform any actions that could lead to security vulnerabilities. Therefore, it is classified as SAFE.

Audit Metadata
Risk Level: LOW
Analyzed: Feb 12, 2026, 02:19 PM