ai-news-digest

Fail

Audited by Gen Agent Trust Hub on Feb 15, 2026

Risk Level: HIGHPROMPT_INJECTIONEXTERNAL_DOWNLOADSCOMMAND_EXECUTION
Full Analysis
  • Category 8: Indirect Prompt Injection (HIGH): The skill processes untrusted data from multiple external RSS/Atom feeds and HTML pages.
  • Ingestion points: External URLs defined in references/sources.yaml and references/sources.md.
  • Boundary markers: Absent. The LLM prompts in assets/summarize-prompt.md directly interpolate external content ({title_raw}, {summary_raw}, {content}) without delimiters or instructions to ignore embedded commands.
  • Capability inventory: The skill uses an LLM for summarization and translation, and supports writing output to the file system using the --out parameter (as described in SKILL.md).
  • Sanitization: No evidence of sanitization or filtering for the external content before it is processed by the LLM.
  • Category 2/4: Untrusted Network Operations (HIGH): The command-line interface in SKILL.md references an --insecure flag that disables SSL certificate verification (python run.py --day yesterday --insecure). Disabling SSL is a dangerous practice that exposes the user to man-in-the-middle (MITM) attacks when fetching data from the 20+ configured external sources.
  • Category 4: Unverifiable Dependencies (LOW/MEDIUM): The SKILL.md documentation recommends installing unversioned third-party packages such as Pillow, pyyaml, anthropic, and openai. While these are standard libraries, the lack of version pinning increases the risk of supply chain attacks or breaking changes.
  • Category 2: Sensitive Path Access (INFO): The scripts/render_image.py file searches for font files across standard system paths on macOS, Linux, and Windows. This is a legitimate functional requirement for image rendering and does not constitute a malicious exposure.
Recommendations
  • AI detected serious security threats
Audit Metadata
Risk Level
HIGH
Analyzed
Feb 15, 2026, 10:49 PM