aile-requirement-analysis

Pass

Audited by Gen Agent Trust Hub on Mar 7, 2026

Risk Level: SAFEPROMPT_INJECTION
Full Analysis
  • [PROMPT_INJECTION]: The skill is vulnerable to indirect prompt injection as it processes untrusted data from external sources to drive its workflow.
  • Ingestion points: The skill reads Jira Story descriptions, acceptance criteria, and comments via the jira_get_issue tool.
  • Boundary markers: The instructions lack explicit delimiters or warnings to ignore embedded instructions within the ingested Jira content.
  • Capability inventory: The skill has the ability to write to the local filesystem (analysis.md), commit to Git, post external comments via jira_add_comment, and upload files to Google Drive using the google-drive skill.
  • Sanitization: No filtering or sanitization of the input data from Jira is specified before the LLM processes it.
  • [SAFE]: The skill implements strong security controls for its operational tasks.
  • Credentials for Jira (API Tokens) are explicitly mandated to be handled via environment variables rather than being hardcoded.
  • Cloud storage operations are restricted to the official google-drive skill, prohibiting the use of custom or unauthorized scripts for network communication.
  • All external interactions target well-known and trusted services including Jira (Atlassian) and Google Drive.
Audit Metadata
Risk Level
SAFE
Analyzed
Mar 7, 2026, 03:08 PM