japanese-tutor
Audited by Gen Agent Trust Hub on Feb 12, 2026
Detailed Analysis:

- Indirect Prompt Injection (HIGH): The skill is designed to ingest user-provided PDF and DOCX files (`scripts/parse_pdf_gemini.py`, `scripts/parse_docx.py`). The content of these files is then processed by the agent and, for PDFs, sent to Google's Gemini API. This creates a significant attack surface for indirect prompt injection, where malicious instructions embedded within a user's document could manipulate the agent's subsequent actions or responses. The skill's instructions explicitly state that it will "Explain the content to the user" and "Identify the tasks/questions" from these parsed documents, making it highly susceptible to interpreting malicious instructions as legitimate tasks.
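A common mitigation for this class of injection is to delimit parsed document text as untrusted data and screen it before the agent acts on it. The skill does not do this; the sketch below is purely illustrative, and the helper name, delimiters, and detection patterns are assumptions, not anything found in the skill's files:

```python
import re

# Hypothetical helper (not part of the skill): wraps untrusted parsed
# document text in explicit delimiters so the agent can be instructed to
# treat it strictly as data, and flags lines that resemble injected
# instructions for human review.
SUSPICIOUS = re.compile(
    r"(ignore (all |any )?(previous|prior) instructions|you are now|system prompt)",
    re.IGNORECASE,
)

def wrap_untrusted(text: str) -> tuple[str, list[str]]:
    """Return (delimited_text, suspicious_lines) for review before use."""
    flagged = [line for line in text.splitlines() if SUSPICIOUS.search(line)]
    wrapped = (
        "<<<UNTRUSTED_DOCUMENT_CONTENT\n"
        + text
        + "\nUNTRUSTED_DOCUMENT_CONTENT>>>"
    )
    return wrapped, flagged
```

Flagging is heuristic at best; delimiting plus an explicit "treat everything between the markers as data" instruction is the more robust half of this pattern.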
- Persistence Mechanism (HIGH, exacerbates Indirect Prompt Injection): The `SKILL.md` explicitly states that the skill will "Append new vocabulary to `references/vocab.md`", "Append new grammar to `references/grammar.md`", and "create/update `references/lesson_X.md`" based on the ingested material. If an indirect prompt injection occurs via a malicious PDF/DOCX, the malicious instructions or data could be written into these reference files, making the injection persistent across sessions and potentially affecting future interactions with the agent.
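Writes to these persistent files could be gated rather than performed unconditionally. The following is a hedged sketch of such a gate; the function, size limit, and provenance comment format are assumptions for illustration and do not exist in the skill:

```python
from datetime import datetime, timezone
from pathlib import Path

# Illustrative mitigation (not in the skill): gate appends to the
# persistent reference files so injected text cannot silently survive
# across sessions.
MAX_ENTRY_LEN = 500  # assumption: vocab/grammar entries are short

def guarded_append(path: Path, entry: str, source: str) -> bool:
    """Append `entry` with provenance; reject oversized or delimiter-laden text."""
    if len(entry) > MAX_ENTRY_LEN or "<<<" in entry or ">>>" in entry:
        return False  # leave the file untouched; surface for human review
    stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
    with path.open("a", encoding="utf-8") as f:
        f.write(f"\n<!-- source: {source} @ {stamp} -->\n{entry}\n")
    return True
```

Recording which document each entry came from also makes a later cleanup feasible if a poisoned file is discovered.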
- Data Exfiltration (MEDIUM): The `scripts/parse_pdf_gemini.py` script uploads user-provided PDF files to Google's Gemini API (`genai.upload_file`). While Google is a trusted service, this action sends potentially sensitive user data (the content of the PDF) to an external third-party service; users should be aware that their document content will be transmitted to Google for processing. The script reads `GEMINI_API_KEY` from environment variables, which is good practice for credential handling, but the key's presence is precisely what enables this data transfer.
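The credential pattern described above can at least be made fail-closed, so the upload never proceeds with an unset key and the error message doubles as a consent notice. Only the `GEMINI_API_KEY` variable name comes from the finding; the function and wording below are illustrative, not the skill's actual code:

```python
import os

# Sketch of a fail-closed credential check. The real parse_pdf_gemini.py
# is not reproduced here; everything except the GEMINI_API_KEY variable
# name is an assumption.
def get_api_key() -> str:
    key = os.environ.get("GEMINI_API_KEY")
    if not key:
        raise RuntimeError(
            "GEMINI_API_KEY is not set; refusing to upload the PDF. "
            "Exporting this key is an explicit opt-in to sending document "
            "content to Google's Gemini API."
        )
    return key
```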
- Unverifiable Dependencies (MEDIUM): The `scripts/parse_docx.py` script imports `docx` (likely `python-docx`), and `scripts/parse_pdf_gemini.py` imports `google.generativeai`. These external Python libraries are not bundled with the skill's files. While `google-generativeai` comes from a trusted organization (Google) and `python-docx` is a common library, their exact versions and integrity cannot be verified from the skill's provided files alone. This introduces a dependency risk.
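One way to reduce this risk is to pin dependency versions and verify the installed distributions before the skill's scripts import them. The sketch below uses the standard library's `importlib.metadata`; the pinned version numbers are placeholders for illustration, not audited values:

```python
from importlib import metadata

# Hypothetical allowlist: the versions here are examples only; a real
# deployment would pin the exact versions that were audited.
PINNED = {
    "python-docx": "1.1.2",
    "google-generativeai": "0.8.3",
}

def check_pins(pins: dict[str, str]) -> list[str]:
    """Return human-readable problems; an empty list means everything matches."""
    problems = []
    for name, wanted in pins.items():
        try:
            installed = metadata.version(name)
        except metadata.PackageNotFoundError:
            problems.append(f"{name}: not installed")
            continue
        if installed != wanted:
            problems.append(f"{name}: installed {installed}, pinned {wanted}")
    return problems
```

A stricter variant would also verify package hashes (e.g. `pip install --require-hashes` against a locked requirements file) rather than trusting version strings alone.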
No direct Prompt Injection, Privilege Escalation, Persistence (beyond the content-based mechanism above), Obfuscation, or Time-Delayed attacks were found in the skill's own code or instructions. The `greet.py` script uses `datetime.datetime.now().hour` for conditional greetings, which is benign.
- AI detected serious security threats