skill-seekers
Fail
Audited by Gen Agent Trust Hub on Feb 17, 2026
Risk Level: HIGHDATA_EXFILTRATIONCOMMAND_EXECUTIONPROMPT_INJECTION
Full Analysis
- DATA_EXFILTRATION (HIGH): The logic defined in
src/skill_seekers/mcp/git_repo.py(specifically theinject_tokenmethod) automatically prepends sensitive authentication tokens to git URLs using the formathttps://TOKEN@github.com/.... This is a high-risk pattern because if the tool is directed to a malicious or non-whitelisted domain, the user's private token will be transmitted to the external server. Furthermore, tokens embedded in URLs are frequently leaked through shell history, log files, and monitoring systems. - COMMAND_EXECUTION (MEDIUM): The skill extensively uses
subprocess.runand other command execution APIs (as documented insrc/skill_seekers/cli/enhance_skill_local.pyandsrc/skill_seekers/mcp/tools/packaging_tools.py) to interact with the local operating system, terminal applications, and external CLI tools likegitandclaude. While these are part of its primary codebase analysis function, the ability to execute commands in an environment that processes untrusted external data is a significant risk factor. - PROMPT_INJECTION (LOW): The skill has a large surface for Indirect Prompt Injection. It ingests untrusted data from URLs, GitHub repositories, and PDF files through modules such as
codebase_scraper.pyandgithub_fetcher.py. This content is then used to construct prompts for LLMs during 'enhancement' phases (e.g.,_build_enhancement_promptinclaude.py). A malicious repository or document could contain hidden instructions that manipulate the output or behavior of the agent during skill generation. - Ingestion points:
codebase_scraper.py,github_fetcher.py, andpdf_extractor_poc.pycollect content from arbitrary external sources. - Boundary markers: No delimiters or safety instructions are mentioned in the API documentation for prompt construction.
- Capability inventory: The skill can execute subprocesses, write to the filesystem (
config_manager.py), and perform network operations (requests). - Sanitization: No evidence of sanitization or validation of external content before it is interpolated into LLM prompts.
Recommendations
- AI detected serious security threats
Audit Metadata