github-codebase-search
Fail
Audited by Gen Agent Trust Hub on Mar 17, 2026
Risk Level: HIGHREMOTE_CODE_EXECUTIONCREDENTIALS_UNSAFECOMMAND_EXECUTIONPROMPT_INJECTION
Full Analysis
- [REMOTE_CODE_EXECUTION]: The script
scripts/github-codebase-search.pyusesbunx @morphllm/morphmcp@latestto download and execute code from an external provider at runtime. This introduces a risk of executing unverified or malicious code from a third-party source. - [CREDENTIALS_UNSAFE]: The script passes the
MORPH_API_KEYdirectly as a command-line argument using the--envflag. On many operating systems, environment variables and command-line arguments are visible to all users via process listing tools likeps. - [COMMAND_EXECUTION]: The tool invokes external binaries and scripts (
mcporter,bunx) using thesubprocess.runfunction, interpolating user-supplied search queries and repository information into the command string. - [PROMPT_INJECTION]: The documentation explicitly instructs the agent not to read the source code of the scripts (
DO NOT read script source code), which prevents the agent from verifying the tool's behavior or identifying potential security issues. - [PROMPT_INJECTION]: The skill retrieves and processes untrusted data from public GitHub repositories, creating a surface for indirect prompt injection.
- Ingestion points: GitHub repository content fetched via the MorphLLM API in
scripts/github-codebase-search.py. - Boundary markers: The skill lacks explicit instructions to the model to ignore any instructions found within the retrieved code snippets.
- Capability inventory: The script has the ability to execute shell commands and download packages via
subprocess.run. - Sanitization: There is no evidence of sanitization or safety checks on the content retrieved from external repositories.
Recommendations
- AI detected serious security threats
Audit Metadata