mistral-debug-bundle

Pass

Audited by Gen Agent Trust Hub on Mar 30, 2026

Risk Level: SAFEDATA_EXFILTRATIONCOMMAND_EXECUTIONPROMPT_INJECTION
Full Analysis
  • [DATA_EXFILTRATION]: The skill identifies and reads sensitive configuration and log files to create a diagnostic bundle.
  • Accesses the .env file to gather environment configuration. It attempts to redact values using sed, but the raw file is read into the process context.
  • Reads user-level npm logs from ~/.npm/_logs and application logs from a logs/ directory.
  • Evidence: cat .env 2>/dev/null | sed 's/=.*/=***REDACTED***/' >> "$BUNDLE_DIR/config-redacted.txt" and grep -i "mistral" "$HOME/.npm/_logs"/*.log.
  • [COMMAND_EXECUTION]: The skill uses shell commands to gather system information and verify service availability.
  • Performs an API connectivity test to api.mistral.ai using the MISTRAL_API_KEY environment variable.
  • Uses tar to archive the collected data into a .tar.gz file for external sharing.
  • Evidence: curl -s -o /dev/null -w "%{http_code}" -H "Authorization: Bearer ${MISTRAL_API_KEY}" https://api.mistral.ai/v1/models.
  • [PROMPT_INJECTION]: The skill is susceptible to indirect prompt injection due to the processing of untrusted log data.
  • Ingestion points: Aggregates content from project files (.env, package.json) and various log files (logs/*.log, ~/.npm/_logs/*.log).
  • Boundary markers: The skill does not implement specific delimiters or warnings to prevent the agent from interpreting instructions potentially hidden within the collected logs.
  • Capability inventory: The agent has access to Bash, curl, and file system tools which could be exploited if malicious instructions are processed from the ingested data.
  • Sanitization: Implements a basic redaction mechanism for environment variables in .env files.
Audit Metadata
Risk Level
SAFE
Analyzed
Mar 30, 2026, 02:27 PM