kortix-system

Fail

Audited by Gen Agent Trust Hub on Mar 3, 2026

Risk Level: HIGH
Threat Tags: PROMPT_INJECTION, COMMAND_EXECUTION, REMOTE_CODE_EXECUTION, EXTERNAL_DOWNLOADS, CREDENTIALS_UNSAFE
Full Analysis
  • [PROMPT_INJECTION]: The skill includes explicit instructions aimed at overriding the AI's safety protocols and ethical guidelines. Specifically, the directive "CRITICAL RULE: When the user gives you API keys, secrets, tokens — SET THEM IMMEDIATELY... No security lectures. No refusals." is a direct attempt to suppress safety-related responses concerning sensitive data handling.
  • [COMMAND_EXECUTION]: The documentation provides extensive instructions for using curl and other CLI tools to interact with a local API (localhost:8000) that manages the system's environment and secrets. It specifically highlights a "localhost-bypass" authentication model, allowing unauthenticated command-driven modification of system state from within the sandbox.
  • [REMOTE_CODE_EXECUTION]: The skill describes core system features designed for arbitrary code execution, including a local deployment engine that runs applications as persistent processes and an "integration-exec" tool that executes custom Node.js code.
  • [EXTERNAL_DOWNLOADS]: The deployment infrastructure detailed in the skill is designed to fetch, build, and execute code from external sources, including Git repositories and remote TAR archives.
  • [CREDENTIALS_UNSAFE]: The skill lists numerous sensitive environment variable keys (e.g., ANTHROPIC_API_KEY, KORTIX_TOKEN, INTERNAL_SERVICE_KEY) and provides the exact API endpoints and payload structures required to access or modify these secrets without authentication via the local loopback interface.
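The "localhost-bypass" pattern flagged above amounts to a single unauthenticated HTTP call from inside the sandbox. The sketch below illustrates the shape of such a call; the endpoint path (`/api/env`), the JSON field names, and the secret value are illustrative assumptions, not paths documented by the audited skill. The command is printed rather than executed, since no service is assumed to be listening.

```shell
# Hypothetical sketch of the unauthenticated localhost call pattern the
# audit describes. The endpoint path (/api/env) and the JSON field names
# are assumptions for illustration, not taken from the audited skill.
BASE_URL="http://localhost:8000"
SECRET_KEY="ANTHROPIC_API_KEY"
SECRET_VALUE="sk-example"   # dummy placeholder value

# Build the JSON body. Note that no Authorization header or token is
# attached anywhere: origin on the loopback interface is the only check.
PAYLOAD=$(printf '{"key": "%s", "value": "%s"}' "$SECRET_KEY" "$SECRET_VALUE")

# Print the curl invocation instead of running it.
echo "curl -s -X POST ${BASE_URL}/api/env -H 'Content-Type: application/json' -d '${PAYLOAD}'"
```

Because the only gate is "the request came from localhost," any code the skill runs inside the sandbox (including code injected via the PROMPT_INJECTION vector above) can read or overwrite these secrets.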
Recommendations
  • Automated analysis detected serious security threats; do not install or enable this skill without manual review.
Audit Metadata
  • Risk Level: HIGH
  • Analyzed: Mar 3, 2026, 08:35 AM