paperclip-ai-orchestration

Warn

Audited by Gen Agent Trust Hub on Mar 16, 2026

Risk Level: MEDIUMREMOTE_CODE_EXECUTIONEXTERNAL_DOWNLOADSCOMMAND_EXECUTIONCREDENTIALS_UNSAFEPROMPT_INJECTION
Full Analysis
  • [REMOTE_CODE_EXECUTION]: The skill recommends using npx paperclipai onboard --yes, which downloads and executes an external package directly from the npm registry at runtime.
  • [EXTERNAL_DOWNLOADS]: The installation process involves cloning a remote repository from GitHub (https://github.com/paperclipai/paperclip.git) and installing dependencies from external registries using pnpm install.
  • [COMMAND_EXECUTION]: The documentation provides examples of AI agents executing arbitrary shell commands through a bash runtime, such as pnpm test --filter=sync.
  • [CREDENTIALS_UNSAFE]: The skill documentation instructs users to configure sensitive environment variables, including DATABASE_URL, AWS_ACCESS_KEY_ID, and AWS_SECRET_ACCESS_KEY, within a .env file.
  • [PROMPT_INJECTION]: The platform processes external, untrusted inputs (tasks, goals, and messages) and provides them to agents with system-level execution capabilities. The provided examples lack boundary markers or sanitization logic, establishing a surface for indirect prompt injection.
  • Ingestion points: API endpoints for creating companies, goals, tasks, and messages (SKILL.md).
  • Boundary markers: None documented in the provided API or prompt examples.
  • Capability inventory: Agents support bash runtimes and can execute shell commands (SKILL.md).
  • Sanitization: No input validation or sanitization mechanisms are described in the skill.
Audit Metadata
Risk Level
MEDIUM
Analyzed
Mar 16, 2026, 04:39 AM