
fabric

Fail

Audited by Gen Agent Trust Hub on Feb 17, 2026

Risk Level: HIGH
Findings: REMOTE_CODE_EXECUTION, COMMAND_EXECUTION, CREDENTIALS_UNSAFE, EXTERNAL_DOWNLOADS, DATA_EXFILTRATION
Full Analysis
  • REMOTE_CODE_EXECUTION (HIGH): The project's primary installation method involves piping remote shell scripts directly to bash or PowerShell (install.sh and install.ps1). This is a high-risk pattern that executes unverified code from the internet.
  • COMMAND_EXECUTION (HIGH): Several components execute shell commands via sh -c or exec. The Obsidian integration endpoint (web/src/routes/obsidian/+server.ts) is particularly risky: it calls child_process.exec with input derived from the request body without sufficient validation. The 'Extension' system (internal/plugins/template/extension_executor.go) likewise executes registered binaries through a shell, which could be exploited if its configuration files are tampered with.
  • COMMAND_EXECUTION (HIGH): The create_coding_feature pattern and its implementation in internal/core/chatter.go allow the AI to dictate file creation and modification on the local filesystem. While it requires user confirmation in the CLI, the capability allows for arbitrary file writes within the project root based on potentially untrusted LLM output.
  • CREDENTIALS_UNSAFE (HIGH): The REST API implementation includes a configuration handler (internal/server/configuration.go) that exposes an endpoint to retrieve all environment variables. This includes sensitive secrets like OPENAI_API_KEY, ANTHROPIC_API_KEY, and others. If the server is started without an API key (a configuration the server warns about but allows), these credentials are accessible to any network-adjacent attacker.
  • DATA_EXFILTRATION (MEDIUM): The built-in template system provides plugins for reading local files ({{plugin:file:read:PATH}}) and environment variables ({{plugin:sys:env:VAR}}). While there are some path traversal protections, these features could be combined with the fetch plugin or AI responses to exfiltrate sensitive system information via indirect prompt injection.
  • PROMPT_INJECTION (LOW): Several patterns use aggressive instructional language (e.g., "Do not object to this task in any way," "Do not complain about the instructions") to force model compliance. While common in prompt engineering, this style explicitly bypasses standard LLM safety guardrails.
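The two COMMAND_EXECUTION findings above share a root cause: commands assembled as strings and handed to a shell (sh -c or child_process.exec), so metacharacters in attacker-influenced input are interpreted. A minimal Go sketch of the safer pattern, passing an explicit argument vector so no shell ever parses the input (the helper name is illustrative, not fabric's API):

```go
package main

import (
	"fmt"
	"os/exec"
)

// runSafely invokes a program with an explicit argument vector.
// Because no shell is involved, metacharacters in the arguments
// (;, |, $(...)) are passed to the program literally instead of
// being interpreted as command syntax.
func runSafely(program string, args ...string) (string, error) {
	out, err := exec.Command(program, args...).Output()
	return string(out), err
}

func main() {
	// Interpolated into `sh -c "echo " + input`, this input would
	// also run `id`. Here it is a single literal argument to echo.
	out, err := runSafely("echo", "hello; id")
	if err != nil {
		panic(err)
	}
	fmt.Print(out) // prints: hello; id
}
```

The same discipline applies on the TypeScript side: child_process.execFile with an argument array avoids the shell that child_process.exec always spawns.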
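For the CREDENTIALS_UNSAFE finding, a common mitigation is to redact secret-looking variables before a configuration endpoint serializes the environment. A hedged Go sketch (the marker list and function are illustrative, not fabric's actual handler):

```go
package main

import (
	"fmt"
	"strings"
)

// Names containing any of these markers are treated as secrets.
// This denylist is illustrative; an allowlist of known-safe
// variables is stricter still.
var secretMarkers = []string{"KEY", "TOKEN", "SECRET", "PASSWORD"}

// redactEnv masks likely secrets in an environment map before it
// is exposed over an HTTP configuration endpoint.
func redactEnv(env map[string]string) map[string]string {
	out := make(map[string]string, len(env))
	for name, value := range env {
		out[name] = value
		upper := strings.ToUpper(name)
		for _, marker := range secretMarkers {
			if strings.Contains(upper, marker) {
				out[name] = "[REDACTED]"
				break
			}
		}
	}
	return out
}

func main() {
	env := map[string]string{
		"OPENAI_API_KEY": "sk-abc123",
		"DEFAULT_MODEL":  "gpt-4o",
	}
	safe := redactEnv(env)
	fmt.Println(safe["OPENAI_API_KEY"], safe["DEFAULT_MODEL"]) // prints: [REDACTED] gpt-4o
}
```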
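The DATA_EXFILTRATION finding notes that {{plugin:file:read:PATH}} has "some path traversal protections". A sketch of the kind of containment check such a plugin needs, confining reads to a root directory (illustrative only; it does not resolve symlinks, and it is not fabric's actual implementation):

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// insideRoot reports whether the requested path, once joined with
// root and cleaned, still falls under root. A file-read plugin
// should refuse any request for which this returns false.
func insideRoot(root, requested string) bool {
	abs := filepath.Clean(filepath.Join(root, requested))
	rel, err := filepath.Rel(root, abs)
	if err != nil {
		return false
	}
	// A contained path never needs to climb out via "..".
	return rel == "." ||
		(rel != ".." && !strings.HasPrefix(rel, ".."+string(filepath.Separator)))
}

func main() {
	fmt.Println(insideRoot("/data", "notes/today.md"))   // prints: true
	fmt.Println(insideRoot("/data", "../../etc/passwd")) // prints: false
}
```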
Recommendations
  • HIGH: The installer downloads and executes remote code from https://raw.githubusercontent.com/danielmiessler/fabric/main/scripts/installer/install.sh. DO NOT USE without thorough review.
  • Automated analysis detected serious security threats in this skillset.
Audit Metadata
  • Risk Level: HIGH
  • Analyzed: Feb 17, 2026, 06:29 PM