databricks-spark-declarative-pipelines

Pass

Audited by Gen Agent Trust Hub on Apr 9, 2026

Risk Level: SAFE
Full Analysis
  • [SAFE]: The skill uses official Databricks tools and documentation. It correctly instructs users to manage secrets using the Databricks Secrets utility (dbutils.secrets.get) and to use Unity Catalog for data governance.
  • [COMMAND_EXECUTION]: The skill utilizes the Databricks CLI and specialized MCP tools (manage_pipeline, manage_workspace_files) to deploy and run data pipelines. These operations are standard for the development and automation workflows described.
  • [DATA_EXPOSURE]: The skill interacts with Unity Catalog tables and Volumes to perform data engineering tasks. This access is necessary for the skill's purpose and is managed through the Databricks environment's native authorization controls.
  • [INDIRECT_PROMPT_INJECTION]: The skill has an attack surface for indirect prompt injection because it reads data and metadata from external sources (Unity Catalog tables and Volumes) that could contain malicious instructions.
  • Ingestion points: Tools such as query, discover-schema, and get_table_stats_and_schema ingest data from potentially untrusted sources.
  • Boundary markers: The skill does not explicitly define markers or delimiters to protect against embedded instructions in the data.
  • Capability inventory: The skill can perform file operations (manage_workspace_files), manage resources (manage_pipeline), and execute code (execute_sql).
  • Sanitization: There are no documented sanitization or filtering steps for the ingested data before it is processed by the agent.
Audit Metadata
Risk Level
SAFE
Analyzed
Apr 9, 2026, 10:55 AM