spark-declarative-pipelines
Pass
Audited by Gen Agent Trust Hub on Feb 19, 2026
Risk Level: SAFECOMMAND_EXECUTION
Full Analysis
- [Command Execution] (LOW): The skill utilizes MCP tools like 'execute_sql' and 'create_or_update_pipeline' to interact with Databricks. This is consistent with its role as a development and orchestration tool.
- [Indirect Prompt Injection] (LOW): 1. Ingestion points: Data is read from cloud storage and streaming sources (Kafka/Kinesis) as shown in '10-mcp-approach.md' and '2-streaming-patterns.md'. 2. Boundary markers: Absent; templates use direct SQL/Python calls without explicit instruction isolation for source data. 3. Capability inventory: File '10-mcp-approach.md' defines capabilities for 'execute_sql' and 'create_or_update_pipeline'. 4. Sanitization: Relies on native Spark SQL handling; no additional sanitization logic is included in the skill templates.
- [Dynamic Execution] (SAFE): The skill involves generating and deploying Python and SQL scripts, which is the primary intended function for Databricks pipeline management.
Audit Metadata