data-quality-frameworks

Pass

Audited by Gen Agent Trust Hub on Feb 27, 2026

Risk Level: SAFE
Full Analysis
  • [SAFE]: The skill contains legitimate architectural patterns and code samples for data quality monitoring and validation using established frameworks.
  • [EXTERNAL_DOWNLOADS]: References the installation of 'great_expectations', a well-known and widely used open-source library for data validation. This is a standard dependency for the stated purpose and originates from a trusted technology ecosystem.
  • [CREDENTIALS_UNSAFE]: Appropriately uses environment variable placeholders (e.g., '${SLACK_WEBHOOK}') in configuration files for sensitive values like Slack webhooks, adhering to security best practices for secret management.
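For illustration, a checkpoint notification action of this shape would reference the webhook via a placeholder rather than a literal secret. This is a hypothetical excerpt in the style of the referenced orders_checkpoint.yml, not the file's actual contents; Great Expectations resolves `${SLACK_WEBHOOK}` at runtime from config variables or the environment:

```yaml
# Hypothetical excerpt (not the audited file). The ${SLACK_WEBHOOK}
# placeholder is substituted at runtime, so no secret lives in the repo.
action_list:
  - name: notify_slack
    action:
      class_name: SlackNotificationAction
      slack_webhook: ${SLACK_WEBHOOK}
      notify_on: failure
```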
  • [INDIRECT_PROMPT_INJECTION]: The skill describes pipelines for processing external data, which technically presents an ingestion surface; however, the logic is inherently defensive.
      • Ingestion points: Data sources referenced in quality_pipeline.py and orders_checkpoint.yml (e.g., the 'warehouse' datasource).
      • Boundary markers: None explicitly defined in the generic templates, though the Great Expectations framework enforces schema constraints.
      • Capability inventory: Subprocess execution for data validation and SQL query execution via dbt.
      • Sanitization: The skill's primary function is to provide a validation and sanitization layer for data quality, which mitigates the risk of processing malformed or malicious data.
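As a framework-agnostic sketch of the defensive posture described above, the validation layer can be thought of as a gate that splits incoming records into valid and rejected sets before anything reaches downstream SQL or dbt steps. All names here (validate_orders, REQUIRED_COLUMNS) are illustrative and do not appear in the skill's quality_pipeline.py:

```python
# Minimal, framework-agnostic sketch of a validation gate in the spirit of
# Great Expectations. Hypothetical names; not taken from the audited skill.

REQUIRED_COLUMNS = {"order_id", "amount", "currency"}

def validate_orders(rows):
    """Split incoming rows into (valid, rejected) before downstream processing."""
    valid, rejected = [], []
    for row in rows:
        # Schema check: every required column must be present.
        if not REQUIRED_COLUMNS <= row.keys():
            rejected.append(row)
            continue
        # Value check: amounts must be non-negative numbers.
        if not isinstance(row["amount"], (int, float)) or row["amount"] < 0:
            rejected.append(row)
            continue
        valid.append(row)
    return valid, rejected

good, bad = validate_orders([
    {"order_id": 1, "amount": 9.99, "currency": "USD"},
    {"order_id": 2, "amount": -5},  # missing 'currency' and negative amount
])
```

Because malformed rows are quarantined rather than passed through, an attacker-controlled record cannot silently flow into the subprocess/dbt execution path, which is the sense in which the pipeline's logic is defensive.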
Audit Metadata
Risk Level
SAFE
Analyzed
Feb 27, 2026, 09:06 AM