mlflow-genai-foundation

Pass

Audited by Gen Agent Trust Hub on Mar 8, 2026

Risk Level: SAFE
Full Analysis
  • [PROMPT_INJECTION]: The skill defines patterns for evaluating and tracing agent responses that process untrusted user input (an indirect prompt injection surface). This vulnerability surface is inherent to agent monitoring, and the skill mitigates it by instructing users to implement MLflow's built-in safety scorers and guidelines-adherence metrics.
  • Ingestion points: agent request objects handled by ResponsesAgent.predict() and the evaluation_data processed by mlflow.genai.evaluate().
  • Boundary markers: Encourages the use of GuidelinesAdherence with explicit behavioral constraints for LLM-based evaluation judges.
  • Capability inventory: the skill uses MLflow's standard model-logging and tracing capabilities, plus the Databricks SDK for metadata management.
  • Sanitization: promotes the ResponsesAgent framework, which enforces structured schema validation on both inputs and outputs.
  • [EXTERNAL_DOWNLOADS]: The skill references established, well-known libraries (mlflow, pandas, databricks-sdk) in its configuration snippets and requirements. These are distributed through official, trusted channels and do not constitute an external-dependency risk.
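The schema-validation boundary noted above can be sketched in plain Python. This is a simplified, hypothetical stand-in (the field names AgentRequest, input, and max_output_tokens are illustrative assumptions, not MLflow's actual request model); the real ResponsesAgent framework performs equivalent checks via its typed request/response schemas before untrusted input ever reaches the agent.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class AgentRequest:
    """Illustrative request schema; not MLflow's actual ResponsesAgent model."""
    input: str
    max_output_tokens: int = 1024


def parse_request(raw: dict) -> AgentRequest:
    """Reject unexpected fields and wrong types before the agent runs.

    This mirrors the idea of structured schema validation: untrusted
    payloads are normalized into a fixed shape or refused outright.
    """
    allowed = {"input", "max_output_tokens"}
    unknown = set(raw) - allowed
    if unknown:
        raise ValueError(f"unexpected fields: {sorted(unknown)}")
    if not isinstance(raw.get("input"), str):
        raise TypeError("'input' must be a string")
    tokens = raw.get("max_output_tokens", 1024)
    if not isinstance(tokens, int) or tokens <= 0:
        raise TypeError("'max_output_tokens' must be a positive int")
    return AgentRequest(input=raw["input"], max_output_tokens=tokens)
```

A payload smuggling an extra field (e.g. an injected tool directive) is rejected at the boundary rather than silently forwarded to the model.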
Audit Metadata
Risk Level
SAFE
Analyzed
Mar 8, 2026, 02:33 AM