# databricks-docs

Databricks Documentation Reference
This skill provides access to the complete Databricks documentation index via llms.txt. Use it as a reference resource to supplement other skills and to inform your use of MCP tools.
## Role of This Skill
This is a reference skill, not an action skill. Use it to:
- Look up documentation when other skills don't cover a topic
- Get authoritative guidance on Databricks concepts and APIs
- Find detailed information to inform how you use MCP tools
- Discover features and capabilities you may not know about
Always prefer using MCP tools for actions (`execute_sql`, `manage_pipeline`, etc.) and load specific skills for workflows (`databricks-python-sdk`, `databricks-spark-declarative-pipelines`, etc.). Use this skill when you need reference documentation.
## How to Use
Fetch the llms.txt documentation index from `https://docs.databricks.com/llms.txt`.
Use WebFetch to retrieve this index, then:
- Search the index for relevant sections and links (a Python sketch of this flow follows the list)
- Fetch specific documentation pages for detailed guidance
- Apply what you learn using the appropriate MCP tools
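For illustration, here is a minimal Python sketch of the same lookup flow. In practice WebFetch does the fetching; this sketch assumes the `requests` library is available and that index entries follow the usual llms.txt markdown-link convention:

```python
import re
import requests

INDEX_URL = "https://docs.databricks.com/llms.txt"

# 1. Fetch the documentation index
index = requests.get(INDEX_URL, timeout=30).text

# 2. Search the index for entries mentioning a topic;
#    llms.txt entries are typically markdown links: "- [Title](url): description"
topic = "unity catalog"
hits = [line for line in index.splitlines() if topic in line.lower()]

# 3. Pull the URL out of the first hit and fetch that page for detail
if hits:
    match = re.search(r"\((https?://[^)\s]+)\)", hits[0])
    if match:
        page = requests.get(match.group(1), timeout=30).text
        print(page[:500])  # skim the start of the doc page
```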
## Documentation Structure
The llms.txt file is organized by category (a sketch for slicing the index into these sections follows the list):
- Overview & Getting Started - Basic concepts and tutorials
- Data Engineering - Lakeflow, Spark, Delta Lake, pipelines
- SQL & Analytics - Warehouses, queries, dashboards
- AI/ML - MLflow, model serving, GenAI
- Governance - Unity Catalog, permissions, security
- Developer Tools - SDKs, CLI, APIs, Terraform
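To narrow a search to one category, the index can be grouped by its section headings first. A sketch assuming the common llms.txt convention of `##` markdown headings; the exact heading names in the file may differ from the labels above:

```python
import requests

index = requests.get("https://docs.databricks.com/llms.txt", timeout=30).text

# Group index lines under their "##" section headings (assumed convention)
sections: dict[str, list[str]] = {}
current = "(preamble)"
for line in index.splitlines():
    if line.startswith("## "):
        current = line[3:].strip()
    else:
        sections.setdefault(current, []).append(line)

print(list(sections))  # inspect available sections before drilling in
```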
## Example: Complementing Other Skills
**Scenario:** User wants to create a Delta Live Tables pipeline

- Load the `databricks-spark-declarative-pipelines` skill for workflow patterns
- Use this skill to fetch docs if you need clarification on specific DLT features
- Use the `manage_pipeline(action="create_or_update")` MCP tool to actually create the pipeline (a sketch of typical pipeline source follows)
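For context, a minimal sketch of the kind of source code such a pipeline would run; the source path is a hypothetical placeholder, and `spark` is provided implicitly by the pipeline runtime:

```python
import dlt  # available inside the Lakeflow / DLT pipeline runtime

@dlt.table(comment="Raw trip records ingested with Auto Loader")
def raw_trips():
    # "/Volumes/main/default/raw/trips" is a hypothetical source path
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load("/Volumes/main/default/raw/trips")
    )
```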
**Scenario:** User asks about an unfamiliar Databricks feature
- Fetch llms.txt to find relevant documentation
- Read the specific docs to understand the feature
- Determine which skill/tools apply, then use them
## Related Skills
- `databricks-python-sdk` - SDK patterns for programmatic Databricks access
- `databricks-spark-declarative-pipelines` - DLT / Lakeflow pipeline workflows
- `databricks-unity-catalog` - Governance and catalog management
- `databricks-model-serving` - Serving endpoints and model deployment
- `databricks-mlflow-evaluation` - MLflow 3 GenAI evaluation workflows