# Fabric CLI
Guidance for using `fab` to programmatically manage the Fabric & Power BI service.

- Install via `uv tool install ms-fabric-cli` (get `uv` via `winget install uv` or `brew install uv`)
- Fabric CLI is for working with the cloud environment, not local files; it works with Power BI Pro, PPU, or Fabric; you DO NOT need a Fabric SKU to use the Fabric CLI
> [!IMPORTANT]
> Any time you encounter errors, user preferences, or learnings when using the Fabric CLI, ALWAYS note these down in the user memory rules, i.e. `.claude/rules/fabric-cli.md`, for future improvement. This is ONLY for generic learnings, not for item- or task-specific learnings.
## When to use this skill
- Use whenever the user mentions "Fabric" or "Power BI"
- Use when user asks about Power BI workspaces, deployment, tenants, publishing, download, permissions, or data
## Critical general rules
- IMPORTANT: The first time you use `fab`, check that it is up to date with the latest version and run `fab auth status`; if the user isn't authenticated, ask them to run `fab auth login`
- Always use `fab --help` and `fab <command> --help` the first time you use a command to understand its syntax
- You must search the skill `/references/` for relevant reference files that explain certain commands, examples, scripts, or workflows before you start using `fab`
- Before first use, ask the user if they have Fabric admin access, sensitivity labels or DLP policies, any API restrictions, or preferences for Fabric/Power BI API usage; remind the user to add this to memory files
- If a workspace or item name is unclear, ask the user first, then verify with `fab ls` or `fab exists` before proceeding
- Avoid removing or moving items, workspaces, or definitions, or changing properties, without explicit user direction
- If a command is blocked in your permissions and you try to use it, stop and ask the user for clarification; never try to circumvent it
- Create output directories before export: `fab export` does not create intermediate directories; `mkdir -p` the output path first or the command fails with `[InvalidPath]`
## Use -f (force) for non-interactive use

The fab CLI prompts for confirmation, so you must always append `-f` to prevent this UNLESS sensitivity labels are enabled, in which case you must ask the user. Do this for the following commands:

- `fab get -q "definition"`: sensitivity label confirmation
- `fab export`: sensitivity label confirmation
- `fab import`: overwrite confirmation
- `fab cp` / `fab cp -r`: overwrite and sensitivity label confirmation
- `fab rm`: delete confirmation
- `fab assign` / `fab unassign`: capacity/domain assignment confirmation
- `fab mv`: rename/move confirmation
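The append-`-f` rule can be wrapped in a small guard. A minimal sketch, assuming the caller sets a flag only after confirming no sensitivity labels apply; `FAB_NONINTERACTIVE` and `fab_args` are hypothetical helpers, not part of the CLI:

```shell
# Hypothetical helper: emit fab arguments, appending -f only when the caller
# has opted into non-interactive mode (i.e. confirmed no sensitivity labels).
fab_args() {
  if [ "${FAB_NONINTERACTIVE:-0}" = "1" ]; then
    echo "$* -f"
  else
    echo "$*"
  fi
}

FAB_NONINTERACTIVE=1
fab_args export "ws.Workspace/Nb.Notebook" -o ./backup
# prints: export ws.Workspace/Nb.Notebook -o ./backup -f
```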
## Quickstart guide

You must read and understand the common list of operations with simple examples:

- Check the commands, syntax, and auth status: `fab --help` and `fab auth status`
- Check if the item exists if the user gave the workspace and item name: `fab exists "spaceparts-dev.Workspace/spaceparts-otc-full.SemanticModel"`
- Find the workspace: `fab ls`
- Find the item: `fab ls "Workspace Name.Workspace"`
- Check the commands for that item: `fab desc` to get itemTypes, `fab desc .<ItemType>` for commands, i.e. `fab desc .SemanticModel`
- What's in that item; what's it for; what is it?
  - Full TMDL definition: `fab get "spaceparts-dev.Workspace/spaceparts-otc-full.SemanticModel" -q "definition" -f`
  - Search a specific measure / table / column: `fab get "ws.Workspace/Model.SemanticModel" -q "definition" -f | rga -i "Sales Amount"`
- Get files, tables, or table schemas:
  - List lakehouse files: `fab ls "ws.Workspace/LH.Lakehouse/Files"`
  - List lakehouse tables: `fab ls "ws.Workspace/LH.Lakehouse/Tables"`
  - Table schema: `fab table schema "ws.Workspace/LH.Lakehouse/Tables/gold/orders"`
- Query data (always prefer the wrapper scripts over raw `fab api`/duckdb/sqlcmd; they resolve IDs, hosts, and auth for you):
  - Semantic model (DAX): `python3 scripts/execute_dax.py "ws.Workspace/Model.SemanticModel" -q "EVALUATE TOPN(10, 'Orders')"`
  - Lakehouse or warehouse (DuckDB + Delta against OneLake): `python3 scripts/query_lakehouse_duckdb.py "ws.Workspace/LH.Lakehouse" -q "SELECT * FROM tbl LIMIT 10" -t gold.orders`
  - Lakehouse SQL endpoint, warehouse, or SQL database (T-SQL via `sqlcmd` + `az` session): `python3 scripts/query_sql_endpoint.py "ws.Workspace/LH.Lakehouse" -q "SELECT TOP 10 * FROM dbo.orders"`
- Set properties for an item or workspace: `fab set "ws.Workspace/Item.Notebook" -q displayName -i "New Name"` or `fab set "ws.Workspace" -q description -i "Production environment"`
- Review or manage permissions:
  - Item ACL: `fab acl ls "ws.Workspace/Model.SemanticModel"` then `fab acl set "ws.Workspace/Model.SemanticModel" -I user@contoso.com -R Read`
  - Workspace roles: `fab acl ls "ws.Workspace"` then `fab acl set "ws.Workspace" -I user@contoso.com -R Member`
- Deploy items to Fabric: `fab import "ws.Workspace/New.Notebook" -i ./local-path/Nb.Notebook -f`
- Download items from Fabric: `fab export "ws.Workspace/Nb.Notebook" -o ./backup -f` (always `mkdir -p ./backup` first)
- Copy or move items between workspaces: `fab cp "dev.Workspace/Item.Notebook" "prod.Workspace" -f` or `fab mv "ws.Workspace/Old.Notebook" "ws.Workspace/New.Notebook" -f`
- Open an item in Fabric via the browser: `fab open "spaceparts-dev.SpaceParts/Amazing Report.Report"`
- Using Fabric or Power BI APIs: `fab api -A powerbi "groups/<ws-id>/datasets/<model-id>/refreshes" -X post -i '{"type":"Full"}'` or `fab api "workspaces/<ws-id>/items"`
- Using Azure CLI (advanced) when Fabric CLI doesn't suffice:
  - T-SQL over any SQL-capable item: use `scripts/query_sql_endpoint.py` (reuses `az login` via `ActiveDirectoryAzCli`; full walkthrough in querying-data.md)
  - Pass a Key Vault secret to a consumer without ever reading, echoing, or persisting it: `az login --service-principal -u <appId> -t <tenantId> --password "$(az keyvault secret show --vault-name <vault> --name <secret> --query value -o tsv)"`; command substitution pipes the secret directly into the child process arg list, never stdout, a file, or a named shell variable
  - Full fab-vs-az decision matrix: fab-vs-az-cli.md
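The secret-passing pattern can be seen in isolation with a stubbed `az`. A sketch only: `az` here is a local shell function standing in for the real `az keyvault secret show ... -o tsv`, and the secret value is invented:

```shell
# Stub standing in for `az keyvault secret show --vault-name v --name n --query value -o tsv`
az() { echo "s3cr3t-value"; }

# Stand-in consumer that receives the secret as an argument
consume() { [ "$1" = "s3cr3t-value" ] && echo "secret received via argv"; }

# Command substitution feeds the value straight into the child's argument
# list; it is never echoed, written to a file, or kept in a named variable.
consume "$(az keyvault secret show --vault-name v --name n --query value -o tsv)"
# prints: secret received via argv
```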
## Essential Concepts

For information about any concepts related to Power BI or Fabric you must search or fetch via the microsoft-learn MCP server (or the pbi-search CLI as an alternative) and ask the user questions with the AskUserQuestion tool; NEVER guess or make assumptions.
### Workspaces

- Workspaces are containers for items like Notebooks (and other ETL items), Lakehouses (and other data items), SemanticModels, Reports (and other consumption items), and OrgApps.
- Workspaces can be assigned to different things:
  - Deployment Pipelines for lifecycle management (Dev, Test, Prod, etc.)
  - Domains for governance and tenant structuring
  - Capacities for licensing and resources (Fabric or Premium capacities only; PPU and Pro work differently)
  - Git repositories for source control via Git integration
## Key Patterns

Pay special attention to each of the following areas when using the Fabric CLI.
### Path Format

Fabric uses filesystem-like paths with type extensions: `"WorkspaceName.Workspace/ItemName.ItemType"`

You must quote paths with spaces and punctuation: `"Workspace Name.Workspace/Semantic Model Name.SemanticModel"`

For lakehouses this is extended into files and tables: `WorkspaceName.Workspace/LakehouseName.Lakehouse/Files/FileName.extension` or `WorkspaceName.Workspace/LakehouseName.Lakehouse/Tables/TableName`

For Fabric capacities you have to use `fab ls .capacities`.

Examples:

- `"Production Workspace.Workspace/Sales Report.Report"`
- `Data.Workspace/MainLH.Lakehouse/Files/data.csv`
- `Data.Workspace/MainLH.Lakehouse/Tables/dbo/customers`
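Why the quoting rule matters can be shown with plain shell word-splitting; a self-contained sketch with no `fab` call involved:

```shell
# An unquoted path with spaces splits into multiple words before the CLI sees it
path="Production Workspace.Workspace/Sales Report.Report"

set -- $path      # unquoted: the shell word-splits on spaces
echo "$#"         # prints: 3

set -- "$path"    # quoted: the path survives as a single argument
echo "$#"         # prints: 1
```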
### Common Item Types

- `.Workspace` - Workspaces
- `.SemanticModel` - Power BI datasets
- `.Report` - Power BI reports
- `.Notebook` - Fabric notebooks
- `.DataPipeline` - Data pipelines
- `.Lakehouse` / `.Warehouse` / `.SQLDatabase` - Data artifacts
- `.SparkJobDefinition` - Spark jobs
- `.AISkill` - Fabric Data Agents
- `.MirroredDatabase` / `.MirroredWarehouse` - Mirrored databases
- `.Environment` - Spark environments
- `.UserDataFunction` - User data functions

Full list: you must use `fab desc` or `fab desc .<ItemType>` to check syntax and types if the user asks about an item type not listed above.
### JMESPath Queries

Filter and transform JSON responses with `-q`:

```shell
# Get a single field
-q "id"
-q "displayName"

# Get a nested field
-q "properties.sqlEndpointProperties"
-q "definition.parts[0]"

# Filter arrays
-q "value[?type=='Lakehouse']"
-q "value[?contains(name, 'prod')]"

# Get the first element
-q "value[0]"
-q "definition.parts[?path=='model.tmdl'] | [0]"
```
### Using fab api

fab has an `api` escape hatch that lets you call any API, even one without first-class commands.

#### Variable Extraction Pattern

To use `fab api` you need item IDs. Extract them like this:

```shell
WS_ID=$(fab get "ws.Workspace" -q "id" | tr -d '"')
MODEL_ID=$(fab get "ws.Workspace/Model.SemanticModel" -q "id" | tr -d '"')

# Then use them in API calls
fab api -A powerbi "groups/$WS_ID/datasets/$MODEL_ID/refreshes" -X post -i '{"type":"Full"}'
```
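The extraction step can be exercised offline by stubbing `fab`. A sketch: the GUID and the stub are invented; only the `tr -d '"'` pipeline mirrors the real pattern:

```shell
# Stub: the real `fab get ... -q "id"` prints the id as a quoted JSON string
fab() { echo '"d3adb33f-0000-4000-8000-000000000000"'; }

WS_ID=$(fab get "ws.Workspace" -q "id" | tr -d '"')
echo "$WS_ID"
# prints: d3adb33f-0000-4000-8000-000000000000
```

The bare GUID is then safe to splice into an API path like `groups/$WS_ID/...`.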
#### Admin APIs (Requires Admin Role)

Don't use admin commands or APIs if the user doesn't have admin access. Here are some examples:

```shell
# Find semantic models by name (cross-workspace)
fab api "admin/items" -P "type=SemanticModel" -q "itemEntities[?contains(name, 'Sales')]"

# Find all notebooks
fab api "admin/items" -P "type=Notebook" -q "itemEntities[].{name:name,workspace:workspaceId}"

# Find all lakehouses
fab api "admin/items" -P "type=Lakehouse"

# Common types: SemanticModel, Report, Notebook, Lakehouse, Warehouse, DataPipeline, Ontology
```

For the full admin API reference (cross-workspace discovery, tenant settings read/update, capacity/domain/workspace overrides, activity events): admin.md
### Error Handling & Debugging

```shell
# Show response headers
fab api workspaces --show_headers

# Verbose output
fab get "Production.Workspace/Item" -v

# Save responses for debugging
fab api workspaces -o /tmp/workspaces.json
```
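A debugging round-trip sketched with a canned response; the JSON body is invented, and with the real CLI you would produce the file via `fab api workspaces -o /tmp/workspaces.json` instead of `printf`:

```shell
# Stand-in for the saved response from `fab api workspaces -o /tmp/workspaces.json`
printf '{"value":[{"id":"1","displayName":"Sales"}]}' > /tmp/workspaces.json

# Pretty-print the captured response for inspection
python3 -m json.tool /tmp/workspaces.json
```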
## Common workflows

These are the most common workflows you'll encounter in Fabric.
### Finding or exploring workspaces, items, or metadata

| Command | Purpose | Example |
|---|---|---|
| `fab ls` | List workspaces / items | `fab ls "Sales.Workspace" -l` |
| `fab exists` | Check if a path exists | `fab exists "Sales.Workspace/Model.SemanticModel"` |
| `fab get` | Get item details | `fab get "Sales.Workspace" -q "id"` |
| `fab desc` | Supported commands per type | `fab desc .SemanticModel` |

Flags: `-l` (long listing), `-a` (show hidden items), `-q` (JMESPath filter), `-v` (verbose output), `-o` (save response to file)
Fabric discovery follows a drill-down pattern:

- Browsing:
  - List workspaces: `fab ls`
  - List items in a workspace: `fab ls "ws.Workspace" -l`
  - Confirm a path exists: `fab exists "ws.Workspace/Item"`
  - Check what commands an item type supports: `fab desc .<ItemType>`
- Inspection:
  - Get item details: `fab get "ws.Workspace/Item"`
  - Pull a single field: `fab get "ws.Workspace" -q "id"`
- Cross-workspace search:
  - Rich metadata, no admin required: `scripts/search_across_workspaces.py`
  - Downstream reports for a given model: `scripts/get-downstream-reports.py`
  - Tenant-wide admin APIs: admin.md

Check references before exploring.
### Querying data

| Command | Purpose | Example |
|---|---|---|
| `fab get -q "definition"` | Get model schema | `fab get "ws.Workspace/Model.SemanticModel" -q "definition" -f` |
| `fab api -A powerbi` | Execute DAX | `fab api -A powerbi "groups/<ws-id>/datasets/<model-id>/executeQueries" -X post -i '{"queries":[{"query":"EVALUATE..."}]}'` |
| `fab ls` | Browse files / tables | `fab ls "ws.Workspace/LH.Lakehouse/Files"` |
| `fab table schema` | Lakehouse table schema | `fab table schema "ws.Workspace/LH.Lakehouse/Tables/sales"` |
| `fab cp` | Upload / download OneLake file | `fab cp ./local.csv "ws.Workspace/LH.Lakehouse/Files/"` |
| `duckdb` + `delta_scan` | Query Delta tables (requires DuckDB) | `duckdb -c "... delta_scan('abfss://<ws-id>@onelake.../<lh-id>/Tables/schema/table')"` |
| `duckdb` + `read_csv`/`json` | Query raw files (requires DuckDB) | `duckdb -c "... read_csv('abfss://.../Files/data.csv')"` |

Flags: `-A fabric|powerbi|storage|azure` (API audience), `-X get|post|put|delete|patch` (HTTP method), `-i` (JSON body or file), `-f` (skip sensitivity prompt on definition pulls)
Fabric exposes three query paths depending on the source; always prefer the wrapper scripts, which resolve IDs, hosts, and auth for you:

- Semantic models (DAX):
  - Find model fields first: `fab get "ws.Workspace/Model.SemanticModel" -q "definition"`
  - Query: `scripts/execute_dax.py`
- Lakehouses / Warehouses via Delta over OneLake (DuckDB):
  - Query a single table: `scripts/query_lakehouse_duckdb.py` (use `tbl` as a placeholder and pass `-t schema.table`)
  - Multi-table joins or raw files in `Files/`: pass `--sql` with your own `delta_scan()` / `read_csv` / `read_json_auto` calls
  - Optionally scaffold a Direct Lake model instead: `scripts/create_direct_lake_model.py`
- Lakehouse SQL endpoint, Warehouse, or SQL Database (T-SQL via `sqlcmd`):
  - Query any SQL-capable item: `scripts/query_sql_endpoint.py` (auto-detects host per item type, reuses `az login` via `ActiveDirectoryAzCli`)
  - Prefer this over DuckDB when you need `INFORMATION_SCHEMA`, `sys.*` metadata, CTEs, or window functions

Check references before writing queries.
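The DuckDB-vs-SQL-endpoint heuristic can be sketched as a tiny dispatcher. The routing logic is illustrative only (the real decision also weighs CTEs and window functions); the script names are the skill's wrappers:

```shell
# Hypothetical router: pick a query path from what the SQL needs
pick_query_path() {
  case "$1" in
    *INFORMATION_SCHEMA*|*sys.*)              # T-SQL metadata views need the SQL endpoint
      echo "scripts/query_sql_endpoint.py" ;;
    *)                                        # plain selects can go via DuckDB over Delta
      echo "scripts/query_lakehouse_duckdb.py" ;;
  esac
}

pick_query_path "SELECT * FROM INFORMATION_SCHEMA.TABLES"
# prints: scripts/query_sql_endpoint.py
```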
### Changing metadata or access (descriptions, tags, endorsement, properties, bindings, permissions)

| Command | Purpose | Example |
|---|---|---|
| `fab set` | Update property | `fab set "ws.Workspace/Item" -q displayName -i "New Name"` |
| `fab mv` | Rename / move item | `fab mv "ws/Old.Notebook" "ws/New.Notebook" -f` |
| `fab acl ls` | List permissions | `fab acl ls "ws.Workspace"` |
| `fab acl set` | Grant permission | `fab acl set "ws.Workspace" -I <objectId> -R Member` |
| `fab acl rm` | Revoke permission | `fab acl rm "ws.Workspace" -I <upn>` |
| `fab label set` | Set sensitivity label | `fab label set "ws/Nb.Notebook" --name Confidential` |

Flags: `-q <field>` + `-i <value>` (set a single property), `-I` (object ID or UPN for `fab acl`), `-R Admin|Member|Contributor|Viewer` (role for `fab acl set`), `-f` (skip confirmation; ask the user first if sensitivity labels are in play)
Metadata and access changes fall into a few groups:

- Properties (displayName, description, sensitivity config):
  - Native update: `fab set "<path>" -q <field> -i "<value>"`
  - Capture current state first so you can revert: `fab get -v -o /tmp/before.json`
- Endorsement, certification, and tags (no first-class `fab` commands):
  - Patch via `fab api` with item-specific endpoints
  - Tag workflow: tags.md
  - Endorsement patterns: reference.md
- Folder placement:
  - Move items between workspace subfolders: folders.md
- Access control and sensitivity labels:
  - Grant / revoke: `fab acl set`, `fab acl rm`
  - Set sensitivity label: `fab label set`
  - Verify the principal first: `az ad user show`
  - Never change permissions or labels without explicit user confirmation
- Bindings:
  - Rebind a thin `.Report` to a different `.SemanticModel`: reports.md
  - Semantic model source rebinds (e.g. swap a lakehouse): semantic-models.md

Check references before changing metadata.
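The capture-first habit as a runnable sketch; `fab` is stubbed so the snippet runs offline, and the property value is invented — the real commands are `fab get ... -v -o /tmp/before.json` followed by `fab set ...`:

```shell
# Stub standing in for the real CLI; returns the item's current properties
fab() { printf '{"displayName":"Old Name"}\n'; }

# 1. Snapshot current state so any change can be reverted
fab get "ws.Workspace/Item" -v > /tmp/before.json

# 2. Only then apply the update
fab set "ws.Workspace/Item" -q displayName -i "New Name" > /dev/null

grep -q '"Old Name"' /tmp/before.json && echo "revert snapshot saved"
# prints: revert snapshot saved
```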
### Working with workspaces

| Command | Purpose | Example |
|---|---|---|
| `fab mkdir` | Create workspace / item | `fab mkdir "New.Workspace" -P capacityname=MyCapacity` |
| `fab assign` | Attach capacity / domain | `fab assign .capacities/cap.Capacity -W ws.Workspace -f` |
| `fab unassign` | Detach capacity / domain | `fab unassign .capacities/cap.Capacity -W ws.Workspace` |
| `fab start` / `fab stop` | Resume / pause capacity | `fab start .capacities/cap.Capacity` |
| `fab cp -r` | Fork workspace | `fab cp "dev.Workspace" "prod.Workspace" -r -f` |
| `fab rm` | Soft-delete (see recovery) | `fab rm "ws/Item.Type" -f` |

Flags: `-P key=value` (creation params for `fab mkdir`), `-W` (target workspace for `fab assign` / `fab unassign`), `-r` (recursive copy/move), `-bpc` (block on path collision for `fab cp`), `-f` (skip confirmation)
Workspace-scope operations fall into a few groups:

- Create and provision:
  - Create workspace: `fab mkdir "<Name>.Workspace" -P capacityname=<cap>`
  - Attach capacity or domain: `fab assign .capacities/<cap>.Capacity -W <ws>.Workspace`
  - Planning context, create/get/set surface, large storage format, Spark pools, OneLake defaults, Git: workspaces.md
- Copy, fork, download:
  - Duplicate a workspace in-tenant: `fab cp -r "dev.Workspace" "prod.Workspace"`
  - Dry-run the source tree first: `fab ls "dev.Workspace"`
  - Full local snapshot (items + lakehouse files): `scripts/download_workspace.py`
- Permissions:
  - Inspect / grant / revoke: `fab acl ls | set | rm`
  - Tenant-wide governance audit: use the `audit-tenant-settings` skill from the `fabric-admin` plugin
- Connections and gateways (bound to, but outside, the workspace):
  - Credential types (WorkspaceIdentity, SPN, Basic), OAuth2 limits: connections.md
  - Datasource binding, credential rotation: gateways.md
- Folders inside a workspace:
  - Layout, nesting, conventions: folders.md

Check references before modifying workspaces.
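The dry-run-before-fork step, sketched with a stubbed `fab ls`; the item names are invented — the point is to eyeball and count the source tree before committing to `fab cp -r ... -f`:

```shell
# Stub for `fab ls "dev.Workspace"`: the real call lists the source tree
fab() { printf 'ETL.Notebook\nMainLH.Lakehouse\nSales.Report\n'; }

items=$(fab ls "dev.Workspace")
count=$(echo "$items" | wc -l | tr -d ' ')
echo "would copy $count items"    # review this list before forking
# prints: would copy 3 items
```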
### Executing or scheduling jobs (notebooks, notebook cells, pipelines, semantic model refresh)

| Command | Purpose | Example |
|---|---|---|
| `fab job run` | Run synchronously | `fab job run "ws/ETL.Notebook" -P date:string=2025-01-01` |
| `fab job start` | Run asynchronously | `fab job start "ws/ETL.Notebook"` |
| `fab job run-list` | List executions | `fab job run-list "ws/Nb.Notebook"` |
| `fab job run-status` | Check status | `fab job run-status "ws/Nb.Notebook" --id <job-id>` |
| `fab job run-cancel` | Cancel a job | `fab job run-cancel "ws/Nb.Notebook" --id <job-id> -w` |
| `fab api -A powerbi .../refreshes` | Trigger semantic model refresh | `fab api -A powerbi "groups/<ws-id>/datasets/<model-id>/refreshes" -X post -i '{"type":"Full"}'` |

Flags: `-P key:type=value` (parameters, type is `string|int|bool`), `--id` (job run ID), `-w` (wait on cancel), `--timeout` (overall timeout for synchronous runs), `--polling_interval` (status poll cadence)
Jobs map to different endpoints depending on item type:

- Notebooks and pipelines:
  - Run synchronously: `fab job run "ws/ETL.Notebook" -P date:string=2025-01-01`
  - Run asynchronously: `fab job start "ws/ETL.Notebook"`
  - Check status: `fab job run-status "ws/Nb.Notebook" --id <job-id>`
  - List history: `fab job run-list "ws/Nb.Notebook"`
  - Python / PySpark kernels, Livy sessions, cell-level CRUD: notebooks.md
- Semantic model refresh (not exposed as `fab job`):
  - Trigger: `fab api -A powerbi "groups/<ws-id>/datasets/<model-id>/refreshes" -X post -i '{"type":"Full"}'`
  - Check the current run before starting a new one (409 if already running): `fab api -A powerbi "groups/<ws-id>/datasets/<model-id>/refreshes?\$top=1"`
  - Enhanced refresh, incremental policies, partition targeting: semantic-models.md
- Dataflow refresh:
  - Gen1 and Gen2 have different endpoints: dataflows.md
- Scheduling:
  - Per-item schedules via the scheduler API: notebooks.md, reference.md

Check references before running jobs.
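Polling an async run, sketched with a stub that completes on the third check. Against the real service this would be `fab job start` followed by repeated `fab job run-status ... --id <job-id>` calls with `--polling_interval`-style waits between them:

```shell
attempt=0
poll_status() {   # stand-in for `fab job run-status "ws/Nb.Notebook" --id <job-id>`
  if [ "$attempt" -ge 3 ]; then echo "Completed"; else echo "InProgress"; fi
}

status="InProgress"
while [ "$status" != "Completed" ]; do
  attempt=$((attempt + 1))        # against the real service, sleep here between polls
  status=$(poll_status)
done
echo "$status after $attempt polls"
# prints: Completed after 3 polls
```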
### Fabric admin operations (auditing, management)

| Command | Purpose | Example |
|---|---|---|
| `fab api "admin/items"` | Cross-workspace item search | `fab api "admin/items" -P "type=SemanticModel" -q "itemEntities[?contains(name,'Sales')]"` |
| `fab api "admin/workspaces"` | Workspace inventory | `fab api "admin/workspaces"` |
| `fab api "admin/tenantsettings"` | Tenant settings | `fab api "admin/tenantsettings"` |
| `fab api "admin/capacities"` | Capacity inventory | `fab api "admin/capacities"` |
| `fab api -X post .../update` | Update tenant setting | `fab api -X post "admin/tenantsettings/<name>/update" -i body.json` |

Flags: `-P key=value` (query params, e.g. `type=SemanticModel`), `-q` (JMESPath filter), `-X post` + `-i` (write ops), `--show_headers` (inspect `Retry-After` on 429)

Admin-scope work is gated behind the Fabric / Power BI admin role. Confirm access first with `fab api "admin/capacities" 2>&1 | head -5`; if it errors, stop rather than retry.
Two entry points cover most admin tasks:

- Governance audits (tenant settings, delegated overrides, Entra SG scoping):
  - Use the `audit-tenant-settings` skill from the `fabric-admin` plugin. It owns the curated metadata baseline, the audit + change-detection script, delegated-override enumeration, and the Entra SG investigation workflow.
  - Invoke it whenever the question combines tenant posture with group membership, override scope, or drift against the baseline.
- Raw admin APIs (cross-workspace search, activity events, artifact access, item search):
  - Patterns in admin.md
  - Rate limit: 25 write requests / minute; honor `Retry-After` on 429
  - Print the exact command and wait for user confirmation before any destructive admin operation

Check references before admin work:

- admin.md
- permissions.md for workspace / item ACL exposure audits
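The 25-writes-per-minute admin rate limit can also be respected client-side with a simple budget guard. A hypothetical sketch — the counter and helper are invented, and the authoritative signal remains a 429 response with a `Retry-After` header:

```shell
# Hypothetical budget guard for admin write calls (25 per rolling minute)
writes=0
can_write() { [ "$writes" -lt 25 ]; }

i=0
while [ "$i" -lt 30 ]; do                 # attempt 30 writes within one minute...
  if can_write; then
    writes=$((writes + 1))                # ...only 25 are actually sent
  fi                                      # otherwise: wait for the next minute, or honor Retry-After
  i=$((i + 1))
done
echo "$writes writes sent"
# prints: 25 writes sent
```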
### Definitions and deployment (item definitions, deployment pipelines, git integration, CI/CD)

| Command | Purpose | Example |
|---|---|---|
| `fab get -q "definition"` | Read raw definition | `fab get "ws/Model.SemanticModel" -q "definition" -f` |
| `fab export` | Export item to local | `fab export "ws/Nb.Notebook" -o ./backup -f` |
| `fab import` | Import item from local | `fab import "ws/Nb.Notebook" -i ./backup/Nb.Notebook -f` |
| `fab cp` | Copy between workspaces | `fab cp "dev/Item" "prod.Workspace" -f` |
| `fab api "deploymentPipelines"` | Deployment pipelines API | `fab api "deploymentPipelines" -q "value[]"` |

Flags: `-o` (output path for `fab export`), `-i` (input path or JSON body for `fab import`), `--format` (definition format for export / import), `-f` (skip overwrite and sensitivity prompts)
Every Fabric item has a serializable definition. Move definitions between environments depending on scope:

- Single item:
  - Round-trip locally: `fab export` then `fab import` (always `mkdir -p` the output directory first; `fab export` does not create intermediate directories and fails with `[InvalidPath]`)
  - Same-tenant shortcut, no local hop: `fab cp "dev/Item" "prod.Workspace"`
- Semantic model as PBIP (TMDL + blank report):
  - Power BI Desktop and git-ready format: `scripts/export_semantic_model_as_pbip.py`
- Full workspace snapshot (items + lakehouse files):
  - Backups, offline analysis, cross-tenant forks: `scripts/download_workspace.py`
- Promotion between Dev, Test, Prod:
  - Fabric deployment pipelines API (covers all item types)
  - Power BI pipelines API (Power BI items only, but finer-grained deploy flags like `allowPurgeData`, `allowTakeOver`)
  - When to use each, selective deploy, LRO polling: deployment-pipelines.md
- Git integration (connect workspace to repo, branch, commit, update from git):
  - Workspace git section in workspaces.md

Check references before deploying:

- import-download-deploy.md: export / import / copy / move, PBIP round-trips, migration patterns, rebinding gotchas
- deployment-pipelines.md
- semantic-models.md
- reports.md
- paginated-reports.md
- notebooks.md
- workspaces.md
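The single-item round-trip with a stubbed `fab`, showing the mandatory `mkdir -p` step. The stub's behavior is invented; the real `fab export` writes the item definition into the output directory and fails with `[InvalidPath]` if it does not exist:

```shell
fab() {   # stand-in for `fab export ITEM -o DIR -f` / `fab import ITEM -i PATH -f`
  case "$1" in
    export) touch "$4/Nb.Notebook" ;;   # pretend a definition landed in DIR
    import) [ -e "$4" ] ;;              # pretend import validates the input path
  esac
}

mkdir -p ./backup                       # create the target first: the real export won't
fab export "ws.Workspace/Nb.Notebook" -o ./backup -f
fab import "ws.Workspace/Nb.Notebook" -i ./backup/Nb.Notebook -f && echo "round-trip ok"
# prints: round-trip ok
```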
## Related skills

- `audit-tenant-settings` (in the `fabric-admin` plugin): Fabric governance workflow covering tenant settings, delegated overrides (capacity / domain / workspace), and the Entra security groups those settings reference. Read-only; holds the curated metadata baseline and the audit + change-detection script.
## Gotchas

- IMPORTANT: DON'T try to use `fab ls` on items that aren't data items (`.Lakehouse`, `.Warehouse`, etc.); use `fab ls` to find workspaces and items, and use `fab get` to look at definitions
- ALWAYS use the `-f` flag with `fab get`, `fab import`, `fab export`, etc., as described above
- ONLY fall back to `fab api` when a command doesn't exist
## References
Skill references:
- Import, Download, and Deploy - Export / import / copy / move items, PBIP round-trips, dev-to-prod migration patterns
- Querying Data - Query semantic models in DAX and lakehouses or warehouses in SQL with DuckDB
- Lakehouses - Endpoints, file/table operations, OneLake paths
- Warehouses - Create, browse, query via DuckDB, load data
- SQL Databases - Create, browse, query via DuckDB, auto-mirroring
- Semantic Models - TMDL, DAX, refresh, storage mode
- Reports - Export, import, visuals, fields
- Paginated Reports - RDL upload, export-to-file, datasources, parameters
- Notebooks - Python/PySpark kernels, metadata, cell CRUD, Livy execution, scheduling
- Workspaces - Create, manage, permissions
- Permissions - Sharing and distribution, workspace roles, item permissions, apps, embed, B2B, deployment pipeline permissions, licensing and capacity SKUs
- Deployment Pipelines - CI/CD, deploy stages, selective deploy, LRO polling
- Dataflows - Gen1 and Gen2, refresh, publish, admin
- Dashboards - Tiles, clone (dashboards are not reports)
- Org Apps - Read-only API for distributed content packages
- Scorecards - Goals, check-ins, status rules (Preview API)
- Gateways - Datasources, credentials, dataset binding
- Folders - Organize items into folders via API; includes best practices for structuring workspaces
- Tags - Create, apply, and audit tenant/domain tags on items and workspaces via `fab api` (no native `fab tag` command)
- fab vs az CLI - When to use which; capacity, networking, Key Vault, monitoring, CMK, CI/CD
- Admin APIs - Cross-workspace search, tenant operations, governance
- API Reference - Capacities, domains, misc API patterns
- Connections - Create, update, list connections programmatically; credential types (WorkspaceIdentity, SPN, Basic); OAuth2 limitations
- Full Command Reference - All commands detailed
Scripts (scripts that you can execute):
- search_across_workspaces.py - cross-workspace item search via the DataHub V2 API; filters by type, owner, storage mode, last visited, capacity SKU
- get-downstream-reports.py - find all reports connected to a given semantic model across accessible workspaces (no admin required)
- execute_dax.py - execute DAX queries against semantic models; output as table, csv, or json
- query_lakehouse_duckdb.py - query lakehouse or warehouse Delta tables via DuckDB against OneLake (reuses `az login`); output as table, csv, or json
- query_sql_endpoint.py - query lakehouse SQL endpoint, warehouse, or SQL database via `sqlcmd` (reuses `az login` through `ActiveDirectoryAzCli`); output as table, csv, or json
- create_direct_lake_model.py - create a Direct Lake semantic model from lakehouse tables
- export_semantic_model_as_pbip.py - export a semantic model as a PBIP project (TMDL definition + blank report)
- download_workspace.py - download a full workspace with all item definitions and lakehouse files
See scripts/README.md for detailed usage, arguments, and examples. Always search the scripts/ folder before writing a new helper; a script may already exist for the task.
External references (request markdown when possible):
- fab CLI: GitHub Source | Docs
- Microsoft: Fabric CLI Learn
- APIs: Fabric API | Power BI API
- DAX: dax.guide - use `dax.guide/<function>/`, e.g. `dax.guide/addcolumns/`
- Power Query: powerquery.guide - use `powerquery.guide/function/<function>`
- Power Query Best Practices