powerbi-mcp-server-tester
# PowerBI MCP Server Tester

Automated testing framework for the PowerBI MCP server that validates functionality, Azure AD authentication, and PowerBI tool implementations using MCP Inspector and Playwright automation.
## Prerequisites

Before testing, ensure:

- **Environment Variables**: PowerBI credentials must be set in a `.env` file:
  - `POWERBI_TENANT_ID` - Azure AD tenant ID
  - `POWERBI_CLIENT_ID` - Application (client) ID
  - `POWERBI_CLIENT_SECRET` - Client secret value
- **MCP Inspector**: Fetched on demand via `npx` (no manual installation needed)
- **Playwright MCP**: Must be enabled as an MCP server
- **PowerBI MCP Server**: Must be in the powerbi-mcp project directory with server code in `src/`
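The environment-variable prerequisite can be checked up front before launching anything. A minimal POSIX-shell sketch, assuming the three variables named above have already been exported (for example by sourcing the `.env` file); `check_powerbi_env` is a hypothetical helper, not part of the server:

```shell
# check_powerbi_env: report any missing required PowerBI variables and
# return non-zero if the environment is incomplete. Assumes the variables
# were already exported (e.g. by sourcing the .env file beforehand).
check_powerbi_env() {
  missing=""
  for var in POWERBI_TENANT_ID POWERBI_CLIENT_ID POWERBI_CLIENT_SECRET; do
    eval "val=\${$var:-}"
    if [ -z "$val" ]; then
      missing="$missing $var"
    fi
  done
  if [ -n "$missing" ]; then
    echo "Missing:$missing" >&2
    return 1
  fi
  return 0
}
```

Failing fast here avoids a confusing downstream authentication error once the Inspector is already running.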
## Testing Workflow

### Step 1: Verify Prerequisites

Ask the user to confirm:

- "Please confirm your PowerBI environment variables (POWERBI_TENANT_ID, POWERBI_CLIENT_ID, POWERBI_CLIENT_SECRET) are set in your `.env` file"
- "Should I test all PowerBI tools or focus on specific ones (workspaces, datasets, queries)?"
### Step 2: Launch MCP Inspector

Start the MCP Inspector with the PowerBI MCP server (assumes you're in the powerbi-mcp directory):

```bash
# Background task - don't block on this
npx @modelcontextprotocol/inspector uv run run-server
```

Wait 5-10 seconds for Inspector to start, then read the output to get the Inspector URL.
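Reading the URL out of the captured output can be scripted. A sketch, assuming the background task's output was redirected to a log file; the exact line MCP Inspector prints varies by version, so the pattern may need adjusting:

```shell
# extract_inspector_url: print the first local URL found in the Inspector's
# captured output. The log-file argument and the URL shape are assumptions.
extract_inspector_url() {
  grep -oE 'https?://(localhost|127\.0\.0\.1):[0-9]+[^ ]*' "$1" | head -n 1
}
```

If nothing matches, the Inspector likely failed to start; check the rest of the log for errors before retrying.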
### Step 3: Automated Testing with Playwright

Use Playwright MCP to:

1. **Navigate to Inspector**
   - Open the Inspector URL from the background task output
2. **Verify Connection**
   - Click the "Connect" button
   - Wait for "Connected" status
   - Verify the server name and version appear
3. **List Available Tools**
   - Click the "List Tools" button
   - Count and record all available tools
   - Identify which tools are read-only (GET/list/read operations)
4. **Test Each Read-Only Tool** - for each read-only tool:
   - Click on the tool to view its details
   - Review parameters and their types
   - If the tool has optional parameters, test with defaults first
   - If the tool has required parameters, use sample values or ask the user for test values
   - Click the "Run Tool" button
   - Wait for the result (success or error)
   - Record the outcome
5. **Take Screenshot**
   - Capture a screenshot of successful test results
   - Save as `mcp-test-results-<timestamp>.png`
6. **Close Browser**
   - Clean up the Playwright session
### Step 4: Generate Test Report

Create a comprehensive test report including:

```markdown
# PowerBI MCP Server Test Report

**Server**: powerbi-mcp
**Date**: <timestamp>
**Tester**: Claude (powerbi-mcp-server-tester skill)

## Summary

- Total Tools: <count>
- Read-Only Tools Tested: <count>
- Tests Passed: <count>
- Tests Failed: <count>
- Success Rate: <percentage>%

## Connection Test

- ✅/❌ Server Started
- ✅/❌ Inspector Connected
- ✅/❌ Authentication Successful
- Server Version: <version>

## Tool Test Results

### <tool-name-1>

- **Status**: ✅ PASS / ❌ FAIL
- **Parameters Tested**: <list>
- **Response Time**: <ms>
- **Result**: <brief summary>
- **Notes**: <any observations>

### <tool-name-2>

...

## Issues Found

<List any errors, warnings, or unexpected behaviors>

## Recommendations

<Suggest improvements or areas needing attention>

## Test Artifacts

- Screenshot: `mcp-test-results-<timestamp>.png`
- Inspector Output: `<task-output-file>`
```
## Safety Guidelines

**CRITICAL - Read-Only Testing Only:**

- ✅ **DO test**: `get_*`, `list_*`, `read_*`, `query_*`, `fetch_*`, `show_*`
- ❌ **NEVER test**: `create_*`, `update_*`, `delete_*`, `remove_*`, `modify_*`, `write_*`, `post_*`, `put_*`, `patch_*`

Before running any tool:

- Check the tool name for destructive keywords
- Review the tool description for side effects
- When in doubt, skip the tool and note it in the report

If a tool's safety is unclear, ask the user: "The tool <name> might be destructive. Should I test it? (I recommend skipping unless you're certain it's read-only)"
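The prefix rules above can be encoded as a small guard. A sketch in shell (`is_read_only_tool` is a hypothetical helper); note that it deliberately treats any unrecognized prefix as unsafe, matching the "when in doubt, skip" rule:

```shell
# is_read_only_tool: return 0 (safe to test) only for the known read-only
# prefixes; anything unrecognized falls through to the unsafe branch.
is_read_only_tool() {
  case "$1" in
    get_*|list_*|read_*|query_*|fetch_*|show_*) return 0 ;;
    *) return 1 ;;
  esac
}
```

A name-based check is only a first filter; the tool description should still be reviewed for side effects before running it.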
## Example Usage

User: "Test my PowerBI MCP server"

Claude:

1. Confirms environment variables are set
2. Launches Inspector with the PowerBI server
3. Tests `get_workspaces`, `get_datasets`, `get_dataset` (skips `query_dataset` if it requires specific IDs)
4. Generates a report showing all 4 tools, connection success, and test results
5. Provides a screenshot and recommendations
## Troubleshooting

**Inspector won't start:**

- Check that the server path is correct
- Verify all dependencies are installed (`uv sync`)
- Review error output from the background task

**Connection fails:**

- Verify environment variables are set correctly
- Check that authentication credentials are valid
- Review server logs for auth errors

**Tools fail with "not found" errors:**

- The tool may need specific IDs (workspace, dataset, etc.)
- Ask the user for sample IDs to use in testing
- Skip tools that require production data

**Playwright errors:**

- Ensure Playwright MCP is enabled
- Give the browser time to load (add delays if needed)
- Verify element selectors match the current Inspector UI
## Advanced Features

### Custom Test Parameters

Ask the user to provide test data:

> "For testing get_dataset, I need a dataset ID. Can you provide one from your workspace?"

### Focused Testing

Test only specific tools:

```
# User specifies: "Only test the workspace tools"
# Claude tests: get_workspaces, list_workspaces, etc.
```

### Continuous Testing

For CI/CD integration, save the report to a file and return an exit code based on the success rate.
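One way to derive that exit code, sketched in shell; the 100% pass threshold and the `report_exit_code` helper name are assumptions, and the threshold can be relaxed if some tools are expected to be skipped:

```shell
# report_exit_code: print the success rate and succeed only if every test
# passed, so a CI pipeline can gate directly on the function's exit status.
report_exit_code() {
  passed=$1
  total=$2
  if [ "$total" -gt 0 ]; then
    rate=$(( passed * 100 / total ))
  else
    rate=0
  fi
  echo "Success rate: ${rate}%"
  [ "$rate" -eq 100 ]
}
```

Printing the rate alongside returning the status keeps the CI log readable even when the gate fails.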
## Implementation Notes

- Use the Bash tool with `run_in_background=true` for the Inspector
- Use Playwright MCP for all browser automation
- Wait appropriate delays for async operations (5s for page load, 3s for tool execution)
- Handle errors gracefully and include them in the report
- Always close the browser and stop background tasks when done
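When driving this flow from a plain shell script instead of the Bash tool, the cleanup requirement in the last note can be enforced with a trap so the background Inspector is stopped even if the run aborts. A sketch, assuming the background task's PID was saved in `INSPECTOR_PID` (a hypothetical variable name):

```shell
# cleanup: stop the background Inspector process if one was recorded.
# Registered on EXIT so it fires on normal completion and on errors alike.
cleanup() {
  if [ -n "${INSPECTOR_PID:-}" ]; then
    kill "$INSPECTOR_PID" 2>/dev/null || true
  fi
}
trap cleanup EXIT
```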