test-mobile-app
# Mobile App Testing Skill
This skill enables Claude to perform end-to-end mobile application testing:
- Analyze the app structure and infer user-facing functionality
- Generate use cases from an end-user perspective
- Write concrete test scenarios with expected results
- Execute tests via Appium + Android emulator (or interpret results statically)
- Produce a structured HTML/Markdown test report
## Phase 1 — App Analysis

### What to collect
Before generating use cases, gather as much context as possible:
- Source code (Android/Java/Kotlin, iOS/Swift, React Native, Flutter)
- APK file — use `androguard` to extract the Activity list, permissions, and Manifest
- Screenshots — analyze UI from images
- Description — what the app does, target audience
### APK Analysis (Android)

Read scripts/analyze_apk.py for the full script. Quick usage:

```bash
python3 scripts/analyze_apk.py path/to/app.apk
```
Outputs: package name, activities, permissions, strings → feeds into use case generation.
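The analyzer's output can be fed straight into Phase 2. A minimal sketch, assuming the script emits JSON with `activities` and `permissions` keys (hypothetical field names — check the script's actual output shape):

```python
def surfaces_from_analysis(analysis: dict) -> list[str]:
    """Turn APK analysis output into candidate use-case surfaces."""
    surfaces = []
    for activity in analysis.get("activities", []):
        # Strip the package prefix: com.example.app.LoginActivity -> LoginActivity
        surfaces.append(activity.rsplit(".", 1)[-1])
    # Requested permissions hint at flows worth testing (camera, location, ...)
    for perm in analysis.get("permissions", []):
        if perm.startswith("android.permission."):
            surfaces.append(f"permission:{perm.rsplit('.', 1)[-1]}")
    return surfaces

sample = {
    "package": "com.example.app",
    "activities": ["com.example.app.MainActivity", "com.example.app.LoginActivity"],
    "permissions": ["android.permission.CAMERA"],
}
print(surfaces_from_analysis(sample))
# -> ['MainActivity', 'LoginActivity', 'permission:CAMERA']
```

Each surface then becomes a candidate row in the use-case list.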
### Source Code Analysis
If source is available, scan for:
- Screen/Activity/Fragment/Page names → each is a potential use case surface
- Navigation graphs (React Navigation, NavController)
- API endpoints called (network requests)
- Form fields, validation logic
- Authentication flows
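A quick heuristic pass over source files can surface screen names automatically. A sketch — the patterns below are illustrative naming conventions, not an exhaustive scanner:

```python
import re

# Patterns for screen-like classes across common mobile stacks (heuristic)
SCREEN_PATTERNS = [
    r"class\s+(\w+Activity)\b",             # Android Java/Kotlin activities
    r"class\s+(\w+Fragment)\b",             # Android fragments
    r"struct\s+(\w+View)\s*:\s*View\b",     # SwiftUI views
    r"(?:function|const)\s+(\w+Screen)\b",  # React Native screen convention
]

def find_screens(source: str) -> list[str]:
    """Return candidate screen names found in a source file's text."""
    found = []
    for pattern in SCREEN_PATTERNS:
        found.extend(re.findall(pattern, source))
    return sorted(set(found))

kotlin_src = """
class LoginActivity : AppCompatActivity() { }
class SettingsFragment : Fragment() { }
"""
print(find_screens(kotlin_src))  # -> ['LoginActivity', 'SettingsFragment']
```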
## Phase 2 — Use Case Generation

### Methodology
Think from the perspective of a real end user — not a developer. Ask: "What would a person actually do with this app?"
Use case format:

```text
UC-<N>: <Short Title>
Actor: End User
Precondition: <What must be true before this action>
Steps:
  1. <action>
  2. <action>
  ...
Expected outcome: <what the user sees/gets>
Priority: High / Medium / Low
```
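To keep the format consistent across 15–30 use cases, the template can be rendered programmatically. A small sketch (helper name and signature are illustrative, not part of the skill's scripts):

```python
def render_use_case(n, title, precondition, steps, expected,
                    priority="Medium", actor="End User"):
    """Render one use case into the UC-<N> text format above."""
    lines = [
        f"UC-{n}: {title}",
        f"Actor: {actor}",
        f"Precondition: {precondition}",
        "Steps:",
    ]
    lines += [f"{i}. {step}" for i, step in enumerate(steps, 1)]
    lines += [f"Expected outcome: {expected}", f"Priority: {priority}"]
    return "\n".join(lines)

print(render_use_case(
    1, "Log in with valid credentials",
    precondition="User has a registered account",
    steps=["Open the app", "Enter email and password", "Tap 'Log in'"],
    expected="Home screen is displayed with the user's name",
    priority="High",
))
```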
### Use Case Categories to Always Cover
- Onboarding — first launch, tutorial, permissions prompt
- Authentication — registration, login, logout, password reset
- Core Feature Flow — the primary value action of the app (1-3 flows)
- Data Entry — any form: required fields, validation, error states
- Navigation — bottom nav, back button, deep links
- Empty States — what happens when there's no data
- Error Handling — no internet, server error, invalid input
- Settings / Profile — change preferences, update data
- Notifications — if the app uses push notifications
- Accessibility — basics: is the text readable, are tap targets large enough?
Aim for 15–30 use cases depending on app complexity.
## Phase 3 — Test Scenario Writing
For each use case, write a test scenario:
```text
TEST-<N>: <Title>
Related UC: UC-<N>
Type: Functional | UI | Regression | Smoke
Steps:
  1. Launch app
  2. <specific action with exact input data>
  3. ...
Assertions:
  - Element <locator> is visible
  - Text "<expected>" is displayed
  - Screen navigates to <ScreenName>
  - No crash / error dialog
Expected Result: PASS / FAIL criteria
```
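The four assertion kinds in the template can be checked mechanically against a captured screen state. A simplified sketch, assuming a hypothetical snapshot dict (visible element ids, on-screen texts, current screen name, crash flag) rather than a live driver:

```python
def evaluate_assertions(assertions, screen):
    """Check a test's assertions against a captured screen state."""
    results = []
    for kind, expected in assertions:
        if kind == "visible":
            ok = expected in screen["elements"]
        elif kind == "text":
            ok = expected in screen["texts"]
        elif kind == "screen":
            ok = screen["name"] == expected
        elif kind == "no_crash":
            ok = not screen["crashed"]
        else:
            ok = False  # unknown assertion kinds fail loudly
        results.append((kind, expected, "PASS" if ok else "FAIL"))
    return results

screen = {"elements": {"btn_login"}, "texts": ["Welcome back"],
          "name": "Home", "crashed": False}
checks = [("visible", "btn_login"), ("text", "Welcome back"),
          ("screen", "Home"), ("no_crash", None)]
print(evaluate_assertions(checks, screen))
```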
### Test Types to Include
| Type | When to use |
|---|---|
| Smoke | Quick sanity — does app launch, core screens load? |
| Functional | Does feature X work correctly? |
| UI/Visual | Are elements present, correctly labeled, accessible? |
| Edge Case | Empty fields, special characters, very long strings |
| Regression | After a change — did existing features break? |
## Phase 4 — Test Execution

### Environment Setup
Read references/setup-appium.md for full Appium + emulator setup.
Quick check:
```bash
python3 scripts/check_environment.py
```
This verifies: adb, emulator, Appium server, Python client.
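A minimal version of such a check can be sketched with stdlib tools alone. The real check_environment.py may verify more (e.g. that the Appium server actually responds), so treat this as a sketch of the idea:

```python
import shutil

def check_environment() -> dict:
    """Report which required tools are reachable (sketch)."""
    # PATH lookups for the command-line tools
    tools = {name: shutil.which(name) is not None
             for name in ("adb", "emulator", "appium")}
    # The Appium Python client is an importable package, not a binary
    try:
        import appium  # noqa: F401
        tools["python-client"] = True
    except ImportError:
        tools["python-client"] = False
    return tools

for tool, ok in check_environment().items():
    print(f"{tool:14} {'OK' if ok else 'MISSING'}")
```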
### Running Tests

```bash
# Run all tests
python3 scripts/run_tests.py --apk path/to/app.apk --output results/

# Run smoke tests only
python3 scripts/run_tests.py --apk path/to/app.apk --suite smoke --output results/

# Run on a specific device
python3 scripts/run_tests.py --apk path/to/app.apk --device emulator-5554 --output results/
```
### Test Execution Without Emulator (Static Mode)
If no emulator is available, Claude can:
- Analyze source code / screenshots statically
- Write all test scenarios
- Mark execution status as `MANUAL_REQUIRED`
- Generate a report with all test cases ready to be run manually
Use the `--static` flag:

```bash
python3 scripts/run_tests.py --static --output results/
```
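The flags shown in this phase suggest a CLI along these lines. This is a sketch of the interface only; the real run_tests.py is authoritative, and the `--suite` choices here are assumptions:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """Mirror of the run_tests.py flags used above (sketch)."""
    p = argparse.ArgumentParser(prog="run_tests.py")
    p.add_argument("--apk", help="Path to the APK under test")
    p.add_argument("--suite",
                   choices=["smoke", "functional", "ui", "edge", "regression"],
                   help="Run only one test suite")
    p.add_argument("--device", help="adb device serial, e.g. emulator-5554")
    p.add_argument("--static", action="store_true",
                   help="No emulator: mark tests MANUAL_REQUIRED")
    p.add_argument("--output", default="results/", help="Results directory")
    return p

args = build_parser().parse_args(["--apk", "app.apk", "--suite", "smoke"])
print(args.suite, args.static)  # -> smoke False
```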
## Phase 5 — Report Generation

```bash
python3 scripts/generate_report.py --results results/ --output test_report.html
```
Report includes:
- Summary: total tests, passed, failed, skipped
- Per-test details: steps, assertions, actual vs expected, screenshots
- Use case coverage matrix
- Issues found (with severity: Critical / Major / Minor)
- Environment info (device, OS, app version)
Read references/report-template.md for report structure details.
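The summary block can be derived directly from per-test results. A sketch, assuming each result record carries a `status` field (an assumed shape — match it to what run_tests.py actually writes):

```python
from collections import Counter

def summarize(results: list[dict]) -> dict:
    """Compute the report's summary counts from per-test results."""
    counts = Counter(r["status"] for r in results)
    return {
        "total": len(results),
        "passed": counts.get("PASS", 0),
        "failed": counts.get("FAIL", 0),
        # Static-mode tests count as skipped until run manually
        "skipped": counts.get("SKIP", 0) + counts.get("MANUAL_REQUIRED", 0),
    }

results = [{"status": "PASS"}, {"status": "FAIL"}, {"status": "MANUAL_REQUIRED"}]
print(summarize(results))
# -> {'total': 3, 'passed': 1, 'failed': 1, 'skipped': 1}
```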
## Workflow Summary

```text
1. Receive app (APK / source / description / screenshots)
        ↓
2. Run analyze_apk.py OR inspect source code
        ↓
3. Generate use cases (UC-1...UC-N) — show to user, ask for feedback
        ↓
4. Write test scenarios (TEST-1...TEST-N) — derive from use cases
        ↓
5. Check environment (check_environment.py)
        ↓
6a. Emulator available → run_tests.py → capture results
6b. No emulator → static mode → mark for manual execution
        ↓
7. generate_report.py → HTML report → present to user
```
## Important Notes
- Always show use cases to the user before writing tests — they know their app best.
- Locators: Prefer `accessibility id` > `resource-id` > `xpath`. Never use index-based XPath.
- Waits: Always use explicit waits (`WebDriverWait`), never `time.sleep`.
- Screenshots: Capture on every assertion failure automatically.
- Crash detection: After every interaction, check for crash dialogs (`scripts/crash_detector.py`).
- Language: Generate use cases and reports in the language the user is using.
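The locator rule can be enforced with a small helper. A sketch, assuming a hypothetical attribute dict captured from the UI tree (the strategy names match Appium's `accessibility id` / `id` / `xpath` conventions):

```python
def best_locator(element: dict):
    """Pick the highest-priority locator available for an element,
    following the rule above: accessibility id > resource-id > xpath."""
    if element.get("content-desc"):
        return ("accessibility id", element["content-desc"])
    if element.get("resource-id"):
        return ("id", element["resource-id"])
    # Heuristic: reject XPath containing predicates/indices like [2]
    if element.get("xpath") and "[" not in element["xpath"]:
        return ("xpath", element["xpath"])
    raise ValueError("No stable locator; index-based XPath is not allowed")

print(best_locator({"resource-id": "com.example:id/login_btn"}))
# -> ('id', 'com.example:id/login_btn')
```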