
ISTQB Test Automation Engineer (CTAL-TAE v2.0)

Overview

This skill covers the full ISTQB Advanced Level Test Automation Engineering (CTAL-TAE) syllabus v2.0. It guides AI coding agents and testers through automation architecture, tool evaluation, framework design, CI/CD integration, reporting, verification, and continuous improvement.

Syllabus scope: 21 contact hours | 8 chapters | 40 learning objectives | K2–K4


When to Apply

Trigger Scenarios

| Scenario | Primary Rules | Command |
|---|---|---|
| Building a new TAF from scratch | generic-test-automation-architecture, taf-layering, design-patterns-in-automation | design-automation-architecture |
| Evaluating or selecting an automation tool | tool-selection-criteria, tool-evaluation-process | evaluate-and-select-tools |
| Implementing BDD tests | behavior-driven-development, taf-layering | |
| Implementing DDT | data-driven-testing | |
| Setting up Page Object Model | page-object-model, taf-layering | |
| Integrating tests into CI/CD | cicd-pipeline-integration, configuration-management-testware | implement-cicd-integration |
| Diagnosing test failures | test-failure-analysis, root-cause-analysis-automated-tests | conduct-automation-review |
| Improving an existing TAF | scripting-improvement-strategies, test-histogram-analysis | continuous-improvement-analysis |
| Applying design patterns | design-patterns-in-automation, solid-principles-in-automation | |
| Deploying automation (pilot) | pilot-and-deployment, deployment-risk-mitigation | |

Preconditions

  • Access to the SUT (or SUT specification)
  • Agreed test strategy and test levels in scope
  • Team skills and tooling constraints identified

Quick Decision Trees

What Automation Approach to Use?

Is the test stakeholder-facing (acceptance criteria)?
  ├── YES → BDD with Gherkin (behavior-driven-development.md)
  └── NO  → Is there lots of repetitive data variation?
               ├── YES → DDT (data-driven-testing.md)
               └── NO  → Will non-technical testers author tests?
                            ├── YES → KDT (keyword-driven-testing.md)
                            └── NO  → Structured scripting with POM (taf-layering.md)
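
When the tree lands on KDT, tests are authored as keyword tables rather than code. The sketch below is an illustrative, hand-rolled keyword interpreter (the keyword names, `KEYWORDS` registry, and `run_test` helper are assumptions, not part of the syllabus); real suites typically use a framework such as Robot Framework for this.

```python
# Minimal keyword-driven sketch: non-technical testers author rows of
# (keyword, arguments); the framework maps each keyword to a function.
KEYWORDS = {}

def keyword(name):
    """Register a function as the implementation of a keyword."""
    def register(fn):
        KEYWORDS[name] = fn
        return fn
    return register

@keyword("Open Login Page")
def open_login_page(ctx):
    ctx["page"] = "login"           # in real code: navigate the driver

@keyword("Enter Credentials")
def enter_credentials(ctx, user, pwd):
    ctx["user"], ctx["pwd"] = user, pwd

@keyword("Verify Page")
def verify_page(ctx, expected):
    assert ctx["page"] == expected, f"expected {expected}, got {ctx['page']}"

def run_test(steps):
    """Interpret a keyword table: each row is (keyword, *args)."""
    ctx = {}
    for name, *args in steps:
        KEYWORDS[name](ctx, *args)
    return ctx

# A test authored as data, not code:
ctx = run_test([
    ("Open Login Page",),
    ("Enter Credentials", "standard_user", "secret_sauce"),
    ("Verify Page", "login"),
])
```

The test definition is pure data, so a tester can add or reorder rows without touching the keyword implementations.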

How to Select a Tool?

Does a tool already work for the team / project?
  ├── YES → Only evaluate if there is a specific gap
  └── NO  → Map SUT interfaces (web / API / mobile / desktop)
               ├── Identify language the team knows
               ├── Score candidates: compatibility, skill, CI, cost, community
               ├── Build PoC with top 2
               └── Document and decide (tool-evaluation-process.md)

When to Improve Automation?

Pass rate < 95%?
  └── YES → Failure analysis → scripting-improvement-strategies.md

Execution time > budget?
  └── YES → Parallel execution → cicd-pipeline-integration.md

Maintenance > 1 day/sprint?
  └── YES → POM / DDT refactoring → maintainability-factors.md

Flaky tests > 2%?
  └── YES → Histogram + RCA → test-histogram-analysis.md
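
The four thresholds above can be checked mechanically from a run summary. This is a sketch only; the input dictionary's field names are illustrative assumptions, and the thresholds are the ones from the decision tree.

```python
# Map each threshold breach to the relevant guidance file.
def improvement_actions(summary):
    actions = []
    if summary["passed"] / summary["total"] < 0.95:          # pass rate < 95%
        actions.append("scripting-improvement-strategies.md")
    if summary["runtime_min"] > summary["budget_min"]:       # over time budget
        actions.append("cicd-pipeline-integration.md")
    if summary["maintenance_days_per_sprint"] > 1:           # > 1 day/sprint
        actions.append("maintainability-factors.md")
    if summary["flaky"] / summary["total"] > 0.02:           # flaky > 2%
        actions.append("test-histogram-analysis.md")
    return actions

actions = improvement_actions({
    "total": 200, "passed": 186, "flaky": 6,
    "runtime_min": 42, "budget_min": 30,
    "maintenance_days_per_sprint": 2,
})  # this example breaches all four thresholds
```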

Knowledge Areas

| Chapter | Topic | Key Rules | Impact |
|---|---|---|---|
| 1 | Introduction & Objectives | test-automation-overview, sdlc-models-and-automation, tool-selection-criteria | HIGH |
| 2 | Preparing for Automation | infrastructure-configuration, test-environments, tool-evaluation-process | HIGH |
| 3 | Architecture | generic-test-automation-architecture, taf-layering, page-object-model, design-patterns-in-automation, data-driven-testing, behavior-driven-development | CRITICAL |
| 4 | Implementing | pilot-and-deployment, deployment-risk-mitigation, maintainability-factors | CRITICAL |
| 5 | CI/CD Integration | cicd-pipeline-integration, configuration-management-testware, contract-testing | CRITICAL |
| 6 | Reporting & Metrics | data-collection-methods, test-failure-analysis, logging-levels, test-progress-reporting | HIGH |
| 7 | Verification | environment-verification, root-cause-analysis-automated-tests, static-analysis-automation-code | HIGH |
| 8 | Continuous Improvement | test-histogram-analysis, ai-ml-in-test-automation, schema-validation, sut-alignment-strategies, scripting-improvement-strategies | HIGH |

Critical Anti-Patterns

1. Monolithic Scripts (No Layering)

Problem: Test scripts contain raw locators, API calls, and assertions all mixed together. A single SUT change breaks dozens of tests.

Signs: find_element calls in test files; same locator string in 10+ places.

Fix: Apply taf-layering.md + page-object-model.md. Establish POM before scaling.

2. Hard-Coded Test Data (Instead of DDT)

Problem: Test data embedded in scripts. Adding a test case requires a developer. Data changes require editing multiple files.

Signs: send_keys("standard_user") in 20 test methods.

Fix: Extract to CSV/JSON files. Apply data-driven-testing.md.
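
One way the extraction can look in practice: read the cases from CSV at collection time so testers add rows without touching code. The file name and column names below are illustrative; `io.StringIO` stands in for a real file on disk.

```python
import csv
import io

# Stand-in for open("data/login_cases.csv") in a real suite
LOGIN_CASES = io.StringIO(
    "username,password,expected\n"
    "standard_user,secret_sauce,Products\n"
    "locked_out_user,secret_sauce,Error: locked out\n"
)

def load_cases(fileobj):
    """Return (username, password, expected) tuples for parametrization."""
    return [(r["username"], r["password"], r["expected"])
            for r in csv.DictReader(fileobj)]

cases = load_cases(LOGIN_CASES)
# With pytest this feeds straight into the decorator:
# @pytest.mark.parametrize("username,password,expected", load_cases(...))
```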

3. Skipping the Pilot Phase

Problem: Full-scale TAF built without validation. Architecture flaws discovered after large investment.

Signs: 200 tests written before any CI integration; team cannot maintain the suite.

Fix: Always pilot with 10–20 tests first. See pilot-and-deployment.md.

4. Ignoring Maintainability

Problem: Flaky tests tolerated; locators hard-coded; no code review for scripts. Suite becomes unusable within months.

Signs: Pass rate trending down; maintenance >3 days/sprint; time.sleep() everywhere.

Fix: Enforce linting rules; apply SOLID principles; regular histogram reviews. See maintainability-factors.md.
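
The `time.sleep()` smell is fixed by polling a condition with a timeout, which is the idea Selenium's `WebDriverWait` implements. Below is a framework-agnostic sketch of that polling loop; the `wait_until` helper and the simulated condition are illustrative, not a real WebDriver API.

```python
import time

def wait_until(condition, timeout=10.0, interval=0.2):
    """Poll `condition` until it returns a truthy value or timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError(f"condition not met within {timeout}s")

# Example: wait for a (simulated) element to appear
state = {"ready": False}
def becomes_ready():
    state["ready"] = True        # in real code: driver.find_element(...)
    return state["ready"]

element = wait_until(becomes_ready, timeout=1.0)
```

Unlike a fixed sleep, the wait returns as soon as the condition holds and fails loudly with a `TimeoutError` when it never does.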


Common Patterns

Page Object Model

# Adaptation layer: page object owns locators
from selenium.webdriver.common.by import By

class LoginPage:
    _USERNAME = (By.ID, "user-name")
    _PASSWORD = (By.ID, "password")
    _SUBMIT   = (By.ID, "login-button")

    def __init__(self, driver):
        self._driver = driver

    def login(self, username, password) -> "HomePage":
        self._driver.find_element(*self._USERNAME).send_keys(username)
        self._driver.find_element(*self._PASSWORD).send_keys(password)
        self._driver.find_element(*self._SUBMIT).click()
        return HomePage(self._driver)

# Test script: no locators, just domain language
def test_successful_login(driver):
    home = LoginPage(driver).login("standard_user", "secret_sauce")
    assert home.product_count() > 0

Data-Driven Testing

import pytest

@pytest.mark.parametrize("username,password,expected", [
    ("standard_user",   "secret_sauce", "Products"),
    ("locked_out_user", "secret_sauce", "Error: locked out"),
    ("invalid_user",    "wrong_pass",   "Error: credentials"),
])
def test_login(driver, username, password, expected):
    result = LoginPage(driver).login(username, password)
    assert expected in result.page_content()

BDD with Given-When-Then

Scenario: Successful login redirects to products page
  Given the login page is displayed
  When I login as "standard_user" with password "secret_sauce"
  Then I should see the "Products" page
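
Behind such a scenario, a BDD layer binds each Gherkin step to code. Real suites use a library such as behave or pytest-bdd for this; the hand-rolled registry below only illustrates the mapping, with regex capture groups becoming step arguments. All names (`STEPS`, `run_scenario`, the step functions) are illustrative assumptions.

```python
import re

STEPS = []

def step(pattern):
    """Register a step implementation under a Gherkin text pattern."""
    def register(fn):
        STEPS.append((re.compile(pattern), fn))
        return fn
    return register

@step(r'the login page is displayed')
def login_page(ctx):
    ctx["page"] = "login"

@step(r'I login as "(\w+)" with password "(\w+)"')
def do_login(ctx, user, pwd):
    # Stand-in for driving the SUT through the business-logic layer
    ctx["page"] = "Products" if pwd == "secret_sauce" else "Error"

@step(r'I should see the "(\w+)" page')
def see_page(ctx, expected):
    assert ctx["page"] == expected

def run_scenario(lines):
    """Strip the Gherkin keyword, then dispatch each line to its step."""
    ctx = {}
    for line in lines:
        text = re.sub(r"^(Given|When|Then|And)\s+", "", line.strip())
        for pattern, fn in STEPS:
            m = pattern.fullmatch(text)
            if m:
                fn(ctx, *m.groups())
                break
    return ctx

ctx = run_scenario([
    'Given the login page is displayed',
    'When I login as "standard_user" with password "secret_sauce"',
    'Then I should see the "Products" page',
])
```

The scenario text stays stakeholder-facing; only the step implementations know about the SUT.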

Detailed Instructions

Step 1 — Assess the SUT and Define Automation Strategy

  1. Identify SUT interfaces (web, API, mobile, database)
  2. Map to test levels (unit, integration, API, E2E)
  3. Select scripting approach for each level (BDD, DDT, structured)
  4. Reference: test-automation-overview.md, sdlc-models-and-automation.md

Step 2 — Evaluate and Select Tools

  1. Define weighted evaluation criteria
  2. Score candidate tools against criteria
  3. Build PoC with top 2 tools
  4. Document recommendation
  5. Reference: tool-selection-criteria.md, tool-evaluation-process.md
  6. Command: evaluate-and-select-tools.md
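
Steps 1 and 2 above can be sketched as a weighted-criteria score. The criteria, weights, candidate names, and 1–5 scores below are illustrative assumptions; the point is that the ranking falls out of the weighted sum, and the top two go forward to the PoC.

```python
CRITERIA = {                 # weights must sum to 1.0
    "sut_compatibility": 0.30,
    "team_skill_fit":    0.25,
    "ci_integration":    0.20,
    "cost":              0.15,
    "community":         0.10,
}

def weighted_score(scores):
    """scores: criterion -> rating on a 1..5 scale."""
    return sum(CRITERIA[c] * scores[c] for c in CRITERIA)

candidates = {
    "Tool A": {"sut_compatibility": 5, "team_skill_fit": 3,
               "ci_integration": 4, "cost": 4, "community": 5},
    "Tool B": {"sut_compatibility": 4, "team_skill_fit": 5,
               "ci_integration": 5, "cost": 3, "community": 4},
}
ranked = sorted(candidates, key=lambda t: weighted_score(candidates[t]),
                reverse=True)
# Top two ranked tools proceed to the PoC phase.
```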

Step 3 — Design the TAF Architecture

  1. Define three-layer structure (scripts, business logic, adaptation)
  2. Select design patterns (POM, Facade, Singleton, Flow)
  3. Apply SOLID principles
  4. Establish directory structure and naming conventions
  5. Reference: generic-test-automation-architecture.md, taf-layering.md, design-patterns-in-automation.md, solid-principles-in-automation.md
  6. Command: design-automation-architecture.md
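
As one concrete instance of the patterns in step 2, a Singleton driver factory gives every layer a single shared session. The `Driver` class below is a stand-in for a real WebDriver, and the class names are illustrative.

```python
class Driver:
    """Stand-in for a real browser/API session object."""
    def __init__(self):
        self.session = object()

class DriverFactory:
    _instance = None

    @classmethod
    def get(cls):
        """Return the shared driver, creating it on first use."""
        if cls._instance is None:
            cls._instance = Driver()
        return cls._instance

    @classmethod
    def reset(cls):
        """Teardown hook: drop the instance between test sessions."""
        cls._instance = None

d1 = DriverFactory.get()
d2 = DriverFactory.get()     # same object: page objects share one session
```

A trade-off worth noting: a Singleton simplifies sharing but couples tests to one session, so parallel execution usually swaps this for per-worker fixtures.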

Step 4 — Prepare Infrastructure and Environments

  1. Configure test environments (local, CI, staging)
  2. Establish testability improvements with the development team
  3. Set up secrets management and configuration
  4. Reference: infrastructure-configuration.md, test-environments.md
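
Steps 1 and 3 above often reduce to resolving configuration from environment variables with safe defaults, so the same testware runs locally and in CI. The variable names and defaults below are illustrative; real secrets should come from the CI secret store, never from files in the repository.

```python
import os
from dataclasses import dataclass

@dataclass(frozen=True)
class TestConfig:
    base_url: str
    headless: bool
    timeout_s: int

def load_config(env=os.environ):
    """Resolve config from the environment, falling back to local defaults."""
    return TestConfig(
        base_url=env.get("BASE_URL", "http://localhost:8080"),
        headless=env.get("HEADLESS", "true").lower() == "true",
        timeout_s=int(env.get("TIMEOUT_S", "10")),
    )

# CI would export these; a dict stands in for os.environ here
cfg = load_config({"BASE_URL": "https://staging.example.test",
                   "HEADLESS": "false"})
```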

Step 5 — Pilot and Deploy

  1. Select 10–20 representative test cases for pilot
  2. Build pilot suite with chosen TAF structure
  3. Integrate into CI pipeline
  4. Measure pilot success criteria
  5. Expand incrementally
  6. Reference: pilot-and-deployment.md, deployment-risk-mitigation.md

Step 6 — Integrate into CI/CD Pipeline

  1. Map test types to pipeline stages
  2. Configure quality gates
  3. Set up test reporting (Allure)
  4. Configure flaky test quarantine
  5. Reference: cicd-pipeline-integration.md, configuration-management-testware.md
  6. Command: implement-cicd-integration.md
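
Step 4's quarantine can be as simple as a version-controlled list of flaky test IDs that is excluded from the gating run but still executed for data collection. The test IDs and `partition` helper below are illustrative; with pytest the same effect usually comes from a `quarantine` marker and `pytest -m "not quarantine"`.

```python
# Version-controlled quarantine list (illustrative test IDs)
QUARANTINE = {"tests/e2e/test_checkout.py::test_coupon"}

def partition(test_ids, quarantine=QUARANTINE):
    """Split a run into a gating set and a non-gating quarantine set."""
    gating = [t for t in test_ids if t not in quarantine]
    quarantined = [t for t in test_ids if t in quarantine]
    return gating, quarantined

gating, quarantined = partition([
    "tests/e2e/test_login.py::test_login",
    "tests/e2e/test_checkout.py::test_coupon",
])
```

Quarantined tests keep running in a separate, non-blocking job so their histogram data still feeds the improvement loop.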

Step 7 — Monitor and Verify

  1. Configure data collection (logs, screenshots, metrics)
  2. Analyse test failures using classification framework
  3. Run environment verification before test runs
  4. Apply static analysis in CI
  5. Reference: data-collection-methods.md, test-failure-analysis.md, environment-verification.md, static-analysis-automation-code.md
  6. Command: conduct-automation-review.md
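
Step 2's classification framework can be mechanised as a first-pass triage that routes each failure message to a category and follow-up action. The keyword rules and category names below are illustrative assumptions; a real classifier would use richer signals (stack traces, retry outcomes, environment probes).

```python
def classify_failure(message):
    """First-pass triage of a failure message into an action category."""
    msg = message.lower()
    if "connection refused" in msg or "dns" in msg:
        return "environment"           # run environment-verification checks
    if "no such element" in msg or "stale element" in msg:
        return "test-script"           # locator drift: fix the page object
    if "assertionerror" in msg:
        return "sut-defect-candidate"  # verify manually, then raise a defect
    return "needs-analysis"            # escalate to full root-cause analysis
```

Even a crude classifier like this separates environment noise from genuine SUT defect candidates before a human looks at the run.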

Step 8 — Continuous Improvement

  1. Run monthly histogram analysis
  2. Identify fragile tests and apply RCA
  3. Measure improvement metrics
  4. Plan and prioritise TAF improvements
  5. Reference: test-histogram-analysis.md, scripting-improvement-strategies.md, sut-alignment-strategies.md
  6. Command: continuous-improvement-analysis.md
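
Steps 1 and 2 above can be sketched as counting outcomes per test over a window of runs and flagging tests with mixed pass/fail results as flaky candidates for RCA. The run-history format is an illustrative assumption.

```python
from collections import Counter, defaultdict

def flaky_candidates(run_history):
    """run_history: iterable of (test_id, outcome), outcome 'pass' or 'fail'.
    Returns test IDs that both passed and failed within the window."""
    outcomes = defaultdict(Counter)
    for test_id, outcome in run_history:
        outcomes[test_id][outcome] += 1
    return sorted(t for t, c in outcomes.items() if c["pass"] and c["fail"])

history = [("test_login", "pass"), ("test_login", "pass"),
           ("test_cart", "pass"), ("test_cart", "fail"),
           ("test_cart", "pass")]
candidates = flaky_candidates(history)
```

Tests that only ever fail are a different bucket (broken, not flaky) and go straight to failure analysis instead.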

Inputs / Outputs

Inputs:

  • SUT specification or access to the SUT
  • Test strategy and coverage requirements
  • Team skills and tool constraints
  • Existing TAF (for improvement scenarios)

Outputs:

  • Tool evaluation report and recommendation
  • TAF directory structure with layer-separated code
  • CI/CD pipeline configuration
  • Test results with Allure reports
  • Failure analysis classifications
  • Improvement backlog

Reference Index

Rule Files (36)

| File | Chapter | Impact | K-Level |
|---|---|---|---|
| rules/test-automation-overview.md | 1 | HIGH | K2 |
| rules/sdlc-models-and-automation.md | 1 | HIGH | K2 |
| rules/tool-selection-criteria.md | 1 | HIGH | K2 |
| rules/infrastructure-configuration.md | 2 | HIGH | K2 |
| rules/test-environments.md | 2 | MEDIUM | K2 |
| rules/tool-evaluation-process.md | 2 | CRITICAL | K4 |
| rules/generic-test-automation-architecture.md | 3 | CRITICAL | K2 |
| rules/taf-layering.md | 3 | CRITICAL | K3 |
| rules/capture-playback.md | 3 | MEDIUM | K3 |
| rules/linear-scripting.md | 3 | MEDIUM | K3 |
| rules/structured-scripting.md | 3 | HIGH | K3 |
| rules/test-driven-development.md | 3 | HIGH | K3 |
| rules/data-driven-testing.md | 3 | CRITICAL | K3 |
| rules/keyword-driven-testing.md | 3 | HIGH | K3 |
| rules/behavior-driven-development.md | 3 | CRITICAL | K3 |
| rules/design-patterns-in-automation.md | 3 | CRITICAL | K3 |
| rules/solid-principles-in-automation.md | 3 | HIGH | K3 |
| rules/page-object-model.md | 3 | CRITICAL | K3 |
| rules/pilot-and-deployment.md | 4 | CRITICAL | K3 |
| rules/deployment-risk-mitigation.md | 4 | CRITICAL | K4 |
| rules/maintainability-factors.md | 4 | HIGH | K2 |
| rules/cicd-pipeline-integration.md | 5 | CRITICAL | K3 |
| rules/configuration-management-testware.md | 5 | HIGH | K2 |
| rules/contract-testing.md | 5 | HIGH | K3 |
| rules/data-collection-methods.md | 6 | HIGH | K3 |
| rules/test-failure-analysis.md | 6 | CRITICAL | K4 |
| rules/logging-levels.md | 6 | MEDIUM | K2 |
| rules/test-progress-reporting.md | 6 | HIGH | K2 |
| rules/environment-verification.md | 7 | HIGH | K3 |
| rules/root-cause-analysis-automated-tests.md | 7 | CRITICAL | K3 |
| rules/static-analysis-automation-code.md | 7 | HIGH | K2 |
| rules/test-histogram-analysis.md | 8 | HIGH | K3 |
| rules/ai-ml-in-test-automation.md | 8 | HIGH | K3 |
| rules/schema-validation.md | 8 | HIGH | K3 |
| rules/sut-alignment-strategies.md | 8 | HIGH | K3 |
| rules/scripting-improvement-strategies.md | 8 | HIGH | K4 |

Command Files (5)

| File | Purpose |
|---|---|
| command/evaluate-and-select-tools.md | K4 tool selection workflow |
| command/design-automation-architecture.md | TAF architecture design |
| command/implement-cicd-integration.md | CI/CD pipeline integration |
| command/conduct-automation-review.md | Review and verify existing TAS |
| command/continuous-improvement-analysis.md | Metrics-driven improvement |

Reference Files (3)

| File | Content |
|---|---|
| references/glossary.md | ISTQB TAE terminology definitions |
| references/syllabus-mapping.md | All 40 LOs traced to rule files |
| references/learning-objectives.md | Complete LO list with K-levels |

How to Use

For rules: Each rule file covers one ISTQB concept. Read the rule before implementing the concept. CRITICAL-impact rules are mandatory; HIGH-impact rules are strongly recommended.

For commands: Commands are step-by-step workflows. Use them when you need to complete a specific automation engineering task (evaluate a tool, design an architecture, set up CI, etc.).

For references: Use glossary.md to check ISTQB terminology. Use syllabus-mapping.md to find the rule for a specific learning objective. Use learning-objectives.md for exam preparation.


Examples

Example 1 — Web UI Automation (SauceDemo)

# 1. Page Object (adaptation layer)
class LoginPage:
    _USERNAME = (By.ID, "user-name")
    _PASSWORD = (By.ID, "password")
    _SUBMIT   = (By.ID, "login-button")

    def __init__(self, driver): self._driver = driver

    def login(self, username, password) -> "HomePage":
        self._driver.find_element(*self._USERNAME).send_keys(username)
        self._driver.find_element(*self._PASSWORD).send_keys(password)
        self._driver.find_element(*self._SUBMIT).click()
        return HomePage(self._driver)

# 2. DDT test script (test definition layer)
@pytest.mark.parametrize("user,pwd,expected", [
    ("standard_user", "secret_sauce", "Products"),
    ("locked_out_user", "secret_sauce", "Error: locked out"),
])
def test_login(driver, user, pwd, expected):
    result = LoginPage(driver).login(user, pwd)
    assert expected in result.page_content()

Example 2 — API Test Suite with Schema Validation

import jsonschema

PRODUCT_SCHEMA = {
    "type": "object",
    "required": ["id", "name", "price"],
    "properties": {
        "id":    {"type": "integer"},
        "name":  {"type": "string"},
        "price": {"type": "number", "minimum": 0}
    }
}

def test_get_product_schema(api_client):
    response = api_client.get("/products/1")
    assert response.status_code == 200
    jsonschema.validate(response.json(), PRODUCT_SCHEMA)  # structure
    assert response.json()["id"] == 1                     # value

Example 3 — CI/CD Pipeline Integration (GitHub Actions)

name: CI
on: [push, pull_request]
jobs:
  unit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4    # jobs need the repo before running pytest
      - run: pytest tests/unit -q    # setup-python / dependency install omitted for brevity
  api:
    needs: unit
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pytest tests/api -q
        env: {BASE_URL: http://localhost:8080}
  e2e:
    needs: api
    if: github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pytest tests/e2e -m smoke -q
        env: {BASE_URL: "${{ secrets.STAGING_URL }}", HEADLESS: "true"}
      - uses: actions/upload-artifact@v4
        if: always()
        with: {name: allure-results, path: allure-results/}

Notes

  • Syllabus version: CTAL-TAE v2.0 (released 2023)
  • Related ISTQB syllabi: CTFL (Foundation), TAS (Test Automation Specialist), CT-MBT (Model-Based Testing)
  • Exam format: 40 questions, 90 minutes, 65% pass mark
  • K4 objectives (exam emphasis): Tool evaluation (TAE-2.3.1), deployment risk (TAE-4.2.1), failure analysis (TAE-6.2.1), scripting improvement (TAE-8.5.1)
  • This skill does not cover CTFL prerequisites (assumed knowledge)