Momentic result classification (MCP)
Momentic is an end-to-end testing framework where each test is composed of browser interaction steps. Each step combines Momentic-specific behavior (AI checks, natural-language locators, AI actions, etc.) with Playwright capabilities wrapped in our YAML step schema. When these tests run, they produce results data that can be used to analyze the outcome of the test. The results data contains metadata about the run as well as any assets generated by the run (e.g. screenshots, logs, network requests, video recordings). Your job is to use these test results to classify failures that occurred in Momentic test runs.
Instructions
- Given a failing test run, analyze why it failed. You will often need to look beyond the current run to understand this, examining past runs of the same test or other context provided by the Momentic MCP tools.
- After analyzing why the run failed, bucket the failure into one of the categories below, explaining your reasoning for choosing that specific category.
Helpful MCP tools
momentic_get_run — Returns some metadata about the run and the path to the full run results. Use the metadata to help you parse through the run results (e.g. which attempt to look at, which step failed, etc.)
momentic_list_runs — Recent runs for a test so you can compare the result of past runs over time. Always pass a branch name so that it's more likely you're looking at the same version of the test.
Background
Test run result structure
When Momentic tests are run via the CLI, the results are stored in a "run group". The data for this run group is stored in a single directory within the Momentic project. By default, this directory is called test-results, but it can be changed in Momentic project settings or overridden for a single run group. The run group results folder has the following structure:
test-results/
├── metadata.json    data about the run group, including git metadata and timing info.
└── runs/            one zip for each test run in the run group.
    ├── <runId_1>.zip    a zipped run directory containing data about this specific test run. Follows the structure described below.
    └── <runId_2>.zip
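As a rough illustration of this layout, a Python sketch might read the group metadata and unzip individual runs. It assumes only the directory and file names described above; nothing about the JSON contents:

```python
import json
import zipfile
from pathlib import Path

def load_group_metadata(results_dir: str) -> dict:
    """Read the run group's metadata.json (git metadata, timing info)."""
    return json.loads((Path(results_dir) / "metadata.json").read_text())

def run_id_from_zip(zip_name: str) -> str:
    """Map runs/<runId>.zip back to <runId>."""
    return Path(zip_name).stem

def extract_run(results_dir: str, run_id: str, dest: str) -> Path:
    """Unzip runs/<runId>.zip into dest/<runId>/ and return that path."""
    archive = Path(results_dir) / "runs" / f"{run_id}.zip"
    target = Path(dest) / run_id
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(target)
    return target
```

When working through the MCP tools instead of the raw CLI output, this unzipping is normally done for you (see the working-directory note below the run structure).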
When unzipped, run directories have the following structure:
<runId>/
├── metadata.json    run-level metadata.
└── attempts/<n>/    one folder per attempt (1-based n).
    ├── metadata.json    attempt outcome and step results.
    ├── console.json     optional browser console output.
    └── assets/
        ├── <snapshotId>.jpeg    before/after screenshot for each step (see attempt metadata.json for snapshot ID).
        ├── <snapshotId>.html    before/after DOM snapshot for each step (see attempt metadata.json for snapshot ID).
        ├── har-pages.log        HAR pages (ndjson).
        ├── har-entries.log      HAR network entries (ndjson).
        ├── resource-usage.ndjson    CPU/memory samples taken during the attempt.
        ├── <videoName>          video recording (when video recording is enabled).
        └── browser-crash.zip    browser crash dump (only present on crash).
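Assuming the layout above, attempt metadata and the ndjson asset files can be read with a small sketch like this (the JSON shapes inside those files are not specified here, so the sketch only parses, it does not interpret):

```python
import json
from pathlib import Path

def load_attempt_metadata(run_dir: str, attempt: int) -> dict:
    """Read attempts/<n>/metadata.json (attempt outcome and step results)."""
    path = Path(run_dir) / "attempts" / str(attempt) / "metadata.json"
    return json.loads(path.read_text())

def iter_ndjson(path):
    """Yield one parsed object per non-empty line of an ndjson file,
    e.g. assets/har-entries.log or assets/resource-usage.ndjson."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)
```

Streaming the HAR entries line by line keeps memory bounded even when a run produced thousands of network requests.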
When getting run results via the Momentic MCP, tools such as momentic_get_run return links into the MCP working directory (default .momentic-mcp). This directory contains unzipped run result folders named run-result-<runId>, following the structure above.
Steps snapshot
The metadata.json file includes a stepsSnapshot property which shows the state of the test steps at the time of execution. Use this property if you suspect that the test has changed between runs, or to validate that the test has been set up properly.
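If stepsSnapshot is a JSON-serializable list of steps, a change check between two attempts can be as simple as comparing canonical serializations. The step shape in the test below is a made-up example, not Momentic's real step schema:

```python
import json

def steps_changed(snapshot_a, snapshot_b) -> bool:
    """True if the test's steps differ between two stepsSnapshot values.
    Canonical JSON (sorted keys) makes the comparison order-insensitive
    for object keys, while still catching reordered or edited steps."""
    canon_a = json.dumps(snapshot_a, sort_keys=True)
    canon_b = json.dumps(snapshot_b, sort_keys=True)
    return canon_a != canon_b
```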
Element locators
Certain step types that interact with elements have a "target" property, or locator, that specifies which element the step should interact with.
Locator caches
Locators identify elements by sending the page state (HTML/XML) along with a screenshot to an LLM. The LLM identifies which element on the page the user is referring to. Momentic will attempt to "cache" the LLM's answer so that future runs don't require AI calls. On future runs, the page state is checked against the cached element to determine whether the element is still usable, or whether the page has changed enough that another AI call is required.
A locator cache can bust for a variety of reasons:
- the element description has changed, in which case we'll always bust the cache
- the cached element could not be located in the current page state
- the cached element was located in the page state, but fails certain checks specified on the cache entry, such as requiring a certain position, shape, or content.
You can find the cacheBustReason on the trace property in the results for a given step. The cache property is also included in the results, showing the full cache saved for that element.
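Assuming step results arrive as a list of objects each carrying the trace and cache properties described above (the exact nesting, the step id field, and the reason strings below are assumptions, not a documented schema), collecting bust reasons for a run could look like:

```python
def cache_bust_reasons(step_results: list) -> list:
    """Collect (step id, cacheBustReason) pairs from a list of step results.
    Steps without a trace, or whose cache did not bust, are skipped."""
    out = []
    for step in step_results:
        reason = (step.get("trace") or {}).get("cacheBustReason")
        if reason:
            out.append((step.get("id"), reason))
    return out
```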
Identifying bad caches
Sometimes the element that was cached is not the element that the user intended to target. This can cause failures or unexpected behaviors in tests. In these cases, it helps to verify exactly why the wrong cache was saved in the first place. Use the runId property of the targetUpdateLoggerTags on the incorrect cache to get the details of the original run, calling momentic_get_run with this runId. This will return the run where the cache target was updated.
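Assuming a cache entry is a JSON object exposing the targetUpdateLoggerTags property described above, pulling out the originating runId (to pass to momentic_get_run) is a one-liner:

```python
def original_cache_run_id(cache: dict):
    """Return the runId from targetUpdateLoggerTags on a cache entry:
    the run where the cache target was last updated. None if absent."""
    return (cache.get("targetUpdateLoggerTags") or {}).get("runId")
```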
Using past runs
You MUST look at past runs of the same test when understanding why a test failed. Looking at past runs helps you identify:
- When did this test start failing?
- What differed vs the last passing run?
- Did the same action behave differently on an earlier run?
Use step results and screenshots on past runs to answer these questions. Do NOT rely only on summaries from momentic_get_run or momentic_list_runs to understand what happened in a test run. You MUST look at the specific run details, including step results and screenshots, to determine the behavior of past runs.
When looking at past runs, use the following workflow:
- Call the momentic_list_runs tool to identify the runs you want more detail on.
- Call momentic_get_run for each specific run to get the run details.
Multi-attempt runs
When momentic_list_runs shows a passing run with attempts > 1, treat it as a partial failure worth investigating, not a clean passing run. Pull the first attempt's step results and failure messages to understand what was going wrong before the retry succeeded.
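Presuming a momentic_list_runs summary exposes a status and an attempt count per run (the field names and the "PASSED" value here are assumptions, not the documented payload), flagging these partial failures for a closer look might be sketched as:

```python
def retried_passes(runs: list) -> list:
    """Return runs that ultimately passed but needed more than one attempt.
    These are partial failures: pull the first attempt's step results and
    failure messages to see what went wrong before the retry succeeded."""
    return [
        r for r in runs
        if r.get("status") == "PASSED" and r.get("attempts", 1) > 1
    ]
```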
Flakiness and intermittent failures
- To consider a test flaky or intermittently failing, it must fail intermittently for the same application and test behavior.
- Just because a test failed once does NOT mean that it's flaky - it could have failed because of an application change. You need to determine whether or not there was an application or test change between runs by analyzing the screenshots and/or browser state in the results.
- IMPORTANT: You cannot make assumptions about flakiness or intermittent failures without verifying whether an application or test change caused the failure.
Test temporality
- Any past results may not necessarily match today’s test file. The test may have changed, meaning the result was on a different version of the test.
- Looking at the stepsSnapshot property of the attempt metadata.json can help you determine whether the test has changed.
Identifying related vs unrelated issues
- Use test name and description to determine what the test is intending to verify
- Failures outside that intent are unrelated; otherwise, consider them related.
- Failures in setup or teardown steps are almost always considered unrelated.
Bug vs change
- Bug: something very clearly went wrong when it shouldn't have, such as an error message appearing. It should be obvious from looking at just a step or two that this is a bug.
- Change: any other behavior changes in the application
Formal classification output
- Exactly one category id — no new labels, no multi-label.
- Ground your decision in data. Be sure that you've fully investigated the run before assigning the category.
Reasoning: <a few sentences tied to summary, past runs, and intent>
Category: <one id from the list>
Category ids
Use these strings verbatim:
NO_FAILURE — Nothing failed; all attempts passed.
RELATED_APPLICATION_CHANGE — Related to intent; expectation drift / change, not a clear defect.
RELATED_APPLICATION_BUG — Related to intent; clearly incorrect behavior.
UNRELATED_APPLICATION_CHANGE — Outside intent; not a clear bug.
UNRELATED_APPLICATION_BUG — Outside intent but clearly broken.
TEST_CAN_BE_IMPROVED — Test/automation issue (race, vague locator or assertion).
INFRA — Rare or external (browser crash, resource pressure, rate limits, flaky environment).
PERFORMANCE — Load/responsiveness (stuck spinner, assertion timeouts) when not pure infra.
MOMENTIC_ISSUE — An issue with Momentic itself, the platform running the test (e.g. an AI hallucination, data issues, incorrectly resolving to the wrong element).
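To keep the output mechanical ("exactly one category id, verbatim"), the allowed ids can be checked programmatically. This sketch simply hard-codes the list above:

```python
# The nine category ids, verbatim, from the classification spec.
CATEGORY_IDS = {
    "NO_FAILURE",
    "RELATED_APPLICATION_CHANGE",
    "RELATED_APPLICATION_BUG",
    "UNRELATED_APPLICATION_CHANGE",
    "UNRELATED_APPLICATION_BUG",
    "TEST_CAN_BE_IMPROVED",
    "INFRA",
    "PERFORMANCE",
    "MOMENTIC_ISSUE",
}

def is_valid_category(category: str) -> bool:
    """True only for an exact, case-sensitive match against the id list:
    no new labels, no multi-label, no lowercase variants."""
    return category in CATEGORY_IDS
```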