15 June 2022
In end-to-end UI testing there are a few cases that can go untested with any amount of automation unless they are compared manually every time.
There are many cases where details like an element's exact position, alignment, or spacing in the viewport require manual intervention, because they cannot be detected with automated scripts. An automated script can assert that an element exists, but it cannot easily assert that the element is in its correct position.
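To illustrate the gap, here is a minimal sketch of asserting position rather than mere existence. The shape of `box` matches what `getBoundingClientRect()` returns in the browser; the tolerance value and the helper itself are hypothetical, not part of any particular testing library.

```javascript
// Sketch: assert an element's position, not just its existence.
// box and expected: { x, y, width, height } in CSS pixels,
// e.g. from element.getBoundingClientRect() in the browser.
function assertPosition(box, expected, tolerance = 2) {
  for (const key of ['x', 'y', 'width', 'height']) {
    const diff = Math.abs(box[key] - expected[key]);
    if (diff > tolerance) {
      throw new Error(
        `${key} off by ${diff}px (got ${box[key]}, expected ${expected[key]})`
      );
    }
  }
  return true;
}
```

Even this only covers geometry; visual details such as overlap, clipping, or font rendering still need screenshot comparison.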
Good testing is difficult. Perfect testing is unattainable.
Our agenda is clear: we are only asserting the UI application code. In other words, this is black-box testing of the UI code, based on the assumption that if nothing changes in the API response, our application should render the UI as expected. The expectation, in our case, is set via baseline images.
If the API response changes, update the snapshots and baseline screenshots to set a new expectation.
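The baseline comparison itself boils down to a pixel diff between the stored baseline image and a freshly captured checkpoint. A minimal sketch, assuming both screenshots have already been decoded into raw RGBA pixel buffers of equal dimensions (a real setup would typically use a library such as pixelmatch on decoded PNG data):

```javascript
// Count mismatched pixels between two RGBA buffers of equal size.
// threshold: maximum summed per-channel difference still counted as a match.
function diffPixels(baseline, checkpoint, threshold = 0) {
  if (baseline.length !== checkpoint.length) {
    throw new Error('screenshot dimensions differ');
  }
  let mismatched = 0;
  for (let i = 0; i < baseline.length; i += 4) {
    // Compare each pixel channel by channel (R, G, B, A).
    const delta =
      Math.abs(baseline[i] - checkpoint[i]) +
      Math.abs(baseline[i + 1] - checkpoint[i + 1]) +
      Math.abs(baseline[i + 2] - checkpoint[i + 2]) +
      Math.abs(baseline[i + 3] - checkpoint[i + 3]);
    if (delta > threshold) mismatched++;
  }
  return mismatched;
}
```

A non-zero result means the render drifted from the baseline, and the diff image stored in the output file shows where.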
{
  "name": "test name",
  "outputFile": "file name to store the output - checkpoint, diff",
  "url": "url for the test page",
  "snapshots": [
    {
      "url": "url of the XHR to intercept",
      "query": "query params (optional)",
      "response": [
        "file name in the snapshots folder to proxy this network request's response"
      ] | "fileName"
    }
  ],
  "usesAgent": "(optional) select one from the config template",
  "fullPage": "boolean - whether to take a full-page screenshot",
  "disableNavigationToBaseURL": "do not navigate back to the previous route - global",
  "actions": [
    {
      "type": "one of the keys in the array [click, text]",
      "id": "xPath, id, or className",
      "wait": "explicit wait in millis",
      "skipTask": "skip the task assigned to actions via the runner",
      "disableNavigationToBaseURL": "do not navigate back to the previous route"
    }
  ]
}
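As a concrete illustration of the schema above, a filled-in test config might look like the following. Every value here is hypothetical: the page URL, the intercepted endpoint, the snapshot file name, and the button XPath are made-up examples, not part of any real project.

```json
{
  "name": "user list renders correctly",
  "outputFile": "user-list",
  "url": "/users",
  "snapshots": [
    {
      "url": "/api/users",
      "query": "page=1",
      "response": "users.json"
    }
  ],
  "fullPage": true,
  "actions": [
    {
      "type": "click",
      "id": "//button[@id='load-more']",
      "wait": 500
    }
  ]
}
```

The runner would proxy the `/api/users` call with the contents of `users.json`, click the load-more button, wait 500 ms, and compare the resulting screenshot against the baseline.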