For a simple run of all the test files in normal mode, try

```
pytest
```

To run the tests in snap mode (which saves the response JSON to the test data file, or the response image as the stored reference image), try
```
snap=1 pytest
```

Once the changes are saved to the file, run the tests with `pytest` to have them run against the saved data.

To run the tests in parallel mode (a multi-threaded run across the available test files), note that you need at least 2 tests inside your folder structure for a parallel run; one way to invoke it is sketched below.

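The parallel command itself is not preserved in this excerpt. If the project uses the pytest-xdist plugin (an assumption on my part), a run across two workers would look like:

```
pip install pytest-xdist  # assumption: pytest-xdist supplies the -n option
pytest -n 2               # run the collected tests on 2 workers
```
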
## Reports
For better illustration of the test cases, Allure reports have been integrated. Allure reports can also be integrated with Jenkins to get a dashboard view. Apart from Allure, pytest's default HTML report is written to the `reports/` folder.

If there is a failure while comparing images, the allure report will have all the relevant files attached to it. The difference between the two images is generated at run time and attached to the allure report for our reference.

## Jenkins Integration with Docker images
Use any Linux-with-Python Docker image as the slave in Jenkins and use it for executing tests with this framework (sample Docker image: `https://hub.docker.com/_/python`). From the Jenkins bash step, execute the following to run the test cases:

```
#!/usr/bin/python3
```

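The rest of this script and the accompanying Jenkinsfile are elided in this excerpt. As a rough sketch only (every stage name and command below is an assumption, not the project's actual pipeline), a declarative Jenkins pipeline running the suite inside the Python image could look like:

```
pipeline {
    // Assumption: run the build inside the official Python image
    agent { docker { image 'python:3' } }
    stages {
        stage('Test') {
            steps {
                // Hypothetical commands; adapt to the project's real setup
                sh 'pip install -r requirements.txt'
                sh 'pytest'
            }
        }
    }
}
```
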
# Break down into end to end tests
## Creating a test file
* Tests can be created directly within the `Tests/` folder with the filename prefix `test_`, so that only those files are picked up during a test run. This is configured in the `pytest.ini` file.

```
[pytest]
markers =
    sanity: sanity tests marker
    regression: regression tests marker
    snap: Snap feature enabled for this case, should have separate file for validating the response
    plain: Snap feature is not recommended since the expected JSON has some custom values
python_files=*.py
python_functions=test_*
addopts = -rsxX
    -q
    -v
    --self-contained-html
    --html=reports/html_report.html
    --cov=Tests
    --alluredir reports/allure
    --clean-alluredir
```

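With these markers registered, a subset of the suite can be selected at run time using pytest's standard `-m` option, for example:

```
pytest -m regression
```
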
* Import the needed modules inside the test file. Since we have imitated the Karate framework's approach to testing, we just need to use these commands to test the REST API endpoints. The commands and their features are discussed below.

```
import allure
import pytest
from Library.api import Api
from Library.images import Img
```

* Set the URL on which you want your automation suite to run in the `/Data/GlobalData/global_data.yml` file. You can also add other project-level data in this file and then read it using the `Var` method.

```
URL: https://naresh.free.beeceptor.com
timeout: 10
tolerance: 0.01
```

In this project I have set the URL the automation is going to run against, the maximum timeout that is allowed, and the tolerance that is allowed while comparing images.

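As a minimal sketch of reading these values back in a test (the import path for `Var` is an assumption; only the method name comes from this README):

```
from Library.api import Var  # assumed location of the Var helper

timeout = Var("timeout")      # expected to return 10 from global_data.yml
tolerance = Var("tolerance")  # expected to return 0.01
```
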
In order to change the URL against which the suite is running, one can always set an environment variable while executing the suite. The environment variable always gets higher precedence, so even if we have a URL set in the global data file, the URL given on the command line will be used for execution.

```
URL=https://customurl.inruntime.com pytest
```

* While starting to draft a test case, add the following tags to it, which will be helpful in the reporting part.

```
@allure.feature("Sample get request")  # Title for the test case
@allure.severity('Critical')           # Set the severity for the case
@pytest.mark.regression                # Custom pytest marker to run the test cases with ease on demand
@pytest.mark.snap                      # Custom pytest marker to run the test cases with ease on demand
```

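Putting these pieces together, a minimal complete test might look like the following (the function name is illustrative; the body uses only the `Api` calls documented below):

```
@allure.feature("Sample get request")
@allure.severity('Critical')
@pytest.mark.regression
def test_sample_get_request_001():
    Api.get("/name")
    Api.verify_response_code(200)
```
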
### Simple test case with an endpoint
For a very simple GET request where we validate the response code, we could do:

```
Api.get("/name")
Api.verify_response_code(200)
```

On calling just these two methods from the `Api` library, all the allure report actions, attaching the request and response files to the report, and asserting the response code are taken care of.

### Simple test case with validating the response with test data
To validate the response JSON against test data, one could do something like the sketch below.

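The exact call is not preserved in this excerpt; assuming a validation helper on the `Api` library (the method name `verify_response_json` is hypothetical), the test might read:

```
Api.get("/name")
Api.verify_response_code(200)
# Hypothetical method name; the real helper lives in Library.api
Api.verify_response_json("sample.yml", "test_sample_get_request_001")
```
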
Here, we take the `sample.yml` file under the `/Data/DynamicData/` folder and then fetch the data for the key `test_sample_get_request_001`.

After getting the data from the stored file, we will compare that with the response data and generate the allure reports along with necessary attachments.
The YAML file will look like:

```
test_sample_get_request_001:
  age: 20
  name: Naresh
```

While fetching the key from a YAML file, the above file structure will return the data in JSON format. This in turn gives us an edge while creating the test data. One can always save the key's value in direct JSON format as well, as shown below.

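For example, the equivalent entry with the value written in direct JSON format:

```
test_sample_get_request_001: {"age": 20, "name": "Naresh"}
```
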
Either way, the JSON parser will get the values in JSON format. When we use `snap` mode, however, the file will be saved in the first format, which we can see in detail below.

### Simple test case with validating the response with test data and ignoring a few keys
While validating an API response, we may encounter a scenario where we don't want to validate a few keys. In such a scenario one can do something like the following sketch.

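As before, the original call is not preserved here; assuming the same hypothetical helper with an ignore parameter (both the method name and keyword are assumptions):

```
Api.get("/name")
Api.verify_response_code(200)
# Hypothetical signature; skip the "age" key during comparison
Api.verify_response_json("sample.yml", "test_sample_get_request_001", ignore="age")
```
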
The above code will validate the response status code and the response JSON values except for the `age` key. If you want more keys to be ignored, provide them in comma-separated format, as sketched below.

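For instance, with the same assumed signature:

```
# Skip both "age" and "name" (comma-separated list of keys)
Api.verify_response_json("sample.yml", "test_sample_get_request_001", ignore="age,name")
```
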
This will ignore the keys `age` and `name` while validating the response with the stored data.
### Simple test case with validating the response with test data and custom markers
While validating an API response, we may encounter a scenario where we need to validate whether a key is present or not, but not the value for that key. In that case one can always mark it in the test data with the unique markers specified with the `$` symbol.

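For example, test data along these lines (the exact not-null marker name is an assumption; the documented markers are listed in the table below):

```
test_sample_get_request_001:
  age: $notnull
  name: Naresh
```
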
The above combination will validate the response as,
1. Whether the `age` key is present with a non-null value in it.
2. Whether `name` is present with the exact same value `Naresh` in it.

We can also make the validation for the `age` field in the above example more specific by stating that the value corresponding to `age` should be a `number`. To achieve this we need the following combination.

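Again as a sketch, assuming `$number` is the marker for numeric values (by analogy with the documented `$string`):

```
test_sample_get_request_001:
  age: $number
  name: Naresh
```
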
Apart from the above two, there are multiple markers available, which are listed as follows:

Marker | Description
------ | -----------
`$string` | Expects actual value to be a string
`$uuid` | Expects actual (string) value to conform to the UUID format

### Test cases with validation of images
In a few scenarios we need to validate an image file from the response. First we hit the endpoint and get the image URL, then we download the image from that URL and store it in a temporary folder, and finally we compare the downloaded image with the stored image. The first step is sketched below.

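The original snippet is not preserved in this excerpt; based on the description that follows, it extracts the URL with `Api.get_params_from_response` (the endpoint and key name here are illustrative):

```
Api.get("/profile")
Api.verify_response_code(200)
# Pull the image URL out of the response JSON
Api.get_params_from_response("image_url")
```
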
The above code will save a value from the response JSON through `Api.get_params_from_response`. If the URL is present inside nested JSON, one can always give the path to the image URL as a comma-separated value, like the sketch below.

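For example, with illustrative key names for the nested path:

```
Api.get_params_from_response("data,profile,image_url")
```
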
After getting the image URL, we need to download it and save it in the temporary folder under `reports/images`. We are also supposed to pass a name for the image file being downloaded. All downloading and comparison of images happens in PNG format; a change to the framework would be needed to compare images in any other format.

Now, after downloading, directly give the image name against which we need to compare the downloaded image. The stored image must be under the folder `/Data/Images/`.

The method `Img.is_equal` takes care of all the allure reporting: attaching the images to the report and, if there is a mismatch, attaching the difference between the two images to the allure report as well, as mentioned in the Reports section above.

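Putting the whole flow together (only `Api.get_params_from_response` and `Img.is_equal` are named in this README; the download helper and every signature below are assumptions):

```
Api.get("/profile")
Api.verify_response_code(200)
Api.get_params_from_response("image_url")  # save the URL from the response JSON
Img.download("profile.png")                # hypothetical: downloads to reports/images/
Img.is_equal("stored_profile.png")         # compare against /Data/Images/stored_profile.png
```
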
### Test cases with validation of images along with tolerance
In a few scenarios we need to validate the image file from the response with an allowed tolerance; the method above will fail even on a minute change in the image file. To validate the images with tolerance, one has to change the comparison call, for example as sketched below.

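One plausible shape for this, assuming `Img.is_equal` accepts a tolerance keyword (the signature is an assumption):

```
# Picks up the project-wide tolerance (0.01) from global_data.yml
Img.is_equal("stored_profile.png", tolerance=Var("tolerance"))
```
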
This will take the tolerance level from the global data file and validate against it. It's always recommended to use the same tolerance level across the project, but in a few cases where one needs a custom tolerance level for a particular image comparison, one has to do something like the following.

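For instance, with the same assumed signature:

```
# Custom tolerance of 0.5 percent for this comparison only
Img.is_equal("stored_profile.png", tolerance=0.5)
```
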
The above code will validate the images with a 0.5 percent tolerance level.

## Data sets
In order to have distinguished sets of data, I have used the following types of data.

* **Global** - Global configuration for the whole project. Here the mode of run, browsers to use, browser configurations etc. are specified.
* **Static Data** - This is to store the module-level data. Ideally for each test file we need to have a test data file, but that depends on the requirement.
* **Dynamic Data** - This is to store the dynamic data. Files in this folder are supposed to change when we run with `snap=1 pytest`. This is separated from the other data files so that the static files are not disturbed during the run.
* **Images** - This folder stores all the image files that are needed for comparison with the response image files.