It seems the key-value pairs are constant in the JSON. Just a quick question: if I want to add any additional info about a test case in some string format, which of the keys mentioned in the JSON format can I use?
Please review the updated design document. Below are the changes made based on the comments received, along with some additional ones:
Updated the terms with string specifics and angle brackets (<>) for values
Added an explanation of the string specifics of the JSON file
Added multiple test cases & test suites to the template
Added an example file
testparam was repeated as test_conf, hence removed testparam
Added the total test summary subsection and its fields
'measurements' replaced with 'test_run_info', since 'measurements' was a misnomer for use cases where the test gives out test-execution information in string format
Hi Sarita,
Inside the test summary we have a result summary; I suggest having total_run, succeeded, failed, errored, etc. as separate keys instead of a single string.
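Something like this (the key name and counts are only illustrative):
"result_summary": {
    "total_run": 10,
    "succeeded": 7,
    "failed": 2,
    "errored": 1
}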
And I have a few questions:
Does "tests with failures" mean failed, errored, and timed-out tests, or only failed tests?
In "test_run_info", can a test writer append custom values per test case? For example:
def test_t1(self):
    # test steps
    self.test_run_info.append({"custom": "value"})
How will the requirements info be fetched per test case?
Hi @saritakh, thanks for working on the design. It looks good to me. Have a few minor comments.
- Typo "dictinary" in a few places.
- Tags can be specified at the test suite level as well, right? I have seen a few tests like that. How will they be represented in the JSON format?
I agree with Vishwa on having separate keys for total_run, pass, failed, error, skipped, and timeout.
Can you add examples for test_run_info? I think I am not completely clear on how to use it.
Also, the JSON file will not contain the actual output of the test but only the test info, correct?
@saritakh
Thanks for posting the design document. Following are my comments.
Can we make the epoch time a readable format for run_id, something like 083120181546_run? We can also have one more field, like total_time_in_epoch, purely to store the overall time needed in seconds.
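For instance, a rough sketch of what I mean (the format string and field handling are only illustrative, not final):
import time

start_epoch = time.time()
# ... test run executes ...
end_epoch = time.time()
run_id = time.strftime("%m%d%Y%H%M", time.localtime(start_epoch)) + "_run"  # e.g. 083120181546_run
total_time_in_epoch = int(end_epoch - start_epoch)  # overall run time, in seconds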
Instead of having the status as a string, can we have a sub-dictionary for it, something like:
"test_results_by_type": {
    "run": 5,
    "succeeded": 3,
    "failed": 1,
    ...
}
Also, can this summary be created at the test suite level as well?
In the test_summary dictionary, can we have test_start_duration and test_end_duration instead of just having the overall test_duration? This way it is more helpful; otherwise we need to parse each and every test case's individual result dictionary to achieve the same.
It would be great to provide an example for the test_run_info dictionary. Also, please provide an example for the requirements dictionary.
Thanks @vishwaks for your inputs, please find my replies below:
Updated the field name explanation for tests_with_failures accordingly
Below is a sample usage of the set_test_measurements() method:
def test_t1(self):
    # test steps
    self.set_test_measurements({"custom_fieldname1": "value1",
                                "custom_fieldname2": "value2"})
Thanks @anamika for your inputs, please find my replies:
Updated the typos.
Ultimately the tags end up getting applied to the test cases when specified at the test suite level, so here all the tests will have the same value for the "tags" field.
The updated methods should clarify how the data gets added to the JSON report.
The new method PBSTestSuite.add_to_test_run_data() satisfies the requirement you mentioned. One can now create a dictionary in test code, keeping in mind the hierarchy of the JSON report template, fill it with whatever data needs to be added to the JSON report in string format, and pass it to this method.
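For instance, a rough sketch of such usage (the dictionary keys here are only placeholders; the actual keys should follow the hierarchy of the JSON report template):
def test_t1(self):
    # test steps
    # placeholder keys; real keys must mirror the JSON report hierarchy
    extra_data = {"test_run_info": {"custom_key1": "value1",
                                    "custom_key2": "value2"}}
    self.add_to_test_run_data(extra_data)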
Thanks @suresht for your comments,
I just now updated the document and missed your points by a few minutes. As I see it, the new changes satisfy your second comment.
Regarding comments 1, 3 & 4, I will get back to you on them as early as possible.
So the data in the requirements section will be populated by the requirements decorator. In that case I have a question: in the future, the requirements decorator might get more parameters and support for multiple logical operators (=, !=, <, > etc.). Can the logical operators be populated in the requirements section of the JSON info? If not, I suggest adding them.
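For example, something along these lines (purely illustrative; the parameter name and JSON layout are not the current syntax):
@requirements(num_moms=2)    # today this implies equality
def test_t1(self):
    ...

# a future JSON entry might then need to carry the operator too, e.g.:
# "requirements": {"num_moms": {"operator": ">=", "value": 2}}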
Interface: PBSTestSuite.set_test_measurements()
–> What attributes is it going to set? Do you have the list, or does it take them as an argument? A test writer might want to access that data inside the test as well. Do we need another helper method like get_test_measurement()?
Interface: PBSTestSuite.add_to_test_run_data()
–> Examples, please. Also specify whether this takes any arguments.
Thanks @suresht for your inputs, please find my replies below and let me know of your views:
1.
I think the epoch time in a readable format is not necessary, since the epoch time is just taken as a run identifier number in order to differentiate between two test run reports. I do not think this data is needed anywhere else, so I prefer to keep it as is.
2.
Already updated as a sub-dictionary.
I don't think it would be necessary to create a summary at the test suite level; also, if added, it would become duplicate data, since each test case gives out its result separately. Of course, in cases where the number of failures in a test run is high, the test suites should be run individually in order to debug.
3.
I think the test start time and test end time belong more in the test logs than in the test report. The test duration is specified for the whole test run, and the start & end times are given for individual test cases in order to search the test logs in case of a failed test case.
4.
I have added an example for both of the methods being called in test case code.
Interface: PBSTestSuite.set_test_measurements()
–> What attributes is it going to set? Do you have the list, or does it take them as an argument? A test writer might want to access that data inside the test as well. Do we need another helper method like get_test_measurement()?
Sarita ==>
Updated the details that it accepts a dictionary. The dictionary can contain any key-value pairs, and it is completely up to the test writer to fill it with pairs of any type.
I don't think we need a helper method to get this data, since it would already be part of the array and dictionary, which is locally accessible.
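For example (the field names are just placeholders):
def test_t1(self):
    # test steps
    measurements = {"custom_fieldname1": "value1"}
    self.set_test_measurements(measurements)
    # the same dictionary is still in scope, so the test can read it back directly
    self.assertEqual(measurements["custom_fieldname1"], "value1")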
Interface: PBSTestSuite.add_to_test_run_data()
–> Examples, please. Also specify whether this takes any arguments.
Sarita ==>
Updated the method name to add_additional_data_to_report(), along with details that it accepts a dictionary.
I have added an example of the method usage in test code.