Design for PTL test execution report in JSON format

Thanks @vishwaks,
Yes, the logical operators of requirements will be populated in the JSON report when the @requirements decorator is enhanced in the future.

@saritakh, Please find my replies to the above.

  1. I agree that run-id is just used to differentiate between runs. But when we have several hundreds of runs, a human-readable format that includes the timestamp makes each run easy to identify from its name alone. We can also have another field, as I mentioned earlier, with the pure epoch time (a rough sketch of this follows after the list below).
  2. From a performance perspective, having this information handy at the test suite level is less costly than parsing the individual sub-dictionaries of all the test cases. It avoids multiple parsing passes over the dictionary.
  3. Instead of having only the duration, it would be better to also have the test start time and end time. This is more useful when we want to debug and derive the right metrics from this JSON dictionary.
  4. Thanks for adding the example.
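To make points 1 and 3 concrete, here is a rough sketch of what such fields could look like; the field names and layout below are only illustrative, not the schema proposed in the EDD:

    # Illustrative fragment only: the field names ("run_id", "run_timestamp",
    # "start_time", "end_time") are assumptions made for this example.
    report = {
        "run_id": "ptl_run_2018-09-25_14-30-05",  # human readable, carries the timestamp
        "run_timestamp": 1537885805,              # the same instant as pure epoch time
        "test_cases": {
            "test_submit_job": {
                "start_time": 1537885810,         # per-test start (epoch)
                "end_time": 1537885852,           # per-test end (epoch)
                "duration": 42                    # kept as well, in seconds
            }
        }
    }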

Thanks,
Suresh

Thanks @suresht,

Please find my replies below:

Regarding run id:
I have updated it; I preferred it to be numeric.

Regarding test suite level test summary:
Can you mention a use case where this might be needed?
In my understanding, when ‘m’ test suites are run in one pbs_benchpress command and ‘n’ test cases fail, I would want to rerun those ‘n’ test cases individually. The list of failed test cases is available in the “test_summary” dictionary (“tests_with_failures”), from where I can pick the test case names. The list of failed test suites is also available in the “test_summary” field (“test_suites_with_failures”).
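For illustration, here is a minimal sketch of picking the failed names back out of the report; the report file name is an assumption, and only the "test_summary", "tests_with_failures" and "test_suites_with_failures" keys come from the design:

    import json

    # Load the JSON report produced by a pbs_benchpress run
    # (the file name is only an assumption for this example)
    with open("ptl_test_report.json") as fp:
        report = json.load(fp)

    summary = report["test_summary"]
    failed_tests = summary["tests_with_failures"]          # failed test case names
    failed_suites = summary["test_suites_with_failures"]   # failed test suite names

    # These names can then be fed back to pbs_benchpress to rerun only the failures
    print("rerun these test cases: " + ", ".join(failed_tests))
    print("suites containing failures: " + ", ".join(failed_suites))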

Regarding test start and end time:
I have updated this, along with the duration.

Thanks @saritakh for making the suggested changes.

As we know, one of the use cases for consolidating all test results into a standard JSON dictionary is to use it as input to tools that generate metrics or dashboards, compare results across multiple runs, etc. A summary at the test suite level is handy for quickly finding out how much time a specific test suite took, without parsing the JSON dictionary all over again. This information is very useful for test suites such as performance and load tests.
My aim is to avoid parsing the JSON dictionary multiple times to get what we need (see the sketch below).
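To show what that extra parsing looks like, here is a rough sketch of how a consumer would have to derive per-suite timing today; the nesting and key names below are assumptions for illustration, not the exact schema in the EDD:

    import json

    with open("ptl_test_report.json") as fp:
        report = json.load(fp)

    # Without a suite-level summary, every test case entry has to be walked
    # and its duration added up per suite.
    suite_durations = {}
    for suite_name, suite in report.get("test_suites", {}).items():
        total = sum(tc.get("duration", 0)
                    for tc in suite.get("test_cases", {}).values())
        suite_durations[suite_name] = total  # a suite-level summary would hold this directly

    print(suite_durations)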

Thanks,
Suresh

Hi Sarita, thanks for making all the changes. A few more comments:

-" add_additional_data_to_report(" name is too big. how about just calling it as " add_data_to_report"

  • Also, in your example you have called add_additional_data_to_report inside test_submit_job, but it got listed as part of the test suite summary. Did I misread it? I was expecting it to be reported under test_submit_job.
  • Thinking about Suresh’s point above, do we need to provide a tool to read from the JSON? I understand that if we do, it would be out of scope for this project.

I don’t think we need to provide any tool unless PBS/PTL itself requires it (which it does not, as of now). I believe we should let the user/admin write a tool to parse this JSON, as he/she is the best person to know what information is needed from this JSON data.

Yes, PTL’s job is to make sure it dumps all available and “useful” information into the JSON report.

I would like to keep it as add_additional_data_to_report, since this method adds “additional” data into the JSON. If we change it to “add_data_to_report”, the user might think that he/she has to populate all the data of the JSON report, which is not the case.

Thanks @suresht,
In my opinion, the chances of having multiple test suites in a single test run for performance/load testing are low; it usually involves one or only a few test suites in a single pbs_benchpress run. If I were to test this, I would run them as a series of pbs_benchpress runs with one test suite in each run.
I would prefer to leave the design as is; a test-suite-level summary can be taken up as an enhancement in the future, on a need basis.

Thanks @anamika,

Regarding the method name add_additional_data_to_report()
I feel it should remain the same. I had some alternatives, from which I selected this one since it means exactly what it does (the alternatives were add_extra_data_to_report and add_more_data_to_report).

Regarding the additional data added to report in the example test:
I think you have misread it: set_test_measurements() puts data into the “measurements” field of the JSON test report, while add_additional_data_to_report() merges the provided dictionary with the JSON report, based on the hierarchy provided. Hence, the data placement differs between the two calls in the given example (see the sketch below).
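To make the difference concrete, here is an illustrative sketch of the two calls inside a test case; the import path and payload dictionaries are assumptions for this example, and only the two method names and their dict arguments come from the design:

    from ptl.utils.pbs_testsuite import PBSTestSuite  # import path assumed

    class MyTest(PBSTestSuite):

        def test_submit_job(self):
            # ... actual test steps would go here ...

            # Lands in the "measurements" field of this test case's entry
            self.set_test_measurements({"num_jobs": 100, "submit_time_secs": 1.8})

            # Merged with the JSON report at the hierarchy provided by the
            # caller (as per the design at this stage of the discussion)
            self.add_additional_data_to_report({"python_version": "2.7.5"})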

Let me know of any more clarifications.

Thanks @saritakh,
As we are focusing on scalability enhancements one after another, I think the number of such test suites will increase going forward. If the current design itself has room to accommodate such a requirement, or is flexible enough to handle it, that would be great; I am also trying to avoid multiple parses of the same dictionary to get what we need. However, if you are planning to take this up as a future enhancement, I am fine with that.

Thanks, Sarita, for clarifying. I found the language a little confusing. Maybe you can change “that would be merged with the json data and provided in the test report” to something like “that would be merged with the json data and provided in the test report for overall test run”.

Also, if I have ‘python_version’ set to a different value in every test case, then what will be printed?
And why does this need to be called inside a test case and not from setUp or tearDown, etc.?

@saritakh: Thanks for updating the EDD. Looks good to me for the current requirements.

Thanks @anamika,

I have updated the related section as per your comments and clarifications. Please review it now and let me know of any comments.

@All,

I have updated the design with minor changes based on the comments received. Please have a look and let me know of any comments, or grant sign-off:

  • Updated the definition of the method add_additional_data_to_report()

Thank you @vishwaks,
Please do have a look at the latest minor change too.

Thanks for making the changes Sarita. Looks good to me.

@saritakh Here are my comments:

  • machine_info-><hostname>->pbs_install_type: We need to add one more value here (None in Python, null in JSON) for the case where there is no PBS installation.
  • PBSTestSuite.set_test_measurements(dict{}): We need to mention that this method should be called from a test case only, as it puts data under the test case section in the JSON.
  • PBSTestSuite.add_additional_data_to_report(dict{}): We need to mention where this data will be added in the JSON (i.e. in an “additional_data” section parallel to the “test_summary” section inside the JSON).
  • Need to swap the values in the example JSON report, because as per the document “platform” will have the uname output, not the OS info:
    "platform":"Linux",  
    "OS_info": "CentOS Linux release 7.5.1804 (Core)",
    
  • Also replace “OS_info” with “os_info”
  • Add “.py” at the end of the value in "file": "tests/functional/pbs_smoketest",
  • I thought we would be adding the additional data into its own section instead of at the top level of the JSON, right? I would suggest putting it under its own section so that it is easy to parse, and by looking at the JSON file we can tell that these are additional data (set by the user/admin/test, not by the PTL framework).

Thanks @hirenvadalia,
I have made the said changes. Regarding adding the additional data as a separate dictionary, I agree and feel it is better. I initially thought that if the test writer wants to add data at the run/test suite/test case level, he would construct the dictionary accordingly and it would be overlaid onto the report at the respective hierarchy level. But I think that would make it complicated to form the dictionary while taking care of the hierarchy.
I have now added it as a separate section called “additional_data”. This way it is easily parseable, and the respective test suite names or test case names can be indexed at the test writer’s choice (a rough sketch of the layout follows).
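For illustration only, here is a sketch of the resulting layout; the contents of the sections are made up, and only the section names “test_summary” and “additional_data” come from the design:

    # Layout sketch: "additional_data" sits parallel to "test_summary";
    # the values shown are invented for this example.
    report = {
        "test_summary": {
            "tests_with_failures": [],
            "test_suites_with_failures": []
        },
        "additional_data": {
            # whatever dictionaries the test writer passed to
            # add_additional_data_to_report(), keyed as he/she chooses
            "python_version": "2.7.5"
        }
    }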

Please let me know of any more comments.

@saritakh Thanks for updating the design. It looks good to me.

@saritakh : The latest design looks good to me.