Tunable test parameters

I have posted a Design Document for adding a decorator in PTL that allows modifiable test parameters specific to test cases. Please review and provide suggestions.

Design Doc:

@visheshh how will it work if you wish to pass two different values in two different tests but the parameter name is the same? If it does not work that way, do you recommend using unique parameter names to avoid collisions?

Hi @arungrover, in that case I would recommend that the test developer make the names unique to avoid collisions. If they are not unique, the override will only be effective when a single test case is run at a time. I will make a note of it in the Design doc.
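The decorator idea under discussion could be sketched roughly as follows. This is a hypothetical illustration, not PTL's actual API: the decorator name `testparam` comes from the thread, but the attribute name `test_params` and the example parameters are made up here.

```python
# Hypothetical sketch of the proposed @testparam decorator: it attaches
# default parameter values to a test method so a runner could inspect and
# override them before the test executes. Names are illustrative only.

def testparam(**defaults):
    def wrapper(test_func):
        # Store the defaults on the function object itself; a test runner
        # could read and override them before invoking the test.
        test_func.test_params = dict(defaults)
        return test_func
    return wrapper


class TestSmoketest:
    @testparam(num_jobs=100, walltime=60)
    def test_t1(self):
        # The test reads its (possibly overridden) parameters here.
        return self.test_t1.test_params


print(TestSmoketest().test_t1())  # {'num_jobs': 100, 'walltime': 60}
```

Because the defaults live on the function object, two tests can freely reuse the same parameter name within their own decorators; the collision question above only arises when overrides are keyed by bare parameter name on the command line.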

Now that I think more about it, this would be a wise recommendation to give, but test writers may not know what parameters other tests rely on (especially when the name collision happens across testsuites).

How about we specify keys mangled with the class names in the pbs_benchpress command?
For example, a test t1 in testsuite class A has the testparam decorator set to @testparam(param1:p1, param2:p2), and on the command line users would specify -p A.param1=x1:A.param2=x2.

This way we can just handle it internally and not worry about collisions.
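A minimal sketch of how such mangled keys could be parsed internally, assuming the `Class.param=value` pairs separated by `:` format from the example above (the function name and per-class dict layout are my own illustration, not PTL code):

```python
# Parse a -p style string of "Class.param=value" pairs separated by ':'
# into a per-class mapping, so identically named parameters in different
# testsuite classes do not collide.

def parse_mangled_params(spec):
    params = {}
    for pair in spec.split(':'):
        key, _, value = pair.partition('=')
        cls, _, name = key.partition('.')
        # Group overrides under their owning testsuite class.
        params.setdefault(cls, {})[name] = value
    return params

print(parse_mangled_params("A.param1=x1:A.param2=x2"))
# {'A': {'param1': 'x1', 'param2': 'x2'}}
```

The runner would then only apply the overrides recorded under a testsuite's own class name, leaving other suites' same-named parameters untouched.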

@arungrover The idea sounds good. I had not thought about collisions across testsuites; since I was focusing on performance/load tests using this decorator, I assumed the testsuites would always be run individually. If people use tags or run multiple testsuites, I don't think the test runner at that level would try to modify the test-specific parameters; it would run with the default values. That is a possibility to consider, but I do not think it would happen very frequently.

I believe the end users modifying the test params are the ones who run these tests on their dev builds, or who want to scale up on cloud/big machines or scale down on dev VMs while testing.

@visheshh : Instead of a decorator and passing those parameter values through the -p option, can we create a separate config file (JSON) with the same name as the testsuite?
Ex: pbs_smoketest.py → pbs_smoketest.json
The config will have all the test parameters with default values. The test suite should load the config file at the start.

When a user wants to run the test suite with different values, they can just edit the config file and run pbs_benchpress. With this approach the user doesn't need to remember or type the parameters on the command line. It will also take care of namespace collisions, and the user can run multiple test suites or tags.
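The loading convention described above could be sketched like this, assuming the suite derives the config path from its own file name (the function name and merge semantics are illustrative, not PTL's implementation):

```python
# Sketch: a testsuite loads an optional JSON config named after its own
# module (pbs_smoketest.py -> pbs_smoketest.json) at start, merging any
# overrides found there over the built-in defaults.
import json
import os
import tempfile

def load_suite_config(test_file, defaults):
    """Merge overrides from <testsuite>.json (if present) over defaults."""
    config_path = os.path.splitext(test_file)[0] + '.json'
    merged = dict(defaults)
    if os.path.exists(config_path):
        with open(config_path) as fh:
            merged.update(json.load(fh))
    return merged

# Demo: place a pbs_smoketest.json next to a (hypothetical) pbs_smoketest.py.
tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, 'pbs_smoketest.json'), 'w') as fh:
    json.dump({'num_jobs': 500}, fh)

conf = load_suite_config(os.path.join(tmpdir, 'pbs_smoketest.py'),
                         {'num_jobs': 100, 'walltime': 60})
print(conf)  # {'num_jobs': 500, 'walltime': 60}
```

When no JSON file exists, the suite simply runs with its defaults, so the file stays optional.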

Let me know your thoughts.

I like this idea of using a JSON config file.

@kjakkali The idea of having an optional JSON config file for the tests makes sense. We can store the values for the testsuite and for each test case at hierarchical levels. Then we could reuse the same variable names and there would be no collisions.

I thought about a config file too, but I think the idea here was to dynamically change parameter values when invoking the pbs_benchpress command. If changing a parameter requires maintaining/modifying a config file, then why not have that configuration as part of the testsuite itself? Why do we have to create and maintain a separate configuration file for it?

I guess @visheshh can answer what the motivation behind this proposal is. I'd say that if it is acceptable to change a file to specify different test parameters, then we should just keep the test configuration inside the testsuite itself.

@arungrover The motivation behind the proposal is to make test parameters modifiable without changing the test file. I know both the test file and the JSON file are files, but having it in a JSON file makes it easily machine readable, which will help test management tools provide an option to read it and make it modifiable at execution time.
That said, I agree that -p is quickly accessible, and I feel it is the most user-friendly method for running single tests. I still feel that most of the time, when we scale up/down, we run a single test case or a test suite at most.
But if we plan to run more test suites and tags, or run through a test management tool, having the config in a JSON file would be more helpful to avoid collisions.

I feel that using a JSON file as input would be beneficial, but the idea is not very well defined yet. It raises questions: where will we keep these files, will one file cover one testsuite or multiple, who will own these files, what will their structure be, etc.
Can you flesh out this approach more before we decide which way to go?

Hi @arungrover @kjakkali My idea is to have one JSON file per testsuite, at the same location where the test file exists.
pbs_smoketest.py will have a pbs_smoketest.json
The structure would be something like:
{
  "TestSmoketest": {
    "test_t1": {
      "x": "y",
      "a": "b"
    },
    "test_t2": {
      "x": "z",
      "a": "c"
    }
  }
}
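A lookup against that hierarchical layout could be sketched as follows (the helper name `get_param` is my own; the suite/test/key nesting follows the structure above):

```python
# Sketch: resolve a parameter by testsuite -> test case -> key, so the
# same key name ("x" here) can carry different values in different tests.
import json

config = json.loads("""
{
  "TestSmoketest": {
    "test_t1": {"x": "y", "a": "b"},
    "test_t2": {"x": "z", "a": "c"}
  }
}
""")

def get_param(cfg, suite, test, key, default=None):
    # Missing suites, tests, or keys fall back to the default value.
    return cfg.get(suite, {}).get(test, {}).get(key, default)

print(get_param(config, "TestSmoketest", "test_t1", "x"))  # y
print(get_param(config, "TestSmoketest", "test_t2", "x"))  # z
```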

@visheshh : This looks good to me.

I still feel that it is better not to take input from a file. Test maintenance becomes difficult with an additional configuration file: you would have to log the results of the tests along with their config JSON files.

You will have to verify in the test that the types and values of the parameters are what the test expects (this is probably true whether you use JSON or not). I feel the configuration is better contained inside the testsuite itself or given as a command-line option. Another option is to read it out of the environment, but that would make debugging ambiguous, because it would be hard to tell what the test read from the environment.

The configs can be added to the ptl_test_results.json file. Either merge all of the configurations and add them to the object stored in that file, or add the configuration to each test case's object in the JSON file (I think the latter is better).

@arungrover : I am fine with both approaches. The only advantage I see in JSON is ease of use: the user doesn't need to remember the tunable parameters, there are no typos on the command line, and we don't need any extra command-line options.

When a user creates a new test suite:

  • the user will add a JSON file with default values
  • the test should read the JSON file at the start
  • once the test finishes, it should record the tunable parameters in ptl_test_results.json at the test-case level

If a user wants to run a test case with different values, they just need to update the JSON file locally and run the test case without any changes on the command line.

As of now, the test configuration parameters passed via '-p' or '--param-file' are saved at the pbs_benchpress run level of the hierarchy, not per test suite / test case, in the ptl_test_results.json file.

In my opinion, cases where we tune test parameters in a single run are not as common as default runs. If we tune, we would do it across multiple runs, and even then it is much less common to run multiple tuned tests together in one run. Also, a JSON file per test suite becomes costly to maintain. One workaround would be a template JSON file where the user fills in the test suite name, test case name, key-value pairs, etc. We already have the --param-file option for long lists of parameters.
I would prefer the command-line option.

@vstumpf @saritakh, I was planning to have it per test case, inside measurements, as test_config.
Since the test config is stored in self.conf, it can be added under measurements in teardown.
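A sketch of that teardown step, taking the attribute names `self.conf` and the measurements list from the thread (the surrounding class is a stand-in, not PTL's actual test base class):

```python
# Sketch: in teardown, append the effective test parameters as a
# "test_config" measurement entry, so they land per test case in
# ptl_test_results.json alongside the other measurements.

class TestWithParams:
    def __init__(self):
        self.conf = {"num_jobs": 100}   # effective tunable parameters
        self.measurements = []          # collected per-test measurements

    def tearDown(self):
        # Record a copy so later changes to self.conf don't rewrite
        # what was already measured.
        self.measurements.append({"test_config": dict(self.conf)})

t = TestWithParams()
t.tearDown()
print(t.measurements)  # [{'test_config': {'num_jobs': 100}}]
```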

I don’t really understand what you mean by “test parameters”. The document doesn’t mention motivation; can you please edit the doc to state what problem(s) this change will solve and give a few examples explaining the same?