Right now, there are multiple test suites/PTL files that test similar functionality. For example, there are 4 different test suites related to qstat, 2 different test suites related to qsub, etc. Maybe it was prudent to divide them up, but right now we are missing concrete guidelines for when a developer should create a new test suite for their test case(s) instead of adding to an existing suite.
So, I wanted to seek suggestions from the community on what they think is the right approach here to prevent test suite explosion.
Here’s what I propose:
- If your code change (PR) includes tests that exercise a section of the code not covered by any existing test, then create a new, generic test suite for them (see the sketch after this list).
- If your PR includes multiple tests that need a common, specialized setup, put them in a dedicated test suite for better performance.
- If your tests seem to test the same feature that another test suite does, but the test suite is “named” very specifically after its existing test cases, then rename the test suite to broaden its scope so that your test cases can fit.
- In all other cases, add your test cases to an existing test suite.
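To make the first bullet concrete, here's a minimal sketch of what a new, generic test suite could look like (the class, docstrings, and test names are made up; it just follows the usual PTL pattern of extending TestFunctional):

```python
# Hypothetical new, generic test suite for job-array behavior
# that no existing suite covers; PTL test files conventionally
# live under tests/functional/.
from tests.functional import *


class TestJobArrayBasics(TestFunctional):
    """
    Generic suite for basic job-array behavior.
    """

    def test_submit_job_array(self):
        """
        Submit a small job array and verify that it begins.
        """
        j = Job(TEST_USER, attrs={ATTR_J: '1-3'})
        jid = self.server.submit(j)
        self.server.expect(JOB, {'job_state': 'B'}, id=jid)
```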
@agrawalravi90 I agree that we have too many pbs_issue.py files/test suites, and it is difficult to find the test suites that I might want to run to exercise a code change. But on the other hand, I can understand why it is happening… when I add a feature, I want to have all of my tests in one place so it's easy to run all of my new tests.
To help me understand your proposal, let’s use an example:
I am adding a new feature; as part of it, qsub and qstat will also be changed. My feature changes the default behavior of qsub and qstat (i.e., no specialized situation).
Would your proposal mean that I would have to add tests to pbs_qsub.py and pbs_qstat.py in addition to pbs_myfeature.py?
If so, that would make running my feature's tests more difficult, since they would be spread across multiple files. In the future, fixes to the feature would also mean the next developer will not know which tests to run to fully test the feature.
If not, that would mean anyone who runs pbs_qsub.py will not test my feature's changes to qsub. This would be the status quo today.
Who decides which option fits the situation? If it is left up to us, many of us will choose the simplest path (1 new file) and we’re right back where we are.
Sorry for the delay in replying, Lisa; I totally missed your comment.
Sounds like a use case for when one is developing a feature and wants to run just their tests. But I think there are a few things you can do to achieve the same:
- Add a temporary tag to your tests so that you can run them all with that tag. Once you are satisfied with your tests, you can remove the tag (see the sketch after this list).
- Create a temporary test suite during development and house all your test cases there. Once you are satisfied with your tests, you can move them to existing, relevant test suite(s).
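For the first option, something like this, a rough sketch assuming PTL's @tags decorator and pbs_benchpress's --tags filter (the tag and test names are made up):

```python
from tests.functional import *


class TestMyFeature(TestFunctional):

    @tags('wip_myfeature')  # temporary tag; remove once review is done
    def test_new_default_behavior(self):
        # Placeholder body: submit a job and check it runs.
        j = Job(TEST_USER)
        jid = self.server.submit(j)
        self.server.expect(JOB, {'job_state': 'R'}, id=jid)
```

During development you would then run just the tagged tests with something like `pbs_benchpress --tags=wip_myfeature`.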
What do you think?
Nope, it would mean that you’d ONLY be adding tests to pbs_qsub.py and pbs_qstat.py.
I’m not sure I understand. Are you saying that the guidelines I proposed might not cover all possible situations?
@agrawalravi90 Thanks for the responses. Personally, I'm not a fan of the temporary tags idea because those tags would have to be removed before the final merge into the code base, yet up until that point they would be needed for developer testing as changes are made in response to review comments. But since PTL can be told to run a specific test from a specific test suite, we don't have to use temporary tags; it just means that folks will have to list out each of the suites/tests they want to run.
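For reference, I mean invocations like the following (hypothetical suite/test names; I'm assuming pbs_benchpress's -t option takes a comma-separated list of TestSuite.test_case entries):

```sh
# Run two specific tests from two different suites (names are made up):
pbs_benchpress -t TestQsub.test_submit_new_default,TestQstat.test_display_new_default
```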
I think to make it easier for people to know where to put new tests, it would help if the existing test suites/files were consolidated into 1 test suite/file. For example, you mention 4 qstat test suites; it would be difficult for someone to know which suite to add their new qstat test to. If there were just 1 test suite, it would be easy to know where a new qstat test should go. What do you think?
I see what you mean, and I agree: it would indeed be difficult for somebody to know which test suite to put their test into unless we had very intuitive/unambiguous test suite names, which was actually one of the motivations for this discussion. But it might be difficult to adhere to that, as some tests need a specialized setup. While one can simply destroy the default setup that PTL provides and create something from scratch, that would slow things down, since PTL would create a default setup only to have the test destroy and recreate it. So, such tests would benefit from having a dedicated test suite to themselves.
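To illustrate, a dedicated suite can do its specialized configuration once on top of PTL's defaults in setUp, instead of each test tearing things down and rebuilding (a sketch with made-up names; self.server.manager() is the usual PTL way to reconfigure the server):

```python
from tests.functional import *


class TestMultiQueueScheduling(TestFunctional):
    """
    Dedicated suite: every test here needs the same non-default queue setup.
    """

    def setUp(self):
        TestFunctional.setUp(self)
        # Specialized setup shared by all tests in this suite:
        # add an extra execution queue on top of PTL's default setup.
        a = {'queue_type': 'execution', 'enabled': 'True', 'started': 'True'}
        self.server.manager(MGR_CMD_CREATE, QUEUE, a, id='expressq')
```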
So, thinking about this more, maybe it's okay to have multiple test suites for related changes, as long as they are tagged appropriately? This would probably need some enhancements to PTL's tagging infrastructure, but it would let users create new tests however it suits them, and we'd still have a reliable way of filtering tests using tags. What do you think?
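Something along these lines, a sketch assuming we agree on permanent per-feature tag names (the test and tag names are made up):

```python
from tests.functional import *  # present in each test file

# In pbs_qsub.py:
class TestQsub(TestFunctional):

    @tags('qsub', 'myfeature')
    def test_qsub_new_default(self):
        ...

# In pbs_qstat.py:
class TestQstat(TestFunctional):

    @tags('qstat', 'myfeature')
    def test_qstat_new_default(self):
        ...
```

Then `pbs_benchpress --tags=myfeature` would pick up all of the feature's tests regardless of which file they live in.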
Thanks for your response, Ian. One of the things that the original proposal said was: “If your PR includes multiple tests that need a common, specialized setup, put them in a dedicated test suite for better performance.”
That would eventually end up creating multiple test suites for the same feature just because the tests need a special setup, which could again lead to the scenario where the test writer is confused about where to put a test case because there are multiple suites. Did you mean that we could stick with the original proposal, except for this bullet, and use a permanent tag instead?
I am updating an existing test suite and changing a few tests to be multi-node. I am confused about whether I should create a new test suite for the multi-node tests or leave them in the same file. I looked at our documentation and could not find any guidance for adding tests. I looked at this thread again, and going by the original proposal these should go into a different file. But then there is the discussion about using tags (which I personally like).
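For context, the tests I'm converting look roughly like this (simplified and hypothetical; the guard uses unittest's skipTest and PTL's self.moms mapping of configured MoMs):

```python
from tests.functional import *


class TestExistingSuite(TestFunctional):

    def test_multi_node_job(self):
        # Guard: this test only makes sense with at least two MoMs,
        # e.g. when run with pbs_benchpress -p "moms=<mom1>:<mom2>".
        if len(self.moms) < 2:
            self.skipTest("test requires at least 2 MoMs")
        a = {'Resource_List.select': '2:ncpus=1',
             'Resource_List.place': 'scatter'}
        j = Job(TEST_USER, attrs=a)
        jid = self.server.submit(j)
        self.server.expect(JOB, {'job_state': 'R'}, id=jid)
```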