Modify Custom Resource value on job execution

Hello,

Assume we have a queue (queueA) that has a custom resource called “condo”, and this queue has a “max_run_res.condo” setting of 10.

Now assume job submissions do not specify “condo=N” in their select statements.

How would you go about enforcing a restriction on queueA such that the total number of nodes providing the custom resource “condo” that are running jobs from queueA is limited to 10?

Put another way, whenever a job lands on a node that provides the condo resource, the scheduler must account for that node as a “running_res.condo” node.

Please let us know if this is possible and what approaches we could explore.

Thanks,
Siji

It would help to know what you are actually trying to accomplish. Could you explain at a higher level what the goal of this would be? Are they single node jobs? Are the jobs exclusive to a node? Are you just trying to limit the number of nodes running jobs from that queue?

Sure, I can provide some more detail.

So from a high-level, yes, we are looking to limit the number of nodes running jobs from the queue.

Now, we’ve tried “set queue queueA max_run_res.condo = [o:PBS_ALL=10]” in our qmgr settings, expecting that when a node that provides the condo resource began running a job from this queue, we would see “resources_assigned.condo” go up by one.

However, “resources_assigned.condo” never increased, so the “set queue queueA max_run_res.condo = [o:PBS_ALL=10]” setting was ineffective at limiting the number of “condo” nodes this queue could consume. This is the issue we are trying to solve.

And I’m guessing we experienced this behavior simply because the select statements were not explicitly requesting the condo resource. Is this correct?

Our goal in this exercise is to force the scheduler to consider a “condo” node as assigned to this queue once a job from queueA is running on it, so that “set queue queueA max_run_res.condo = [o:PBS_ALL=10]” can then truly work for us.

And we’d like to do this without making users request the condo resource in their select statements as that really complicates things for us.

This means limiting the number of condo custom resources that can be consumed in queueA to 10 (the limit might be tailored to a specific user, project, group, or generic users).

This can be handled by a queuejob hook, which checks whether the job is submitted to queueA and, if so, appends the condo request to the chunk-level or job-wide resource request; a sketch of such a hook follows.
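
For illustration, here is a minimal sketch of such a queuejob hook (a sketch only, not a drop-in implementation). It assumes condo is already defined as a consumable host-level resource and listed in the scheduler’s “resources:” line, it hard-codes the queue name as a placeholder, and it simply appends condo=1 to every chunk of the select:

    import pbs

    TARGET_QUEUE = "queueA"   # placeholder queue name

    e = pbs.event()
    job = e.job

    # Only touch jobs explicitly submitted to queueA; jobs that fall through
    # to the default queue have no queue object at this point.
    if job.queue is None or job.queue.name != TARGET_QUEUE:
        e.accept()

    sel = job.Resource_List["select"]
    if sel is None:
        # No explicit select: give the job a minimal one-chunk request.
        new_select = "1:ncpus=1:condo=1"
    else:
        # Append condo=1 to every chunk that does not already request it.
        chunks = str(sel).split("+")
        new_select = "+".join(
            c if "condo=" in c else c + ":condo=1" for c in chunks)

    job.Resource_List["select"] = pbs.select(new_select)
    e.accept()

Note that this counts chunks rather than distinct nodes: if two chunks of one job share a node, that node contributes two units of condo, which relates to the caveats further down.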

PBS Pro’s standard resources are ncpus, mem, host, and arch (by default).
Would you like to limit jobs from queueA to any 10 nodes or to a specific set of 10 nodes?

  • A specific set of 10 nodes is easy to handle; one can use Qlists.
  • Any 10 nodes, selected dynamically when the first job runs in queueA, with the remaining jobs constrained to those same 10 nodes, is a bit tricky (it depends on the placement requested by each job; if all jobs request exclusive placement, it is easy).

If you know the 10 specific nodes that provide the condo resource, then you can restrict queueA to those 10 nodes, for example by tagging them with a Qlist resource as sketched below.
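
A rough sketch of that tagging, using hypothetical node names (condo01 through condo10) and following the vnode-to-queue association procedure referenced below; it also assumes you add Qlist to the “resources:” line in sched_config:

    # Create a host-level string_array resource used to tag nodes.
    qmgr -c "create resource Qlist type=string_array, flag=h"

    # Tag each of the 10 condo nodes (repeat for condo02 ... condo10).
    qmgr -c "set node condo01 resources_available.Qlist += queueA"

    # Make every chunk of a queueA job request Qlist=queueA, which
    # restricts queueA jobs to the tagged nodes.
    qmgr -c "set queue queueA default_chunk.Qlist = queueA"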

The reason is that, with the configuration “set queue queueA max_run_res.condo = [o:PBS_ALL=10]”:

  • one unit of the condo resource does not equate to one compute node
  • a user might request “qsub -l select=1:ncpus=1:condo=10”, and that single-chunk job would neither consume nor block 10 nodes.

Solutions that might help:

  • Section 4.9.2.2.i, “Procedure to Associate Vnodes with Multiple Queues”, from this guide: High-performance Computing (HPC) and Cloud Solutions | Altair
  • It also depends on whether the compute nodes in the cluster have the same number of cores; with that known, a runjob hook can be devised to check whether the queue’s limit has been reached, reject the job if it has, and let it run otherwise (see the sketch after this list).
  • Alternatively, you can have an external script that finds the total number of nodes used by the queue. Plug this script into server_dyn_res, and use a queuejob hook to make each job request this server_dyn_res based on the select and placement statements of the job request.
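
As a starting point for the runjob-hook idea, here is a rough sketch that counts the distinct vnodes occupied by running queueA jobs and rejects a run that would push the count past 10. The queue name and limit are placeholders, and it assumes exec_vnode is readable at runjob time, that the job-state constant is available in your PBS version, and that iterating pbs.server().jobs() is acceptable at your job-start rate:

    import pbs

    TARGET_QUEUE = "queueA"    # placeholder queue name
    NODE_LIMIT = 10            # placeholder node limit

    def vnode_names(exec_vnode):
        # Extract vnode names from an exec_vnode string such as
        # (nodeA:ncpus=4)+(nodeB:ncpus=4).
        names = set()
        for chunk in str(exec_vnode).split("+"):
            chunk = chunk.strip("() ")
            if chunk:
                names.add(chunk.split(":")[0])
        return names

    e = pbs.event()
    job = e.job

    # Only police jobs being run from queueA.
    if job.queue is None or job.queue.name != TARGET_QUEUE:
        e.accept()

    # Vnodes the scheduler picked for this job.
    wanted = vnode_names(job.exec_vnode)

    # Vnodes already occupied by running jobs in queueA.
    used = set()
    for j in pbs.server().jobs():
        if (j.queue is not None and j.queue.name == TARGET_QUEUE
                and j.job_state == pbs.JOB_STATE_RUNNING
                and j.exec_vnode is not None):
            used |= vnode_names(j.exec_vnode)

    if len(used | wanted) > NODE_LIMIT:
        e.reject("queueA would exceed its %d-node limit" % NODE_LIMIT)

    e.accept()

The same node-counting logic could also back the external server_dyn_res script mentioned in the last bullet.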

Thank you

Adarsh,

Thanks for sharing your recommendations for our scenario!

We will consider them carefully and implement scheduling criteria that work for us.

Thanks,
Siji