Problems with resources_max.nodes

I can't seem to enforce a maximum limit on the number of nodes allowed for a single job.

If I am not mistaken, this could be done at either the queue or the server level, by configuring (in qmgr):

sudo qmgr -c "set queue prod resources_max.nodes = 3"
sudo qmgr -c "set server resources_max.nodes = 3"

However, in either case (even with both set), the system happily starts jobs with 4 or more nodes, e.g.:
qsub -I -l nodes=4

I have tried to strip down the config to a bare minimum, but the problem persists.
Do any of you know if this feature is working?

From the logs, I can gather that the job requests and gets ncpus=1 on each of 4 nodes:

LOG: Job Run at request of on exec_vnode (dn008:ncpus=1)+(dn101:ncpus=1)+(dn102:ncpus=1)+(dn103:ncpus=1)

And it is "counted" as just resources_used.ncpus=4 at the end:

LOG: Exit_status=0 resources_used.cpupercent=0 resources_used.cput=00:00:00 resources_used.mem=0kb resources_used.ncpus=4 resources_used.vmem=0kb resources_used.walltime=00:00:02

Each vnode has ncpus=20, but even explicitly requesting "4 vnodes with 20 cpus each" is allowed to run:

qsub -I -l select=4:ncpus=20

I guess I am doing something wrong, although I cannot figure out what. It certainly feels like resources_max.nodes is broken. If so, is there an alternative?



Hi Bjarne,
Can you try setting the limit using resources_max.nodect=3?
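A minimal sketch of what that would look like in qmgr, reusing the queue name (prod) from the original post:

```shell
# Limit jobs in queue "prod" to at most 3 nodes (nodect)
sudo qmgr -c "set queue prod resources_max.nodect = 3"

# Or, equivalently, as a server-wide limit
sudo qmgr -c "set server resources_max.nodect = 3"
```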


Hey Bjarne,
The reason setting resources_max.nodes did not work is that the nodes resource is a string. Setting resources_max on a string will only do an exact string match. Dilip is correct that setting resources_max.nodect should work for you.

As a warning, nodect is really a chunk count now. This won't make a difference if you use the old 'nodes' syntax (or -lplace=scatter). If you start using the select syntax, resources_max.nodect may not work as desired. A job requesting -lselect=4:ncpus=1 will have a nodect=4. This job will be rejected on submission due to the resources_max, even though the scheduler would have happily run all 4 chunks on one vnode.
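To illustrate the caveat, here is a hedged sketch of how the two syntaxes would behave, assuming resources_max.nodect=3 is in place:

```shell
# Accepted: old syntax, 3 nodes -> nodect=3, within the limit
qsub -I -l nodes=3

# Rejected at submission: 4 chunks -> nodect=4 exceeds the limit,
# even though all 4 single-cpu chunks could have been packed onto one vnode
qsub -I -l select=4:ncpus=1
```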



Thanks, guys. This helps a lot.

That does the trick. Jobs with -l nodes=3 run fine, while nodes=4 is refused with

qsub: Job violates queue and/or server resource limits
(qsub exits with code 188).

I can confirm this. A job with, say, -l select=3:ncpus=1 is accepted and scheduled to run on a single node (20 cores on the node), while -l select=4:ncpus=1 is refused before even making it to the scheduler.

Thumbs up to both of you! Kudos!

