Policy suggestion

Hi folks,

We have a small cluster running OpenPBS 22.0. We agreed on the following policy: the more running jobs a user has at the moment, the lower the priority of that user's queued jobs. For example:
user A has 4 running jobs, 10 queued jobs
user B has 3 running jobs, 12 queued jobs
user C has 2 running jobs, 11 queued jobs
When some resources become available, say user D finishes a job, then according to the policy the next job to run should belong to user C.
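To make the intended rule concrete, here is a minimal sketch of it in plain Python (the function and variable names are illustrative only, not part of PBS):

```python
def next_user(running, queued):
    """Pick the owner of the next job to run: among users who still
    have queued jobs, choose the one with the fewest running jobs."""
    candidates = [u for u, n in queued.items() if n > 0]
    return min(candidates, key=lambda u: running.get(u, 0))

running = {"A": 4, "B": 3, "C": 2}
queued = {"A": 10, "B": 12, "C": 11}
print(next_user(running, queued))  # C
```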

I don't see anything per-user in the job_sort_formula. My question is: can the cluster be configured to behave like the scenario I mentioned above?


My question is, can we have the cluster configured to act like the scenario I mentioned above?

Do you mean that the user who has used the cluster less gets higher priority?

Please check

  • fairshare policy
  • otherwise, consider using fairshare_factor in the job_sort_formula
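For reference, fairshare is configured in the scheduler's sched_config; the entries below are from stock OpenPBS, but please verify the names and defaults against your version's documentation:

```
# in PBS_HOME/sched_priv/sched_config
fair_share: true ALL
fairshare_usage_res: cput
fairshare_decay_time: 24:00:00

# or, to use fairshare in the formula instead:
# qmgr -c 'set server job_sort_formula = "fairshare_factor"'
```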

Strictly speaking, fairshare would not quite fit, as it is based on historical usage; what we want is exactly a snapshot of the currently running jobs per user.
However, it might be a last resort if we can't figure out another way. Also, I can't really tell what would happen if we set the fairshare decay to a small value, possibly even smaller than the job length.

You can implement a runjob hook that calls an external script to find out how many jobs are running for each user (or reads a file, created by a cron job, that lists the current snapshot of running jobs per user) and determines which user's job should run next. If the scheduler chooses a job from a different user (one with more running jobs), the hook requeues that job and only allows the job of the user with the fewest running jobs.
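A sketch of the decision logic such a hook could use. The counts-file path and format are made up for illustration, and the actual PBS wiring (pbs.event(), accept/reject) is left in comments so the core is testable outside PBS:

```python
def should_run(owner, running_counts):
    """Accept the job only if its owner has no more running jobs
    than any other user in the snapshot."""
    if not running_counts:
        return True
    return running_counts.get(owner, 0) <= min(running_counts.values())

# Inside a real runjob hook, the wiring would look roughly like:
#   import pbs, json
#   e = pbs.event()
#   owner = str(e.job.euser)
#   counts = json.load(open("/var/spool/pbs/running_counts.json"))  # path is hypothetical
#   if should_run(owner, counts):
#       e.accept()
#   else:
#       e.reject("another user has fewer running jobs")

print(should_run("C", {"A": 4, "B": 3, "C": 2}))  # True
print(should_run("A", {"A": 4, "B": 3, "C": 2}))  # False
```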

Queue limits / server limits might also be useful if you would like a fixed cap on the number of running jobs per user.
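For example, run limits can cap running jobs per user at the server or queue level (PBS_GENERIC applies to any user; the queue name below is just the stock default, and the syntax should be checked against your version's admin guide):

```
qmgr -c 'set server max_run = "[u:PBS_GENERIC=4]"'
qmgr -c 'set queue workq max_run = "[u:alice=2]"'
```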

Fair enough, that sounds like writing a scheduler myself. So the short answer would be "not easy to configure with the default scheduler", right?

Thank you

The source code is available, so one could add such a policy and release it to the community. Also, the hook infrastructure is provided to customise the order in which jobs are run (that is, the job life cycle) when the default features do not enable the required policy, since including everything in the core scheduler would take time.

Untested suggestion:
If you don’t want to go so far as writing your own scheduler, you could come close by defining a resource “jobs_running” with the “r” permission flag that you add to jobs. A separate cron job would ask the server for all running jobs and build a table of running job counts for each user. It then asks the server for all queued jobs and sets the jobs_running resource for each queued job to the appropriate value from the table.
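Equally untested, but creating such a resource would look roughly like this; the "r" flag is meant to make the resource read-only to unprivileged users, so do check the flag semantics for your version:

```
qmgr -c "create resource jobs_running type=long, flag=r"
```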

Finally, the job_sort_formula would include the negative of the jobs_running resource (negative to get them to sort in the right order). Jobs where the value hasn’t been set yet should get a low priority so they aren’t picked until the cron job looks at them.
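The counting part of that cron job could look something like this. The JSON shape is what OpenPBS's `qstat -f -F json` emits on the versions I've seen, but treat it as an assumption; the qalter/formula wiring is left in comments:

```python
import collections

def running_counts(qstat_json):
    """Count running jobs per owner from parsed `qstat -f -F json` output."""
    counts = collections.Counter()
    for job in qstat_json.get("Jobs", {}).values():
        if job.get("job_state") == "R":
            owner = job["Job_Owner"].split("@")[0]
            counts[owner] += 1
    return counts

# The cron job would then do roughly (untested, needs json and subprocess):
#   out = subprocess.run(["qstat", "-f", "-F", "json"],
#                        capture_output=True, text=True).stdout
#   data = json.loads(out)
#   counts = running_counts(data)
#   for jid, job in data["Jobs"].items():
#       if job["job_state"] == "Q":
#           owner = job["Job_Owner"].split("@")[0]
#           subprocess.run(["qalter", "-l", f"jobs_running={counts[owner]}", jid])
# and the formula would be set once with:
#   qmgr -c 'set server job_sort_formula = "-jobs_running"'

sample = {"Jobs": {"1.svr": {"job_state": "R", "Job_Owner": "alice@h"},
                   "2.svr": {"job_state": "R", "Job_Owner": "alice@h"},
                   "3.svr": {"job_state": "Q", "Job_Owner": "bob@h"}}}
print(dict(running_counts(sample)))  # {'alice': 2}
```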

A disadvantage of this is that the cron job is not synchronized with the scheduler, so the scheduler might sometimes use slightly out-of-date information and start the wrong job. There are hacks you could use to get around this, but I wouldn’t go down that route unless the number of mis-schedules becomes a problem.