Queue and associated nodes

Hello,

I have been using SGE until now, but am now trying to use PBS Pro with an OpenHPC cluster. Out of the box I am getting a default queue.

# qmgr
Max open servers: 49
Qmgr: print server
#
# Create queues and set their attributes.
#
#
# Create and define queue workq
#
create queue workq
set queue workq queue_type = Execution
set queue workq enabled = True
set queue workq started = True
#
# Set server attributes.
#
set server scheduling = True
set server default_queue = workq
set server log_events = 511
set server mail_from = adm
set server query_other_jobs = True
set server resources_default.ncpus = 1
set server resources_default.place = scatter
set server default_chunk.ncpus = 1
set server scheduler_iteration = 600
set server resv_enable = True
set server node_fail_requeue = 310
set server max_array_size = 10000
set server default_qsub_arguments = -V
set server pbs_license_min = 0
set server pbs_license_max = 2147483647
set server pbs_license_linger_time = 31536000
set server license_count = Avail_Global:1000000 Avail_Local:1000000 Used:0 High_Use:0 Avail_Sockets:1000000 Unused_Sockets:1000000
set server eligible_time_enable = False
set server job_history_enable = True
set server max_concurrent_provision = 5

I have a few questions:

a) The compute nodes are numbered from c01 to c20 and all are part of workq. How can I allocate only c01 to c10 to workq, and c11 to c20 to a new queue called cfdq?

b) Are there any PBS Pro setup guides for ABAQUS and ANSYS Fluent?

c) Is there any quick howto/cheat sheet for PBS Pro administration?

Thanks

Hello, this is something I have been using recently. Very simple and crude, but it works:

set node compute-1-0-3 queue = training

Users then submit with -W group_list like this:

qsub -I -l select=1 -l walltime=00:01:00 -q training -W group_list=itea_lille-kurs

# Create and define queue training

create queue training
set queue training queue_type = Execution
set queue training resources_max.walltime = 10:00:00
set queue training acl_group_enable = True
set queue training acl_groups = imf_lille-tma4280
set queue training acl_groups += itea_lille-kurs
set queue training enabled = True
set queue training started = True
Qmgr: print node compute-1-0-3

# Create nodes and set their properties.

# Create and define node compute-1-0-3

create node compute-1-0-3 Mom=compute-1-0-3
set node compute-1-0-3 state = free
set node compute-1-0-3 resources_available.arch = linux
set node compute-1-0-3 resources_available.host = compute-1-0-3
set node compute-1-0-3 resources_available.mem = 131746108kb
set node compute-1-0-3 resources_available.ncpus = 20
set node compute-1-0-3 resources_available.vnode = compute-1-0-3
set node compute-1-0-3 queue = training
set node compute-1-0-3 resv_enable = True
set node compute-1-0-3 sharing = default_shared

Soln: The setup below ensures that any job submitted to workq is scheduled on nodes c01-c10, and any job submitted to cfdq is scheduled on nodes c11-c20.

  • qmgr -c "create resource queue_name type=string_array,flag=h"

  • source /etc/pbs.conf ; edit $PBS_HOME/sched_priv/sched_config

    • add queue_name to the resources: line: "…,queue_name"
  • kill -HUP (PID of the PBS scheduler)

  • qmgr -c "c q workq queue_type=e,enabled=t,started=t"

  • qmgr -c "s q workq default_chunk.queue_name=workq"

  • qmgr -c "c q cfdq queue_type=e,enabled=t,started=t"

  • qmgr -c "s q cfdq default_chunk.queue_name=cfdq"

  • for i in {01..10};do qmgr -c "set node c$i resources_available.queue_name+=workq";done

  • for i in {11..20};do qmgr -c "set node c$i resources_available.queue_name+=cfdq";done
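
A quick way to verify the mapping (just a sanity check, using the queue and node names above) is to submit a short test job to each queue and see where it lands:

qsub -q workq -l select=1:ncpus=1 -- /bin/hostname
qsub -q cfdq -l select=1:ncpus=1 -- /bin/hostname

The output file of the workq job should show a hostname in the c01-c10 range, and the cfdq job one in c11-c20.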

Soln: The GUI environment should be integrated with PBS; the configuration files within these applications have to be updated to reflect the path to the PBS binaries. If you have the batch command line and the parameters that go with these applications, they can be executed as batch jobs with PBS directives.
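
As a rough illustration (the solver command line, input file name, queue and core counts below are site-specific assumptions, not something PBS defines), an ANSYS Fluent batch run can be wrapped in an ordinary PBS job script:

#!/bin/bash
#PBS -N fluent_case
#PBS -q cfdq
#PBS -l select=2:ncpus=20
#PBS -l walltime=04:00:00

cd $PBS_O_WORKDIR
# number of cores PBS assigned to this job
NPROCS=$(wc -l < $PBS_NODEFILE)
# journal.jou and the fluent launcher/module are placeholders for your site
fluent 3ddp -g -t$NPROCS -cnf=$PBS_NODEFILE -i journal.jou > fluent.log 2>&1

ABAQUS follows the same pattern with its own command line, e.g. something like abaqus job=mycase cpus=$NPROCS interactive, again with site-specific licensing and module details.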

Soln: Please refer to this documentation: High-performance Computing (HPC) and Cloud Solutions | Altair
Also, please let us know if you are looking for something specific.

Thanks @adarsh and @einjen for your responses. I was wondering how to manage the nodes with GPUs.

I have two nodes, c09 and c10, with GPUs.

a) How can I create a resource for GPUs?
b) How can I make a rule that when GPUs are requested, the job goes to the c09 and c10 compute nodes?

Thanks

Soln 1: Continuation of the above solution:
qmgr -c "c q gpuq queue_type=e,enabled=t,started=t"
qmgr -c "s q gpuq default_chunk.queue_name=gpuq"
for i in {09…10};do qmgr -c "set node c$i resources_available.queue_name+=gpuq";done
Submit jobs to gpuq and they will go to c09 and c10.
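
For example, a test submission against the new queue (same style as the earlier examples) would be:

qsub -q gpuq -l select=1:ncpus=1 -- /bin/sleep 100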

Soln2:

  1. Create the resource:
    qmgr -c "c r enablegpu type=boolean,flag=h"

  2. source /etc/pbs.conf ; edit $PBS_HOME/sched_priv/sched_config
    add enablegpu to the resources: line: "…,enablegpu"
    kill -HUP (PID of the PBS scheduler)

  3. For all the GPU nodes, set the below:
    qmgr -c 's n c09 resources_available.enablegpu=true'
    qmgr -c 's n c10 resources_available.enablegpu=true'

  4. For the rest of the nodes, which do not have GPUs:
    qmgr -c 's n NODENAME resources_available.enablegpu=false'

  5. submit a job to go on the gpu node as below
    qsug -l select=1:ncpus=1:gpuenable=true – /bin/sleep 100
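
To double-check which nodes advertise the flag, you can simply grep the pbsnodes output, e.g.:

pbsnodes c09 | grep enablegpu
pbsnodes c10 | grep enablegpu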

Thanks, I went with Soln 2. The nodes c09 and c10 have 1 GPU each. How can I make the GPU a consumable resource, so that if a job is already using the GPU on c09 or c10, a new job doesn't start there but waits in the queue?

Also, I noticed a couple of typos in your instructions:
qsub -l select=1:ncpus=1:enablegpu=true -- /bin/sleep 100
and only double periods are needed in the brace expansion: for i in {09..10};

Soln:

  1. qmgr -c "create resource ngpus type=long,flag=nh"
  2. Add "ngpus" to the resources: line of the sched_config file.
  3. kill -HUP (PID of the scheduler)
  4. For all the nodes that have GPUs:
    qmgr -c "set node GPUNODE resources_available.ngpus=1"
  5. For all the nodes that have no GPUs:
    qmgr -c "set node COMPUTENODE resources_available.ngpus=0"
  6. Submit a job as below:
    qsub -l select=1:ncpus=1:ngpus=1:enablegpu=true -- /bin/sleep 100

When it runs on the GPU node, pbsnodes should then show resources_available.ngpus and resources_assigned.ngpus.
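
For example, while a GPU job is running on c09 you can watch the counters there (resources_assigned.ngpus should go to 1, and back to 0 when the job ends):

pbsnodes c09 | grep ngpus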

Cheers

The Admin Guide (AG), section 5.13.8, "Using GPUs", describes basic and advanced GPU scheduling.

I have followed the excellent steps described earlier by Adarsh to make two queues. However, whenever I submit a multi-node job to one queue, it gets assigned nodes from the other queue as well.

Here are the steps I followed to make the two queues:

qmgr -c "create resource z_machines type=string_array,flag=h"
source /etc/pbs.conf
qmgr -c "create resource hpblades type=string_array,flag=h"
nano $PBS_HOME/sched_priv/sched_config
ps -ef | grep -i pbs
kill -HUP 30195
qmgr -c "c q z_machinesq queue_type=e,enabled=t,started=t"
qmgr -c "c q hpbladesq queue_type=e,enabled=t,started=t"
qmgr -c "set node z16 resources_available.z_machines+=z_machinesq"
qmgr -c "set node z53 resources_available.z_machines+=z_machinesq"
qmgr -c "set node z54 resources_available.z_machines+=z_machinesq"
qmgr -c "set node b33 resources_available.hpblades+=hpbladesq"
qmgr -c "set node b34 resources_available.hpblades+=hpbladesq"
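
One difference I can see from the earlier recipe is that I have not set default_chunk on the two queues; if that is the missing piece, the analogous commands for my resource and queue names would presumably be:

qmgr -c "set queue z_machinesq default_chunk.z_machines=z_machinesq"
qmgr -c "set queue hpbladesq default_chunk.hpblades=hpbladesq"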

Here is an example pbsnodes for one hpblades and one z_machines machine:

b33
Mom = lustwzb33
Port = 15002
pbs_version = 18.1.2
ntype = PBS
state = free
pcpus = 28
resources_available.arch = linux
resources_available.host = lustwzb33
resources_available.hpblades = hpbladesq
resources_available.mem = 131732936kb
resources_available.ncpus = 28
resources_available.vnode = b33
resources_assigned.accelerator_memory = 0kb
resources_assigned.hbmem = 0kb
resources_assigned.mem = 0kb
resources_assigned.naccelerators = 0
resources_assigned.ncpus = 0
resources_assigned.vmem = 0kb
resv_enable = True
sharing = default_shared
last_state_change_time = Mon Feb 24 12:48:07 2020
last_used_time = Thu Mar 19 15:29:54 2020

z54
Mom = lustwz54
Port = 15002
pbs_version = 18.1.2
ntype = PBS
state = free
pcpus = 40
resources_available.arch = linux
resources_available.host = lustwz54
resources_available.mem = 131842484kb
resources_available.ncpus = 40
resources_available.vnode = z54
resources_available.z_machines = z_machinesq
resources_assigned.accelerator_memory = 0kb
resources_assigned.hbmem = 0kb
resources_assigned.mem = 0kb
resources_assigned.naccelerators = 0
resources_assigned.ncpus = 0
resources_assigned.vmem = 0kb
resv_enable = True
sharing = default_shared
last_state_change_time = Mon Feb 24 12:48:07 2020
last_used_time = Thu Mar 19 15:29:54 2020

And my submission script:

#!/bin/bash
#PBS -l select=3:ncpus=20
#PBS -l walltime=00:10:00
#PBS -q z_machinesq
#PBS -W sandbox=private

hostname

cat $PBS_NODEFILE

And the output after submitting:

[me@b34 pbs]$ more test.sh.o21
lustwz16
lustwz16
lustwzb33
lustwz54