Job cannot reach 100% CPU when submitted through OpenPBS

I successfully installed OpenHPC on CentOS 8.3 and found weird behaviour in OpenPBS (my compute node has 32 CPUs).

If I log in to the compute node directly and run a command, e.g. mpiexec.hydra -np 32 vasp_std (Intel MPI), the %CPU can reach ~100% (almost full speed) without any problem.

However, if I submit the job through OpenPBS via a PBS script, the %CPU drops to ~50% and the job runs at half speed, even though I set it to use all CPUs (#PBS -l nodes=1:ppn=32).

These results are from the same job on the same compute node. It seems that OpenPBS caps the total %CPU usage, and I don't know how to solve this problem.

Could you please help me resolve this weird behaviour?

This is the result when I run directly on the compute node: I can get ~100% CPU usage.

Here are my settings.

Could you please share:

  1. The script that you ran without using OpenPBS
  2. The script that you ran with OpenPBS
  3. The MPI flavour you are using (Intel, Open MPI, Platform, etc.)
    • Did you compile the MPI library against the OpenPBS TM library?
      See, for example, the Open MPI FAQ entry "How do I build Open MPI with support for PBS Pro / Open PBS / Torque?"
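For reference, building Open MPI with TM support is done at configure time. A minimal sketch, assuming the PBS installation lives under /opt/pbs and Open MPI is installed to /opt/openmpi (both paths are placeholders, adjust for your site):

```shell
# Sketch: building Open MPI against the OpenPBS TM (task manager) library.
# --with-tm points at the PBS installation prefix; /opt/pbs is an assumption.
./configure --with-tm=/opt/pbs --prefix=/opt/openmpi
make -j "$(nproc)"
make install

# Afterwards, verify that the tm components were compiled in:
/opt/openmpi/bin/ompi_info | grep -i tm
```

With TM support compiled in, mpirun launched inside a PBS job picks up the node and slot allocation from PBS automatically.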

#PBS -l select=1:ncpus=32:mpiprocs=32

  • Please run with the select statement above instead of -l nodes=1:ppn=32

Yes, it is related to our Intel MPI. With Intel MPI (2015) I can get the full 100% CPU usage, but with Intel MPI (2017) the CPU usage is reduced by ~50%. I don't know how to fix our Intel compiler (2017) because the installation is almost automatic, and I don't know how to add TM support to my Intel compiler.
I am using OpenPBS installed on CentOS 8.3.

Yes! I can now solve my problem by adding this to my PBS script:
export I_MPI_PIN=off
export I_MPI_PIN_DOMAIN=socket
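For anyone hitting the same issue, a sketch of the full working job script, combining the select statement suggested above with the Intel MPI pinning workaround (job name and walltime are placeholders):

```shell
#!/bin/bash
# Sketch of the complete working job script from this thread.
#PBS -N vasp_job
#PBS -l select=1:ncpus=32:mpiprocs=32

# Workaround for Intel MPI 2017 under OpenPBS: disable Intel MPI's own
# process pinning so ranks are not confined to half the cores.
export I_MPI_PIN=off
export I_MPI_PIN_DOMAIN=socket

cd "$PBS_O_WORKDIR"
mpiexec.hydra -np 32 vasp_std
```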



If you are using Intel MPI, then you do not need to compile from source.

See Section 5.2.7, "Intel MPI 2.0.022, 3, and 4 with PBS".

Thank you

Thanks a lot
Have a nice day