Wrong (?) limit on builtin resource mpiprocs?

Hello *,

On a newly installed cluster running the latest OpenPBS (CentOS 7), we see:
resources_max.ncpus = 1728
resources_assigned.mpiprocs = 80
resources_assigned.ncpus = 80
resources_assigned.nodect = 80

I would have expected the builtin resource mpiprocs to be of the same order of
magnitude as resources_max.ncpus; the same goes for resources_assigned.ncpus.

Any hints on how the value of mpiprocs is set, or whether it is influenced by the value of ncpus (also not set explicitly in our configuration), are welcome.
I am also trying to understand this by searching the source code.

Thanks in advance.

My question was badly put. Indeed,
resources_assigned.mpiprocs = 80
resources_assigned.ncpus = 80

are reported because a running MPI job was using exactly those resources.
A red herring; this is the expected behaviour.
What we are actually trying to find out is why, on a cluster with 72 nodes, we can only request a maximum of 35 vnodes. Jobs requesting more than 35 vnodes fail / are not scheduled.
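Not an answer yet, but a few standard places to look when jobs above a certain vnode count are rejected. These are stock OpenPBS commands; `<jobid>` is a placeholder for one of the failing jobs:

```shell
# Server-wide limits that can cap a job's size
# (e.g. resources_max settings like the ncpus limit shown above)
qmgr -c "print server" | grep -i 'resources_max\|max_run'

# Vnodes that are offline or down -- unusable vnodes reduce how many
# can be satisfied with place=scatter
pbsnodes -l

# For a queued-but-unscheduled job, the comment field usually states
# why the scheduler cannot place it
qstat -f <jobid> | grep -i comment
```

If pbsnodes -l shows a large number of down/offline vnodes, that alone could explain a 35-vnode ceiling on a 72-node cluster.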

Could you please share the job script used for this job and the flavour of MPI you are using?
The job request for mpiprocs is explicitly made as below:

#PBS -l select=1:ncpus=36:mpiprocs=36
#PBS -l place=scatter
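For reference, the resources_assigned totals seen earlier can be reproduced from a job's select statement: each chunk's per-chunk values are multiplied by the chunk count and summed. A minimal sketch of that arithmetic (the parser below is illustrative, not part of PBS; the default of mpiprocs=1 per chunk with ncpus > 0 is an assumption based on PBS documentation):

```python
def totals_from_select(select):
    """Compute aggregate ncpus/mpiprocs/nodect from a PBS select
    specification such as "1:ncpus=36:mpiprocs=36" or
    "2:ncpus=36:mpiprocs=36+1:ncpus=4"."""
    totals = {"ncpus": 0, "mpiprocs": 0, "nodect": 0}
    for chunk in select.split("+"):
        parts = chunk.split(":")
        # Leading integer is the chunk count; it defaults to 1
        count = int(parts[0]) if parts[0].isdigit() else 1
        resources = dict(p.split("=", 1) for p in parts[1:] if "=" in p)
        ncpus = int(resources.get("ncpus", 1))
        # Assumed default: mpiprocs is 1 per chunk when ncpus > 0
        mpiprocs = int(resources.get("mpiprocs", 1 if ncpus > 0 else 0))
        totals["ncpus"] += count * ncpus
        totals["mpiprocs"] += count * mpiprocs
        totals["nodect"] += count
    return totals

print(totals_from_select("1:ncpus=36:mpiprocs=36"))
```

By this arithmetic, a job built from 80 chunks of ncpus=1:mpiprocs=1 would account for exactly the 80/80/80 figures in the original output (assuming that was the running job's request).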