Scheduler's log messages "intel turned negative"

Hi,

We have upgraded to PBS version 20.0.0 on our CentOS 7 machines, and it looks much better than version 19.
However, in the past few days (a couple of weeks after the installation), the scheduler's log has been filling up with the messages below, and even simple jobs are running with a large delay (about 40 minutes).
I did not find any reference on the web, besides "PBS-16139 Queue resources_assigned.ncpus becomes negative and remains even after unset the attribute." appearing in https://2022.help.altair.com/2022.1.2/PBS%20Professional/PBS_RN_2022.1.2.pdf
So my question is whether the log messages we are seeing do indeed cause the delay in jobs, and if so, whether we have to install the latest PBS version. (We would prefer not to for now, since we do not yet want to upgrade the CentOS 7 machines.)

01/18/2023 14:11:36;0080;pbs_sched;Job;810187.power9.tau.ac.il;Considering job to run
01/18/2023 14:11:36;0080;pbs_sched;Svr;update_server_on_end;intel turned negative -1.00, setting it to 0
01/18/2023 14:11:36;0080;pbs_sched;Node;update_queue_on_end;intel turned negative -1.00, setting it to 0
01/18/2023 14:11:36;0080;pbs_sched;Svr;update_server_on_end;intel turned negative -1.00, setting it to 0
01/18/2023 14:11:36;0080;pbs_sched;Node;update_queue_on_end;intel turned negative -1.00, setting it to 0
01/18/2023 14:11:36;0080;pbs_sched;Svr;update_server_on_end;intel turned negative -1.00, setting it to 0
01/18/2023 14:11:36;0080;pbs_sched;Node;update_queue_on_end;intel turned negative -1.00, setting it to 0
01/18/2023 14:11:36;0080;pbs_sched;Node;update_queue_on_end;preempt_targets turned negative -inf, setting it to 0
01/18/2023 14:11:36;0080;pbs_sched;Node;update_queue_on_end;preempt_targets turned negative -inf, setting it to 0
01/18/2023 14:11:36;0080;pbs_sched;Node;update_queue_on_end;preempt_targets turned negative -inf, setting it to 0
01/18/2023 14:11:36;0080;pbs_sched;Job;810187.power9.tau.ac.il;Job is a top job and will run at Sun Jan 16 20:23:37 2028
01/18/2023 14:11:36;0080;pbs_sched;Job;1096188.power9.tau.ac.il;Considering job to run
01/18/2023 14:11:37;0080;pbs_sched;Svr;update_server_on_end;intel turned negative -1.00, setting it to 0
01/18/2023 14:11:37;0080;pbs_sched;Node;update_queue_on_end;intel turned negative -1.00, setting it to 0
01/18/2023 14:11:37;0080;pbs_sched;Svr;update_server_on_end;intel turned negative -1.00, setting it to 0
01/18/2023 14:11:37;0080;pbs_sched;Node;update_queue_on_end;intel turned negative -1.00, setting it to 0
01/18/2023 14:11:37;0080;pbs_sched;Svr;update_server_on_end;intel turned negative -1.00, setting it to 0
01/18/2023 14:11:37;0080;pbs_sched;Node;update_queue_on_end;intel turned negative -1.00, setting it to 0
01/18/2023 14:11:39;0080;pbs_sched;Node;update_queue_on_end;preempt_targets turned negative -inf, setting it to 0
01/18/2023 14:11:39;0080;pbs_sched;Node;update_queue_on_end;preempt_targets turned negative -inf, setting it to 0
01/18/2023 14:11:39;0080;pbs_sched;Node;update_queue_on_end;preempt_targets turned negative -inf, setting it to 0
01/18/2023 14:11:39;0080;pbs_sched;Node;update_queue_on_end;preempt_targets turned negative -inf, setting it to 0
01/18/2023 14:11:39;0080;pbs_sched;Job;1096188.power9.tau.ac.il;Job is a top job and will run at Mon Jan 17 11:41:08 2028
01/18/2023 14:11:39;0080;pbs_sched;Job;1096180.power9.tau.ac.il;Considering job to run
01/18/2023 14:11:40;0080;pbs_sched;Svr;update_server_on_end;intel turned negative -1.00, setting it to 0
01/18/2023 14:11:40;0080;pbs_sched;Node;update_queue_on_end;intel turned negative -1.00, setting it to 0
01/18/2023 14:11:40;0080;pbs_sched;Svr;update_server_on_end;intel turned negative -1.00, setting it to 0
01/18/2023 14:11:40;0080;pbs_sched;Node;update_queue_on_end;intel turned negative -1.00, setting it to 0
01/18/2023 14:11:40;0080;pbs_sched;Svr;update_server_on_end;intel turned negative -1.00, setting it to 0
01/18/2023 14:11:40;0080;pbs_sched;Node;update_queue_on_end;intel turned negative -1.00, setting it to 0
01/18/2023 14:11:41;0080;pbs_sched;Node;update_queue_on_end;preempt_targets turned negative -inf, setting it to 0
01/18/2023 14:11:41;0080;pbs_sched;Node;update_queue_on_end;preempt_targets turned negative -inf, setting it to 0
01/18/2023 14:11:41;0080;pbs_sched;Node;update_queue_on_end;preempt_targets turned negative -inf, setting it to 0
01/18/2023 14:11:44;0080;pbs_sched;Node;update_queue_on_end;preempt_targets turned negative -inf, setting it to 0
01/18/2023 14:11:44;0080;pbs_sched;Job;1096180.power9.tau.ac.il;Job is a top job and will run at Mon Jan 17 11:41:08 2028
01/18/2023 14:11:44;0080;pbs_sched;Job;1096181.power9.tau.ac.il;Considering job to run
01/18/2023 14:11:45;0080;pbs_sched;Svr;update_server_on_end;intel turned negative -1.00, setting it to 0
01/18/2023 14:11:45;0080;pbs_sched;Node;update_queue_on_end;intel turned negative -1.00, setting it to 0
01/18/2023 14:11:45;0080;pbs_sched;Svr;update_server_on_end;intel turned negative -1.00, setting it to 0
01/18/2023 14:11:45;0080;pbs_sched;Node;update_queue_on_end;intel turned negative -1.00, setting it to 0
01/18/2023 14:11:45;0080;pbs_sched;Svr;update_server_on_end;intel turned negative -1.00, setting it to 0
01/18/2023 14:11:45;0080;pbs_sched;Node;update_queue_on_end;intel turned negative -1.00, setting it to 0
01/18/2023 14:11:46;0080;pbs_sched;Node;update_queue_on_end;preempt_targets turned negative -inf, setting it to 0
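In case it is useful to anyone hitting the same messages, here is a minimal sketch for gauging how noisy these warnings actually are. It assumes only the semicolon-separated pbs_sched log format shown above (date time;event;daemon;object;context;message); the log path in the comment is an assumption about a default install, not something from this post.

```python
#!/usr/bin/env python3
"""Count "turned negative" warnings in a pbs_sched log file.

Assumes the semicolon-separated log format shown above:
    date time;event;daemon;object;context;message
"""
import re
import sys
from collections import Counter

# Matches e.g. "intel turned negative -1.00, setting it to 0"
NEG_RE = re.compile(r"(\S+) turned negative")

def count_negative_warnings(path):
    per_resource = Counter()
    per_minute = Counter()
    with open(path) as log:
        for line in log:
            fields = line.rstrip("\n").split(";")
            if len(fields) < 2:
                continue
            message = fields[-1]
            match = NEG_RE.search(message)
            if not match:
                continue
            per_resource[match.group(1)] += 1
            # Timestamp looks like "01/18/2023 14:11:36"; bucket by minute.
            per_minute[fields[0][:16]] += 1
    return per_resource, per_minute

if __name__ == "__main__":
    # e.g. /var/spool/pbs/sched_logs/20230118 (path is an assumption)
    per_resource, per_minute = count_negative_warnings(sys.argv[1])
    print("warnings per resource:")
    for resource, count in per_resource.most_common():
        print(f"  {resource}: {count}")
    print("busiest minutes:")
    for minute, count in per_minute.most_common(5):
        print(f"  {minute}: {count}")
```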

For what it's worth, just an update: I did not find a reason for these messages, except for observing that 'intel' is one of our defined resources.
However, I did find the cause of the scheduler's slow response, after increasing the scheduler's log level:
The scheduler was looping over a job that could not run, because the job requested resources that the nodes in its queue could not satisfy.
Another scenario that caused this was looping over a job submitted to a queue whose nodes were all offline.
The log messages that appeared in the scheduler's log file included the text: Failed to satisfy subchunk:
I guess this bug was fixed in later versions, but being on CentOS 7, we could not upgrade PBS beyond 20.0.0.
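To spot which job the scheduler was stalling on, a rough log-analysis sketch like the one below can help. It is only an approximation under the same assumption about the log format: it attributes the gap between consecutive "Considering job to run" lines to the job considered first, and it also counts "Failed to satisfy subchunk" lines.

```python
#!/usr/bin/env python3
"""Rough sketch: find jobs the scheduler spends a long time on.

Assumes the semicolon-separated pbs_sched log format shown above and
attributes the gap between consecutive "Considering job to run" lines
to the job that was considered first.
"""
import sys
from datetime import datetime

FMT = "%m/%d/%Y %H:%M:%S"

def slowest_jobs(path, top=10):
    gaps = []          # (seconds_spent, job_id)
    prev_time = None
    prev_job = None
    subchunk_failures = 0
    with open(path) as log:
        for line in log:
            fields = line.rstrip("\n").split(";")
            if len(fields) < 6:
                continue
            timestamp, message = fields[0], fields[-1]
            if "Failed to satisfy subchunk" in message:
                subchunk_failures += 1
            if message != "Considering job to run":
                continue
            now = datetime.strptime(timestamp, FMT)
            if prev_time is not None:
                gaps.append(((now - prev_time).total_seconds(), prev_job))
            prev_time, prev_job = now, fields[4]   # fields[4] is the job id
    gaps.sort(reverse=True)
    return gaps[:top], subchunk_failures

if __name__ == "__main__":
    slow, failures = slowest_jobs(sys.argv[1])
    print(f"'Failed to satisfy subchunk' lines: {failures}")
    for seconds, job in slow:
        print(f"{job}: ~{seconds:.0f}s before the next job was considered")
```

For the offline-node scenario, something like pbsnodes -l, which lists nodes that are down or offline, can quickly confirm that none of a queue's nodes are available.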

I do not know if it is related, but the scheduler's config file (sched_config) has:
help_starving_jobs: true