Pbs_mom dumps core when jobs are preempted / canceled

We are currently running Bright Computing's spin of PBSPro 18.1.2 under RHEL 7.5, and we are seeing pbs_mom segfault. These segfaults line up with job preemption or with users canceling a job while it is starting. Any thoughts on debugging this further would be greatly welcome.

In /var/log/messages, we see:

May  1 14:36:14 compute040 kernel: pbs_mom[16824]: segfault at 8 ip 0000000000466228 sp 00007fffffffa010 error 4
May  1 14:36:14 compute040 kernel: in pbs_mom[400000+116000]

The backtrace from the core files is about the same each time:

#0  0x0000000000466228 in is_direct_write ()
#1  0x00000000004731ac in std_file_name ()
#2  0x00000000004735bf in open_std_file ()
#3  0x0000000000426e18 in req_mvjobfile ()
#4  0x000000000042427e in dispatch_request ()
#5  0x0000000000459caf in process_IS_CMD ()
#6  0x000000000045b06c in is_request ()
#7  0x00000000004522cb in do_rpp ()
#8  0x0000000000452327 in rpp_request ()
#9  0x000000000047f96c in wait_request ()
#10 0x000000000045275e in finish_loop ()
#11 0x0000000000458625 in main ()

The mom logs aren't overly useful; the timestamp on the core file was 14:36:

05/01/2019 14:33:33;0002;pbs_mom;n/a;setup_env;read environment from /cm/local/apps/pbspro-ce/var/spool/pbs_environment
05/01/2019 14:33:33;0100;pbs_mom;Svr;parse_config;file config
05/01/2019 14:33:33;0002;pbs_mom;n/a;set_restrict_user_maxsys;setting 499
05/01/2019 14:33:33;0100;pbs_mom;Svr;parse_config;file /cm/local/apps/pbspro-ce/var/spool/mom_priv/config.d/excl
05/01/2019 14:33:33;0002;pbs_mom;n/a;read_config;max_check_poll = 120, min_check_poll = 10
05/01/2019 14:33:33;0002;pbs_mom;n/a;ncpus;hyperthreading disabled
05/01/2019 14:33:33;0002;pbs_mom;n/a;initialize;pcpus=40, OS reports 40 cpu(s)
05/01/2019 14:36:14;0100;pbs_mom;Req;;Type 1 request received from root@192.168.132.1:15001, sock=1
05/01/2019 14:36:14;0100;pbs_mom;Req;;Type 3 request received from root@192.168.132.1:15001, sock=1
05/01/2019 14:36:14;0100;pbs_mom;Req;;Type 57 request received from root@192.168.132.1:15001, sock=1
05/01/2019 14:38:15;0002;pbs_mom;Svr;Log;Log opened
05/01/2019 14:38:15;0002;pbs_mom;Svr;pbs_mom;pbs_version=18.1.2
05/01/2019 14:38:15;0002;pbs_mom;Svr;pbs_mom;pbs_build=mach=N/A:security=N/A:configure_args=N/A
05/01/2019 14:38:15;0002;pbs_mom;Svr;pbs_mom;hostname=compute040.cm.cluster;pbs_leaf_name=N/A;pbs_mom_node_name=N/A

Hi, if you can reproduce this problem, please provide detailed steps, including job details, $PBS_HOME/mom_priv/config from a node where this happened, etc.

If you can’t reproduce this at will, please look in the server logs to see what job the server was sending to this node at that time and send us the details of that job (ideally qstat -xf, but all server/sched/accounting logs pertaining to that job will do if the details are gone).

Hi Scott,

We can't quite reproduce this at will; for the moment, it's resisting reproduction. I'm going to share what I can from the same node and time period in which it happened.

The $PBS_HOME/mom_priv/config is as follows:

# This section of this file was automatically generated by cmd. Do not edit manually!
# BEGIN AUTOGENERATED SECTION -- DO NOT REMOVE
# END AUTOGENERATED SECTION   -- DO NOT REMOVE

$clienthost master
$restrict_user_maxsysid 499
$usecp *:/home  /home
$usecp *:/cm/shared/home  /cm/shared/home

I’m working on the logs, but they’re somewhat verbose and need to be sanitized. What’s the best way to share them?

For the moment, here's the output of tracejob for jobs 2250 and 2267, which seemed to be involved:

05/01/2019 08:39:05  S    Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute001:ncpus=40)+(compute090:ncpus=40)
05/01/2019 08:39:05  S    send of job to compute001 failed error = 15070
05/01/2019 08:39:05  S    Job Modified at request of Scheduler@cluster-hn1.cm.cluster
05/01/2019 10:19:05  L    Insufficient amount of resource: ncpus (R: 80 A: 40 T: 6040)
05/01/2019 10:19:05  S    Job Modified at request of Scheduler@cluster-hn1.cm.cluster
05/01/2019 10:29:05  S    Job Modified at request of Scheduler@cluster-hn1.cm.cluster
05/01/2019 10:56:13  S    Job Modified at request of Scheduler@cluster-hn1.cm.cluster
05/01/2019 14:26:13  S    Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute024:ncpus=40)+(compute023:ncpus=40)
05/01/2019 14:26:13  S    send of job to compute024 failed error = 15070
05/01/2019 14:26:13  S    Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute032:ncpus=40)+(compute031:ncpus=40)
05/01/2019 14:26:13  S    send of job to compute032 failed error = 15070
05/01/2019 14:26:13  S    Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute030:ncpus=40)+(compute029:ncpus=40)
05/01/2019 14:26:13  S    send of job to compute030 failed error = 15070
05/01/2019 14:26:13  S    Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute031:ncpus=40)+(compute028:ncpus=40)
05/01/2019 14:26:13  S    send of job to compute031 failed error = 15070
05/01/2019 14:26:13  S    Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute029:ncpus=40)+(compute027:ncpus=40)
05/01/2019 14:26:13  S    send of job to compute029 failed error = 15070
05/01/2019 14:26:14  S    Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute028:ncpus=40)+(compute026:ncpus=40)
05/01/2019 14:26:14  S    send of job to compute028 failed error = 15070
05/01/2019 14:26:14  S    Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute027:ncpus=40)+(compute023:ncpus=40)
05/01/2019 14:26:14  S    send of job to compute027 failed error = 15070
05/01/2019 14:26:14  S    Job Modified at request of Scheduler@cluster-hn1.cm.cluster
05/01/2019 14:26:14  S    Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute026:ncpus=40)+(compute023:ncpus=40)
05/01/2019 14:26:14  S    send of job to compute026 failed error = 15070
05/01/2019 14:26:14  S    Job Modified at request of Scheduler@cluster-hn1.cm.cluster
05/01/2019 14:36:14  S    Job Modified at request of Scheduler@cluster-hn1.cm.cluster
05/01/2019 14:36:15  S    send of job to compute038 failed error = 15070
05/01/2019 14:36:15  S    Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute037:ncpus=40)+(compute036:ncpus=40)
05/01/2019 14:36:15  S    send of job to compute037 failed error = 15070
05/01/2019 14:36:15  S    Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute036:ncpus=40)+(compute035:ncpus=40)
05/01/2019 14:36:15  S    send of job to compute036 failed error = 15070
05/01/2019 14:36:15  S    Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute035:ncpus=40)+(compute034:ncpus=40)
05/01/2019 14:36:15  S    send of job to compute035 failed error = 15070
05/01/2019 14:36:15  S    Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute034:ncpus=40)+(compute023:ncpus=40)
05/01/2019 14:36:15  S    send of job to compute034 failed error = 15070
05/01/2019 14:36:15  S    Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute023:ncpus=40)+(compute041:ncpus=40)
05/01/2019 14:36:15  S    send of job to compute023 failed error = 15070
05/01/2019 14:36:16  L    Insufficient amount of resource: ncpus (R: 80 A: 40 T: 6080)
05/01/2019 14:36:16  S    Job Modified at request of Scheduler@cluster-hn1.cm.cluster
05/01/2019 14:46:15  S    Job Modified at request of Scheduler@cluster-hn1.cm.cluster
05/01/2019 15:15:32  L    Not enough free nodes available
05/01/2019 15:15:32  S    send of job to compute040 failed error = 15070
05/01/2019 15:15:33  S    Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute039:ncpus=40)+(compute038:ncpus=40)
05/01/2019 15:15:33  S    send of job to compute039 failed error = 15070
05/01/2019 15:15:33  S    Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute037:ncpus=40)+(compute036:ncpus=40)
05/01/2019 15:15:33  S    send of job to compute037 failed error = 15010 reject_msg=pbs_mom: System error: No such file or directory
05/01/2019 15:23:24  S    Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute036:ncpus=40)+(compute035:ncpus=40)
05/01/2019 15:23:24  S    send of job to compute036 failed error = 15010 reject_msg=pbs_mom: System error: No such file or directory
05/01/2019 15:24:18  S    Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute038:ncpus=40)+(compute037:ncpus=40)
05/01/2019 15:24:18  S    send of job to compute038 failed error = 15010 reject_msg=pbs_mom: System error: No such file or directory
05/01/2019 15:24:19  L    Failed to run: Node is in the wrong state for operation (15181)
05/01/2019 15:24:19  S    Job Modified at request of Scheduler@cluster-hn1.cm.cluster
05/01/2019 15:34:59  L    Considering job to run
05/01/2019 15:34:59  L    Evaluating subchunk: ncpus=40:mpiprocs=20
05/01/2019 15:34:59  L    Allocated one subchunk: ncpus=40:mpiprocs=20
05/01/2019 15:34:59  S    Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute040:ncpus=40)+(compute039:ncpus=40)
05/01/2019 15:34:59  L    Job run
05/01/2019 15:34:59  S    send of job to compute040 failed error = 15010 reject_msg=pbs_mom: System error: No such file or directory
05/01/2019 15:34:59  S    Unable to Run Job, MOM rejected
05/01/2019 15:34:59  S    delete job request received
05/01/2019 15:34:59  S    Job to be deleted at request of user1@cluster-hn1.cm.cluster
05/01/2019 15:34:59  A    requestor=user1@cluster-hn1.cm.cluster
05/01/2019 16:06:30  S    dequeuing from s2, state 9

and 2267:

[root@compute040 ~]# tracejob -n 2 -v 2267 
/cm/local/apps/pbspro-ce/var/spool/mom_logs/20190502: No such file or directory
/cm/local/apps/pbspro-ce/var/spool/mom_logs/20190501: No such file or directory

Job: 2267.cluster-hn1

05/01/2019 08:39:05  L    Insufficient amount of resource: ncpus (R: 320 A: 80 T: 6080)
05/01/2019 10:19:05  L    Insufficient amount of resource: ncpus (R: 320 A: 40 T: 6040)
05/01/2019 14:16:13  L    Not enough free nodes available
05/01/2019 14:26:13  L    Insufficient amount of resource: ncpus (R: 320 A: 280 T: 6080)
05/01/2019 14:26:13  L    Insufficient amount of resource: ncpus (R: 320 A: 240 T: 6080)
05/01/2019 14:26:13  L    Insufficient amount of resource: ncpus (R: 320 A: 200 T: 6080)
05/01/2019 14:26:13  L    Insufficient amount of resource: ncpus (R: 320 A: 160 T: 6080)
05/01/2019 14:26:13  S    Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute033:ncpus=40)+(compute032:ncpus=40)+(compute031:ncpus=40)+(compute030:ncpus=40)+(compute029:ncpus=40)+(compute028:ncpus=40)+(compute027:ncpus=40)+(compute026:ncpus=40)
05/01/2019 14:26:13  S    send of job to compute033 failed error = 15070
05/01/2019 14:26:14  L    Insufficient amount of resource: ncpus (R: 320 A: 120 T: 6080)
05/01/2019 14:26:14  L    Insufficient amount of resource: ncpus (R: 320 A: 80 T: 6080)
05/01/2019 14:26:14  L    Insufficient amount of resource: ncpus (R: 320 A: 80 T: 6080)
05/01/2019 14:36:14  S    Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute040:ncpus=40)+(compute039:ncpus=40)+(compute038:ncpus=40)+(compute037:ncpus=40)+(compute036:ncpus=40)+(compute035:ncpus=40)+(compute034:ncpus=40)+(compute023:ncpus=40)
05/01/2019 14:36:14  S    send of job to compute040 failed error = 15070
05/01/2019 14:36:14  S    Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute039:ncpus=40)+(compute038:ncpus=40)+(compute037:ncpus=40)+(compute036:ncpus=40)+(compute035:ncpus=40)+(compute034:ncpus=40)+(compute023:ncpus=40)+(compute041:ncpus=40)
05/01/2019 14:36:15  L    Insufficient amount of resource: ncpus (R: 320 A: 280 T: 6080)
05/01/2019 14:36:15  L    Insufficient amount of resource: ncpus (R: 320 A: 240 T: 6080)
05/01/2019 14:36:15  L    Insufficient amount of resource: ncpus (R: 320 A: 200 T: 6080)
05/01/2019 14:36:15  L    Insufficient amount of resource: ncpus (R: 320 A: 160 T: 6080)
05/01/2019 14:36:15  L    Insufficient amount of resource: ncpus (R: 320 A: 120 T: 6080)
05/01/2019 14:36:15  L    Insufficient amount of resource: ncpus (R: 320 A: 80 T: 6080)
05/01/2019 14:36:15  S    send of job to compute039 failed error = 15070
05/01/2019 14:36:16  L    Insufficient amount of resource: ncpus (R: 320 A: 40 T: 6080)
05/01/2019 14:36:16  S    Job Modified at request of Scheduler@cluster-hn1.cm.cluster
05/01/2019 15:15:32  L    Considering job to run
05/01/2019 15:15:32  L    Evaluating subchunk: ncpus=40:mpiprocs=40:ompthreads=1
05/01/2019 15:15:32  L    Allocated one subchunk: ncpus=40:mpiprocs=40:ompthreads=1
05/01/2019 15:15:32  S    Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute040:ncpus=40)+(compute039:ncpus=40)+(compute038:ncpus=40)+(compute037:ncpus=40)+(compute036:ncpus=40)+(compute035:ncpus=40)+(compute034:ncpus=40)+(compute041:ncpus=40)
05/01/2019 15:15:32  L    Job run
05/01/2019 15:15:32  S    send of job to compute040 failed error = 15010 reject_msg=pbs_mom: System error: No such file or directory
05/01/2019 15:15:32  S    Unable to Run Job, MOM rejected
05/01/2019 15:15:32  S    delete job request received
05/01/2019 15:15:32  S    Job to be deleted at request of user2@cluster-hn1.cm.cluster
05/01/2019 15:15:32  A    requestor=user2@cluster-hn1.cm.cluster
05/01/2019 15:46:30  S    dequeuing from s1, state 9

Here's an excerpt from the server logs with lines related to either job or to node compute040:

grep -e 'compute040\|2250\|2267' ${PBS_HOME}/server_logs/20190501 | grep -e '\ 14:2\|3'
05/01/2019 14:03:32;0d80;Server@cluster-hn1;TPP;Server@cluster-hn1(Thread 0);sd 66, Received noroute to dest (compute040.cm.cluster)192.168.132.90:15003, msg="pbs_comm:192.168.132.1:17001: Dest not found at pbs_comm"
05/01/2019 14:08:33;0d80;Server@cluster-hn1;TPP;Server@cluster-hn1(Thread 0);sd 216, Received noroute to dest (compute040.cm.cluster)192.168.132.90:15003, msg="pbs_comm:192.168.132.1:17001: Dest not found at pbs_comm"
05/01/2019 14:13:32;0d80;Server@cluster-hn1;TPP;Server@cluster-hn1(Thread 0);sd 66, Received noroute to dest (compute040.cm.cluster)192.168.132.90:15003, msg="pbs_comm:192.168.132.1:17001: Dest not found at pbs_comm"
05/01/2019 14:18:32;0d80;Server@cluster-hn1;TPP;Server@cluster-hn1(Thread 0);sd 216, Received noroute to dest (compute040.cm.cluster)192.168.132.90:15003, msg="pbs_comm:192.168.132.1:17001: Dest not found at pbs_comm"
05/01/2019 14:23:32;0d80;Server@cluster-hn1;TPP;Server@cluster-hn1(Thread 0);sd 66, Received noroute to dest (compute040.cm.cluster)192.168.132.90:15003, msg="pbs_comm:192.168.132.1:17001: Dest not found at pbs_comm"
05/01/2019 14:23:45;0d80;Server@cluster-hn1;TPP;Server@cluster-hn1(Thread 0);sd 223, Received noroute to dest (compute040.cm.cluster)192.168.132.90:15003, msg="pbs_comm:192.168.132.1:17001: Dest not found at pbs_comm"
05/01/2019 14:26:13;0008;Server@cluster-hn1;Job;2267.cluster-hn1;Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute033:ncpus=40)+(compute032:ncpus=40)+(compute031:ncpus=40)+(compute030:ncpus=40)+(compute029:ncpus=40)+(compute028:ncpus=40)+(compute027:ncpus=40)+(compute026:ncpus=40)
05/01/2019 14:26:13;0008;Server@cluster-hn1;Job;2250.cluster-hn1;Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute024:ncpus=40)+(compute023:ncpus=40)
05/01/2019 14:26:13;0008;Server@cluster-hn1;Job;2267.cluster-hn1;send of job to compute033 failed error = 15070
05/01/2019 14:26:13;0008;Server@cluster-hn1;Job;2267.cluster-hn1;Unable to Run Job, MOM rejected
05/01/2019 14:26:13;0008;Server@cluster-hn1;Job;2250.cluster-hn1;send of job to compute024 failed error = 15070
05/01/2019 14:26:13;0008;Server@cluster-hn1;Job;2250.cluster-hn1;Unable to Run Job, MOM rejected
05/01/2019 14:26:13;0008;Server@cluster-hn1;Job;2267.cluster-hn1;Job Modified at request of Scheduler@cluster-hn1.cm.cluster
05/01/2019 14:26:13;0008;Server@cluster-hn1;Job;2250.cluster-hn1;Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute032:ncpus=40)+(compute031:ncpus=40)
05/01/2019 14:26:13;0008;Server@cluster-hn1;Job;2250.cluster-hn1;send of job to compute032 failed error = 15070
05/01/2019 14:26:13;0008;Server@cluster-hn1;Job;2250.cluster-hn1;Unable to Run Job, MOM rejected
05/01/2019 14:26:13;0008;Server@cluster-hn1;Job;2267.cluster-hn1;Job Modified at request of Scheduler@cluster-hn1.cm.cluster
05/01/2019 14:26:13;0008;Server@cluster-hn1;Job;2250.cluster-hn1;Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute030:ncpus=40)+(compute029:ncpus=40)
05/01/2019 14:26:13;0008;Server@cluster-hn1;Job;2250.cluster-hn1;send of job to compute030 failed error = 15070
05/01/2019 14:26:13;0008;Server@cluster-hn1;Job;2250.cluster-hn1;Unable to Run Job, MOM rejected
05/01/2019 14:26:13;0008;Server@cluster-hn1;Job;2267.cluster-hn1;Job Modified at request of Scheduler@cluster-hn1.cm.cluster
05/01/2019 14:26:13;0008;Server@cluster-hn1;Job;2250.cluster-hn1;Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute031:ncpus=40)+(compute028:ncpus=40)
05/01/2019 14:26:13;0008;Server@cluster-hn1;Job;2250.cluster-hn1;send of job to compute031 failed error = 15070
05/01/2019 14:26:13;0008;Server@cluster-hn1;Job;2250.cluster-hn1;Unable to Run Job, MOM rejected
05/01/2019 14:26:13;0008;Server@cluster-hn1;Job;2267.cluster-hn1;Job Modified at request of Scheduler@cluster-hn1.cm.cluster
05/01/2019 14:26:13;0008;Server@cluster-hn1;Job;2250.cluster-hn1;Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute029:ncpus=40)+(compute027:ncpus=40)
05/01/2019 14:26:13;0008;Server@cluster-hn1;Job;2250.cluster-hn1;send of job to compute029 failed error = 15070
05/01/2019 14:26:13;0008;Server@cluster-hn1;Job;2250.cluster-hn1;Unable to Run Job, MOM rejected
05/01/2019 14:26:14;0008;Server@cluster-hn1;Job;2267.cluster-hn1;Job Modified at request of Scheduler@cluster-hn1.cm.cluster
05/01/2019 14:26:14;0008;Server@cluster-hn1;Job;2250.cluster-hn1;Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute028:ncpus=40)+(compute026:ncpus=40)
05/01/2019 14:26:14;0008;Server@cluster-hn1;Job;2250.cluster-hn1;send of job to compute028 failed error = 15070
05/01/2019 14:26:14;0008;Server@cluster-hn1;Job;2250.cluster-hn1;Unable to Run Job, MOM rejected
05/01/2019 14:26:14;0008;Server@cluster-hn1;Job;2267.cluster-hn1;Job Modified at request of Scheduler@cluster-hn1.cm.cluster
05/01/2019 14:26:14;0008;Server@cluster-hn1;Job;2250.cluster-hn1;Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute027:ncpus=40)+(compute023:ncpus=40)
05/01/2019 14:26:14;0008;Server@cluster-hn1;Job;2250.cluster-hn1;send of job to compute027 failed error = 15070
05/01/2019 14:26:14;0008;Server@cluster-hn1;Job;2250.cluster-hn1;Unable to Run Job, MOM rejected
05/01/2019 14:26:14;0008;Server@cluster-hn1;Job;2267.cluster-hn1;Job Modified at request of Scheduler@cluster-hn1.cm.cluster
05/01/2019 14:26:14;0008;Server@cluster-hn1;Job;2250.cluster-hn1;Job Modified at request of Scheduler@cluster-hn1.cm.cluster
05/01/2019 14:26:14;0008;Server@cluster-hn1;Job;2267.cluster-hn1;Job Modified at request of Scheduler@cluster-hn1.cm.cluster
05/01/2019 14:26:14;0008;Server@cluster-hn1;Job;2250.cluster-hn1;Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute026:ncpus=40)+(compute023:ncpus=40)
05/01/2019 14:26:14;0008;Server@cluster-hn1;Job;2250.cluster-hn1;send of job to compute026 failed error = 15070
05/01/2019 14:26:14;0008;Server@cluster-hn1;Job;2250.cluster-hn1;Unable to Run Job, MOM rejected
05/01/2019 14:26:14;0008;Server@cluster-hn1;Job;2267.cluster-hn1;Job Modified at request of Scheduler@cluster-hn1.cm.cluster
05/01/2019 14:26:14;0008;Server@cluster-hn1;Job;2250.cluster-hn1;Job Modified at request of Scheduler@cluster-hn1.cm.cluster
05/01/2019 14:28:32;0d80;Server@cluster-hn1;TPP;Server@cluster-hn1(Thread 0);sd 216, Received noroute to dest (compute040.cm.cluster)192.168.132.90:15003, msg="pbs_comm:192.168.132.1:17001: Dest not found at pbs_comm"
05/01/2019 14:32:46;0002;Server@cluster-hn1;Node;compute040.cm.cluster;Mom restarted on host
05/01/2019 14:32:46;0400;Server@cluster-hn1;Node;compute040.cm.cluster;Setting host to Initialize
05/01/2019 14:32:46;0002;Server@cluster-hn1;Node;compute040.cm.cluster;update2 state:0 ncpus:40
05/01/2019 14:32:46;0002;Server@cluster-hn1;Node;compute040.cm.cluster;Mom reporting 1 vnodes as of Wed May  1 14:32:46 2019
05/01/2019 14:32:46;0002;Server@cluster-hn1;Node;compute040.cm.cluster;node up
05/01/2019 14:32:46;0400;Server@cluster-hn1;Hook;resourcedef;hook resourcedef file mismatched checksums: server: 2151844810 mom compute040.cm.cluster: 0...resending
05/01/2019 14:32:46;0080;Server@cluster-hn1;Req;Server@cluster-hn1;successfully sent rescdef file /cm/shared/apps/pbspro-ce/var/spool/server_priv/hooks/resourcedef to compute040.cm.cluster:15002
05/01/2019 14:32:46;0080;Server@cluster-hn1;Req;Server@cluster-hn1;successfully sent hook file /cm/shared/apps/pbspro-ce/var/spool/server_priv/hooks/PBS_alps_inventory_check.HK to compute040.cm.cluster:15002
05/01/2019 14:32:46;0080;Server@cluster-hn1;Req;Server@cluster-hn1;successfully sent hook file /cm/shared/apps/pbspro-ce/var/spool/server_priv/hooks/PBS_alps_inventory_check.PY to compute040.cm.cluster:15002
05/01/2019 14:32:46;0080;Server@cluster-hn1;Req;Server@cluster-hn1;successfully sent hook file /cm/shared/apps/pbspro-ce/var/spool/server_priv/hooks/pbs_cgroups.HK to compute040.cm.cluster:15002
05/01/2019 14:32:46;0080;Server@cluster-hn1;Req;Server@cluster-hn1;successfully sent hook file /cm/shared/apps/pbspro-ce/var/spool/server_priv/hooks/pbs_cgroups.CF to compute040.cm.cluster:15002
05/01/2019 14:32:46;0080;Server@cluster-hn1;Req;Server@cluster-hn1;successfully sent hook file /cm/shared/apps/pbspro-ce/var/spool/server_priv/hooks/pbs_cgroups.PY to compute040.cm.cluster:15002
05/01/2019 14:32:46;0080;Server@cluster-hn1;Req;Server@cluster-hn1;successfully sent hook file /cm/shared/apps/pbspro-ce/var/spool/server_priv/hooks/PBS_power.HK to compute040.cm.cluster:15002
05/01/2019 14:32:46;0080;Server@cluster-hn1;Req;Server@cluster-hn1;successfully sent hook file /cm/shared/apps/pbspro-ce/var/spool/server_priv/hooks/PBS_power.CF to compute040.cm.cluster:15002
05/01/2019 14:32:46;0080;Server@cluster-hn1;Req;Server@cluster-hn1;successfully sent hook file /cm/shared/apps/pbspro-ce/var/spool/server_priv/hooks/PBS_power.PY to compute040.cm.cluster:15002
05/01/2019 14:33:33;0002;Server@cluster-hn1;Node;compute040.cm.cluster;update2 state:0 ncpus:40
05/01/2019 14:33:33;0002;Server@cluster-hn1;Node;compute040.cm.cluster;Mom reporting 1 vnodes as of Wed May  1 14:33:33 2019
05/01/2019 14:33:33;0100;Server@cluster-hn1;Req;;Type 0 request received from root@compute040.cm.cluster, sock=24
05/01/2019 14:33:33;0100;Server@cluster-hn1;Req;;Type 49 request received from root@compute040.cm.cluster, sock=22
05/01/2019 14:33:33;0100;Server@cluster-hn1;Req;;Type 58 request received from root@compute040.cm.cluster, sock=24
05/01/2019 14:34:00;0100;Server@cluster-hn1;Req;;Type 0 request received from root@compute040.cm.cluster, sock=18
05/01/2019 14:34:00;0100;Server@cluster-hn1;Req;;Type 49 request received from root@compute040.cm.cluster, sock=20
05/01/2019 14:34:00;0100;Server@cluster-hn1;Req;;Type 58 request received from root@compute040.cm.cluster, sock=18
05/01/2019 14:34:54;0004;Server@cluster-hn1;Node;compute040;attributes set:  at request of root@cluster-hn1.cm.cluster
05/01/2019 14:34:54;0004;Server@cluster-hn1;Node;compute040;attributes set: state - offline
05/01/2019 14:36:00;0100;Server@cluster-hn1;Req;;Type 0 request received from root@compute040.cm.cluster, sock=18
05/01/2019 14:36:00;0100;Server@cluster-hn1;Req;;Type 49 request received from root@compute040.cm.cluster, sock=27
05/01/2019 14:36:00;0100;Server@cluster-hn1;Req;;Type 58 request received from root@compute040.cm.cluster, sock=18
05/01/2019 14:36:14;0008;Server@cluster-hn1;Job;2267.cluster-hn1;Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute040:ncpus=40)+(compute039:ncpus=40)+(compute038:ncpus=40)+(compute037:ncpus=40)+(compute036:ncpus=40)+(compute035:ncpus=40)+(compute034:ncpus=40)+(compute023:ncpus=40)
05/01/2019 14:36:14;0008;Server@cluster-hn1;Job;2267.cluster-hn1;send of job to compute040 failed error = 15070
05/01/2019 14:36:14;0002;Server@cluster-hn1;Node;compute040.cm.cluster;node down: could not send job to mom
05/01/2019 14:36:14;0008;Server@cluster-hn1;Job;2267.cluster-hn1;Unable to Run Job, MOM rejected
05/01/2019 14:36:14;0008;Server@cluster-hn1;Job;2267.cluster-hn1;Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute039:ncpus=40)+(compute038:ncpus=40)+(compute037:ncpus=40)+(compute036:ncpus=40)+(compute035:ncpus=40)+(compute034:ncpus=40)+(compute023:ncpus=40)+(compute041:ncpus=40)
05/01/2019 14:36:14;0008;Server@cluster-hn1;Job;2250.cluster-hn1;Job Modified at request of Scheduler@cluster-hn1.cm.cluster
05/01/2019 14:36:15;0008;Server@cluster-hn1;Job;2267.cluster-hn1;send of job to compute039 failed error = 15070
05/01/2019 14:36:15;0008;Server@cluster-hn1;Job;2267.cluster-hn1;Unable to Run Job, MOM rejected
05/01/2019 14:36:15;0008;Server@cluster-hn1;Job;2267.cluster-hn1;Job Modified at request of Scheduler@cluster-hn1.cm.cluster
05/01/2019 14:36:15;0008;Server@cluster-hn1;Job;2250.cluster-hn1;Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute038:ncpus=40)+(compute037:ncpus=40)
05/01/2019 14:36:15;0008;Server@cluster-hn1;Job;2250.cluster-hn1;send of job to compute038 failed error = 15070
05/01/2019 14:36:15;0008;Server@cluster-hn1;Job;2250.cluster-hn1;Unable to Run Job, MOM rejected
05/01/2019 14:36:15;0008;Server@cluster-hn1;Job;2267.cluster-hn1;Job Modified at request of Scheduler@cluster-hn1.cm.cluster
05/01/2019 14:36:15;0008;Server@cluster-hn1;Job;2250.cluster-hn1;Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute037:ncpus=40)+(compute036:ncpus=40)
05/01/2019 14:36:15;0008;Server@cluster-hn1;Job;2250.cluster-hn1;send of job to compute037 failed error = 15070
05/01/2019 14:36:15;0008;Server@cluster-hn1;Job;2250.cluster-hn1;Unable to Run Job, MOM rejected
05/01/2019 14:36:15;0008;Server@cluster-hn1;Job;2267.cluster-hn1;Job Modified at request of Scheduler@cluster-hn1.cm.cluster
05/01/2019 14:36:15;0008;Server@cluster-hn1;Job;2250.cluster-hn1;Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute036:ncpus=40)+(compute035:ncpus=40)
05/01/2019 14:36:15;0008;Server@cluster-hn1;Job;2250.cluster-hn1;send of job to compute036 failed error = 15070
05/01/2019 14:36:15;0008;Server@cluster-hn1;Job;2250.cluster-hn1;Unable to Run Job, MOM rejected
05/01/2019 14:36:15;0008;Server@cluster-hn1;Job;2267.cluster-hn1;Job Modified at request of Scheduler@cluster-hn1.cm.cluster
05/01/2019 14:36:15;0008;Server@cluster-hn1;Job;2250.cluster-hn1;Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute035:ncpus=40)+(compute034:ncpus=40)
05/01/2019 14:36:15;0008;Server@cluster-hn1;Job;2250.cluster-hn1;send of job to compute035 failed error = 15070
05/01/2019 14:36:15;0008;Server@cluster-hn1;Job;2250.cluster-hn1;Unable to Run Job, MOM rejected
05/01/2019 14:36:15;0008;Server@cluster-hn1;Job;2267.cluster-hn1;Job Modified at request of Scheduler@cluster-hn1.cm.cluster
05/01/2019 14:36:15;0008;Server@cluster-hn1;Job;2250.cluster-hn1;Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute034:ncpus=40)+(compute023:ncpus=40)
05/01/2019 14:36:15;0008;Server@cluster-hn1;Job;2250.cluster-hn1;send of job to compute034 failed error = 15070
05/01/2019 14:36:15;0008;Server@cluster-hn1;Job;2250.cluster-hn1;Unable to Run Job, MOM rejected
05/01/2019 14:36:15;0008;Server@cluster-hn1;Job;2267.cluster-hn1;Job Modified at request of Scheduler@cluster-hn1.cm.cluster
05/01/2019 14:36:15;0008;Server@cluster-hn1;Job;2250.cluster-hn1;Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute023:ncpus=40)+(compute041:ncpus=40)
05/01/2019 14:36:15;0008;Server@cluster-hn1;Job;2250.cluster-hn1;send of job to compute023 failed error = 15070
05/01/2019 14:36:15;0008;Server@cluster-hn1;Job;2250.cluster-hn1;Unable to Run Job, MOM rejected
05/01/2019 14:36:16;0008;Server@cluster-hn1;Job;2267.cluster-hn1;Job Modified at request of Scheduler@cluster-hn1.cm.cluster
05/01/2019 14:36:16;0008;Server@cluster-hn1;Job;2250.cluster-hn1;Job Modified at request of Scheduler@cluster-hn1.cm.cluster
05/01/2019 14:38:15;0002;Server@cluster-hn1;Node;compute040.cm.cluster;Mom restarted on host
05/01/2019 14:38:15;0400;Server@cluster-hn1;Node;compute040.cm.cluster;Setting host to Initialize
05/01/2019 14:38:15;0002;Server@cluster-hn1;Node;compute040.cm.cluster;update2 state:0 ncpus:40
05/01/2019 14:38:15;0002;Server@cluster-hn1;Node;compute040.cm.cluster;Mom reporting 1 vnodes as of Wed May  1 14:38:15 2019
05/01/2019 14:38:15;0002;Server@cluster-hn1;Node;compute040.cm.cluster;node up
05/01/2019 14:46:15;0008;Server@cluster-hn1;Job;2267.cluster-hn1;Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute040:ncpus=40)+(compute039:ncpus=40)+(compute038:ncpus=40)+(compute037:ncpus=40)+(compute036:ncpus=40)+(compute035:ncpus=40)+(compute034:ncpus=40)+(compute041:ncpus=40)
05/01/2019 14:46:16;0008;Server@cluster-hn1;Job;2267.cluster-hn1;Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute040:ncpus=40)+(compute039:ncpus=40)+(compute038:ncpus=40)+(compute037:ncpus=40)+(compute036:ncpus=40)+(compute035:ncpus=40)+(compute034:ncpus=40)+(compute041:ncpus=40)
05/01/2019 14:46:16;0008;Server@cluster-hn1;Job;2267.cluster-hn1;Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute040:ncpus=40)+(compute039:ncpus=40)+(compute038:ncpus=40)+(compute037:ncpus=40)+(compute036:ncpus=40)+(compute035:ncpus=40)+(compute034:ncpus=40)+(compute041:ncpus=40)
05/01/2019 14:46:16;0008;Server@cluster-hn1;Job;2267.cluster-hn1;Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute040:ncpus=40)+(compute039:ncpus=40)+(compute038:ncpus=40)+(compute037:ncpus=40)+(compute036:ncpus=40)+(compute035:ncpus=40)+(compute034:ncpus=40)+(compute041:ncpus=40)
05/01/2019 14:46:16;0008;Server@cluster-hn1;Job;2267.cluster-hn1;Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute040:ncpus=40)+(compute039:ncpus=40)+(compute038:ncpus=40)+(compute037:ncpus=40)+(compute036:ncpus=40)+(compute035:ncpus=40)+(compute034:ncpus=40)+(compute041:ncpus=40)
05/01/2019 14:46:16;0008;Server@cluster-hn1;Job;2267.cluster-hn1;Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute040:ncpus=40)+(compute039:ncpus=40)+(compute038:ncpus=40)+(compute037:ncpus=40)+(compute036:ncpus=40)+(compute035:ncpus=40)+(compute034:ncpus=40)+(compute041:ncpus=40)
05/01/2019 14:46:16;0008;Server@cluster-hn1;Job;2267.cluster-hn1;Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute040:ncpus=40)+(compute039:ncpus=40)+(compute038:ncpus=40)+(compute037:ncpus=40)+(compute036:ncpus=40)+(compute035:ncpus=40)+(compute034:ncpus=40)+(compute041:ncpus=40)
05/01/2019 14:46:16;0008;Server@cluster-hn1;Job;2267.cluster-hn1;Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute040:ncpus=40)+(compute039:ncpus=40)+(compute038:ncpus=40)+(compute037:ncpus=40)+(compute036:ncpus=40)+(compute035:ncpus=40)+(compute034:ncpus=40)+(compute041:ncpus=40)
05/01/2019 14:46:17;0008;Server@cluster-hn1;Job;2267.cluster-hn1;Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute040:ncpus=40)+(compute039:ncpus=40)+(compute038:ncpus=40)+(compute037:ncpus=40)+(compute036:ncpus=40)+(compute035:ncpus=40)+(compute034:ncpus=40)+(compute041:ncpus=40)
05/01/2019 14:46:17;0008;Server@cluster-hn1;Job;2267.cluster-hn1;Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute040:ncpus=40)+(compute039:ncpus=40)+(compute038:ncpus=40)+(compute037:ncpus=40)+(compute036:ncpus=40)+(compute035:ncpus=40)+(compute034:ncpus=40)+(compute041:ncpus=40)
05/01/2019 14:46:17;0008;Server@cluster-hn1;Job;2267.cluster-hn1;Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute040:ncpus=40)+(compute039:ncpus=40)+(compute038:ncpus=40)+(compute037:ncpus=40)+(compute036:ncpus=40)+(compute035:ncpus=40)+(compute034:ncpus=40)+(compute041:ncpus=40)
05/01/2019 14:46:17;0008;Server@cluster-hn1;Job;2267.cluster-hn1;Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute040:ncpus=40)+(compute039:ncpus=40)+(compute038:ncpus=40)+(compute037:ncpus=40)+(compute036:ncpus=40)+(compute035:ncpus=40)+(compute034:ncpus=40)+(compute041:ncpus=40)
05/01/2019 14:46:17;0008;Server@cluster-hn1;Job;2267.cluster-hn1;Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute040:ncpus=40)+(compute039:ncpus=40)+(compute038:ncpus=40)+(compute037:ncpus=40)+(compute036:ncpus=40)+(compute035:ncpus=40)+(compute034:ncpus=40)+(compute041:ncpus=40)
05/01/2019 14:46:17;0008;Server@cluster-hn1;Job;2267.cluster-hn1;Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute040:ncpus=40)+(compute039:ncpus=40)+(compute038:ncpus=40)+(compute037:ncpus=40)+(compute036:ncpus=40)+(compute035:ncpus=40)+(compute034:ncpus=40)+(compute041:ncpus=40)
05/01/2019 14:46:17;0008;Server@cluster-hn1;Job;2267.cluster-hn1;Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute040:ncpus=40)+(compute039:ncpus=40)+(compute038:ncpus=40)+(compute037:ncpus=40)+(compute036:ncpus=40)+(compute035:ncpus=40)+(compute034:ncpus=40)+(compute041:ncpus=40)
05/01/2019 14:46:18;0008;Server@cluster-hn1;Job;2267.cluster-hn1;Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute040:ncpus=40)+(compute039:ncpus=40)+(compute038:ncpus=40)+(compute037:ncpus=40)+(compute036:ncpus=40)+(compute035:ncpus=40)+(compute034:ncpus=40)+(compute041:ncpus=40)
05/01/2019 14:46:18;0008;Server@cluster-hn1;Job;2267.cluster-hn1;Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute040:ncpus=40)+(compute039:ncpus=40)+(compute038:ncpus=40)+(compute037:ncpus=40)+(compute036:ncpus=40)+(compute035:ncpus=40)+(compute034:ncpus=40)+(compute041:ncpus=40)
05/01/2019 14:46:18;0008;Server@cluster-hn1;Job;2267.cluster-hn1;Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute040:ncpus=40)+(compute039:ncpus=40)+(compute038:ncpus=40)+(compute037:ncpus=40)+(compute036:ncpus=40)+(compute035:ncpus=40)+(compute034:ncpus=40)+(compute041:ncpus=40)
05/01/2019 14:46:18;0008;Server@cluster-hn1;Job;2267.cluster-hn1;Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute040:ncpus=40)+(compute039:ncpus=40)+(compute038:ncpus=40)+(compute037:ncpus=40)+(compute036:ncpus=40)+(compute035:ncpus=40)+(compute034:ncpus=40)+(compute041:ncpus=40)
05/01/2019 14:46:18;0008;Server@cluster-hn1;Job;2267.cluster-hn1;Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute040:ncpus=40)+(compute039:ncpus=40)+(compute038:ncpus=40)+(compute037:ncpus=40)+(compute036:ncpus=40)+(compute035:ncpus=40)+(compute034:ncpus=40)+(compute041:ncpus=40)
05/01/2019 14:46:18;0008;Server@cluster-hn1;Job;2267.cluster-hn1;Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute040:ncpus=40)+(compute039:ncpus=40)+(compute038:ncpus=40)+(compute037:ncpus=40)+(compute036:ncpus=40)+(compute035:ncpus=40)+(compute034:ncpus=40)+(compute041:ncpus=40)
05/01/2019 14:46:19;0008;Server@cluster-hn1;Job;2267.cluster-hn1;Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute040:ncpus=40)+(compute039:ncpus=40)+(compute038:ncpus=40)+(compute037:ncpus=40)+(compute036:ncpus=40)+(compute035:ncpus=40)+(compute034:ncpus=40)+(compute041:ncpus=40)
05/01/2019 14:46:19;0008;Server@cluster-hn1;Job;2267.cluster-hn1;Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute040:ncpus=40)+(compute039:ncpus=40)+(compute038:ncpus=40)+(compute037:ncpus=40)+(compute036:ncpus=40)+(compute035:ncpus=40)+(compute034:ncpus=40)+(compute041:ncpus=40)
05/01/2019 14:46:19;0008;Server@cluster-hn1;Job;2267.cluster-hn1;Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute040:ncpus=40)+(compute039:ncpus=40)+(compute038:ncpus=40)+(compute037:ncpus=40)+(compute036:ncpus=40)+(compute035:ncpus=40)+(compute034:ncpus=40)+(compute041:ncpus=40)
05/01/2019 14:46:19;0008;Server@cluster-hn1;Job;2267.cluster-hn1;Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute040:ncpus=40)+(compute039:ncpus=40)+(compute038:ncpus=40)+(compute037:ncpus=40)+(compute036:ncpus=40)+(compute035:ncpus=40)+(compute034:ncpus=40)+(compute041:ncpus=40)
05/01/2019 14:46:19;0008;Server@cluster-hn1;Job;2267.cluster-hn1;Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute040:ncpus=40)+(compute039:ncpus=40)+(compute038:ncpus=40)+(compute037:ncpus=40)+(compute036:ncpus=40)+(compute035:ncpus=40)+(compute034:ncpus=40)+(compute041:ncpus=40)
05/01/2019 14:46:19;0008;Server@cluster-hn1;Job;2267.cluster-hn1;Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute040:ncpus=40)+(compute039:ncpus=40)+(compute038:ncpus=40)+(compute037:ncpus=40)+(compute036:ncpus=40)+(compute035:ncpus=40)+(compute034:ncpus=40)+(compute041:ncpus=40)
05/01/2019 14:46:19;0008;Server@cluster-hn1;Job;2267.cluster-hn1;Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute040:ncpus=40)+(compute039:ncpus=40)+(compute038:ncpus=40)+(compute037:ncpus=40)+(compute036:ncpus=40)+(compute035:ncpus=40)+(compute034:ncpus=40)+(compute041:ncpus=40)
05/01/2019 14:46:20;0008;Server@cluster-hn1;Job;2267.cluster-hn1;Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute040:ncpus=40)+(compute039:ncpus=40)+(compute038:ncpus=40)+(compute037:ncpus=40)+(compute036:ncpus=40)+(compute035:ncpus=40)+(compute034:ncpus=40)+(compute041:ncpus=40)
05/01/2019 14:46:20;0008;Server@cluster-hn1;Job;2267.cluster-hn1;Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute040:ncpus=40)+(compute039:ncpus=40)+(compute038:ncpus=40)+(compute037:ncpus=40)+(compute036:ncpus=40)+(compute035:ncpus=40)+(compute034:ncpus=40)+(compute041:ncpus=40)
05/01/2019 14:46:20;0008;Server@cluster-hn1;Job;2267.cluster-hn1;Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute040:ncpus=40)+(compute039:ncpus=40)+(compute038:ncpus=40)+(compute037:ncpus=40)+(compute036:ncpus=40)+(compute035:ncpus=40)+(compute034:ncpus=40)+(compute041:ncpus=40)
05/01/2019 14:46:20;0008;Server@cluster-hn1;Job;2267.cluster-hn1;Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute040:ncpus=40)+(compute039:ncpus=40)+(compute038:ncpus=40)+(compute037:ncpus=40)+(compute036:ncpus=40)+(compute035:ncpus=40)+(compute034:ncpus=40)+(compute041:ncpus=40)
05/01/2019 14:46:20;0008;Server@cluster-hn1;Job;2267.cluster-hn1;Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute040:ncpus=40)+(compute039:ncpus=40)+(compute038:ncpus=40)+(compute037:ncpus=40)+(compute036:ncpus=40)+(compute035:ncpus=40)+(compute034:ncpus=40)+(compute041:ncpus=40)
05/01/2019 14:46:20;0008;Server@cluster-hn1;Job;2267.cluster-hn1;Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute040:ncpus=40)+(compute039:ncpus=40)+(compute038:ncpus=40)+(compute037:ncpus=40)+(compute036:ncpus=40)+(compute035:ncpus=40)+(compute034:ncpus=40)+(compute041:ncpus=40)
05/01/2019 14:46:21;0008;Server@cluster-hn1;Job;2267.cluster-hn1;Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute040:ncpus=40)+(compute039:ncpus=40)+(compute038:ncpus=40)+(compute037:ncpus=40)+(compute036:ncpus=40)+(compute035:ncpus=40)+(compute034:ncpus=40)+(compute041:ncpus=40)
05/01/2019 14:46:21;0008;Server@cluster-hn1;Job;2267.cluster-hn1;Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute040:ncpus=40)+(compute039:ncpus=40)+(compute038:ncpus=40)+(compute037:ncpus=40)+(compute036:ncpus=40)+(compute035:ncpus=40)+(compute034:ncpus=40)+(compute041:ncpus=40)
05/01/2019 14:46:21;0008;Server@cluster-hn1;Job;2267.cluster-hn1;Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute040:ncpus=40)+(compute039:ncpus=40)+(compute038:ncpus=40)+(compute037:ncpus=40)+(compute036:ncpus=40)+(compute035:ncpus=40)+(compute034:ncpus=40)+(compute041:ncpus=40)
05/01/2019 14:46:21;0008;Server@cluster-hn1;Job;2267.cluster-hn1;Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute040:ncpus=40)+(compute039:ncpus=40)+(compute038:ncpus=40)+(compute037:ncpus=40)+(compute036:ncpus=40)+(compute035:ncpus=40)+(compute034:ncpus=40)+(compute041:ncpus=40)
05/01/2019 14:46:21;0008;Server@cluster-hn1;Job;2267.cluster-hn1;Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute040:ncpus=40)+(compute039:ncpus=40)+(compute038:ncpus=40)+(compute037:ncpus=40)+(compute036:ncpus=40)+(compute035:ncpus=40)+(compute034:ncpus=40)+(compute041:ncpus=40)
...
05/01/2019 14:46:22;0008;Server@cluster-hn1;Job;2267.cluster-hn1;Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute040:ncpus=40)+(compute039:ncpus=40)+(compute038:ncpus=40)+(compute037:ncpus=40)+(compute036:ncpus=40)+(compute035:ncpus=40)+(compute034:ncpus=40)+(compute041:ncpus=40)
05/01/2019 14:46:22;0008;Server@cluster-hn1;Job;2267.cluster-hn1;Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute040:ncpus=40)+(compute039:ncpus=40)+(compute038:ncpus=40)+(compute037:ncpus=40)+(compute036:ncpus=40)+(compute035:ncpus=40)+(compute034:ncpus=40)+(compute041:ncpus=40)
05/01/2019 14:46:22;0008;Server@cluster-hn1;Job;2267.cluster-hn1;Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute040:ncpus=40)+(compute039:ncpus=40)+(compute038:ncpus=40)+(compute037:ncpus=40)+(compute036:ncpus=40)+(compute035:ncpus=40)+(compute034:ncpus=40)+(compute041:ncpus=40)
05/01/2019 14:46:23;0008;Server@cluster-hn1;Job;2267.cluster-hn1;Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute040:ncpus=40)+(compute039:ncpus=40)+(compute038:ncpus=40)+(compute037:ncpus=40)+(compute036:ncpus=40)+(compute035:ncpus=40)+(compute034:ncpus=40)+(compute041:ncpus=40)
05/01/2019 14:46:23;0008;Server@cluster-hn1;Job;2267.cluster-hn1;send of job to compute040 failed error = 15010 reject_msg=pbs_mom: System error: No such file or directory
05/01/2019 14:46:23;0008;Server@cluster-hn1;Job;2267.cluster-hn1;Unable to Run Job, MOM rejected
05/01/2019 14:46:23;0008;Server@cluster-hn1;Job;2267.cluster-hn1;Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute040:ncpus=40)+(compute039:ncpus=40)+(compute038:ncpus=40)+(compute037:ncpus=40)+(compute036:ncpus=40)+(compute035:ncpus=40)+(compute034:ncpus=40)+(compute041:ncpus=40)
05/01/2019 14:46:23;0008;Server@cluster-hn1;Job;2267.cluster-hn1;send of job to compute040 failed error = 15010 reject_msg=pbs_mom: System error: No such file or directory
05/01/2019 14:46:23;0008;Server@cluster-hn1;Job;2267.cluster-hn1;Unable to Run Job, MOM rejected
05/01/2019 14:46:23;0008;Server@cluster-hn1;Job;2267.cluster-hn1;Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute040:ncpus=40)+(compute039:ncpus=40)+(compute038:ncpus=40)+(compute037:ncpus=40)+(compute036:ncpus=40)+(compute035:ncpus=40)+(compute034:ncpus=40)+(compute041:ncpus=40)
05/01/2019 14:46:23;0008;Server@cluster-hn1;Job;2267.cluster-hn1;send of job to compute040 failed error = 15010 reject_msg=pbs_mom: System error: No such file or directory
05/01/2019 14:46:23;0008;Server@cluster-hn1;Job;2267.cluster-hn1;Unable to Run Job, MOM rejected
05/01/2019 14:46:23;0008;Server@cluster-hn1;Job;2267.cluster-hn1;Job Run at request of Scheduler@cluster-hn1.cm.cluster on exec_vnode (compute040:ncpus=40)+(compute039:ncpus=40)+(compute038:ncpus=40)+(compute037:ncpus=40)+(compute036:ncpus=40)+(compute035:ncpus=40)+(compute034:ncpus=40)+(compute041:ncpus=40)
05/01/2019 14:46:23;0008;Server@cluster-hn1;Job;2267.cluster-hn1;send of job to compute040 failed error = 15010 reject_msg=pbs_mom: System error: No such file or directory
05/01/2019 14:46:23;0008;Server@cluster-hn1;Job;2267.cluster-hn1;Unable to Run Job, MOM rejected

Hi @wscullin,

Can you share the complete stack trace with line numbers? Or is it possible to share the core dump?

Hi all, I have the same problem.
(Sorry, our cluster is CentOS 7, not RHEL.)

In addition, it is also triggered when a user runs the 'qrerun' command.
I figured out two things.

  1. The segmentation fault is caused by the following 'sprintf' call:
    (I am using version 18.1.4)
$ git diff
diff --git a/src/resmom/stage_func.c b/src/resmom/stage_func.c
index b9e8e6f..e1197b5 100644
--- a/src/resmom/stage_func.c
+++ b/src/resmom/stage_func.c
@@ -451,12 +451,14 @@ is_direct_write(job *pjob, enum job_file which, char *path, int *direct_write_po
        strncpy(working_path, oldpath, MAXPATHLEN);
        if (local_or_remote(&p) == 1) {
                *direct_write_possible = 0;
+/*
                sprintf(log_buffer,
                                "Direct write is requested for job: %s, but the destination: %s is not usecp-able from %s",
                                pjob->ji_qs.ji_jobid, p,
                                pjob->ji_hosts[pjob->ji_nodeid].hn_host);
                log_event(PBSEVENT_DEBUG3,
                PBS_EVENTCLASS_JOB, LOG_DEBUG, pjob->ji_qs.ji_jobid, log_buffer);
+*/
                return (0);
        }

After I commented out the above six lines, it seems that the problem no longer happens.

Moreover, I think that pjob->ji_hosts is a NULL pointer.
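
If that hypothesis is right, a less invasive workaround than deleting the log message might be to guard the dereference. A minimal sketch, not a tested fix (the idea that ji_hosts can still be unset when req_mvjobfile delivers job files is my guess from the backtrace above, not confirmed):

        if (local_or_remote(&p) == 1) {
                *direct_write_possible = 0;
                /* Guard: only build and emit the log message when the
                 * host list has actually been populated. */
                if (pjob->ji_hosts != NULL) {
                        sprintf(log_buffer,
                                        "Direct write is requested for job: %s, but the destination: %s is not usecp-able from %s",
                                        pjob->ji_qs.ji_jobid, p,
                                        pjob->ji_hosts[pjob->ji_nodeid].hn_host);
                        log_event(PBSEVENT_DEBUG3,
                        PBS_EVENTCLASS_JOB, LOG_DEBUG, pjob->ji_qs.ji_jobid, log_buffer);
                }
                return (0);
        }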

  2. Let /var/spool/pbs/mom_priv/config be:
$jobdir_root /dev/shm
$usecp *:/home /home
$restrict_user_maxsysid 999

If the job writes its stdout/err files under /home, this problem doesn't happen. But if the job writes them to another location, e.g. /tmp/, the segmentation fault occurs.

(In the /home case the destination is covered by a $usecp rule, so the body of the above 'if' block is never executed; with /tmp it is, and the crash follows.)
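
To make the two cases concrete, here is the branch from the diff above with my own annotations (the comments are my interpretation, not the actual source):

        strncpy(working_path, oldpath, MAXPATHLEN);
        if (local_or_remote(&p) == 1) {
                /* No $usecp mapping for the destination (e.g. /tmp/job.out):
                 * this branch runs, and the log message dereferences
                 * pjob->ji_hosts, the suspected NULL pointer, so the MoM
                 * segfaults. */
                *direct_write_possible = 0;
                return (0);
        }
        /* Destination covered by a $usecp rule (e.g. under /home):
         * the branch is skipped and no crash occurs. */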


Thanks!
I’ve created the following issue for mainline: