Is it possible to run 1 pbs_server to control 2 separate subnets?

We have a site that used to run Torque with one flat subnet across all nodes.
The network was partially upgraded to 25GbE and now looks like this:

It’s not possible to connect the 2 switches because the connector types are different.

We’d also like to try PBSpro.
But it seems that, since you can't configure one hostname with two IPs, pbs_server can only listen on one subnet.
If that reasoning is correct, we'll need one pbs_server per subnet, right?
But… is it possible to run two pbs_servers on the same machine, one for each subnet?
Or would we have to use two machines to run the two pbs_servers?



  • Do you want jobs to span the two subnets, or will jobs be contained within their respective subnets?
  • Yes, you can run two PBS servers on the same host (but one server's configuration has to be moved to a non-standard location, and its services to non-default ports).
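For reference, the second instance is usually pointed at its own PBS_HOME and port set through a separate pbs.conf. The sketch below is a hedged example: the file path, hostname, spool directory, and port numbers are all made-up choices, and the exact variable names and init mechanism should be checked against your PBS Pro version's /etc/pbs.conf and Administrator's Guide.

```shell
# Hypothetical second-instance config, e.g. /etc/pbs2.conf
# (same format as the standard /etc/pbs.conf; port numbers
#  are arbitrary non-default picks — defaults are 15001-15004)
PBS_SERVER=pbsmaster2
PBS_EXEC=/opt/pbs
PBS_HOME=/var/spool/pbs2          # separate spool/config area
PBS_START_SERVER=1
PBS_START_SCHED=1
PBS_START_MOM=0
PBS_BATCH_SERVICE_PORT=15101      # default 15001
PBS_MOM_SERVICE_PORT=15102        # default 15002
PBS_MANAGER_SERVICE_PORT=15103    # default 15003

# Then start the second instance against this config, e.g.:
#   PBS_CONF_FILE=/etc/pbs2.conf /etc/init.d/pbs start
```

The MoMs on the second subnet would likewise need their pbs.conf to name the second server and its non-default ports.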

Just thinking out loud: would it be possible to create a virtual IP address and assign a hostname to it on the PBS server host, then make that IP address resolvable by the MoMs on both sides of the subnet?

Thank you

Hi adarsh,
No, we don't expect jobs to run across subnets.
We run large MPI jobs, and we already knew before the network upgrade that MPI communication between subnets would be inefficient (if it worked at all).
The idea was to split the nodes into two groups or queues, if they can be managed by one pbs_server.
If we can run two pbs_servers on the same host, then it would be straightforward(?): just send jobs to the two different pbs_servers, and they'll go to the two different groups of nodes.
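For the single-server route, one common PBS Pro pattern is to tag each node with a custom host-level resource and give each queue a default_chunk that selects it. A hedged sketch follows; the resource name (`switch`), its values, and the node/queue names are all made up, and depending on your PBS Pro version the custom resource may also need to be added to the scheduler's sched_config `resources:` line.

```shell
# Sketch: partition nodes into two groups under one pbs_server.

# 1. Create a custom string resource matched at the host level
qmgr -c "create resource switch type=string, flag=h"

# 2. Tag each node with the switch/subnet it sits on
qmgr -c "set node node001 resources_available.switch = fast"
qmgr -c "set node node101 resources_available.switch = slow"

# 3. One execution queue per group; jobs submitted to a queue
#    inherit the matching switch requirement via default_chunk
qmgr -c "create queue fastq queue_type = execution"
qmgr -c "set queue fastq default_chunk.switch = fast"
qmgr -c "set queue fastq enabled = true, started = true"

qmgr -c "create queue slowq queue_type = execution"
qmgr -c "set queue slowq default_chunk.switch = slow"
qmgr -c "set queue slowq enabled = true, started = true"

# Users then pick a node group simply by picking a queue:
#   qsub -q fastq job.sh
```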
And… I don’t understand the Virtual IP idea…
Do you mean an IP alias?
Please do share your thoughts :wink:


In fact, there's a simple solution!
Just bind the master's hostname to its internet-facing IP in the nodes' /etc/hosts.
The nodes can reach the master's internet IP because the master acts as the gateway for both subnets.
That way, all nodes and the master itself share the same pbs_server hostname-to-IP resolution.
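Concretely, the entry would look something like the snippet below. The address and hostname are made up for illustration; what matters is that the same name-to-IP mapping appears on every node in both subnets and on the master.

```shell
# /etc/hosts on every node (both subnets) and on the master.
# 192.0.2.10 stands in for the master's internet-facing IP,
# reachable from both subnets because the master is the gateway.
# 'pbsmaster' stands in for the hostname pbs_server is configured with.
192.0.2.10   pbsmaster
```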
Problem solved :wink: