We have a setup where we want to limit memory via cgroups, but allow the jobs to access all the cores.
I modified the pbs_cgroups hook: disabled the 'cpuset' subsystem, enabled 'memory', and set the memory 'default' to 16GB. After enabling the hook, I can see that my applications cannot get more than 16GB resident memory, so that part works. However, each job is now also limited to a single core, i.e. the cpuset subsystem still seems to be active and assigns a core to the job. Is it possible to have a memory cgroup only? Or is this necessarily bound to a 'cpuset', so that cores get assigned too?
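In case it helps, the part of the hook configuration I changed looks roughly like this (a trimmed sketch, not my full pbs_cgroups.json; the key names follow the stock config shipped with OpenPBS, other sections are left at their defaults, and the exact spelling of the "16gb" value is from memory):

```json
{
    "cgroup": {
        "cpuset": {
            "enabled": false
        },
        "memory": {
            "enabled": true,
            "default": "16gb"
        }
    }
}
```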
Our setup: Scientific Linux 7.9, OpenPBS 22.05
Rationale: so far, we have used 'ulimit' to limit memory etc. on those 'shared' nodes, where we allow access to all cores (we even oversubscribe) but need to limit memory, e.g. to 16/32GB per job. However, more and more applications pre-allocate memory in large chunks, which conflicts with the 'ulimit' approach.
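To illustrate the conflict (a sketch; python3 as the "application" and the 2 GiB limit / 4 GiB reservation are made-up numbers): 'ulimit -v' caps the *virtual* address space, so a process that reserves a large arena up front fails immediately, even though it would never have touched most of those pages.

```shell
# Apply a 2 GiB virtual-memory cap in a subshell (ulimit -v takes KiB),
# then try to reserve 4 GiB of anonymous memory without touching it.
# The mmap() fails with ENOMEM although resident usage stays tiny.
(
  ulimit -v $((2 * 1024 * 1024))
  python3 -c 'import mmap; mmap.mmap(-1, 4 * 1024**3)'
) 2>&1 | tail -n 1
```

A memory cgroup limits resident (and optionally swap) usage instead, so such pre-allocations succeed as long as the pages are not actually populated.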
Any ideas? Hints? Or does that not work at all with the versions we have?
I am aware that newer versions of systemd can do this on a per-application basis, using systemd-run as a wrapper, but upgrading is currently not an option.
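For context, the systemd approach I mean is along these lines (illustration only, not something we can use: it needs a newer systemd than the 219 shipped with SL 7.9, and './my_app' is a placeholder for the real job executable; on cgroup v1 the property is MemoryLimit=, on v2 it is MemoryMax=):

```shell
# Run one application in its own transient scope unit with a
# memory cgroup limit, but no cpuset restriction.
systemd-run --scope -p MemoryMax=16G ./my_app
```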