In our current cluster, we use NFS to store the job files. When we submit a job with qsub from a shared directory, we don't want stage-in, and when the job finishes, we don't want stage-out, because that is wasted I/O.
How can we disable stage-in/stage-out in the job life cycle?
Have you tried this: the job submission script and the input deck/data it references live in a shared folder that is accessible from all the compute nodes (at the same path) by the respective job owners, and the job is submitted from within that shared folder. In that case only the job script is sent to the mother superior node; the rest of the data is read from the common share, and the results are written back to it.
Yes, I have tried that case, but PBS still transfers data between a temporary directory on the execution node and the shared directory. I don't want any data transfer after the job is done; I want the shared directory itself to be used as the execution directory on the execution nodes.
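If this is PBS Professional, the transfer you are seeing is most likely the spooling of the job's stdout/stderr into the MoM's spool area and the copy-back when the job ends, rather than stage-in/stage-out of your data files. A sketch of a job script that asks for direct write, assuming your PBS Pro version supports the `d` sub-option of `-k` (the job name, solver, and file names below are placeholders, not from the original post):

```shell
#!/bin/bash
#PBS -N shared-dir-job
# Keep stdout/stderr and write them directly to their final
# destination instead of spooling them in the MoM's spool dir
# and copying them back at job end ("direct write"). This only
# helps if the destination path is on a filesystem the
# execution node can see, e.g. the NFS share.
#PBS -k oed

# PBS_O_WORKDIR is set by PBS to the directory qsub was run from;
# cd there so the job executes directly in the NFS share.
cd "$PBS_O_WORKDIR"

# Hypothetical workload: reads input from and writes results to
# the shared directory, so nothing needs to be staged.
./my_solver input.dat > results.out
```

Submit it with `qsub job.sh` from inside the shared folder. It may also be worth checking the job's `sandbox` attribute: with `sandbox=PRIVATE`, PBS creates and populates a per-job staging directory on the execution node, while the default `sandbox=HOME` avoids that.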