PP-388: Allow mom hooks to accumulate resources_used values

This note is to inform the community of work being done to enhance resource usage reporting in PBS. Details may be found here:


Please feel free to provide feedback on this topic.


I believe it would be useful to allow the hook writer to return a JSON string as a value and have it stored properly in the accounting logs. If the hook writer returns something other than a string for the resource my_res, we should either serialize the returned value with json.dumps() ourselves or require that the hook writer do it. For example, if I wanted to return the following Python dictionary

a = {'a':6,'b':"some value #$%^&*@",'c':54.4,'d':"32.5gb"}

as the resources_used value, then I would need to call json.dumps(a), which would return

'{"a": 6, "c": 54.4, "b": "some value #$%^&*@", "d": "32.5gb"}'

As you can see, this is a valid JSON object value, and we should record it in the accounting logs as

resources_used.my_res={"mom_1":{"a": 6, "c": 54.4, "b": "some value #$%^&*@", "d": "32.5gb"}} instead of

resources_used.my_res={"mom_1":'{"a": 6, "c": 54.4, "b": "some value #$%^&*@", "d": "32.5gb"}'}

Or if the hook writer returns a list, we should accept it as a list instead of a string with a list in it:

resources_used.my_res={"mom_1":[1, 2, 3, "5", "2", "!@#$%^&*()"]} instead of

resources_used.my_res={"mom_1":'[1, 2, 3, "5", "2", "!@#$%^&*()"]'}
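
To make the round-trip concrete, here's a minimal Python sketch using the example values from this post (my_res itself is just the placeholder resource name from the thread):

```python
import json

# Hypothetical custom resource value built inside a hook (dict case).
a = {'a': 6, 'b': 'some value #$%^&*@', 'c': 54.4, 'd': '32.5gb'}

# json.dumps() both validates and serializes; json.loads() round-trips it,
# so a consumer gets a real dict back rather than an opaque string.
encoded = json.dumps(a)
assert json.loads(encoded) == a

# A list works the same way: the consumer recovers an actual list,
# not a string with a list inside it.
vals = [1, 2, 3, '5', '2', '!@#$%^&*()']
assert json.loads(json.dumps(vals)) == vals
```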

I'm currently playing around with this implementation-wise. We might run into problems with the accounting log format, though, as the Admin Guide (v13.1) section 13.3.1 (Log Entry Format) says:

message-text …message text format is blank-separated keyword=value fields

13.3.2 Space Characters in String Entries

Under Unix/Linux, you must enclose any strings containing spaces with quotes.

Ex. …user=pbstest group=None Account_Name="Power Users"

With your example, there's an embedded space in "some value", so I'd still have to do:
resources_used.my_res="{"mom_1":'{"a": 6, "c": 54.4, "b": "some value #$%^&*@", "d": "32.5gb"}'}"

But then accounting-log parsing can still break, since in a keyword="value" pair the keyword is "resources_used.my_res", and the value itself contains nested quotes and embedded spaces.

This gets interesting…

Hmmm. Does the accounting log parser respect a single quote (') versus a double quote (")?

It depends on which utility is used to parse the accounting logs. Does PBS actually ship one, something like pbs-report? I would assume whoever created the utility read through our Admin Guide to figure out the format and coded the tool accordingly.
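
As an illustration of the quoting question (not PBS's actual parser, just Python's shlex, which follows POSIX shell quoting rules and honors both quote characters):

```python
import shlex

# Hypothetical accounting-log message text in the documented
# blank-separated keyword=value format, with a double-quoted value.
msg = 'user=pbstest group=None Account_Name="Power Users"'

# shlex keeps the quoted space intact and strips the quotes.
fields = dict(tok.split('=', 1) for tok in shlex.split(msg))
print(fields['Account_Name'])  # Power Users

# Single quotes also protect embedded double quotes and spaces,
# so a JSON value wrapped in single quotes survives as one token.
tok = shlex.split('resources_used.x=\'{"a": 6, "b": "some value"}\'')
print(tok)  # ['resources_used.x={"a": 6, "b": "some value"}']
```

Whether pbs-report or PBSA behave the same way is exactly the open question, but if they follow shell-style quoting, the single-quote-around-JSON approach would parse cleanly.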

Hey Al,
Your EDD looks fine. I just have a couple of minor suggestions on formatting:
First, a nit: at the top, in the line about custom resources, you say "set in a hoo1k".
Second, you should anonymize your examples. This is a public EDD. You should remove your username and change your host names to something else.

The content looks fine.



First, apologies for cluttering the Contributor’s portal (Confluence) with comments instead of keeping the discussion on the Community Forum (here) as per suggested practice. Here is a summary of those comments; I will also ask that those comments be deleted from the Contributor’s portal to clean it up.

Summary (all these are resolved):

1. Having examples is great – they really help with understanding the design – thanks!
2. Initial design resulted in inconsistent names for nodes in accounting logs, sometimes fully qualified “corretija.pbspro.com” and sometimes short “corretija”. Design to be updated to use short names.
3. Initial design text was ambiguous regarding whether aggregations were per-node or per-MOM. Design to be updated to be more clear that it is per-MOM.
4. Question about aggregation of resources of type string_array… Design to be updated: resources of type string_array will not be supported (as they are both complicated and unnecessary).
5. How does one create an aggregation (not a summation) for a bunch of numeric values? Answer: you must use a resource of type string (not a numeric resource).

There is also one outstanding comment which I will summarize and address in a separate post.

Thanks again!

- bill

I modified the accounting log to have a custom resource formatted as a JSON dictionary and then ran it through pbs-report and PBSA. In neither case did it break parsing. The additional resource that I added to the E record was

resources_used.cpu_test='{"a":5,"b":63.2,"c":{"d":"abcd 'evf$%^&*"}}'

Note that neither tool tried to display the resource, but at least it did not break the parser, which was still able to show the resources defined before and after the added resource in the accounting log.

Regarding aggregation of string resources, I have a suggestion. I understand that the values are aggregated on a per-MOM basis, and I like the idea of JSON format. However, automatically including the MOM names reduces flexibility and increases the work for anyone wanting to consume the output.

I suggest keeping JSON syntax, but removing the MOM names, and instead, aggregating all the JSON objects into a single JSON object. This would then allow a natural output for not only per-MOM values, but also for per-vnode, per-chunk, and even per-MPI process and per-thread values too.

For example, assume a simple job requesting four cpus that can run on any number of MOMs (depending on configuration and load on the system):

qsub -l select=4:ncpus=1 -l place=free

Assuming we want to report on a per-cpu value, e.g., max memory per MPI rank. The initial design forces MOM names into the output, resulting in 7 different arrangements for the output, depending on how many chunks are assigned to each of up to 4 MOMs. The suggestion is to allow the hook writer to decide whether MOM names should be included, allowing the hook writer to produce a single output arrangement, regardless of chunk-to-MOM assignment.

Here’s an example. Assume the hook-writer has per-MPI-rank max memory data (again, just one of many examples), and assume MOM A was assigned the first two chunks and MOM B and MOM C were assigned chunks 3 and 4 respectively. The hooks executed on the MOMs would assign the values as:

MOM A: val = { rank0 : 21.2gb, rank1 : 33.3gb }
MOM B: val = { rank2: 4gb }
MOM C: val = { rank3 : 600gb }

And PBS Pro would aggregate these into a single JSON object for the accounting log / qstat of:

val = { rank0 : 21.2gb , rank1 : 33.3gb , rank2: 4gb, rank3 : 600gb }

This suggested design change also allows one to achieve the output from the initial design, by simply including the MOM name, e.g., on MOM A:

val = { momA : { rank0 : 21.2gb , rank1 : 33.3gb } }

This has the strong advantage of easily supporting the initial design while also supporting a natural output for per-vnode values too.

One further thought, but not a suggestion… The above assumes forcing JSON syntax, and I'm on the fence about whether that is good. Forcing JSON ensures uniformity of output (which is good); simply concatenating the strings and leaving the syntax decision to the hook writer adds a lot of flexibility (they can certainly output JSON, but also XML or …), but it also slightly increases the hook writer's burden to correctly place opening and closing elements and commas.

Should we invite the PBS Analytics folks to review our design since they are consumers?

I saw an earlier email from Jon inviting Serguei, Ashish, and Joshi to comment on the forum. I’m not sure if they’re Analytics-related…


Thanks, Bhroam. I’ll update the EDD with your suggestion.

Jon, good to know that it didn’t break the parser in pbs-report and PBSA.

Hi Bill. I can also see your point. It does add flexibility if we just aggregate the arbitrary strings, as it allows the hook writer to choose JSON, XML, or other formats. The hook writer can make the 'pbs.get_local_nodename()' call to determine the mom host name and include that in the resources_used value. What do you think, Jon @jon? I'm seeing the benefits of both designs and could go either way.
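
For illustration, a hook along those lines might look like the sketch below. It assumes the PBS Pro hook environment (the pbs module only exists inside a running hook, so this is not standalone-runnable), and "my_res" is a hypothetical custom string resource:

```python
# Sketch of an execjob_epilogue hook body; requires the PBS mom hook
# environment. "my_res" and the per-rank values are hypothetical.
import json
import pbs

e = pbs.event()
job = e.job

# Data gathered by the hook writer on this mom, e.g. per-rank max memory.
data = {"rank0": "21.2gb", "rank1": "33.3gb"}

# If the site wants mom names in the output, the hook can include
# the local node name itself rather than having PBS force it.
mom = pbs.get_local_nodename()
job.resources_used["my_res"] = json.dumps({mom: data})
```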

I have already invited the PBSA team to review the design.

One of the reasons we decided to force the mom name is that if one node did not report back, we would know which node it was, so one could come back later and know whether the results were complete.

If we ever want to be able to consume this data in an automated fashion, then we need to standardize on the format. JSON is a standard in the community and is easily consumed by Python.

I have two concerns that I would like to add to the discussion. The first concern is scalability. We will need to record an immense amount of data for extremely large jobs. Perhaps doing so in a string is not the right approach, whereas a database might be more practical. Perhaps we compress the string. Or perhaps we format the data in such a way that it minimizes redundancy.

Second is the format of the data, be it a string or otherwise. There is a very good chance we will want to improve upon the initial format at some point in the future. We should build a versioning mechanism into the format to allow for future revisions and provide backward compatibility. The first N bytes of the data would contain the version of the format being used.
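
One way to sketch the versioning idea (this uses a top-level "version" key rather than a fixed byte prefix, so it is a variation on the suggestion, not the proposal itself; the "data" key and values are hypothetical):

```python
import json

# Versioned payload: reserve a "version" key so the format can be
# revised later while consumers keep backward compatibility.
payload = json.dumps({"version": 1, "data": {"rank0": "21.2gb"}})

decoded = json.loads(payload)
# Treat a missing key as version 1 so pre-versioning payloads still work.
version = decoded.get("version", 1)
if version == 1:
    data = decoded["data"]
print(data)  # {'rank0': '21.2gb'}
```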

Maybe I am missing something. Each accumulated resource string should be on the same order as our exec_hosts size. So if only a handful of resources are used, we would be ~2-3x the size of our current accounting "E" message (unless the hook writer gets carried away).

I agree that versioning is a good idea, and I would like us to go there if/when we do a wholesale change of the accounting logs to a machine-readable version (my vote is JSON at the moment). At this point, I am not sure that versioning is merited, since we are maintaining the same format of the accounting logs. We are, however, allowing hook writers to aggregate strings across hosts, which is something we have not done before, and storing them as a JSON-formatted string. On the other hand, we could format it the same as the exec_hosts line so that the accounting logs look consistent. I am not a fan of this idea, since a string is a string, and to my knowledge we don't do anything with resources_used.some_name strings in our reporting tools. So in my mind a JSON-formatted string would help those trying to consume this new data.

I feel scalability is not a big concern here. The amount of data collected is almost the same, maybe slightly more, and only when a large number of string resources are reported. The data eventually resides in the database anyway, and the communication between the moms involving large data is already compressed.

I do like the idea of not forcing any one single format, but allowing a site to choose what they would like to use, like XML vs. JSON. But if we have to tie to one format, then JSON in my mind is the best bet – it's the current industry standard until the industry invents something else :slight_smile:

I’ve updated the design document based on the discussions so far. The link is as follows:


Updates done include:

  1. Stressed that it's a per-MOM aggregation.
  2. The mom hostnames in the aggregated value now use short hostnames.
  3. Removed 'string_array' from the resource types that can be aggregated.
  4. Fixed some typos.
  5. Cleaned up the examples a bit.