The base information for the performance statistics you are looking to track is included in the system's <site-7731>/logs/lae-audit.log file and, depending on the timeframe of the analysis, any 'rotated' audit logs (which are named lae-audit.log.yyyy-mm-dd).
While the following use case does not involve a data flow, you may want to consider using the 'jq' command-line tool to analyze the lae-audit.log file. See the following articles relating to its use:
How to use jq on linux Environment
https://customer.precisely.com/s/article/How-to-use-jq-on-linux-Environment-360058604674?language=en_US
Data360 Analyze: How to use jq on Windows
https://customer.precisely.com/s/article/Data360-Analyze-How-to-use-jq-on-Windows-360059947393?language=en_US
Alternatively, you could build a custom data flow that parses the JSON audit log event information.
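As a rough illustration of the parsing approach, the sketch below pairs start/complete events to compute per-node elapsed time. Note that the event names, field names ("event", "node", "run", "time") and timestamp format here are assumptions for the example only; the actual lae-audit.log event schema should be confirmed against your own log file before building anything on it.

```python
import json
from datetime import datetime

# Hypothetical audit events -- the real lae-audit.log schema (field names,
# event types, timestamp format) must be checked against your own system.
SAMPLE_LINES = [
    '{"event": "NODE_STARTED", "node": "Filter Rows", "run": "run-1", "time": "2023-10-10T05:00:00"}',
    '{"event": "NODE_COMPLETED", "node": "Filter Rows", "run": "run-1", "time": "2023-10-10T05:00:42"}',
]

def node_durations(lines):
    """Pair NODE_STARTED/NODE_COMPLETED events; return seconds per (run, node)."""
    starts = {}
    durations = {}
    for line in lines:
        evt = json.loads(line)
        key = (evt["run"], evt["node"])
        ts = datetime.fromisoformat(evt["time"])
        if evt["event"] == "NODE_STARTED":
            starts[key] = ts
        elif evt["event"] == "NODE_COMPLETED" and key in starts:
            durations[key] = (ts - starts.pop(key)).total_seconds()
    return durations

print(node_durations(SAMPLE_LINES))
# e.g. {('run-1', 'Filter Rows'): 42.0}
```

Wall-clock duration per node is only a proxy for actual compute, but it is usually enough to spot the worst-performing nodes and runs.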
------------------------------
Adrian Williams
Precisely Software Inc.
------------------------------
Original Message:
Sent: 10-10-2023 05:44
From: Toby Harkin
Subject: Server Performance Report
I am wondering if anyone in the community has ever built a dataflow to track what impact different runs may be having on the server?
I would like to be able to see the actual compute of each node within each data flow for each run, both ad hoc and scheduled. If actual compute is not possible, is there any sort of proxy that could be tracked and measured to help us identify poorly performing runs on our server?
Any help on this would be great.
Cheers.
------------------------------
Toby Harkin
Telstra Corporation Limited
Sydney NSW
------------------------------