We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:
Post your ideas
Start by posting ideas and requests to enhance a product or service. Take a look at ideas others have posted, and upvote them if they matter to you.
Post an idea
Upvote ideas that matter most to you
Get feedback from the IBM team to refine your idea
Help IBM prioritize your ideas and requests
The IBM team may need your help to refine an idea, so they may ask for more information or feedback. The offering manager team will then decide whether they can begin working on your idea. If they can start during the next development cycle, they will put the idea on the priority list. Each team at IBM works on its own schedule: some ideas can be implemented right away, while others may be placed on a later schedule.
Receive notifications on the decision
Some ideas can be implemented at IBM, while others may not fit within the development plans for the product. In either case, the team will let you know as soon as possible. In some cases, we may be able to find alternatives for ideas which cannot be implemented in a reasonable time.
Please use the following categories to raise ideas for these offerings across all environments (traditional on-premises, containers, cloud):
Cloud Pak for Multicloud Management
Cloud Pak for Network Automation - incl. Orchestration and Performance Management
Cloud Pak for Watson AIOps - incl. Netcool Operations Management portfolio
Edge Application Manager
IBM Observability with Instana
ITM-APM Products - incl. IBM Tivoli Monitoring v6 and Application Performance Monitoring v8
Workload Automation - incl. Workload Scheduler
Tivoli System Automation - incl. Tivoli System Automation Application Manager (SA AM) and Tivoli System Automation for Multiplatforms (SA MP)
IWS CPU usage improvement for ETT data set CLOSE events
Each IWS tracker subsystem on z/OS gets control during data set CLOSE processing in order to check whether the event relates to an ETT-defined data set (ETT = Event Triggered Tracking) in the EQQDSLST file. If EQQDSLST contains many entries (several hundred), this processing can consume many CPU-hours per day on a z/OS system, and these hours are hidden because they are attributed to the caller of the CLOSE. We have analyzed a system where a conservative estimate was at least 8 CPU-hours a day for this IWS processing alone (see "use case" for details). The design of this check can and should be improved in order to save CPU time.
My analysis summary in the PMR shows some details:
During a problem analysis we noticed an excessive number of calls to EQQXGENY during CLOSE processing. This happens during SMFWTM processing, when IWS code is called via SSI with function code 68 (X'44'), for which function routine EQQZSSRX is registered.
A dump on our monoplex system xxxx showed 813 such calls to EQQXGENY for a single CLOSE event.
A GTF-Trace I took on system yyyy showed 2000 such calls for a single CLOSE event.
It looks like the number of calls is driven by the number of possible ETT events defined for a data set close; each IWS tracker subsystem checks for an occurrence of such an event.
What we want to address in this PMR is the current design of this processing, which consumes many CPU-hours per day that are attributed to the address spaces performing the CLOSE.
A dump taken on system yyyy shows that this IWS processing within a single CLOSE SVC consumes a bit more than 1 ms (millisecond). A first improvement potential I noticed is that each and every call to EQQXGENY getmains and freemains working storage for EQQXGENY via SVC. This by itself amounts to 2 billion such SVCs per day on our system yyyy. This tremendous number could be reduced by passing a working-storage area address to EQQXGENY in addition to the parameters passed to it today.
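The proposed change can be sketched abstractly. The Python snippet below uses hypothetical names and a deliberately simplified prefix-only match; it only illustrates the difference between today's allocate-per-call pattern and a caller-owned, reusable working-storage area, which is the essence of the suggestion.

```python
# Sketch with hypothetical names: allocating working storage on every
# comparison call (today: a GETMAIN/FREEMAIN SVC pair per invocation)
# vs. reusing a scratch area that the caller allocated once.

def match_with_alloc(pattern: str, dsname: str) -> bool:
    # Today's behavior: working storage is obtained and released per call.
    work = bytearray(4096)                 # fresh allocation on every call
    work[:len(dsname)] = dsname.encode()
    # Simplified stand-in for the real generic comparison.
    return dsname.startswith(pattern.rstrip("*"))

def match_with_scratch(pattern: str, dsname: str, work: bytearray) -> bool:
    # Proposed behavior: the caller passes the address of a working-storage
    # area it owns, so no per-call allocate/free is needed.
    work[:len(dsname)] = dsname.encode()
    return dsname.startswith(pattern.rstrip("*"))

# The caller allocates the scratch area once, outside the loop over entries.
scratch = bytearray(4096)
entries = ["PROD.PAYROLL.*", "TEST.*", "PROD.GL.DAILY"]
hits = [p for p in entries if match_with_scratch(p, "PROD.PAYROLL.DATA", scratch)]
```

With several hundred ETT entries checked per CLOSE, eliminating the two SVCs per call removes a fixed cost from every one of the roughly one billion daily calls estimated below, without changing the comparison result.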
I ran a couple of SLIPs with A=TRACE, just to get a time estimate for 50,000 calls to EQQXGENY. Each time it took less than 4 seconds, once even less than 1 second. Conservatively calculating with 4 seconds per 50,000 calls and 80,000 seconds per day, we should have at least 20,000 x 50,000 = 1,000,000,000 calls to EQQXGENY per day on system yyyy. As 2,000 such calls happen in a single CLOSE, we should have about 500,000 such CLOSE events per day. As each CLOSE consumes more than 1 ms of CPU time in IWS code on a 2827-725 CPU model, that alone is at least 500 CPU-seconds per day; extrapolating from the SLIP timings, this IWS SSI processing costs at least 8 CPU-hours per day, probably more!
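The arithmetic above can be reproduced step by step; the numbers below are taken directly from the quoted measurements.

```python
# Back-of-the-envelope estimate from the SLIP and dump measurements.

calls_per_slip = 50_000     # calls covered by one SLIP measurement
secs_per_slip = 4           # conservative time for those 50,000 calls
secs_per_day = 80_000       # conservative "day" (a full day is 86,400 s)

# Total EQQXGENY calls per day if this rate is sustained all day.
calls_per_day = (secs_per_day // secs_per_slip) * calls_per_slip

# Each CLOSE event drives about 2,000 calls (GTF trace on system yyyy).
calls_per_close = 2_000
closes_per_day = calls_per_day // calls_per_close

# Lower bound: each CLOSE costs a bit more than 1 ms of CPU (from the dump).
cpu_secs_lower = closes_per_day * 0.001

# At the measured SLIP rate the calls would occupy the full 80,000 seconds,
# i.e. roughly 22 hours, which is what supports the "at least 8 CPU-hours,
# probably more" claim even if CPU is only a fraction of that time.
elapsed_hours = secs_per_day / 3600
```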
The design of the ETT event checking is clearly suboptimal. I have looked at the code of EQQXGENY: it is called with 6 parameters, of which one holds the ETT data set name (possibly generic) and another the closed data set name. Because of the possible generic names, the comparison performed in EQQXGENY is non-trivial. The module name might suggest we compare a GENeric X to a Y.
If all possible ETT data set names were properly sorted, at least up to the first generic character, the search for a possible match could be performed much, much faster.
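The sorted-list idea can be sketched as follows. This is an illustration only: it assumes '*' as the generic character, uses Python's fnmatch as a stand-in for the real generic comparison, and invents helper names; it does not reflect the actual EQQDSLST layout. The point is that sorting the entries once by their literal prefix (the part before the first generic character) lets each CLOSE probe the list with binary searches instead of comparing the closed data set name against every entry.

```python
import bisect
from fnmatch import fnmatchcase

def literal_prefix(pattern: str) -> str:
    # Part of the pattern before the first generic character ('*' assumed).
    i = pattern.find("*")
    return pattern if i < 0 else pattern[:i]

def build_index(patterns):
    # Done once, when the ETT list is loaded or refreshed:
    # sort the entries by their literal prefix.
    index = sorted(patterns, key=literal_prefix)
    keys = [literal_prefix(p) for p in index]
    return index, keys

def matches(index, keys, dsname: str):
    # A pattern can only match if its literal prefix is a prefix of dsname,
    # so probe each prefix of dsname (a DSN has at most 44 characters) with
    # a binary search instead of scanning every entry.
    hits = []
    for n in range(len(dsname) + 1):
        prefix = dsname[:n]
        lo = bisect.bisect_left(keys, prefix)
        hi = bisect.bisect_right(keys, prefix)
        for p in index[lo:hi]:
            if fnmatchcase(dsname, p):
                hits.append(p)
    return hits

patterns = ["PROD.PAYROLL.*", "PROD.GL.*", "TEST.*", "PROD.PAYROLL.DATA"]
index, keys = build_index(patterns)
found = matches(index, keys, "PROD.PAYROLL.DATA")
```

With several hundred entries, this turns each CLOSE check from hundreds of generic comparisons into at most 45 binary searches plus a handful of comparisons on the few candidates that actually share a prefix with the closed data set name.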