
Avoiding Known Pitfalls When Comparing Model9’s CPU Usage to That of Other Solutions

Understanding the Numbers Around Model9’s Blazing Speed and Efficiency

Offer Baruch | Oct 14, 2021

At Model9, we proudly extol the virtues of our Model9 Cloud Data Management for Mainframe (CDM). We know it is fast and we have proved it. So have our customers. But for those trying to replicate our results, there are points of possible confusion that can lead to disappointment or misunderstanding.

For example, using reports generated by IBM SDSF or the IBM OMEGAMON for z/OS performance monitor, someone running a test might look at Model9’s CPU consumption and come away disappointed. At first glance, in a head-to-head comparison between Model9 CDM and IBM DFSMShsm, IBM DFSMShsm might seem to be using fewer CPU resources than Model9.

However, this is misleading.

First, one of the key metrics reported by IBM SDSF and OMEGAMON is CPU consumption in real time: that is, the consumption at any given moment. All else being equal, IBM DFSMShsm normally shows a lower instantaneous number, but only because its work takes longer. Model9 CDM, in contrast, tends to produce short, intensive peaks as it quickly gets the job done. That does not mean Model9 CDM consumes more total CPU; for the same backup process it actually consumes much less than IBM DFSMShsm.
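To see why a lower instantaneous number can still mean more total CPU, consider a small illustration, a sketch with hypothetical numbers written in Python purely for the arithmetic:

# One utilization sample per minute, as a percent of one CPU.
# The long-running job never looks busy at any single moment,
# yet the area under its curve (total CPU seconds) is larger.
long_running_job = [10.0] * 60              # steady 10% for 60 minutes
short_burst_job  = [40.0] * 5 + [0.0] * 55  # intense 40% peak for 5 minutes

total_long  = sum(pct / 100 * 60 for pct in long_running_job)  # 360.0 CPU-seconds
total_short = sum(pct / 100 * 60 for pct in short_burst_job)   # 120.0 CPU-seconds

The job with the higher peak consumed a third of the total CPU time, even though a point-in-time monitor would show it as the "heavier" workload.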

If preferred, a user can reduce parallelism and throttle Model9 CDM to whatever usage level they want, slowing the process down, though we find the vast majority of Model9 customers are happy to get jobs done as fast as possible.

A second common mistake is comparing the CPU consumption of different workloads using the “raw” values in the IBM RMF workload activity report. It is important to understand how RMF calculates the SERVICE TIME CPU field. That field includes consumption on both general purpose and specialty processors, meaning zIIP processing is counted alongside general purpose CPU processing. On top of this, on sub-capacity IBM Z hardware models the zIIP engines run faster than the general purpose processors, so RMF normalizes their work to the speed of the slower processors, scaling the zIIP seconds by a normalization factor before adding them in. This can make Model9 CDM appear to consume more CPU when, in fact, Model9 CDM takes load off the general purpose CPU by assigning most of the work, such as compression, encryption, and data transfer, to the zIIP specialty engines. That is a better solution: it leaves the general purpose CPU free for critical system tasks and saves on MSUs.
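As a rough sketch of the arithmetic (hypothetical numbers; the real normalization factor depends on the specific sub-capacity model):

# How RMF arrives at the combined SERVICE TIME CPU figure on a
# sub-capacity model: zIIP seconds, measured at full-capacity speed,
# are multiplied by a normalization factor (> 1) so they are expressed
# in equivalent seconds of the slower general purpose processors.
gp_seconds   = 50.0   # time actually spent on general purpose processors
ziip_seconds = 150.0  # time spent on zIIP engines, at full-capacity speed
norm_factor  = 1.6    # hypothetical zIIP-to-GP speed ratio

service_time_cpu = gp_seconds + ziip_seconds * norm_factor  # 290.0

The combined figure of 290 seconds looks heavy, yet only 50 of those seconds actually ran on the general purpose engines.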

A third issue is that one cannot simply subtract the zIIP seconds mentioned above from the total CPU seconds to “correct” the combined metric reported by RMF. Because RMF multiplies the zIIP seconds by the normalization factor to match the speed of a full-capacity zIIP engine to the speed of the sub-capacity general purpose CPU, the same scaling must be applied when subtracting in order to calculate a true result.
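Continuing the hypothetical numbers from the sketch above, the correction looks like this:

def gp_only_seconds(service_time_cpu, ziip_seconds, norm_factor):
    # Undo the scaling RMF applied before subtracting the zIIP share.
    return service_time_cpu - ziip_seconds * norm_factor

naive   = 290.0 - 150.0                       # 140.0 -- overstates GP time
correct = gp_only_seconds(290.0, 150.0, 1.6)  #  50.0 CPU-seconds on GP engines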

The fourth and final issue that crops up in assessing figures reported by RMF is the common mistake of comparing CPU percentages (the APPL% CP metric) generated over differing time intervals. For instance, a traditional system that needs five minutes to complete a task will, of course, show lower CPU use per unit of time than Model9 CDM finishing the same task in three minutes. But once the percentages are scaled by their interval lengths, the numbers favor Model9 CDM.
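A quick way to make such figures comparable is to scale each percentage by its own interval length (again a sketch with hypothetical numbers):

def cpu_seconds(appl_pct, interval_seconds):
    # APPL% is an average over the measurement interval;
    # multiplying by the interval length recovers total CPU seconds.
    return appl_pct / 100 * interval_seconds

traditional = cpu_seconds(20.0, 5 * 60)  # 60.0 CPU-seconds over 5 minutes
model9_cdm  = cpu_seconds(25.0, 3 * 60)  # 45.0 CPU-seconds over 3 minutes

Here the workload with the higher percentage actually consumed less total CPU, which is why percentages from differing intervals cannot be compared directly.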

Let’s use the following RMF reports, recently provided by a customer who compared Model9 CDM and IBM DFSMShsm, to explain more clearly how to interpret the numbers:

RMF workload activity report for IBM DFSMShsm backup interval

RMF workload activity report for Model9 CDM backup interval

Looking only at the SERVICE TIME CPU field, it would seem that Model9 CDM consumes 406.085 CPU seconds while IBM DFSMShsm consumes only 139.297. However, as explained above, the 406.085 figure includes the normalized zIIP CPU time as well. The SERVICE TIME IIP field shows that Model9 CDM consumed 157.966 seconds on zIIP (at its full-capacity speed), whereas IBM DFSMShsm consumed no zIIP time at all.

The APPL% CP field gives a more accurate view, showing Model9 CDM using only 5.17% CPU time in the interval while IBM DFSMShsm used 26.55% over an interval of the same length. It also shows that Model9 CDM used 26.33% of the zIIP, which runs at full capacity and therefore yields more service time seconds when expressed in terms of the slower general purpose CPU, while IBM DFSMShsm did not utilize the zIIP engine at all.
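Since both measurements cover the same interval length, the general purpose CPU reduction can be read straight off the APPL% CP figures:

model9_pct = 5.17    # Model9 CDM, from the report above
hsm_pct    = 26.55   # IBM DFSMShsm, from the report above

reduction = 1 - model9_pct / hsm_pct  # about 0.805, i.e. roughly 80% less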

To sum up this customer example: Model9 CDM used 80% LESS CPU time than IBM DFSMShsm! You just have to study the reports properly and carefully to see it.

About the author

Offer Baruch | VP Field Engineering
Offer Baruch, VP Field Engineering for Model9, has a background in the IBM mainframe ecosystem and cloud. The many roles in his career have been IBM-centric, involving security, mainframe tape management, Linux operation and management, open systems storage administration, and large projects in consolidation, migration, performance tuning, capacity control, and capacity planning. Offer is a certified AWS professional solutions architect and developer associate.