CloudFrame’s survey of large enterprises found that the median installed mainframe MSU capacity is growing organically 8–12% annually. CloudFrame Relocate reduces mainframe GPP (General Purpose Processor) MSU consumption by 40–70%, which may reduce application datacenter chargebacks. CloudFrame Renovate transforms legacy mainframe applications into elegant cloud-native Java Spring Batch and Spring Boot services aligned to the 12 Factor Application Principles. For large batch workloads, CloudFrame’s generated Java code often runs as fast as, or faster than, COBOL.
Relocate achieves this by enabling applications to run as Java in any JVM, on the mainframe or in private and public cloud. On the mainframe, Java is zIIP (Z Integrated Information Processor) eligible, and the zIIP is significantly less costly than the GPP. Additionally, zIIP MSU is not subject to MLC. While reducing MSU most benefits the application owner, it also reduces MLC for all enterprise software licensed by MLC, thus benefiting the entire enterprise.
The 40–70% range of MSU reductions is conservative and varies with the application profile; see ‘Mileage varies’ below. Computationally or processing-intensive applications will see higher reductions in MSU upon becoming zIIP eligible. Applications that make extensive use of DB2 SQL will also benefit, because SQL is zIIP eligible when executed on behalf of a Java JDBC application; this holds whether the application executes on or off the mainframe.
Some mainframe data centers charge back CPU costs at a constant rate, regardless of GPP vs. zIIP usage. Even in these situations, reducing MSU and increasing zIIP consumption, or using CloudFrame Relocate to shift compute to private or public cloud, makes strong financial sense: zIIP costs only a fraction of GPP and reduces MLC, so why overpay?
This point of view is based on cited information, customer feedback, and the experience of CloudFrame, Inc. It is, to the best of our understanding, accurate, complete, and current.
MSU (Millions of Service Units) is a unit that measures the amount of CPU consumed per hour. IBM uses the MSU metric for software pricing purposes. It is derived from MIPS consumption, normalized to be agnostic of the mainframe generation and CPU model.
MLC (Monthly License Charge) is the pricing model for enterprise software used by IBM and nearly every other vendor in the marketplace, CloudFrame excepted. MLC is derived by multiplying your contracted rate by monthly-peak R4HA (Rolling 4 Hour Average) MSU consumption. It’s quite normal for companies to have hundreds of different licensed software titles in their LPARs, each subject to MLC.
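To make the R4HA mechanics concrete, here is a minimal sketch, not IBM’s actual SCRT logic, of how the monthly-peak rolling 4-hour average drives the charge. The rate and MSU samples are hypothetical.

```java
import java.util.Arrays;

// Illustrative sketch only - not IBM's SCRT logic; the rate is hypothetical.
// MLC is driven by the monthly PEAK of the Rolling 4-Hour Average (R4HA)
// of MSU consumption, not by total or average consumption.
class MlcSketch {

    // Given hourly MSU samples, return the peak rolling 4-hour average.
    static double peakR4ha(double[] hourlyMsu) {
        double peak = 0.0;
        for (int end = 4; end <= hourlyMsu.length; end++) {
            double avg = Arrays.stream(hourlyMsu, end - 4, end).average().orElse(0.0);
            peak = Math.max(peak, avg);
        }
        return peak;
    }

    // MLC = contracted rate per MSU * monthly-peak R4HA.
    static double monthlyCharge(double[] hourlyMsu, double ratePerMsu) {
        return peakR4ha(hourlyMsu) * ratePerMsu;
    }

    public static void main(String[] args) {
        // Hourly samples with a 4-hour overnight batch spike:
        // the spike, not the quiet hours, sets the bill.
        double[] day = {50, 50, 200, 220, 210, 190, 60, 50, 50, 50, 50, 50};
        System.out.printf("Peak R4HA: %.1f MSU%n", peakR4ha(day));
    }
}
```

The point the sketch makes is that a single four-hour spike sets the month’s charge, which is why flattening peaks (or moving them off the GPP) reduces MLC out of proportion to total MSU saved.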
MIPS (Millions of Instructions Per Second) is a measure of how much work the CPU did in a second. IBM’s SCRT (Sub-Capacity Reporting Tool) performs the complex calculation of converting MIPS to MSU. Like PCs and iPhones, newer-generation mainframes and CPU models get faster, meaning they do more work in a second. Rather than pricing MSU and MLC directly by generation and CPU model, IBM chose to normalize MSU independent of the hardware. This implies that MIPS measurements are scaled by a “factor” correlated to the mainframe generation and CPU model.
LPAR (Logical Partition) is a common mainframe configuration in which a single physical hardware unit is virtualized to host multiple virtual machines (VMs). Companies run multiple LPARs to isolate environments: quarantined testing of new vendor software releases, development and QA, pre-production, production, and disaster recovery.
Brief history of Java on Z
In the late 90’s, IBM and other tech giants were caught flat-footed by:
- The success of Java (a free, open-source language)
- Red Hat’s launch and the viability of Linux (a free, open-source OS that ran on commodity hardware)
- Billions of dollars of IT investment flowing into the WinTel (Windows/Intel) ecosystem.
IBM made a strategic commitment to make the mainframe price-competitive with Java on Linux. Fast forward to 2021: ~250 billion lines of COBOL remain in production, yet most mainframe customers have a strategy to either reduce or eliminate their mainframe footprint while innovation and investment in Linux, Java applications, and open source continue to accelerate. So, when you can’t beat them, buy them! IBM closed its acquisition of Red Hat for USD 34B in July 2019.
However, in hindsight, IBM appears strategically blind to the pitfalls of the data center bureaucracy, which it invented and instilled in its customers from the 1960’s through nearly three decades later, when the unit cost of compute exceeded the unit cost of labor. At that time, the bureaucracy served a valuable purpose for customers. But when the economics of IT changed, so should the bureaucracy have changed. There’s a term for this new normal of IT: bimodal IT.
Reflecting on three decades of experience integrating web, mobile, and desktop applications with the mainframe, this bureaucracy has created friction in every project we at CloudFrame are aware of. Fast forward to the Agile DevOps operating model prevalent today, and teams don’t want to do new development on the mainframe if they can avoid it. Nor do they want to depend on the applications that run on the mainframe. Nor does any Linux administrator or development team want to run Linux on the mainframe unless they started doing so 20 years ago and got it right.
It’s curious: how many other platforms and technologies have become roadkill because of the bureaucracy around them?
Primer on mainframe costs
According to Gartner’s 2019 IT Key Metrics Data, the total cost of mainframe, which includes hardware, software, facilities, utilities, loaded cost of labor, etc., breaks down as follows: 22% is MSU, 46% is MLC, and 28–30% is labor. The remaining 2–4% is less significant.
MLC costs can be reduced only by rehosting applications off the mainframe, re-negotiating, reducing peak MSU utilization, capping CPU capacity, decommissioning unused enterprise software from LPARs, and re-partitioning LPARs to move workloads of high-cost enterprise software from the LPARs consuming the most MSU to smaller, capped LPARs.
Most customers, hosting providers excepted, run their data center as a cost center. Their application chargeback models are designed around units that can be consistently and accurately measured, and the units are loaded to approximate the total cost of providing them (reference Gartner above). Mainframe units tend to be MIPS and storage. While storage pricing varies across disk, tape, virtual tape, redundancy, etc., MIPS are MIPS. In the data center chargeback models we’ve reviewed, finance must ‘true up’ the data center costs each year with either a final charge or a refund to the internal customers.
Putting it all together: customers on a less-than-full-capacity (sub-capacity) licensing model generate an SCRT report (a contractual obligation), which shows the monthly-peak R4HA MSU utilization of each LPAR for MLC computation. For additional details, please see Understanding the MSU Consumption metric in an Enterprise Consumption Solution.
Demystifying the zIIP
The zIIP “engine”, as it’s known, is for all intents and purposes a CPU no different from the mainframe GPP, but at a lower cost. The key exceptions are:
- IBM must approve software to be eligible to run on it, as they have for Java
- It is not possible to use the zIIP alone to IPL and run z/OS
- zIIP units of work are not interruptible by Workload Manager (WLM).
In z/OS, general-purpose programming languages such as COBOL, PL/1, and HL-ASM can only run on the mainframe GPP, whereas Java is eligible to run on the zIIP engine or the GPP. The customer’s mainframe configuration controls what happens when a zIIP-eligible unit of work is ready but no zIIP capacity is available: either wait for an available zIIP or dispatch to the GPP. If well-written and performance-tuned Java (e.g., as generated by CloudFrame) runs on the GPP, it has a cost structure similar to COBOL on the GPP, with the caveat that JDBC SQL remains zIIP eligible.
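The dispatch decision described above can be sketched as a toy model. The names and flags here are ours and purely illustrative; in reality this behavior is set by z/OS configuration, not application code.

```java
// Toy model of the zIIP-vs-GPP dispatch decision. Names are illustrative;
// the real behavior is controlled by z/OS configuration, not code.
class DispatchSketch {

    enum Processor { ZIIP, GPP, WAIT }

    // crossOverToGpp mirrors the site configuration choice: send zIIP-eligible
    // work to a GPP when no zIIP is free, or make it wait for a zIIP.
    static Processor dispatch(boolean ziipEligible, boolean ziipAvailable,
                              boolean crossOverToGpp) {
        if (!ziipEligible) return Processor.GPP;   // COBOL, PL/1, HL-ASM path
        if (ziipAvailable) return Processor.ZIIP;  // the normal Java path
        return crossOverToGpp ? Processor.GPP : Processor.WAIT;
    }
}
```

The financial consequence is visible in the last line: a site that allows crossover trades zIIP savings for throughput during zIIP capacity shortfalls, while a site that waits preserves savings at the cost of latency.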
One “zIIP engine” is list priced at ~$27,243K plus ~$5,423K maintenance in years 2 through 5, whereas the GPP is leased and paid for based on MSU. The permitted ratio of zIIP to GPP increases with mainframe generation, e.g., z13, z14, z15. For z15 it’s 3 zIIP (maybe 4 if you negotiate well) to 1 GPP, for z14 it’s 2 to 1, and for z13 it’s 1 to 1. The mainframe chassis has a large backplane and can support many GPPs and zIIPs, as well as RAM, so it’s not uncommon for a single physical machine to have hundreds of processors.
Want to add zIIP capacity, and it’s causing friction and delay? Why? Do you know what the “z” in z15, z14, etc. means? It means “Zero Downtime”: you can maintain the hardware (adding or removing zIIPs, GPPs, or other components) without powering down the mainframe. So, question long change cycles, and if you have a pressing deadline, consider the cloud instead.
zIIP list pricing above provided by the IBM zIIP cost estimator, using the following steps:
- Follow the link above
- Select “Get Started Now“
- Select ‘Run Docker OCI’
- Click ‘continue’
- Enter 4 for number of cores, click ‘continue’
- Enter 0 (zero) for ‘How many zIIPs do you have’
- Select ‘z15 T01’, click ‘Calculate your savings’.
The 40–70% range of MSU reductions is conservative and varies based on the application profile. An application that is computationally or processing intensive and becomes fully zIIP eligible will be on the high end of MSU reduction. Applications that make extensive use of DB2 SQL will also see higher reductions in MSU, because SQL is zIIP eligible when executed on behalf of a Java application.
MSU reduction benefits are slightly reduced for DB2 Inserts, Updates, and Deletes, because they tend to have slightly lower zIIP eligibility than read-only operations, likely due to COMMIT management. The same is true for VSAM operations, particularly when CI and CA splits occur. Calling subprograms written in COBOL, HL-ASM, PL/1, etc. from Java, a CloudFrame feature, is not zIIP eligible.
And finally, if CloudFrame Relocate is used to shift compute off the mainframe to a lower-cost platform such as cloud, then mainframe file I/O, calls to remote subprograms, and JDBC SQL will incur GPP overhead when function-shipped from CloudFrame Relocate back to the mainframe.
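A back-of-the-envelope estimator makes the "mileage varies" point concrete. The eligibility fraction below is a hypothetical input, not a CloudFrame measurement: the zIIP-eligible share of a workload moves off the GPP, while non-eligible work stays behind.

```java
// Back-of-the-envelope estimator. The eligibility fraction is a hypothetical
// input, not a CloudFrame measurement: the zIIP-eligible share of the workload
// moves off the GPP; non-eligible work (e.g., called COBOL subprograms,
// portions of insert/update/delete SQL) stays on the GPP.
class SavingsSketch {

    static double gppMsuAfter(double gppMsuBefore, double ziipEligibleFraction) {
        return gppMsuBefore * (1.0 - ziipEligibleFraction);
    }

    static double reductionPercent(double before, double after) {
        return 100.0 * (before - after) / before;
    }

    public static void main(String[] args) {
        double before = 1000.0;                    // GPP MSU today (hypothetical)
        double after = gppMsuAfter(before, 0.55);  // assume 55% zIIP eligible
        System.out.printf("GPP MSU: %.0f -> %.0f (%.0f%% reduction)%n",
                before, after, reductionPercent(before, after));
    }
}
```

Plugging in eligibility fractions of 0.4 to 0.7 reproduces the 40–70% range quoted above; the profile factors in the preceding paragraphs (SQL mix, VSAM splits, subprogram calls) are what push a given application toward one end or the other.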
Java can be faster than COBOL
Architects and organizations that conducted earlier proofs of concept may have concluded that Java has a legacy of being slower than COBOL. This was historically true on the mainframe, 5–20 years ago.
Several of our customers had tried using Micro Focus COBOL on Linux or Windows to rehost their applications off the mainframe. However, they got stranded running COBOL on two platforms because they couldn’t meet the mainframe batch window; in fact, the best they achieved was several orders of magnitude longer, forcing the most expensive jobs to stay on the mainframe. Not what they had in mind when they set off to reduce costs, only to increase cost and complexity! Meanwhile, these customers were delighted to discover that CloudFrame’s generated Java met the batch window cycle-time requirements and in many instances was faster than COBOL on the mainframe.
Surprise! For large batch workloads, CloudFrame’s generated Java code often runs as fast as, or faster than, COBOL. This is our customers’ claim when executing on the mainframe or on a Windows or Linux platform. Of course, mileage varies according to the use case, but CloudFrame is committed to generating the best Java code and continuously improving quality, maintainability, and performance.
Closing thoughts on mainframe costs and COBOL 4.2
One concern with IBM’s longstanding approach for normalizing MIPS to MSU is that different computer instructions take more or less time to execute. Given that, is the SCRT factor used to normalize MIPS to MSU based on the normal distribution of instructions from longitudinal customer studies? We were unsuccessful in finding any information that explains this, as were our customers.
Another concern with this approach is that older programs do not implicitly benefit from newer hardware, yet the cost of running these applications increases over time. It’s quite common for mainframe applications to run unchanged for years, even decades. Thus, applications compiled 5–10 or more years ago are not benefiting from improvements in newer hardware, newer instructions, and the newer compiler designs that exploit them. However, this issue is not unique to mainframes and the applications that run on them.
CloudFrame recommends migrating to Java as a mitigation to the COBOL 4.2 End of Life event, rather than upgrading to COBOL 6, because CloudFrame assures compatibility with COBOL 4.2 whereas IBM does not. In IBM’s defense, COBOL 6 does often run faster than COBOL 4.2, but can you afford to open the incompatibility can of worms? See Migrating to Enterprise COBOL V6, Mike Chase, June 5, 2018.
Relocating to the cloud
While this article advocates Java on Z using CloudFrame Relocate, a common deployment pattern with customers is shifting compute to the cloud for non-production workloads. CloudFrame’s generated Java code runs in any Java 1.8 or above compliant JVM.
So, the choice is yours: use CloudFrame Relocate to shift compute off your mainframe to AWS, Azure, GCP, or private cloud.
Renovating legacy architectures
Customers comfortable with cloud technology and operations are often seeking to reduce their mainframe and COBOL portfolios. Increasingly, these customers are turning to CloudFrame Renovate.
CloudFrame Renovate transforms COBOL, CICS, DB2 SQL, JCL, and SORT applications into elegant cloud-native Java Spring Batch and Spring Boot services aligned to the 12 Factor Application Principles. And there’s no requirement to change your data. Renovate emulates VSAM and QSAM file systems, integrates with DB2 and MQ, and provides a clear path out of long-term vendor lock-in.
Modernization is about more than changing the code from COBOL to Java. Legacy architectures and technologies have no place in cloud-native design. That’s why CloudFrame’s Lab experiments are automating the refactoring from single-threaded COBOL to multithreaded Java architectures. Even cooler, recent experiments, soon to graduate from the Lab, will automate the refactoring from batch file processing to processing partitioned Kafka event streams, containerized and horizontally scaled. Earlier experiments demonstrated the ease of replacing IBM MQ with AMQ or Kafka. Just as easily, third-party mainframe product APIs can be replaced with their cloud cousins’ APIs. — Gregory Saxton, Chief Architect
Goodbye batch file processing. Hello cloud native near-time message driven architectures!
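To illustrate the single-threaded-to-multithreaded refactoring described above, here is a hand-written sketch, not CloudFrame-generated code: a sequential record loop (the COBOL pattern) re-expressed as partitioned work spread across a thread pool.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hand-written illustration - not CloudFrame-generated code. Shows the shape
// of the refactoring: a sequential record loop re-expressed as partitioned
// work running in parallel on a fixed thread pool.
class ParallelBatchSketch {

    // Stand-in for per-record business logic.
    static long process(long record) {
        return record * record;
    }

    // Sequential, "COBOL-style" pass over the records.
    static long runSequential(List<Long> records) {
        long total = 0;
        for (long r : records) total += process(r);
        return total;
    }

    // The same work, partitioned into chunks and run on a thread pool.
    static long runParallel(List<Long> records, int threads) {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try {
            int chunk = (records.size() + threads - 1) / threads;
            List<Future<Long>> futures = new ArrayList<>();
            for (int i = 0; i < records.size(); i += chunk) {
                final List<Long> part =
                        records.subList(i, Math.min(i + chunk, records.size()));
                futures.add(pool.submit(() -> {
                    long sum = 0;
                    for (long r : part) sum += process(r);
                    return sum;
                }));
            }
            long total = 0;
            for (Future<Long> f : futures) total += f.get();
            return total;
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }
}
```

Both paths produce the same result; the parallel version simply spreads independent record partitions across cores, which is the same partitioning idea that underlies processing partitioned Kafka event streams at scale.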
Greg is a tech geek who is constantly curious and an excellent partner, with recognized strengths in developing IT strategies and leading Engineering, Enterprise/Solution Architecture, and Product Management.