CloudFrame Relocate Economics and Java on Z

The following understanding and point of view is based on cited information, customer testimony, and the experience of CloudFrame, Inc. It is, to the best of our understanding, accurate, complete, and current.


CloudFrame’s survey of large enterprises discovered that the median installed mainframe MSU capacity is growing organically by 8–12% annually.

CloudFrame Relocate helps organizations reduce mainframe GPP MSU consumption by 40–70%, which may reduce application datacenter chargebacks.

CloudFrame Renovate transforms legacy mainframe applications into elegant cloud-native Java Spring Batch and Spring Boot services aligned to the 12 Factor Application Principles.

CloudFrame Relocate achieves this by enabling COBOL applications to run as Java in any JVM, including private and public cloud. On the mainframe, Java is zIIP eligible, and the zIIP engine is significantly less costly than executing on the GPP. Additionally, zIIP MSU is not subject to MLC. While reducing MSU most benefits the application owner, it also reduces MLC for all enterprise software licensed by MLC, thus benefiting the entire enterprise.

The 40–70% range of MSU reductions is conservative and varies based on the application profile; see ‘Mileage varies’ below. Computationally or processing intensive applications will see higher MSU reductions upon becoming zIIP eligible. Applications that make extensive use of DB2 SQL will also benefit, because SQL is zIIP eligible when executed on behalf of a Java JDBC application; this is true whether executing on or off the mainframe.

Some mainframe data centers charge back CPU costs at a constant rate, regardless of GPP vs. zIIP usage. Even in these situations, reducing MSU and increasing zIIP consumption, or using CloudFrame Relocate to shift compute to private or public cloud, makes strong financial sense. zIIP capacity costs only a fraction of GPP capacity and reduces MLC; why overpay?


MSU (Millions of Service Units) is a unit that measures the amount of CPU capacity consumed per hour. IBM uses the MSU metric for software pricing purposes. It is derived from MIPS consumption but normalized to be agnostic of the mainframe generation and CPU model.

MLC (Monthly License Charge) is the cost of enterprise software charged by IBM and nearly every other vendor in the marketplace, except CloudFrame. MLC is derived by multiplying your contracted rate by the monthly peak R4HA (Rolling 4 Hour Average) MSU consumption. It’s quite normal for companies to have hundreds of different licensed software titles in their LPARs, each subject to MLC.
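The MLC mechanics above can be illustrated with a short back-of-the-envelope calculation: find the peak rolling 4-hour average over hourly MSU samples, then multiply by the contracted rate. The sketch below is purely illustrative; the sample MSU readings and the rate per MSU are invented for this example and are not actual IBM pricing.

```java
import java.util.List;

public class MlcEstimate {

    // Compute the peak rolling 4-hour average (R4HA) over hourly MSU samples.
    static double peakR4ha(List<Double> hourlyMsu) {
        double peak = 0.0;
        for (int i = 3; i < hourlyMsu.size(); i++) {
            double window = (hourlyMsu.get(i) + hourlyMsu.get(i - 1)
                    + hourlyMsu.get(i - 2) + hourlyMsu.get(i - 3)) / 4.0;
            peak = Math.max(peak, window);
        }
        return peak;
    }

    public static void main(String[] args) {
        // A half day of hypothetical hourly MSU readings for one LPAR.
        List<Double> samples = List.of(
                120.0, 150.0, 180.0, 210.0, 240.0, 230.0,
                200.0, 170.0, 160.0, 150.0, 140.0, 130.0);
        double peak = peakR4ha(samples);  // 220.0 MSU (hours 4-7)
        double ratePerMsu = 100.0;        // hypothetical $/MSU/month rate
        System.out.printf("Peak R4HA: %.1f MSU, MLC: $%.2f%n",
                peak, peak * ratePerMsu);
    }
}
```

Note that only the single monthly peak window matters: lowering the peak, not the average, is what lowers MLC.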

MIPS (Millions of Instructions Per Second) is a measure of how much work the CPU did in a second. Converting MIPS to MSU is a complex calculation performed by IBM’s SCRT (Sub-Capacity Reporting Tool). Like PCs and iPhones, newer generation mainframes and CPU models get faster, meaning they do more work in a second. Rather than pricing MSU and MLC directly on the generation and CPU model, IBM chose to normalize MSU independent of the hardware. In practice, this means MIPS measurements are adjusted by a factor correlated to the mainframe generation and CPU model.
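The normalization idea can be shown with a toy calculation: a per-model scaling factor offsets the faster hardware, so similar workloads land on similar MSU figures across generations. The MIPS-per-MSU factors below are invented solely for illustration; the real conversion is performed by IBM’s SCRT and is far more complex.

```java
import java.util.Map;

public class MsuNormalization {

    // Hypothetical MIPS-per-MSU ratios by machine generation (invented values).
    static final Map<String, Double> MIPS_PER_MSU = Map.of(
            "z13", 7.0,
            "z14", 8.0,
            "z15", 9.0);

    static double msuFromMips(String model, double mips) {
        return mips / MIPS_PER_MSU.get(model);
    }

    public static void main(String[] args) {
        // The faster z15 delivers more MIPS, but a larger factor
        // normalizes the billed MSU toward the same figure.
        System.out.println(msuFromMips("z13", 7000.0)); // 1000.0
        System.out.println(msuFromMips("z15", 9000.0)); // 1000.0
    }
}
```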

LPAR (Logical Partition) is a common mainframe configuration in which a single physical hardware unit is virtualized to host multiple virtual machines (VMs). Companies run multiple LPARs to isolate environments: quarantined testing of new vendor software releases, development and QA, pre-production, production, and disaster recovery.

Brief history of Java on Z

In the late ’90s, IBM and other tech giants were caught flat-footed by the rapid adoption of the Internet, Java’s success (a freely available language), Red Hat’s launch and the viability of Linux (a free open-source OS that ran on commodity hardware), and billions of dollars of IT investment flowing into the WinTel (Windows Intel) ecosystem. IBM made a strategic commitment to make the mainframe’s price point competitive with Java on Linux. Fast forward to 2021: roughly 250 billion lines of COBOL remain in production, yet most mainframe customers have a strategy to either reduce or eliminate their mainframe footprint. So, when you can’t beat them, buy them! IBM closed its acquisition of Red Hat for USD 34B in July 2019.

However, in hindsight, IBM appears strategically blind to the pitfalls of the data center bureaucracy, which it invented and instilled in its customers dating back to the 1960s and for nearly three decades after, when the unit cost of computing exceeded the unit cost of labor. At that time, the bureaucracy served a valuable purpose for customers. But when the economics of IT changed, so should have the bureaucracy. There’s a term for this new normal: bimodal IT.

Reflecting on three decades of experience integrating web, mobile, and desktop applications with the mainframe, this bureaucracy has created friction in every project we at CloudFrame are aware of. Fast forward to the Agile DevOps operating model prevalent today, and teams don’t want to do new development on the mainframe if they can avoid it. Nor do they want to depend on the applications that run there. Nor does any Linux administrator or development team want to run Linux on the mainframe unless they started doing so 20 years ago and got it right.

It’s curious: how many other platforms and technologies have become roadkill because of the bureaucracy around them?

Primer on mainframe costs

According to Gartner’s 2019 IT Key Metrics Database, the total cost of mainframe (which includes hardware, software, facilities, utilities, loaded cost of labor, etc.) breaks down as follows: 22% is MSU, 46% is MLC, 28–30% is labor, and the remaining 2–4% is less significant.

MLC costs can be reduced only by rehosting applications off the mainframe, re-negotiating, reducing peak MSU utilization, capping CPU capacity, decommissioning unused enterprise software from LPARs, and re-partitioning LPARs to move workloads of high-cost enterprise software from the LPARs consuming the most MSU to smaller, capped LPARs.

Most customers run their data center as a cost center; hosting and cloud providers are the exception. Application chargeback models are designed around units that can be consistently and accurately measured. The units are loaded to approximate the total cost of providing those units (reference Gartner above). Mainframe units tend to be MIPS and storage. While storage rates vary for disk, tape, virtual tape, redundancy, etc., MIPS are MIPS. In the data center chargeback models we’ve reviewed, each year finance must ‘true up’ the data center costs with either a final charge or a refund to the internal customers.
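The year-end ‘true up’ amounts to a simple reconciliation: internal customers are billed at a provisional loaded rate, and finance compares what was collected with the actual data center cost. The figures in this sketch are hypothetical, chosen only to show the arithmetic.

```java
public class ChargebackTrueUp {

    // Year-end reconciliation: actual cost minus what was collected at the
    // provisional rate. Positive means an additional charge, negative a refund.
    static double trueUp(double ratePerMips, double mipsBilled, double actualCost) {
        return actualCost - ratePerMips * mipsBilled;
    }

    public static void main(String[] args) {
        double rate = 2000.0;          // hypothetical loaded $/MIPS/year rate
        double mips = 5000.0;          // MIPS billed to application teams
        double actual = 10_500_000.0;  // hypothetical actual data center cost
        double delta = trueUp(rate, mips, actual);
        System.out.printf("True-up: $%.0f (%s)%n", delta,
                delta >= 0 ? "additional charge" : "refund");
    }
}
```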

Putting it all together: customers who are not on a full-capacity licensing model are contractually obliged to generate an SCRT report of the monthly peak R4HA MSU utilization of each LPAR for MLC computation. For additional details, please see Understanding the MSU Consumption metric in an Enterprise Consumption Solution.

Demystifying the zIIP engine

The “zIIP engine”, as it’s called, is for all intents and purposes a CPU no different from the mainframe GPP, but at a lower cost. The key exceptions are: IBM must approve software as eligible to run on it, the zIIP cannot be used alone to IPL and run z/OS, and zIIP units of work are not interruptible by Workload Manager (WLM).

In z/OS, COBOL can run only on the mainframe GPP (General Purpose Processor), whereas Java is eligible to run on the zIIP engine (Z Integrated Information Processor) or the GPP. The customer’s mainframe configuration controls what happens when a zIIP-eligible unit of work is ready but no zIIP capacity is available; the options are to wait or to dispatch to the GPP. If well-written, performance-tuned Java (e.g., as generated by CloudFrame) runs on the GPP, it has a cost structure similar to COBOL on the GPP, with the caveat that JDBC SQL is still zIIP eligible.
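The dispatch decision just described can be sketched as a small decision function. The names, enum values, and policy flag below are illustrative stand-ins, not z/OS interfaces: COBOL work always lands on the GPP, zIIP-eligible Java work goes to a zIIP when capacity exists, and otherwise the installation’s configuration decides between waiting and crossing over to the GPP.

```java
public class ZiipDispatch {

    enum Processor { ZIIP, GPP }
    enum NoZiipPolicy { WAIT_FOR_ZIIP, CROSSOVER_TO_GPP }

    static Processor dispatch(boolean ziipEligible, boolean ziipAvailable,
                              NoZiipPolicy policy) {
        if (!ziipEligible) return Processor.GPP;   // e.g., COBOL load modules
        if (ziipAvailable) return Processor.ZIIP;  // e.g., Java, JDBC SQL
        // No zIIP capacity free: behavior is configuration dependent.
        return policy == NoZiipPolicy.CROSSOVER_TO_GPP
                ? Processor.GPP
                : Processor.ZIIP;                  // ZIIP here means queued wait
    }

    public static void main(String[] args) {
        System.out.println(dispatch(false, true,  NoZiipPolicy.WAIT_FOR_ZIIP));    // GPP
        System.out.println(dispatch(true,  true,  NoZiipPolicy.WAIT_FOR_ZIIP));    // ZIIP
        System.out.println(dispatch(true,  false, NoZiipPolicy.CROSSOVER_TO_GPP)); // GPP
    }
}
```

The crossover case is the cost trap: zIIP-eligible work that overflows onto the GPP is billed like any other GPP work, so zIIP capacity planning matters.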

According to IBM’s zIIP cost estimator, one “zIIP engine” has a list price of ~$27,243K plus ~$5,423K maintenance in years 2 through 5. The GPP is leased and paid for based on MSU. Depending on your mainframe generation (e.g., z13, z14, z15), the allowed density of zIIP to GPP increases: for z15 it’s 3 zIIPs (maybe 4 if you negotiate well) to 1 GPP, for z14 it’s 2 to 1, and for z13 it’s 1 to 1. The mainframe, as a hardware chassis, has a large backplane and can support many GPPs and zIIPs, as well as RAM. So, it’s not uncommon for a single physical machine to have hundreds of processors.

Want to add zIIP capacity, and it’s causing friction and delay? Why? Do you know what the z in z15, z14, etc. means? It means zero downtime. The hardware can be maintained, such as adding or removing zIIPs and GPPs or other components, without powering down the mainframe. So, question long change cycles, and if you have a pressing deadline, consider the cloud instead.

Costs above were provided by the IBM zIIP cost estimator. Click the link, click ‘Get Started Now’, select ‘Run Docker OCI’, click ‘Continue’, enter 4 for the number of cores, click ‘Continue’, enter 0 (zero) for ‘How many zIIPs do you have’, select ‘z15 T01’, and click ‘Calculate your savings’.

Mileage varies

The 40–70% range of MSU reductions is conservative and varies based on the application profile. An application that is computation/processing intensive and becomes fully zIIP eligible will be at the high end of the MSU reduction range. Also, applications that make extensive use of DB2 SQL will see higher reductions in MSU, because SQL is zIIP eligible when executed on behalf of a Java application.
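As a rough model, the GPP MSU reduction tracks the fraction of an application’s CPU time that becomes zIIP eligible after conversion: the 40–70% range above corresponds to eligibility fractions of roughly 0.4 to 0.7. The sketch below is a back-of-the-envelope estimator with invented figures; it ignores crossover and the function-shipping overhead discussed later.

```java
public class MsuReductionEstimate {

    // GPP MSU remaining after the zIIP-eligible fraction of CPU time
    // moves off the GPP (simplified: no crossover, no shipping overhead).
    static double remainingGppMsu(double currentGppMsu, double ziipEligibleFraction) {
        return currentGppMsu * (1.0 - ziipEligibleFraction);
    }

    public static void main(String[] args) {
        double current = 500.0; // hypothetical monthly peak GPP MSU
        // Compute-intensive profile, highly zIIP eligible after conversion:
        System.out.printf("70%% eligible: %.1f GPP MSU remain%n",
                remainingGppMsu(current, 0.70));
        // More modest profile:
        System.out.printf("40%% eligible: %.1f GPP MSU remain%n",
                remainingGppMsu(current, 0.40));
    }
}
```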

MSU reduction benefits are slightly reduced for DB2 Inserts, Updates, and Deletes because they tend to have slightly lower zIIP eligibility than read-only operations, likely due to COMMIT management. The same is true for VSAM operations, particularly when CI and CA splits occur. Calling COBOL, HLASM, PL/I, etc., load modules from Java, a CloudFrame feature, is not zIIP eligible.

And finally, if CloudFrame Relocate is used to shift compute off the mainframe to a lower cost platform such as Cloud, then mainframe file I/O, remote load module invocation, and JDBC will incur GPP overhead when function shipped from CloudFrame Relocate back to the mainframe.

Java can be faster than COBOL

Java has a reputation for being slow among architects and organizations that conducted earlier proofs of concept.

Surprise! For large batch workloads, CloudFrame’s generated Java code often runs as fast as, or faster than, COBOL. This is our customers’ claim when executing on the mainframe or on a Windows or Linux platform. Of course, mileage varies by use case, but we’re committed to continuously improving the quality and performance of the code we generate.

Surprise, again. Several of our customers had tried using Micro Focus COBOL on Linux or Windows to rehost their applications off the mainframe. However, they got stranded with COBOL running on two platforms, because they couldn’t meet the mainframe batch window. In fact, the best they achieved was several orders of magnitude longer, forcing the most expensive jobs to stay on the mainframe. Not what they had in mind when they set off to reduce costs, only to increase cost and complexity! Meanwhile, these customers were delighted to discover that CloudFrame’s generated Java met the batch window cycle-time requirements, and in many instances was faster than COBOL on the mainframe.

Closing thoughts on mainframe costs and COBOL 4.2

One concern with IBM’s longstanding approach to normalizing MIPS to MSU is that different computer instructions take different amounts of time to execute. Given that, is the SCRT factor used to normalize MSU based on the distribution of instructions observed in longitudinal customer studies? We were unsuccessful in finding any information that explains this, as were our customers.

Another concern with this approach is that older programs do not implicitly benefit from newer hardware, yet the cost of running these applications increases over time. It’s quite common for mainframe applications to run unchanged for years, even decades. Thus, programs compiled 5–10 or more years ago do not benefit from improvements in newer hardware, newer instructions, and the newer compiler designs that exploit them. This issue is not unique to mainframes and the applications that run on them.

CloudFrame recommends migrating to Java as a mitigation for the COBOL 4.2 End of Life event, rather than upgrading to COBOL 6, because CloudFrame assures compatibility with COBOL 4.2 whereas IBM does not. In IBM’s defense, COBOL 6 does often run faster than COBOL 4.2, but can you afford to open the incompatibility can of worms? See Migrating to Enterprise COBOL V6, Mike Chase, June 5, 2018.

Relocating to the cloud

While this article advocates Java on Z using CloudFrame Relocate, a common deployment pattern among customers is shifting compute to the cloud for non-production workloads. CloudFrame’s generated Java code runs in any Java 1.8 or above compliant JVM.

So, the choice is yours: use CloudFrame Relocate to shift compute off your mainframe to AWS, Azure, GCP, or private cloud.

Renovating legacy architectures

Customers comfortable with cloud technology and operations often seek to reduce their mainframe and COBOL portfolio. Increasingly, these customers are turning to CloudFrame Renovate.

CloudFrame Renovate transforms COBOL, CICS, DB2 SQL, JCL, and SORT applications into elegant cloud-native Java Spring Batch and Spring Boot services aligned to the 12 Factor Application Principles. And there’s no requirement to change your data: Renovate emulates VSAM and QSAM file systems, integrates with DB2 and MQ, and provides a clear path out of long-term vendor lock-in.

Modernization is about more than changing the code from COBOL to Java. Legacy architectures and technologies have no place in cloud-native design. That’s why CloudFrame’s Lab experiments are automating the refactoring from single-threaded COBOL to multithreaded Java architectures. Even cooler, recent experiments soon graduating will automate the refactoring from batch file processing to processing partitioned Kafka event streams, containerized and horizontally scaled. Earlier experiments demonstrated the ease of replacing IBM MQ with AMQ or Kafka. Just as easily, third-party mainframe product APIs can be replaced with their cloud cousins’ APIs. — Gregory Saxton, Chief Architect
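The single-threaded-to-multithreaded refactoring mentioned above can be sketched in a few lines. This is an illustrative toy, not CloudFrame’s actual generated code: a sequential COBOL-style record loop becomes a parallel map over a thread pool, with `process` standing in for the per-record business logic.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelBatch {

    // Stand-in for per-record business logic.
    static int process(int record) {
        return record * 2;
    }

    // Sequential, COBOL-style: one record at a time.
    static List<Integer> runSequential(List<Integer> records) {
        List<Integer> out = new ArrayList<>();
        for (int r : records) out.add(process(r));
        return out;
    }

    // Multithreaded equivalent: records processed concurrently on a pool,
    // with results collected in the original order.
    static List<Integer> runParallel(List<Integer> records) {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try {
            List<Future<Integer>> futures = new ArrayList<>();
            for (int r : records) futures.add(pool.submit(() -> process(r)));
            List<Integer> out = new ArrayList<>();
            for (Future<Integer> f : futures) {
                try {
                    out.add(f.get()); // preserves input order
                } catch (InterruptedException | ExecutionException e) {
                    throw new RuntimeException(e);
                }
            }
            return out;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        List<Integer> records = List.of(1, 2, 3, 4, 5);
        System.out.println(runSequential(records)); // [2, 4, 6, 8, 10]
        System.out.println(runParallel(records));   // same result, concurrently
    }
}
```

Both paths produce identical output; the parallel version only pays off when `process` does real work per record, which is exactly the profile of large batch jobs.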

Goodbye, batch file processing. Hello, cloud-native, near-real-time, message-driven architectures!


Download the CloudFrame Relocate Product Fact Sheet
