What is the Average Cost per MIPS or MSU? Who Cares!

The title of today’s post is a question that I get asked all the time. Sometimes by users who are trying to curtail mainframe costs; sometimes by vendors looking for a way to promote products that help reduce those costs. While both are well-meaning, this is a question I never answer, because it is meaningless. Let’s discuss why.

First, let’s look at some of the “answers” to this question that you can find by searching the web. To start, there is a 2015 article published in Science of Computer Programming that cites the average cost per MIPS as $3,285, with the expectation that the cost will increase by 20% annually. That would put the average cost per MIPS in 2023 somewhere around $14,125. Of course, vendors like to cite this reference, and I understand why: it makes mainframe computing seem very costly. And sure, costs are rising, but surely not this much!
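If you want to check that math, compounding the 2015 figure at 20% per year through 2023 does land right around $14,125. A quick sketch using only the numbers quoted above:

```python
# Compound the 2015 estimate of $3,285 per MIPS at 20% per year for 8 years.
cost_2015 = 3285
projected_2023 = cost_2015 * 1.2 ** (2023 - 2015)
print(f"${projected_2023:,.0f} per MIPS")  # -> $14,125
```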

Another example of an “answer” to this question comes from a recent AWS blog post. Herein we find: “For a large mainframe of more than 11,000 MIPS, the average annual cost per installed MIPS is about $1,600. Hardware and software account for 65 percent of this, or approximately $1,040.” Interesting. This was published a full three years after the previous estimate, yet it is nearly $1,700 per MIPS cheaper. What happened to the 20% annual growth rate? And should we assume that the gap of more than $2,000 per MIPS between the 2015 figure and AWS’s hardware-and-software number is attributable to something other than hardware and software? Electricity? Real estate? Human resources? OK… but more than $2,000 per MIPS?

Clearly, something is wrong here. But what?

If you have been reading my articles here on the Cloudframe blog, you probably have at least an inkling of what I am about to say.

First of all, there is a lot of focus on MIPS instead of MSUs, and if you know the difference between the two, then as an IBM Z user, you’d probably want to focus on MSUs. But the average cost per MSU is no more useful than the average cost per MIPS.

Let’s dig a little deeper. Focusing on just the software component of MIPS/MSU cost, remember that your IBM Z software bill is calculated monthly based on the peak rolling four-hour average (R4HA) MSU utilization for that month. At least, that is how it works for sub-capacity pricing metrics and monthly license charge (MLC) products such as z/OS, CICS, and Db2. And the peak can change from month to month based on your tuning efforts and workload requirements. So keep in mind that your peak varies by product, which changes your bill and, therefore, your cost per MIPS/MSU.
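To make the R4HA mechanics concrete, here is a minimal sketch with invented hourly MSU numbers (real sub-capacity reporting is driven by SMF data and the SCRT, not a hand-typed list, but the rolling-average idea is the same):

```python
# Invented hourly MSU consumption for one product over a 24-hour day.
hourly_msu = [120, 110, 105, 100, 95, 90, 300, 480,    # overnight into morning batch
              520, 500, 310, 150, 140, 135, 130, 125,  # online day shift
              120, 118, 115, 110, 108, 105, 102, 100]

def peak_r4ha(msu_by_hour, window=4):
    """Highest average MSU consumption over any `window` consecutive hours."""
    return max(sum(msu_by_hour[i:i + window]) / window
               for i in range(len(msu_by_hour) - window + 1))

print(f"Peak R4HA: {peak_r4ha(hourly_msu):.1f} MSUs")  # 452.5 in this made-up month
```

Whichever four-hour window produces the highest average for a given product in a given month is the number that drives that product’s MLC charge. But that isn’t the only thing to consider.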

You should also remember that the last MIPS/MSU used is the cheapest. The more MIPS or MSUs you use, the less you may pay for each additional one as you cross the thresholds established by IBM.
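IBM’s actual price bands differ by pricing metric and product, so the tier table below is purely hypothetical, but it shows the shape that matters: as usage crosses each threshold, the incremental MSUs get cheaper, which drags the average down.

```python
# Hypothetical tier table: (MSUs up to this threshold, price per MSU).
# Real AWLC/MLC price bands differ by product; these numbers are invented.
TIERS = [(3, 100.0), (45, 60.0), (175, 40.0), (315, 25.0), (float("inf"), 15.0)]

def monthly_charge(peak_msus):
    """Price each MSU at the rate of the tier it falls into and sum the result."""
    charge, lower = 0.0, 0
    for upper, price in TIERS:
        charge += max(0, min(peak_msus, upper) - lower) * price
        lower = upper
        if peak_msus <= upper:
            break
    return charge

for msus in (100, 500):
    total = monthly_charge(msus)
    print(f"{msus} MSUs -> ${total:,.0f} total, ${total / msus:.2f} average per MSU")
# 100 MSUs -> $5,020 total, $50.20 average per MSU
# 500 MSUs -> $14,295 total, $28.59 average per MSU
```

Notice that at 500 MSUs the blended average ($28.59) is nearly double the marginal rate ($15.00), which is exactly why multiplying an average by the MSUs you save overstates the benefit.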

So, let’s get to the issue. Why are people interested in an average cost per MIPS or MSU? The general idea is always to estimate how much savings they can achieve if they reduce consumption. And I understand the desire to know this. But think about it.

Let’s say we’ve shaved off 10 MIPS from our nightly batch processing. First things first, congratulations. That is an excellent accomplishment. But what is the impact? If our monthly peak occurs during the day running CICS transactions, our monthly MLC bill from IBM will not be impacted (assuming AWLC sub-capacity pricing is in effect). But I guarantee you that if you have an average cost per MIPS, the folks who tuned that batch workload will multiply that average by ten and try to claim that they saved that amount of money. But they didn’t.

OK, let’s assume that the monthly peak occurs during the batch cycle. Most batch cycles run 8 to 10 hours, but the peak is based on the rolling four-hour average, so if the 10 MIPS were spread over 10 hours, they do not all impact the bill. Only the MIPS saved during the specific peak R4HA window count. So again, using the average would not be an accurate way to estimate cost savings.
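Here is a small illustration of that point, with an invented load profile: the same ten MSU-hours of savings lower the billable peak by only one MSU when spread evenly across a ten-hour batch cycle, versus two and a half MSUs when they all land inside the peak window.

```python
# Invented hourly MSU profile: a 10-hour batch cycle (hours 0-9), then online work.
baseline = [500, 510, 520, 535, 545, 530, 515, 400, 300, 250,
            150, 140, 130, 120, 120, 120, 120, 120, 120, 120,
            120, 120, 120, 120]

def peak_r4ha(msu_by_hour, window=4):
    return max(sum(msu_by_hour[i:i + window]) / window
               for i in range(len(msu_by_hour) - window + 1))

# Ten MSU-hours of savings spread evenly over the 10 batch hours (1 MSU per hour)...
spread = [m - 1 if h < 10 else m for h, m in enumerate(baseline)]
# ...versus the same ten MSU-hours concentrated in the peak window (hours 2-5).
targeted = [m - 2.5 if 2 <= h <= 5 else m for h, m in enumerate(baseline)]

print(peak_r4ha(baseline), peak_r4ha(spread), peak_r4ha(targeted))
# 532.5  531.5  530.0  -- the spread savings move the billable peak by only 1 MSU
```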

Of course, we can assume that all 10 MIPS are saved specifically during the four-hour window of the peak. In this case, the average is probably more useful, but it still is not accurate. What if the MIPS savings fall on one of the thresholds where the cost per MSU/MIPS changes? Even if the average were spot-on to your cost at the high end, it would still be inaccurate because the first MIPS you save will cost less than the later ones.
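Carrying the same hypothetical tier table forward: suppose the peak is 320 MSUs, just above a made-up threshold at 315 where the rate drops from $25 to $15 per MSU. The blended average at that usage works out to roughly $36 per MSU, yet shaving the last 10 MSUs only removes five at $15 and five at $25.

```python
# Continuing the invented tier table above: peak falls from 320 to 310 MSUs.
# The 10 MSUs removed straddle the hypothetical 315-MSU threshold.
actual_saving = 5 * 15 + 5 * 25          # $200: five MSUs at $15, five at $25
blended_average = 11_595 / 320           # ~$36.23 per MSU (total charge at 320 MSUs)
claimed_saving = 10 * blended_average    # ~$362 if you just multiply the average
print(f"actual ${actual_saving}, average-based claim ${claimed_saving:,.0f}")
```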

And that gets to the fundamental problem of using averages: averages are strongly influenced by outliers. For example, say you have three employees aged 30, 40, and 80. Their average age is 50, but how useful is that number as a basis for any decision?

But let’s just plow ahead and assume that everything above is irrelevant, or that we somehow miraculously saved all 10 of our MIPS in the sweet spot where we will achieve some savings. Which of the above averages are you going to use?

If we use the 2015 value of $3,285, then we will try to say that we saved $32,850. But if we are aggressive, then we’ll use that 20% annual increase estimate and cite a $14,125 saving per MIPS, or $141,250! Or do we use the AWS estimate of $1,600 per MIPS saved and claim to have saved $16,000? But wait a minute: is that only hardware and software savings? In that case, we have to use $1,040, giving us a savings of $10,400. But did we actually save anything on hardware by tuning a batch workload? Maybe we just saved on software… but where is that average?

Hopefully, you can see that averages are just not worth anything at all when trying to determine your savings by reducing MIPS or MSUs.
