Application Modernization: Choosing the Right Application for Shifting COBOL Compute

What Applications Are Good Candidates for Shifting COBOL Compute?

If your curiosity has been piqued by the benefits of shifting COBOL compute and how it can be accomplished, you may have begun to think about the applications in your inventory and whether they would align with this type of modernization effort.

Selecting the right applications or programs to begin this journey is one of the critical success factors. Choose incorrectly, and no matter how good your intentions are, the square peg will not fit into the round hole. Make the right choice, and the effort will go much more smoothly.

When thinking about applications or programs that might be candidates for shifting compute, consider these three criteria: the profile of the application or job, the business case, and the ability to validate the outcome.

The Profile of Good Shift Candidates

Good shift compute candidates are usually well known in an IT organization. They are the highest mainframe MSU or MIPS consumers, the jobs that spike utilization and are recognized as expensive to run. They are the jobs that align with the peak of your rolling four-hour average (R4HA). These jobs or processes have been carefully monitored and have received years, maybe even decades, of tweaking and performance enhancement.

Examples include financial organizations' end-of-day, start-of-day, and trade reconciliation processes. Certain retail or utility billing or account-update runs may also fit this description.

Within that list of usual suspects, the ideal candidates for shifting COBOL compute are programs and applications that are computation-heavy, batch-oriented, and perform sequential data access against Db2, QSAM, or VSAM.

The least favorable candidates for shifting compute are programs that contain little heavy computation and are "chatty," meaning they make many reads of or updates to files and databases to initiate and drive their processing.

Provide a Valid Business Case

The candidates that pass the profile filter should then be analyzed to determine whether a valid business case can be established. The business case should communicate the financial impact of shifting compute.

The business case contains the baseline cost of the current execution of the application, job, or program in dollars, and a hypothesis of the costs if that workload were removed. The cost hypothesis should include any remaining charge on the mainframe plus the costs associated with the platform where the execution will be shifted.

Most organizations evaluate the difference and label this the cost savings.

Establishing the baseline begins with capturing the current CPU and MSU utilization numbers. From there, it is simple math: MSUs utilized multiplied by the cost per MSU equals the execution cost. This should be straightforward, since these candidates are probably already known and monitored; they are usually the applications or jobs highlighted in reports from System Management Facilities (SMF) data and the Sub-Capacity Reporting Tool (SCRT).

Once utilization is known, a cost for that utilization can be calculated. Each organization's cost per MSU will be unique due to contractual variations and internal chargeback and cost-attribution practices.
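
To make the arithmetic concrete, here is a minimal Java sketch of the baseline calculation. The utilization and cost figures are hypothetical placeholders; substitute your own SMF/SCRT-derived numbers and your organization's negotiated cost per MSU.

    public class BaselineCost {
        public static void main(String[] args) {
            // Hypothetical monthly MSU consumption for the candidate job,
            // taken from SMF/SCRT reporting in practice.
            double msusConsumed = 350.0;
            // Hypothetical cost per MSU; varies by contract and chargeback model.
            double costPerMsu = 2_500.0;
            double executionCost = msusConsumed * costPerMsu;
            System.out.printf("Baseline monthly execution cost: $%,.2f%n", executionCost);
        }
    }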

After the baseline is known, the business case needs the cost of future execution. Identifying this cost may require investigation and can vary significantly depending on where the execution will occur, e.g., on a zIIP or in a cloud container. Moving compute to the zIIP may be relatively inexpensive in some organizations or may carry significant (though still lower than mainframe) costs in others.

The business case establishes the shifting compute cost savings (current cost minus future cost) and creates an element of the testing and validation that occurs as this modernization technique is implemented.
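
Building on that baseline, the savings hypothesis can be sketched the same way. The residual mainframe and target platform figures below are, again, hypothetical placeholders:

    public class ShiftSavings {
        public static void main(String[] args) {
            double baselineCost = 875_000.0;       // current execution cost, from the baseline sketch
            double residualMainframe = 90_000.0;   // hypothetical remaining charge on the mainframe
            double targetPlatformCost = 120_000.0; // hypothetical zIIP or cloud execution cost
            double futureCost = residualMainframe + targetPlatformCost;
            double savings = baselineCost - futureCost;
            System.out.printf("Projected future cost: $%,.2f%n", futureCost);
            System.out.printf("Projected monthly savings: $%,.2f%n", savings);
        }
    }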

Ability to Test and Validate

It should come as no surprise that shifting COBOL compute will require comprehensive testing and validation. The process can be segregated into three distinct areas: equivalency, business case, and performance.

Equivalency is the first validation area because it does not matter how much money is saved or how fast the application executes if the results are wrong or inaccurate. These applications and programs are mission-critical and proprietary, and they have run for years or decades, so they must deliver exactly the same data results as the existing system. Any variation could have tremendous consequences. Data equivalency testing ensures that the same outcomes are delivered. Similarly, functional equivalency must be tested to ensure downstream processes remain uninterrupted and consistent.
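
As an illustration only, a data equivalency check can be as simple as a record-by-record comparison of the current and shifted outputs. The file names below are hypothetical placeholders, and real equivalency suites are considerably more sophisticated (handling encodings, packed fields, timestamps, and the like):

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.List;

    public class EquivalencyCheck {
        public static void main(String[] args) throws IOException {
            // Hypothetical extracts of the same run from both environments.
            List<String> current = Files.readAllLines(Path.of("mainframe-output.txt"));
            List<String> shifted = Files.readAllLines(Path.of("shifted-output.txt"));
            if (current.size() != shifted.size()) {
                System.out.printf("Record count differs: %d vs %d%n", current.size(), shifted.size());
                return;
            }
            // Compare record by record; any variation is a failure.
            for (int i = 0; i < current.size(); i++) {
                if (!current.get(i).equals(shifted.get(i))) {
                    System.out.printf("Mismatch at record %d%n", i + 1);
                    return;
                }
            }
            System.out.println("Outputs are equivalent, record for record.");
        }
    }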

If the shift achieves data and functional equivalence, the business case hypothesis is tested next. Questions such as "Did the shift reduce mainframe utilization as expected?" and "What was the cost of executing on the target platform (zIIP, cloud, etc.)?" must be answered and analyzed.

If the business case can be met, performance becomes the next area of testing and validation. Here the execution of the system is measured against established service level agreements (SLAs) and expected process times and durations. There may be scheduling and batch job sequencing impacts if the execution duration is considerably longer than that of the current application or job.
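
As a minimal sketch of that measurement, the shifted job's elapsed time can be compared against its SLA window. The runShiftedJob() method and the 90-minute threshold are hypothetical placeholders:

    import java.time.Duration;
    import java.time.Instant;

    public class SlaCheck {
        // Hypothetical SLA window for the batch job.
        private static final Duration SLA = Duration.ofMinutes(90);

        public static void main(String[] args) {
            Instant start = Instant.now();
            runShiftedJob();
            Duration elapsed = Duration.between(start, Instant.now());
            System.out.printf("Elapsed: %d min (SLA: %d min) -> %s%n",
                    elapsed.toMinutes(), SLA.toMinutes(),
                    elapsed.compareTo(SLA) <= 0 ? "within SLA" : "SLA breached");
        }

        private static void runShiftedJob() {
            // Placeholder for invoking the cross-compiled workload.
        }
    }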

It is important to recognize that the current system may have undergone considerable structural and architectural enhancement over the years. Performance in the new compute environment may likewise benefit from tuning the parameters and configuration used in the Java cross-compile.

Establishing how you will select shifting compute candidates and how they will be measured is vital for your modernization journey’s success.

FIND OUT MORE

Download cloudFrame Relocate Product Fact Sheet
