Modern Portals to Quality for Mainframe Transformation

Mainframe modernization can feel like a scene from Poltergeist, or any other thriller movie where the protagonist is trapped in an endlessly stretching hallway with a door that keeps receding into the distance.

Ask anyone who’s survived a major enterprise upgrade. Transforming thousands of files of COBOL and JCL code into a form that will play nicely with modern Java architectures and cloud computing infrastructure can seem like an endless slog with few offramps.

Even if the application code and the mainframe supporting it are translated, how will we know when the job is really done? How will we know if it is high quality? And how should we define quality, anyway?

Let’s explore these questions of quality and see if we can find a safe passage for mainframe transformation without carrying forward any of the mainframe baggage.

From old code to new gold

The most obvious difficulty of refactoring mainframe code to an object-oriented language is that it is unlikely to work as it once did.

Even if it does work, you could end up with results professionals lovingly refer to as “JOBOL” – which is either a job-oriented language from the 1970s, or what a chaotic leprechaun at the end of the rainbow might leave you: ‘Just a Bunch O’ Lines’ of code.

Unlike in object-oriented programming and relational databases, compute and data operations can be highly intertwined within legacy mainframe applications. For instance, the result of a CICS transaction in COBOL may depend upon its position relative to other commands, and may reference a record at a specific location to complete its operations.

Mainframes also have their own flavor of utilities – an older IBM mainframe, for instance, may depend on proprietary file transfer and sort utilities. Some vendors, like Micro Focus, have encapsulated these common utilities into their own COBOL platforms. Still, these proprietary elements must also become maintainable code if and when they are transformed.

Procedural legacy code can’t simply be translated to Java code; it must be transliterated so it makes logical sense when it arrives, using a framework that understands how to split code that was never object-oriented in the first place into components useful to current developers working in Java.
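
To make that concrete, here’s a minimal sketch – all names are hypothetical – of how a transliteration framework might land a procedural COBOL paragraph and its working storage as a cohesive, testable Java class rather than one long method:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

// Hypothetical sketch: a COBOL program's paragraphs (e.g., 2000-CALC-INTEREST)
// become cohesive, testable methods, and its WORKING-STORAGE copybook fields
// become a typed record instead of mutable global state.
public class InterestCalculator {

    // Typed replacement for a flat copybook record layout
    public record Account(String accountId, BigDecimal balance, BigDecimal annualRate) { }

    // Was: a paragraph reached via PERFORM, mutating shared WORKING-STORAGE;
    // now: a pure function over an explicit input
    public BigDecimal monthlyInterest(Account account) {
        return account.balance()
                .multiply(account.annualRate())
                .divide(BigDecimal.valueOf(12), 2, RoundingMode.HALF_UP);
    }
}
```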

Spring Batch is an open-source batch processing framework that provides a solid substrate for converting existing COBOL into a more declarative format, ready for Java developers to redeploy into Spring Boot or microservices-ready architectures without having to learn legacy coding concepts.
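
As a rough illustration – assuming Spring Batch 5’s builder-style API, with names and data invented for the example – a converted COBOL read/process/write loop might be declared as a chunk-oriented step like this:

```java
import java.math.BigDecimal;
import java.util.List;
import org.springframework.batch.core.Job;
import org.springframework.batch.core.Step;
import org.springframework.batch.core.job.builder.JobBuilder;
import org.springframework.batch.core.repository.JobRepository;
import org.springframework.batch.core.step.builder.StepBuilder;
import org.springframework.batch.item.support.ListItemReader;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.transaction.PlatformTransactionManager;

@Configuration
public class AccountBatchConfig {

    public record AccountRecord(String id, BigDecimal balance) { }

    // One declarative chunk-oriented step replaces the COBOL read/process/write loop.
    @Bean
    public Step postInterestStep(JobRepository jobRepository,
                                 PlatformTransactionManager txManager) {
        return new StepBuilder("postInterestStep", jobRepository)
                .<AccountRecord, AccountRecord>chunk(100, txManager)
                // In a real conversion, the reader would be a FlatFileItemReader
                // over the extracted legacy dataset
                .reader(new ListItemReader<>(List.of(
                        new AccountRecord("0001", new BigDecimal("1500.00")))))
                .processor(item -> new AccountRecord(item.id(),
                        item.balance().multiply(new BigDecimal("1.01"))))
                // ...and the writer a JdbcBatchItemWriter into a relational table
                .writer(items -> items.forEach(System.out::println))
                .build();
    }

    @Bean
    public Job postInterestJob(JobRepository jobRepository, Step postInterestStep) {
        return new JobBuilder("postInterestJob", jobRepository)
                .start(postInterestStep)
                .build();
    }
}
```

The point is the shape: reader, processor, and writer become independently testable Java components, which is exactly the decoupling the rest of the modernization effort builds on.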

From there, intelligent rules-based automation should ensure that existing mainframe workflows are replicated in the new, decoupled architecture. Then you can just sit back and watch the cost savings from eliminated MIPS roll in, right? Not quite – we still need to verify…

Trust but verify, incrementally

Companies embark on a mainframe modernization journey to free themselves from constraints that hinder the delivery of new business functionality, and to take advantage of modern cloud architectures that can scale elastically and perform better at lower cost.

Quality is seldom the driving factor, but it can rapidly take the wheel and sideline modernization efforts. Therefore, quality checks should be inserted at several key points of each project to verify the health of the code, as well as the sustainability of the overall initiative.

There are many code inspection and linting solutions on the market. SonarQube is a commonly used open-source static analysis tool that can run thousands of automated checks against Java and other languages to ensure the code complies with widely accepted quality and security standards.
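
For a Java conversion target, wiring the generated sources into such a scan is a small configuration exercise. A minimal sonar-project.properties might look like this (project key, paths, and host are illustrative):

```properties
# Illustrative SonarQube scanner config for generated Java sources
sonar.projectKey=acme-mainframe-conversion
sonar.projectName=ACME Mainframe Conversion
sonar.sources=src/main/java
sonar.tests=src/test/java
sonar.java.binaries=target/classes
sonar.host.url=http://localhost:9000
```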

“With SonarQube, you are tapping into a global community and a belief system about ‘what is quality code,’” says Gregory Saxton, CTO of CloudFrame. “All of our customers are using it.”

Of course, there’s a big difference between checking for structural bugs in code and verifying that it will support complex business functionality at cloud scale. To answer this challenge, enterprise IT organizations and their service partners should structure modernization projects in smaller functional increments, where custom test coverage of new business logic can fit within a reasonable project scope.

Extending performance and quality without lock-in

A big reason for migrating applications off the mainframe and onto a public cloud IaaS is to escape the vendor lock-in of walled gardens, where proprietary technologies are required and costly licensing, support, and usage fees become inevitable.

When provisioning into AWS or Azure, look up their respective “well-architected framework” documents. These guides help clarify the design decisions the team can make about how to organize code and services in the target cloud environment for better reliability and performance.

Many of the variables inherited from COBOL or C on legacy mainframes have very limited local usefulness. They may have been put there decades ago to patch up a one-off problem that no longer even exists!

An intelligent modernization solution should be able to trace all of the variables and logic threads running through the mainframe application, so it can discard the dead ends and elevate global functions into microservices that can be consistently reused throughout the target application environment.
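
Here’s a hypothetical before-and-after, in Java, of what that elevation might look like – the names and the dead flag are invented for illustration:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;
import org.springframework.stereotype.Service;

// Hypothetical sketch: a COBOL routine called from dozens of programs is
// elevated into a single reusable Spring service, while dead working-storage
// flags (e.g., a WS-Y2K-PATCH-FLAG that no program still reads) are dropped
// entirely rather than carried forward.
@Service
public class CurrencyConversionService {

    // Was a global subroutine reached via PERFORM/CALL from many programs;
    // now one shared, stateless entry point for the whole target environment.
    public BigDecimal convert(BigDecimal amount, BigDecimal exchangeRate) {
        return amount.multiply(exchangeRate).setScale(2, RoundingMode.HALF_UP);
    }
}
```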

“The bar for what good looks like, it’s not just open-source code unencumbered by long-term vendor lock-in … it’s encapsulating the complexity of all of the mainframe languages and esoteric things in COBOL,” said Saxton. “You don’t want to require Java developers to learn all of the legacy concepts before they can make changes to the code we’ve generated. There are quite a few products on the market that do exactly that!”

Minding the migration portal

All databases and cloud providers offer the ability to set up ingress pipelines with some form of ETL (extract, transform, load) operations for moving data and code.

Freshly transliterated and refactored Java code comes through this portal, and at this point, there are hundreds of black-box test tools in the cloud marketplace that could potentially give teams a ‘health score’ on that code as it funnels through.

That’s not enough to assure success. Full-stack static, functional, and performance testing of the entire delivered application environment is critical because you are rebuilding an entire system, even if that happens incrementally.

That could mean finding developers who understand mainframes and Java and distributed cloud applications and getting them to write test code. Good luck with that!

“The planet simply will not have enough talented people available to write all of the code businesses want them to write,” said Saxton.

This is where low-code test automation concepts can come in. Low-code solutions are generally associated with drag-and-drop app builders, but in a sense, building a test is really building the user’s interaction with an app. Automated scans can capture the past and current system states and allow DevTest teams to abstract away much of the need for code comparison and test coding.
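
One way to picture the idea (our own illustration, not any specific product’s API): a captured baseline from the legacy system becomes the expected side of a generated equivalence test, so teams review discrepancies instead of hand-writing comparison code.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.nio.file.Files;
import java.nio.file.Path;
import org.junit.jupiter.api.Test;

// Illustrative "golden master" equivalence check: the legacy system's captured
// output is the baseline, and the migrated Java job must reproduce it exactly
// (or field-by-field, once known-acceptable differences are normalized away).
class InterestJobEquivalenceTest {

    @Test
    void migratedOutputMatchesCapturedMainframeBaseline() throws Exception {
        String baseline = Files.readString(Path.of("baselines/interest-run-baseline.txt"));
        String migrated = Files.readString(Path.of("target/output/interest-run.txt"));
        assertEquals(baseline, migrated,
                "Migrated batch output diverges from the captured mainframe baseline");
    }
}
```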

The Intellyx Take

What was once permanent is now becoming portable. We are finally reaching an era of computing where a viable exit portal from the mainframe to modern distributed and cloud architecture is within reach.

Automated code generation that can discard non-essential functions while following the thread of critical business processes can get us most of the way there. Good architectural planning and intelligent test automation will ensure a safe arrival, with less effort and with quality baked in.

An Intellyx BrainBlog for CloudFrame, by Jason English

©2022 Intellyx LLC. Intellyx is editorially responsible for this content. At the time of writing, CloudFrame is an Intellyx customer, and IBM and Microsoft are former Intellyx customers. No other organizations mentioned here are Intellyx customers.

Image credit: Jennifer Kramer, flickr CC2.0 license

Find Out More

Learn more about CloudFrame Renovate.


Download the CloudFrame Relocate product fact sheet.
