Moving a COBOL Workload from the Mainframe to Become a “Cloud-Advantaged” Application

Migrating mainframe workloads to the cloud entails more than simply recompiling code or transforming it from one programming language to another. Mainframe applications are more than an implementation of a business function in a particular programming language, the most common of which is COBOL. These applications have dependencies on the platform’s unique data types and encoding architecture (EBCDIC) and on implementation techniques for batch and transactional workloads that were established decades ago, when computing was a very different beast! More than just the language and data types must change for these applications to take on the characteristics of a genuinely cloud-native application. Herein lies the rub!

For the purposes of this blog, we use the term “cloud-native” to describe applications that were born “native” to the architecture of the cloud. Their characteristics revolve around modern languages, web/HTTP communications, relational data models, etc. Traditional mainframe applications have almost NONE of these, and transformation from the traditional procedural model of languages like COBOL to the object-oriented one of languages such as Java is only the beginning. We have coined the term “cloud-advantaged” to describe mainframe applications that have been modified to gain the same advantages of the cloud that cloud-native applications have had from their inception.

Moving a mainframe application to the cloud in a way that takes full advantage of the cloud architecture is possible. Transforming that application to exploit the elasticity and scalability of the cloud, and to fit into modern DevOps environments, requires a specific approach. Many legacy mainframe applications were built when computer systems were digital implementations of heretofore manual business processes. In other words, they were online paper – a computer-based implementation of a previously paper-based process. Consequently, programs were written to replicate the PROCESS and thus became defined procedurally – define the process and include the needed data.

Today’s modern applications define the data FIRST (objects) and then determine the processes (methods) required to modify it. The business process is still a structured step-by-step procedure but is defined from a “data first” mindset. 

Note: We are not really debating which is better. We are only explaining that these are the differences between the two approaches. 

The modernization journey involves taking a procedurally defined program that uses data to implement a business process and transforming it into objects with methods that replicate those existing processes.

The technologies that are necessary to create an effective, performant, repeatable, and tailorable approach to this transformation are well defined in the realm of computer science. 

Transforming a legacy mainframe program written in COBOL, for instance, to another language often relies on an intermediate representation – the abstract syntax tree (AST). ASTs capture the form and flow of the program independently of the source language’s syntax.
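
To make this concrete, here is a minimal sketch (in Java, purely for illustration) of how a single COBOL statement and the paragraph containing it might be captured as AST nodes. The node and class names are invented for this example and do not reflect any particular tool’s internal model.

```java
import java.util.List;

// Hypothetical AST node types -- illustrative only, not a real tool's schema.
interface AstNode {}
record Identifier(String name) implements AstNode {}
record MoveStatement(Identifier source, Identifier target) implements AstNode {}
record Paragraph(String name, List<AstNode> statements) implements AstNode {}

public class AstSketch {
    public static void main(String[] args) {
        // The COBOL statement "MOVE WS-AMOUNT TO WS-TOTAL" could be captured as:
        AstNode move = new MoveStatement(new Identifier("WS-AMOUNT"),
                                         new Identifier("WS-TOTAL"));
        // ...and grouped under the paragraph that contains it:
        Paragraph paragraph = new Paragraph("COMPUTE-TOTALS", List.of(move));
        System.out.println(paragraph);
    }
}
```

Because the nodes describe what the program does rather than how it was spelled, the same tree can be emitted as COBOL, Java, or another target language.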

AST-based transformation is a very straightforward technique when moving from one procedural language to another. However, when the fundamental architecture of the program needs to change from “process first” to “object first,” more steps are required. The Data Division is the section of a COBOL program that defines the data manipulated by the paragraphs in the Procedure Division. In the simplest description of this approach to transformation, objects are sourced from the Data Division, and the methods used to manipulate the attributes of those objects are sourced from the paragraphs of the Procedure Division. The process is generally much more complicated than this, but that is how it works at the highest level.
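
As a hedged illustration of that mapping, consider a hypothetical 01-level record and a paragraph that updates it. One plausible object-first rendering turns the record into a class and the paragraph into a method; the names and layout below are invented for the example.

```java
import java.math.BigDecimal;

// Hypothetical source record, shown only as a comment:
//   01 CUSTOMER-ACCOUNT.
//      05 ACCT-ID       PIC X(10).
//      05 ACCT-BALANCE  PIC S9(9)V99 COMP-3.
//
// The record becomes a class; a paragraph such as APPLY-PAYMENT becomes a method.
public class CustomerAccount {
    private final String accountId;   // ACCT-ID
    private BigDecimal balance;       // ACCT-BALANCE (two decimal places)

    public CustomerAccount(String accountId, BigDecimal openingBalance) {
        this.accountId = accountId;
        this.balance = openingBalance;
    }

    // Derived from the Procedure Division paragraph that applied a payment.
    public void applyPayment(BigDecimal amount) {
        balance = balance.subtract(amount);
    }

    public String getAccountId() { return accountId; }
    public BigDecimal getBalance() { return balance; }
}
```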

Many complications are introduced by the difference between the character set used to represent data on the mainframe (EBCDIC) and the one used by almost every other computing platform (ASCII). In addition, the mainframe has data types – packed decimal, for example – that don’t exist natively on other computing architectures. Also, the structure of business transactions is completely dependent on the APIs defined by the mainframe subsystems that support online interaction with mainframe applications (CICS, IMS, etc.).
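
To make the data-type issue concrete, here is a small, illustrative sketch of decoding an EBCDIC text field and a packed decimal (COMP-3) field on the Java side. It assumes the JVM’s extended charsets (the IBM1047 EBCDIC code page) are available and that the record layout is known; the field values are invented.

```java
import java.math.BigDecimal;
import java.math.BigInteger;
import java.nio.charset.Charset;

public class MainframeFieldDecoder {

    // Decode an EBCDIC text field (PIC X) to a Java String.
    // IBM1047 is a common EBCDIC code page; it requires the JDK's extended charsets.
    static String decodeEbcdic(byte[] field) {
        return new String(field, Charset.forName("IBM1047"));
    }

    // Decode a packed decimal (COMP-3) field into a BigDecimal.
    // Each byte holds two digit nibbles; the final nibble is the sign
    // (0xD means negative, 0xC or 0xF means positive).
    static BigDecimal decodePackedDecimal(byte[] field, int scale) {
        StringBuilder digits = new StringBuilder();
        for (int i = 0; i < field.length; i++) {
            int high = (field[i] & 0xF0) >>> 4;
            int low = field[i] & 0x0F;
            digits.append(high);
            if (i < field.length - 1) {
                digits.append(low);
            } else if (low == 0x0D) {
                digits.insert(0, '-');
            }
        }
        return new BigDecimal(new BigInteger(digits.toString()), scale);
    }

    public static void main(String[] args) {
        // "ABC" in EBCDIC (IBM1047) is 0xC1 0xC2 0xC3.
        System.out.println(decodeEbcdic(new byte[]{(byte) 0xC1, (byte) 0xC2, (byte) 0xC3}));
        // Packed bytes 0x12 0x34 0x5C with scale 2 represent 123.45.
        System.out.println(decodePackedDecimal(new byte[]{0x12, 0x34, 0x5C}, 2));
    }
}
```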

Batch processes depend on a mainframe-unique job control language (JCL) and the underlying subsystem (JES) that manages the scheduling and execution of batch jobs. On top of this subsystem dependency, custom-developed batch programs commonly rely on mainframe-unique utilities. None of this exists in the world of cloud computing.
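
Purely as an illustration of what that dependency looks like after conversion, the sketch below stands in for a simple JCL sort step. The JCL shown in the comments and the file paths are invented; in a real migration the step might instead map onto a batch framework or a managed cloud service.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

// Illustrative stand-in for a hypothetical JCL step such as:
//   //SORTCUST EXEC PGM=SORT
//   //SORTIN   DD DSN=PROD.CUSTOMER.DAILY,DISP=SHR
//   //SORTOUT  DD DSN=PROD.CUSTOMER.SORTED,DISP=(NEW,CATLG)
// In a cloud target, dataset names become file or object-store paths and the
// utility invocation becomes ordinary application code.
public class SortCustomersStep {
    public static void main(String[] args) throws IOException {
        Path in = Path.of("customer-daily.txt");    // hypothetical input
        Path out = Path.of("customer-sorted.txt");  // hypothetical output

        List<String> records = new ArrayList<>(Files.readAllLines(in));
        records.sort(String::compareTo);            // simple ascending sort on the full record
        Files.write(out, records);
    }
}
```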

There are also differences in how data is represented on the mainframe – fixed- or variable-length records, keyed and non-keyed access methods – that must be handled as part of the conversion. This is an incomplete list of the issues that come into play, and addressing them requires a strong computer science-based approach to generate good-quality code.
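
Here, too, is a small, hypothetical sketch of consuming fixed-length records once a sequential dataset has been converted to a cloud-side file; the 80-byte record length and the field offsets are assumptions made up for the example.

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;

// Illustrative only: reads a file of fixed-length records (no delimiters),
// the way a converted mainframe sequential dataset is often consumed.
public class FixedLengthRecordReader {
    private static final int RECORD_LENGTH = 80;  // assumed record length

    public static void main(String[] args) throws IOException {
        try (InputStream in = Files.newInputStream(Path.of("accounts.dat"))) {
            byte[] record = new byte[RECORD_LENGTH];
            while (in.readNBytes(record, 0, RECORD_LENGTH) == RECORD_LENGTH) {
                // Fields are located by offset, mirroring the original copybook layout
                // (data already converted to the platform character set).
                String accountId = new String(record, 0, 10).trim();
                String name = new String(record, 10, 30).trim();
                System.out.println(accountId + " -> " + name);
            }
        }
    }
}
```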

These transformed programs must be maintainable by developers skilled in the more modern implementation languages of web- and cloud-based systems.

Strong grammar-based parsers, abstract syntax tree representations, and rules-based migration technology are the price of entry for this modernization journey. Together they provide the basis for an effective, performant, repeatable, and tailorable approach to mainframe modernization.

An IT modernization strategy, such as code transformation, is one of the best ways to future-proof your legacy applications, removing all dependencies on legacy computing architectures and structures and making these applications truly cloud-advantaged!

– Guest content from Dale Vecchio, Mainframe Modernization Thought Leader

Find Out More

Watch A Modernization Dilemma: Cloud Application – Or An Application In The Cloud


Download cloudFrame Relocate Product Fact Sheet
