
Thursday, August 04, 2022

All About zIIPs

If you work with Db2 then you need to know about the IBM zIIP specialty processor. This is true whether you are a DBA or a developer... and, let's face it, if you work on a mainframe in any capacity, you'll want to know at least something about zIIPs.

With that in mind, today's blog post is sort of a meta-post. You see, I've been writing a series of blog posts for CloudFrame on the topic of zIIPs. So, the goal today is to provide you with information about those posts and links to their content.

The first blog post, Understanding Mainframe Specialty Processors: zIIPs and More, is introductory in nature. It defines the term "specialty processor," describes the various types of specialty processors available from IBM, and then offers a bit more information about the zIIP.

The second post in this series is titled Digging Into the zIIP: What Does zIIP-Eligible Mean? The title pretty much describes what the post offers!

Next (3 of 8) we have Types of Processing That Can Utilize zIIPs & Why You Want to Use zIIPs, which gets into TCBs, SRBs, and enclaves, and discusses how zIIPs can reduce your mainframe software costs.

The fourth post in this series digs into one of the major benefits of zIIPs, namely the ability to run Java workloads on the zIIP. In the article, titled Java and the zIIP: Five Major Benefits, we look at the primary advantages that accrue from running Java on zIIPs instead of a general-purpose CP.

And because the vast majority of mainframe programs are written in COBOL, the fifth post provides Options for Converting from COBOL to Java.

Next, we take a look at Common zIIP Usage Mistakes and How to Identify Them.

Then we turn our attention to the latest IBM pricing model, Tailored Fit Pricing, and examine how it can impact cost savings when it comes to zIIP usage. This post is titled, appropriately enough, The Impact of Tailored Fit Pricing on zIIPs.

And we close out with Predictions on the Future of zIIP and Specialty Processors (of course, this is my personal viewpoint on the topic with the proviso that nobody knows the future for sure)!

So it is my hope that you will take a moment or two and click through the links to the articles above that look interesting to you. And if you have any comments, or suggestions for future articles, as always, please post them below!

Tuesday, August 18, 2020

Navigating the IBM COBOL 4.2 End of Service Waters: Chart a course to benefit your business


Surprisingly, COBOL has been in the news a lot recently due to its significant usage in many federal government and state systems, most recently unemployment systems. With the global COVID-19 pandemic, those unemployment systems were stressed like never before, with a 1600% increase in traffic (Government Computer News, May 12, 2020) as those impacted by the pandemic filed claims.

Nevertheless, there is another impending event that will likely pull COBOL back into the news as IBM withdraws older versions of the COBOL compiler from service. All IBM product versions go through a lifecycle that starts with GA (general availability), moves after some time to EOM (end of marketing), when IBM no longer sells that version, and ends with EOS (end of service), when IBM no longer supports or fixes that product or version. At that point, most customers will need to decide either to stop using the product or to upgrade to a newer version.

Of course, code that was compiled using an unsupported COBOL compiler will continue to run, but it is not wise to rely on unsupported software for important, mission-critical applications, such as those usually written in COBOL. And you need to be aware of interoperability issues if you rely on more than one version of the COBOL compiler.

So what is going on in the world of COBOL that will require your attention? First of all, earlier this year on April 30, 2020, IBM withdrew support for Enterprise COBOL 5.1 and 5.2. And Enterprise COBOL 4.2 will be withdrawn from service on April 30, 2022 – just about two years from now.

So now is the time for your organization to think about its migration strategy.

Why is COBOL still being used?

Sometimes people who do not work in a mainframe environment are surprised that COBOL is still being used. But it is, and it is not just a fringe language. COBOL is a language that was designed for business data processing, and it is extremely well-suited for that purpose. It provides features for manipulating data and printing reports that are common business requirements. COBOL was purposely designed for applications that perform transaction processing like payroll, banking, airline booking, etc. You put data in, process that data, and send results out.

COBOL was invented in 1959, so its history stretches back over 60 years; a lot of time for organizations to build complex applications to support their business. And IBM has delivered new capabilities and features over the years that enable organizations to keep up to date as they maintain their application portfolio.

So, COBOL is in wide use across many industries.

A majority of global financial transactions are processed using COBOL, including 85 percent of the world’s ATM swipes. According to Reuters, almost 3 trillion dollars in DAILY commerce flows through COBOL systems!

The reality is that more than 30 billion COBOL transactions run every day. And there are more than 220 billion lines of COBOL in use today. COBOL is not dead…

What’s new in COBOL 6

With COBOL 5.1 and 5.2 already out of support, and COBOL 4.2 soon to follow, one migration path is to Enterprise COBOL 6, of which IBM has already delivered three releases: 6.1, 6.2, and 6.3. There are some nice new features in the latest versions of IBM Enterprise COBOL, including:

  • Compile and runtime support delivering performance improvements for z15 hardware and z/OS 2.4 operating system
  • Increased compiler capacity making it possible to compile and optimize larger programs (6.1)
  • 64-bit (AMODE 64) support in this compiler enables users to process large data tables that require greater than 2 GB of addressing space (6.3)
  • JSON support (6.1) including JSON PARSE statement (6.2)
  • Support for many new features from the COBOL 2002/2014 programming standards, including new statements like ALLOCATE, FREE, and INITIALIZE; addition of dynamic-length elementary items; conditional compilation using the DEFINE compiler option; and more
  • Many new compiler options
  • Improved usability with USS

At the same time, there are concerns that need to be considered if and when you migrate to version 6. One example is that the new compiler will take longer to compile programs than earlier versions – from 5 to 12 times longer depending on the optimization level. There are also additional work data sets required and additional memory considerations that need to be addressed to ensure the compiler works properly. As much as 20 times more memory may be needed to compile than with earlier versions of the compiler.

Some additional compatibility issues to keep in mind are that your executables are required to be stored in PDSE data sets and that COBOL 6 programs cannot call or be called by OS/VS COBOL programs.

And of course, one of the biggest issues when migrating from COBOL 4.2 to a new version of COBOL is the possibility of invalid data – even if you have not changed your data or your program (other than re-compiling in COBOL 6). This happens because the new code generator may optimize the code differently. That is to say, you can get different generated code sequences for the same COBOL source with COBOL 6 than with 4.2 and earlier versions of COBOL. While this can help minimize CPU usage (a good thing) it can cause invalid data to be processed differently, causing different behavior at runtime (a bad thing).

Whether you will experience invalid data processing issues depends on your specific data and how your programmers coded to access it. Some examples of processing that may cause invalid data issues include: invalid data in numeric USAGE DISPLAY data items; parameter/argument size mismatches; using the TRUNC() compiler option with binary data values that have more digits than they are defined to hold in working storage; and data items that are used before they have been assigned a value.
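
To make the zoned-decimal case concrete, here is a simplified Java sketch of my own (a hypothetical decoder, not anything the COBOL compiler actually generates; the sign nibble of a real zoned-decimal field is omitted for brevity). The point it illustrates is that the very same bytes can decode as a usable zero or raise a runtime error depending on how strictly invalid digits are treated:

public class ZonedDecimalDemo {

    // Lenient decode: use only the digit (low) nibble of each byte, which is
    // roughly how invalid digits could slip through older generated code.
    static int decodeLenient(byte[] field) {
        int value = 0;
        for (byte b : field) {
            value = value * 10 + (b & 0x0F);
        }
        return value;
    }

    // Strict decode: reject any byte that is not an EBCDIC digit (0xF0-0xF9),
    // analogous to newer generated code tripping over the invalid data.
    static int decodeStrict(byte[] field) {
        int value = 0;
        for (byte b : field) {
            if (((b >> 4) & 0x0F) != 0x0F || (b & 0x0F) > 9) {
                throw new NumberFormatException(
                        String.format("invalid zoned-decimal byte 0x%02X", b));
            }
            value = value * 10 + (b & 0x0F);
        }
        return value;
    }

    public static void main(String[] args) {
        // A PIC 9(4) USAGE DISPLAY field that was never initialized:
        // four EBCDIC spaces (0x40) instead of digits.
        byte[] uninitialized = {0x40, 0x40, 0x40, 0x40};

        System.out.println("Lenient: " + decodeLenient(uninitialized)); // prints 0
        try {
            System.out.println("Strict: " + decodeStrict(uninitialized));
        } catch (NumberFormatException e) {
            System.out.println("Strict: " + e.getMessage()); // flags the bad byte
        }
    }
}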

Migration considerations

Keep in mind that migration will be a lengthy process for any medium-to-large organization, mostly due to testing application behavior after compilation, and comparing it to pre-compilation behavior. You need to develop a plan that best suits your organization’s requirements and work to implement it in the roughly 2-year timeframe before IBM Enterprise COBOL 4.2 goes out of support.

Things to consider:

  • Gartner research shows that “huge ‘all-or-nothing’ modernization programs often fail to meet expectations”
  • What is your current state? Which COBOL compilers are you using and what is your end goal (6.1, 6.2, 6.3)?
  • Remember that compiled programs will continue to run, so it may not be imperative to re-compile everything before the end-of-support date. But it can be difficult to keep track of what has and has not been converted if your only plan is to "convert when the program has to be changed." It also becomes difficult to track the requirements and incompatibilities of multiple COBOL versions if you do not plan for, and eventually complete, conversion to a newer compiler version.
  • Do you have the COBOL talent and knowledge not only to convert but to continue supporting your existing portfolio of COBOL applications?
  • Enterprise application portfolios can be quite large, making it difficult to effectively discover and map all of the dependencies. Consider using tools to help. 

Migration challenges and options to consider

As you put your plan together, you might consider converting some of your COBOL applications to Java. An impending event such as the end of support for a compiler is a prime opportunity for doing so. But why might you want to convert your COBOL programs to Java?

Well, it can be difficult to obtain and keep skilled COBOL programmers. As COBOL coders age and retire, there are fewer and fewer programmers with the needed skills to manage and maintain all of the COBOL programs out there. At the same time, there are many skilled Java programmers available on the market, and universities are churning out more every year.

Additionally, Java code is portable, so if you ever want to move it to another platform it is much easier to do that with Java than with COBOL. Furthermore, it is easier to adopt cloud technologies and gain the benefits of elastic compute with Java programs.

Cost reduction can be another valid reason to consider converting from COBOL to Java. Java programs can be run on zIIP processors, which can reduce the cost of running your applications. A workload that runs on zIIPs is not subject to IBM (and most ISV) licensing charges... and, as every mainframe shop knows, the cost of software rises as capacity on the mainframe rises. But if capacity can be redirected to a zIIP processor, then software license charges do not accrue - at least for that workload.

Additional benefits of zIIPs include:

  • They are significantly cheaper to acquire than standard CPs
  • When workload is redirected to a zIIP it frees up capacity on the standard CP

So, there are many reasons to consider converting at least some of your COBOL programs to Java. Some may be worried about Java performance, but Java performance is similar to COBOL these days; in other words, most of the performance issues of the past have been resolved. Furthermore, there are many tools to help you develop, manage, and test your Java code, both on the mainframe and other platforms.

Keeping in mind the concerns about “all-or-nothing” conversions, most organizations will be working toward a mix of COBOL migrations and Java conversions, with a mix of COBOL and Java being the end result. As you plan for this, be sure to analyze and select appropriate candidate programs and applications for conversion to Java. There are tools that can analyze program functionality to assist you in choosing the best candidates. For example, you may want to avoid converting programs that frequently call other COBOL programs and programs that use pre-relational DBMS technologies (such as IDMS and IMS).

How to convert COBOL to Java

At this point, you may be thinking, “Sure, I can see the merit in converting some of my programs to Java, but how can I do that? I don’t have the time for my developers to re-create COBOL programs in Java going line-by-line!” Of course, you don’t!

This is where an automated tool comes in handy. The CloudFrame Migration Suite provides code conversion tools, automation, and DevOps integration to deliver very maintainable, object-oriented Java that can integrate with modern technology available within your open architecture.  It can be used to refactor COBOL source code to Java without changing data, schedulers, and other infrastructure components. It is fully automated and seamlessly integrates with the change management systems you already use on the mainframe.

The Java code generated by CloudFrame will operate the same as your COBOL and produce the same output. There are even options you can use to maintain the COBOL 4.2 treatment of data, thereby avoiding the invalid data issues that can occur when you migrate to COBOL 6. This can help to reduce project testing and remediation time.

It is also possible to use CloudFrame to refactor your COBOL programs to Java but keep maintaining the code in COBOL. Such an approach, as described in this blog post (Consider Cross-Compiling COBOL to Java to Reduce Costs), can allow you to keep using your COBOL programmers for maintenance but to gain the zIIP eligibility of Java when you run the code.

Upcoming Webinar

To learn more about COBOL migration, modernization considerations, and how CloudFrame can help you to achieve your modernization goals, be sure to attend CloudFrame’s upcoming webinar, where I will be participating on a panel along with Venkat Pillay (CEO and founder of CloudFrame) and Dale Vecchio (industry analyst and former Gartner research VP). The webinar, titled Navigating the COBOL 4.2 End of Support (EOS) Waters: An expert panel discusses the best course of action to benefit your business, will be held on September 23, 2020 at 11:00 AM Eastern time. Be sure to register and attend!

Summary

Users of IBM Enterprise COBOL 4.2 need to be aware of the imminent end of service date in April 2022 and make appropriate plans for migrating off of the older compiler.

This can be a great opportunity to consider what should remain COBOL and where the opportunities to modernize to Java are.  Learn how CloudFrame can help you navigate that journey.

Wednesday, June 01, 2011

Mainframe Specialty Processors

Anyone who uses an IBM zSeries mainframe has probably heard about zIIPs and zAAPs and other specialty processors. But maybe you haven't yet done any real investigation into what they are, what they do, and why they exist. So, with that in mind, let's take a brief journey into the world of specialty processors in today's blog entry!

Over the course of the past decade or so, IBM has introduced several different types of specialty processors. The basic idea of a specialty processor is that it sits alongside the main CPUs, and specific types of "special" workload are shuttled to the specialty processor to run there instead of on the primary CPU complex. Why is this useful or interesting to mainframe customers? Well, specialty processor workload is not subject to IBM (and many ISV) licensing charges... and, as any mainframer knows, the cost of software rises as capacity on the mainframe rises. But if capacity can be redirected to a specialty processor, then software license charges do not accrue -- at least for that workload.

And for VWLC (Variable Workload License Charge) customers, shuttling workload to a specialty processor can reduce the rolling four-hour average and thereby decrease your monthly IBM software license bill.
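
To make the rolling four-hour average concrete, here is a toy Java sketch (all MSU numbers are invented) showing how moving eligible work to a zIIP during the peak lowers the R4HA that VWLC charges are based on:

public class R4haDemo {

    // Average of the four hourly MSU samples ending at endHour (inclusive)
    static double r4ha(double[] hourlyMsu, int endHour) {
        double sum = 0;
        for (int h = endHour - 3; h <= endHour; h++) {
            sum += hourlyMsu[h];
        }
        return sum / 4.0;
    }

    public static void main(String[] args) {
        // Hourly general-CP MSU consumption before and after redirecting
        // roughly 60 MSU of eligible work to a zIIP (hypothetical data).
        // MSUs consumed on the zIIP drop out of the general-CP numbers.
        double[] before = {400, 420, 480, 500, 520, 508};
        double[] after  = {400, 420, 430, 440, 450, 444};

        System.out.printf("Peak R4HA before offload: %.0f MSU%n", r4ha(before, 5)); // 502
        System.out.printf("Peak R4HA after offload:  %.0f MSU%n", r4ha(after, 5));  // 441
    }
}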

Another benefit of specialty processors is that they can be cheaper to acquire than standard CPUs.

But specialty processors can only run certain types of workloads. There are four types of specialty processors:

  • ICF: Internal Coupling Facility - used for redirecting coupling facility cycles in a data sharing environment.
  • IFL: Integrated Facility for Linux - used for processing zLinux workload on an IBM mainframe.
  • zAAP: Application Assist Processor - used for Java workload.
  • zIIP: Integrated Information Processor - used for processing certain distributed database workloads.

When you activate any of these processors, some percentage of that type of workload can be redirected off of the main CP onto the specialty processor... but not 100% of the workload. It can be frustrating, particularly with the zIIP, to determine exactly what is redirected, when, and how much. In general, distributed DB2 for z/OS workload and XML processing can be redirected to zIIP processors.

Additionally, to run on a zIIP, the workload must run under an enclave SRB. So, code written to execute under a TCB will usually be unable to execute under an SRB without major changes. If you didn't understand that sentence, don't worry about it too much. Basically, IBM has enabled certain types of (mostly DB2) workload to run on zIIPs, and ISVs have enabled some of their code to run on zIIPs, too. If you are interested, more details about zIIPs can be found at this link.
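
As a concrete example of workload that runs under enclave SRBs, here is a minimal JDBC sketch of distributed access to DB2 for z/OS. It assumes the IBM Data Server Driver for JDBC is on the classpath, and the host name, port, location, and credentials are all placeholders. SQL that arrives over the network through DDF like this is the classic zIIP-eligible case:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class DistributedDb2Query {
    public static void main(String[] args) throws Exception {
        // Type 4 (DRDA over TCP/IP) connection: this is "distributed" work
        // that DB2 runs under enclave SRBs, making it zIIP-eligible.
        String url = "jdbc:db2://mf.example.com:446/DSNLOC1"; // placeholder host/location

        try (Connection con = DriverManager.getConnection(url, "user", "password");
             Statement stmt = con.createStatement();
             ResultSet rs = stmt.executeQuery(
                     "SELECT CURRENT TIMESTAMP FROM SYSIBM.SYSDUMMY1")) {
            while (rs.next()) {
                System.out.println(rs.getString(1)); // one row from the sample query
            }
        }
    }
}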

Another interesting tidbit is that zAAP-eligible workloads can be run on zIIPs with IBM's zAAP on zIIP support. This can be a boon to shops that have zIIPs but no zAAPs: with zAAP on zIIP support, you can use zIIP processors for both Java and distributed DB2 workloads. The combined eligible TCB and enclave SRB workloads might make the acquisition of a zIIP cost-effective, because Java- and XML-based workloads become eligible to run on existing zIIPs.

To take advantage of zAAP on zIIP, you need to be running z/OS V1.11 (or z/OS V1.9 or V1.10 with the PTFs for APAR OA27495) on a z9, z10, or z196 server.

Keep in mind that the terms for specialty processors do not change. You can only have 1 zAAP and 1 zIIP per general purpose processor. So, even if you have zAAP on zIIP configured, the chip is still a zIIP and you cannot have more than 1 per general purpose processor.

The Bottom Line

The bottom line is that even though it can take some studying and research to understand their benefit and functionality, specialty processors can help to reduce the cost of mainframe computing... and that is a good thing!

Wednesday, January 20, 2010

On Specialty Processors

Today's blog entry is kind of a meta entry.

As many of my readers know, I write two blogs: this one, which is predominantly about DB2 for z/OS, and the Data Management Today blog, which focuses on data-related issues, database management, and industry trends.

Well, I've written a series of entries on my other blog about specialty processors (zIIPs, zAAPs, etc.) and related issues. Since DB2 for z/OS folks should find that information useful and relevant, I thought I'd write an entry pointing my readers to the pertinent specialty processor entries at my Data Management Today blog, so here goes...

The first entry I'd like to call your attention to is titled Specialty Processors on the Mainframe. This piece is basically an introduction to the different types of specialty processors (zIIP, zAAP, IFL, ICF), and what they can be used for. It is a good place to start if you are new to specialty processors or are looking for an update.

The second entry worth a peek is titled simply zAAP on zIIP. As you may or may not know, IBM delivered the capability for zAAP work to run on a zIIP (with certain conditions). This blog entry provides a brief synopsis of the August 18, 2009 z/OS V1.11 announcement that introduced the new capability to enable zAAP-eligible workloads to run on zIIPs.

Next up is What is an Enclave? If you are working with zIIPs you have probably heard the term enclave SRB. And if you are doing any type of distributed workload you've probably heard about enclaves, too. This blog entry is for those who are new to the term, or confused about it. It offers an explanatory definition of the term "enclave" and points interested readers to additional reference material.

My most recent post over there (January 19, 2010), titled What is Generosity Factor?, has been a popular one. This blog entry delves into the generosity factor imposed upon zIIP workload, including definitions of generosity factor, qualified and eligible work, and a discussion of what it implies for ISV products.

Hope you find this material worthwhile... and thanks for your continued support of my blogs.