Monday, March 20, 2023

Harnessing the Power of zIIP Processors for Improved Db2 Performance and Lower Cost

As a Db2 DBA, you're constantly looking for ways to improve performance and efficiency while minimizing costs. One technology that can help achieve these goals is the zIIP (IBM System z Integrated Information Processor) processor. By offloading eligible Db2 workloads to zIIP processors, you can free up capacity on general-purpose processors and reduce costs, while improving performance.

So, what workloads are eligible for offloading to zIIP processors? XML processing and portions of the Db2 LOAD, REORG, RUNSTATS, and REBUILD INDEX utilities are among the most common. If you use third-party utilities (from BMC, Broadcom, InfoTel, and others), it is likely that they, too, are zIIP-eligible, at least for some of their functionality.

Shifting workload to distributed/DDF is another good way to exploit zIIPs because a significant portion of the SQL processing that comes in through DDF is zIIP-eligible. But most of the time DBAs have little influence over moving workload to distributed processing. This choice is typically driven by application development plans rather than DBA tuning tactics.

Nevertheless, by understanding which types of workload are zIIP-eligible and encouraging their use, you can shift work from general-purpose processors to zIIPs, potentially improving system performance while reducing costs.

You might also want to take a look at converting some of your COBOL workload to Java, if at all possible, because Java programs are zIIP-eligible. Of course, this requires application developers to get involved, as well as (possibly) a conversion tool.

To fully harness the power of zIIP processors, it's important to identify eligible workloads and configure the system accordingly. Here are some tips to help you get started:

  • Configure Db2 for zIIP offload: Configure Db2 to take advantage of zIIP processors by setting the appropriate parameters and options. Consult the IBM Db2 documentation for specific guidance on configuring zIIP offload.

  • Monitor and analyze performance: Use Db2 performance monitoring tools to track the performance of zIIP offloaded workloads and identify areas for further optimization (see the sample query below). This can help you continually improve performance and efficiency over time.
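
Exactly how you track zIIP usage depends on your monitoring tools, but a common approach is to load SMF accounting data into a performance database and query it with SQL. The sketch below is purely illustrative: it assumes a hypothetical summary table named PERFDB.ACCT_SUMMARY with separate general-purpose (CP) and zIIP CPU columns; your monitor's actual table and column names will differ.

    -- Hypothetical query: compare CPU consumed on general-purpose processors
    -- with CPU consumed on zIIPs, and highlight zIIP-eligible work that ran
    -- on the general CPs (a sign you may need more zIIP capacity).
    SELECT   CONNTYPE
           , SUM(CP_CPU_SECS)          AS CP_SECONDS
           , SUM(ZIIP_CPU_SECS)        AS ZIIP_SECONDS
           , SUM(ZIIP_ELIG_ON_CP_SECS) AS ZIIP_ELIGIBLE_RUN_ON_CP
    FROM     PERFDB.ACCT_SUMMARY
    WHERE    ACCT_DATE >= CURRENT DATE - 7 DAYS
    GROUP BY CONNTYPE
    ORDER BY ZIIP_ELIGIBLE_RUN_ON_CP DESC;
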
By effectively utilizing zIIP processors for Db2 workloads, you can achieve significant cost savings and performance improvements on IBM Z mainframe systems. Don't let this powerful technology go to waste – start exploring the benefits of zIIP processors today!

Thursday, August 04, 2022

All About zIIPs

If you work with Db2 then you need to know about the IBM zIIP specialty processor. This is true whether you are a DBA or a developer... and, let's face it, if you work on a mainframe in any capacity, you'll want to know at least something about zIIPs.

With that in mind, today's blog post is sort of a meta-post. You see, I've been writing a series of blog posts for CloudFrame on the topic of zIIPs. So, the goal today is to provide you with information about those posts and links to their content.

The first blog post, Understanding Mainframe Specialty Processors: zIIPs and More, is introductory in nature. It defines the term "specialty processor" and defines the various types of specialty processors available from IBM. And then it offers a bit more information about the zIIP.

The second post in this series is titled Digging Into the zIIP: What Does zIIP-Eligible Mean? The title is kind of self-describing as to what this post offers!

Next (3 of 8) we have Types of Processing That Can Utilize zIIPs & Why You Want to Use zIIPs, which gets into TCBs, SRBs, and enclaves, and discusses how zIIPs can reduce your mainframe software costs.

The fourth post in this series digs into one of the major benefits of zIIPs, namely the ability to run Java workloads on the zIIP. In the article, titled Java and the zIIP: Five Major Benefits, we look at the primary advantages that can be accrued by running Java on zIIPs instead of a general purpose CP.

And because the vast majority of mainframe programs are written in COBOL, the fifth post provides Options for Converting from COBOL to Java.

Next, we take a look at Common zIIP Usage Mistakes and How to Identify Them.

Then we turn our attention to the latest IBM pricing model, Tailored Fit Pricing, and examine how it can impact cost savings when it comes to zIIP usage. This post is titled, appropriately enough, The Impact of Tailored Fit Pricing on zIIPs.

And we close out with Predictions on the Future of zIIP and Specialty Processors (of course, this is my personal viewpoint on the topic with the proviso that nobody knows the future for sure)!

So it is my hope that you will take a moment or two and click through the links to the articles above that look interesting to you. And if you have any comments, or suggestions for future articles, as always, please post them below!

Wednesday, May 25, 2022

TCBs, SRBs, and Enclaves

Many Db2 DBAs first hear about Task Control Blocks (TCBs) and Service Request Blocks (SRBs) in an IBM performance class, but not everyone has taken one of those classes. And even for those who have, a refresher is probably in order.

At a high level, code on a z/OS mainframe can execute in one of two modes: TCB mode, also known as task mode, or SRB mode. Most programs execute under the control of a task. Each thread is represented by a TCB. A program can exploit multiple processors if it is composed of multiple tasks, as most programs are.

An SRB is a control block that represents a routine that performs a particular function or service in a specified address space. SRBs are lightweight and efficient but are available only to supervisor state software. An SRB is similar to a TCB in that it identifies a unit of work to the system. But an SRB cannot “own” storage areas. SRB routines can obtain, reference, use, and free storage areas, but the areas must be owned by a TCB. SRB mode typically is used by operating system facilities and vendor programs to perform certain performance-critical functions.

In general, z/OS dispatches Db2 work in TCB mode if the request is local, or in SRB mode if the request is distributed. These parallel tasks are assigned the same importance as the originating address space. Of course, this is a gross generalization, and as zIIPs have become ubiquitous more SRB mode work has been enabled (because only work running under enclave SRBs is eligible for the zIIP).

Preemptible enclaves are used to do the work on behalf of the originating TCB or SRB address space. Enclaves are grouped by common characteristics and service requests, and because they are preemptible, the z/OS dispatcher—and Workload Manager—can interrupt these tasks for more important ones. There are two types of preemptible SRBs: client SRBs and enclave SRBs.

If the Db2 request is distributed DRDA workload, then it will be executed in enclave SRBs. If the request is coming over a local connection, then it will be dispatched between TCBs, client SRBs, and in some cases enclave SRBs (such as for parallel queries and index maintenance).

What Is an Enclave?

An enclave is a construct that represents a transaction or unit of work. Enclaves are a method of managing mainframe transactions for non-traditional workloads. You can think of an enclave as an anchor point for resource accumulation regardless of where the transaction is executing.

With traditional workloads, it is relatively easy to map the resources consumed to the actual transaction doing the consumption. But with non-traditional workloads (such as web transactions, distributed processing, and so on) it can be more difficult because a transaction can span multiple platforms. Enclaves are used to overcome this difficulty by correlating closely to the end user’s view of the transaction.

So even though a non-traditional transaction can be composed of multiple “pieces” spanning many server address spaces, and can share those address spaces with other transactions, the enclave gives you more effective control over the non-traditional workload. 

Synopsis

Hopefully this short introduction to TCBs, SRBs, and Enclaves has been helpful. At least the next time you hear somebody use these terms you'll have some idea what they are talking about!

Tuesday, August 18, 2020

Navigating the IBM COBOL 4.2 End of Service Waters: Chart a course to benefit your business

 

Surprisingly, COBOL has been in the news a lot recently due to its significant usage in many federal government and state systems, most notably unemployment systems. With the global COVID-19 pandemic, those unemployment systems were stressed like never before, with a 1600% increase in traffic (Government Computer News, May 12, 2020) as those impacted by the pandemic filed claims.

But there is another impending event that will likely pull COBOL back into the news as IBM withdraws older versions of the COBOL compiler from service. All IBM product versions go through a lifecycle that starts with GA (general availability), moves after some time to EOM (end of marketing), where IBM no longer sells that version, and ends with EOS (end of service), where IBM no longer supports that product or version. It is at this point that most customers need to decide either to stop using that product or to upgrade to a newer version, because IBM will no longer fix or support EOS products or versions.

Of course, code that was compiled using an unsupported COBOL compiler will continue to run, but it is not wise to use unsupported software for important, mission-critical software, such as is usually written using COBOL. And you need to be aware of interoperability issues if you rely on more than one version of the COBOL compiler.

So what is going on in the world of COBOL that will require your attention? First of all, earlier this year on April 30, 2020, IBM withdrew support for Enterprise COBOL 5.1 and 5.2. And Enterprise COBOL 4.2 will be withdrawn from service on April 30, 2022 – just about two years from now.

So now is the time for your organization to think about its migration strategy.

Why is COBOL still being used?

Sometimes people who do not work in a mainframe environment are surprised that COBOL is still being used. But it is, and it is not just a fringe language. COBOL is a language that was designed for business data processing, and it is extremely well-suited for that purpose. It provides features for manipulating data and printing reports that are common business requirements. COBOL was purposely designed for applications that perform transaction processing like payroll, banking, airline booking, etc. You put data in, process that data, and send results out.

COBOL was invented in 1959, so its history stretches back over 60 years; a lot of time for organizations to build complex applications to support their business. And IBM has delivered new capabilities and features over the years that enable organizations to keep up to date as they maintain their application portfolio.

So, COBOL is in wide use across many industries.

A majority of global financial transactions are processed using COBOL, including processing 85 percent of the world’s ATM swipes. According to Reuters, almost 3 trillion dollars in DAILY commerce flows through COBOL systems!

The reality is that more than 30 billion COBOL transactions run every day. And there are more than 220 billion lines of COBOL in use today. COBOL is not dead…

What's new in COBOL 6

With COBOL 5.1 and 5.2 already out of support, and COBOL 4.2 soon to follow, one migration path is to Enterprise COBOL 6, and IBM has already delivered three releases of it: 6.1, 6.2, and 6.3. There are some nice new features that are in the latest version(s) of IBM Enterprise COBOL, including:

  • Compile and runtime support delivering performance improvements for z15 hardware and z/OS 2.4 operating system
  • Increased compiler capacity making it possible to compile and optimize larger programs (6.1)
  • 64-bit (AMODE 64) support in this compiler enables users to process large data tables that require greater than 2 GB of addressing space (6.3)
  • JSON support (6.1) including JSON PARSE statement (6.2)
  • Support for many new features from the COBOL 2002/2014 programming standards including new statements like ALLOCATE, FREE and INITIALIZE; addition of Dynamic Length elementary items; conditional compilation using the DEFINE compiler option, and more
  • Many new compiler options
  • Improved usability with USS

At the same time, there are concerns that need to be considered if and when you migrate to version 6. One example is that the new compiler will take longer to compile programs than earlier versions – from 5 to 12 times longer depending on the optimization level. There are also additional work data sets required and additional memory considerations that need to be addressed to ensure the compiler works properly. As much as 20 times more memory may be needed to compile than with earlier versions of the compiler.

Some additional compatibility issues to keep in mind are that your executables are required to be stored in PDSE data sets and that COBOL 6 programs cannot call or be called by OS/VS COBOL programs.

And of course, one of the biggest issues when migrating from COBOL 4.2 to a new version of COBOL is the possibility of invalid data – even if you have not changed your data or your program (other than re-compiling in COBOL 6). This happens because the new code generator may optimize the code differently. That is to say, you can get different generated code sequences for the same COBOL source with COBOL 6 than with 4.2 and earlier versions of COBOL. While this can help minimize CPU usage (a good thing) it can cause invalid data to be processed differently, causing different behavior at runtime (a bad thing).

Whether you will experience invalid data processing issues depends on your specific data and how your programmers coded to access it. Some examples of processing that may cause invalid data issues include invalid data in numeric USAGE DISPLAY data items; parameter/argument size mismatches; binary data values that have more digits than their working-storage definitions allow (behavior that interacts with the TRUNC compiler option); and data items that are used before they have been assigned a value.

Migration considerations

Keep in mind that migration will be a lengthy process for any medium-to-large organization, mostly due to testing application behavior after compilation, and comparing it to pre-compilation behavior. You need to develop a plan that best suits your organization’s requirements and work to implement it in the roughly 2-year timeframe before IBM Enterprise COBOL 4.2 goes out of support.

Things to consider:

  • Gartner research shows that “huge ‘all-or-nothing’ modernization programs often fail to meet expectations”
  • What is your current state? Which COBOL compilers are you using and what is your end goal (6.1, 6.2, 6.3)?
  • Remember that compiled programs will continue to run, so it may not be imperative to re-compile everything prior to the end of support date. Of course, it can be difficult to keep track of what has been converted and what has not if you do not have a plan beyond “convert when the program has to be changed at some point.” And it can become difficult to keep track of all the requirements and incompatibilities for multiple versions of COBOL if you do not plan for, and eventually convert to, a newer compiler version.
  • Do you have the COBOL talent and knowledge not only to convert but to continue supporting your existing portfolio of COBOL applications?
  • Enterprise application portfolios can be quite large, making it difficult to effectively discover and map all of the dependencies. Consider using tools to help. 

Migration Challenges and Options to Consider

As you put your plan together, you might consider converting some of your COBOL applications to Java. An impending event such as the end of support for a compiler is a prime opportunity for doing so. But why might you want to convert your COBOL programs to Java?

Well, it can be difficult to obtain and keep skilled COBOL programmers. As COBOL coders age and retire, there are fewer and fewer programmers with the needed skills to manage and maintain all of the COBOL programs out there. At the same time, there are many skilled Java programmers available on the market, and universities are churning out more every year.

Additionally, Java code is portable, so if you ever want to move it to another platform it is much easier to do that with Java than with COBOL. Furthermore, it is easier to adopt cloud technologies and gain the benefits of elastic compute with Java programs.

Cost reduction can be another valid reason to consider converting from COBOL to Java. Java programs can be run on zIIP processors, which can reduce the cost of running your applications. A workload that runs on zIIPs is not subject to IBM (and most ISV) licensing charges... and, as every mainframe shop knows, the cost of software rises as capacity on the mainframe rises. But if capacity can be redirected to a zIIP processor, then software license charges do not accrue - at least for that workload.

Additional benefits of zIIPs include:

  • They are significantly cheaper to acquire than standard CPs
  • When workload is redirected to a zIIP it frees up capacity on the standard CP

So, there are many reasons to consider converting at least some of your COBOL programs to Java. Some may be worried about Java performance, but Java performance is similar to COBOL these days; in other words, most of the performance issues of the past have been resolved. Furthermore, there are many tools to help you develop, manage, and test your Java code, both on the mainframe and other platforms.

Keeping in mind the concerns about “all-or-nothing” conversions, most organizations will be working toward a mix of COBOL migrations and Java conversions, with a mix of COBOL and Java being the end result. As you plan for this, be sure to analyze and select appropriate candidate programs and applications for conversion to Java. There are tools that can analyze program functionality to assist you in choosing the best candidates. For example, you may want to avoid converting programs that frequently call other COBOL programs and programs that use pre-relational DBMS technologies (such as IDMS and IMS).

How to convert COBOL to Java

At this point, you may be thinking, “Sure, I can see the merit in converting some of my programs to Java, but how can I do that? I don’t have the time for my developers to re-create COBOL programs in Java going line-by-line!” Of course, you don’t!

This is where an automated tool comes in handy. The CloudFrame Migration Suite provides code conversion tools, automation, and DevOps integration to deliver very maintainable, object-oriented Java that can integrate with modern technology available within your open architecture.  It can be used to refactor COBOL source code to Java without changing data, schedulers, and other infrastructure components. It is fully automated and seamlessly integrates with the change management systems you already use on the mainframe.

The Java code generated by CloudFrame will operate the same as your COBOL and produce the same output. There are even options you can use to maintain the COBOL 4.2 treatment of data, thereby avoiding the invalid data issues that can occur when you migrate to COBOL 6. This can help to reduce project testing and remediation time.

It is also possible to use CloudFrame to refactor your COBOL programs to Java but keep maintaining the code in COBOL. Such an approach, as described in this blog post (Consider Cross-Compiling COBOL to Java to Reduce Costs), can allow you to keep using your COBOL programmers for maintenance but to gain the zIIP eligibility of Java when you run the code.

Upcoming Webinar

To learn more about COBOL migration, modernization considerations, and how CloudFrame can help you to achieve your modernization goals, be sure to attend CloudFrame’s upcoming webinar where I will be participating on a panel along with Venkat Pillay (CEO and founder of CloudFrame) and Dale Vecchio (industry analyst and former Gartner research VP). The webinar, titled Navigating the COBOL 4.2 End of Support (EOS) Waters: An expert panel discusses the best course of action to benefit your business, will be held on September 23, 2020 at 11:00 AM Eastern time. Be sure to register and attend!

Summary

Users of IBM Enterprise COBOL 4.2 need to be aware of the imminent end of service date in April 2022 and make appropriate plans for migrating off of the older compiler.

This can be a great opportunity to consider what should remain COBOL and where the opportunities to modernize to Java are.  Learn how CloudFrame can help you navigate that journey.

Friday, October 25, 2013

Say "Hello" to DB2 11 for z/OS

DB2 11 for z/OS Generally Available Today, October 25, 2013

As was announced earlier this month (see press release) Version 11 of DB2 for z/OS is officially available as of today. Even if your company won’t be migrating right away, the sooner you start learning about DB2 11, the better equipped you will be to embrace it when you inevitably must use and support it at your company.

So let’s take a quick look at some of the highlights of this latest and greatest version of our favorite DBMS. As usual, a new version of DB2 delivers a large number of new features, functions, and enhancements, so of course, not every new DB2 11 “thing” will be addressed in today’s blog entry.

Performance Claims

Similar to most recent DB2 versions, IBM boasts of performance improvements that can be achieved by migrating to DB2 11. IBM claims out-of-the-box CPU savings ranging from 10 percent to 40 percent, depending on the workload: up to 10 percent for complex OLTP and update-intensive batch, and up to 40 percent for query workloads.

As usual, your actual mileage may vary. It all depends upon things like the query itself, the number of columns requested, the number of partitions that must be accessed, indexing, and on and on. So even though it looks like performance gets better in DB2 11, take these estimates with a grain of salt.

The standard operating procedure of rebinding to achieve the best results still applies. And, of course, if you use the new features of DB2 11 IBM claims that you can achieve additional performance improvements.

DB2 11 also offers improved synergy with the latest mainframe hardware, the zEC12. For example, FLASH Express and pageable 1MB frames are used for buffer pool control blocks and DB2 executable code. So keep in mind that getting to the latest hardware can help out your DB2 performance and operation!

Programmer Features

Let’s move along and take a look at some of the great new features for building applications offered up by DB2 11. There are a slew of new SQL and analytical capabilities in the new release, including: 
  • Global variables – which can be used to pass data from program to program without the need to put data into a DB2 table (see the short sketch after this list).
  • Improved SQLPL functionality, including an array data type, which makes SQLPL more computationally complete and simplifies coding SQL stored procedures.
  • Alias support for sequence objects.
  • Improvements to Declared Global Temporary Tables (DGTTs), including the ability to create NOT LOGGED DGTTs and the ability to use RELEASE(DEALLOCATE) for SQL statements written against DGTTs.
  • SQL Compatibility feature, which can be used to minimize the impact of new version changes on existing applications.
  • Support for views on temporal data.
  • SQL Grouping Sets, including ROLLUP and CUBE.
  • XML enhancements, including XQuery support, XMLMODIFY for improved updating of XML nodes, and improved validation of XML documents.
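
Here is a minimal sketch of two of these features in action: a global variable used to pass a value between programs, and a NOT LOGGED declared temporary table. All of the object names are hypothetical, and the statements are illustrative rather than production-ready.

    -- Create a global variable (a one-time DDL step); programs can then
    -- set and reference it without passing parameters or using a table.
    CREATE VARIABLE MYSCHEMA.CURRENT_REGION CHAR(8) DEFAULT 'EAST';

    SET MYSCHEMA.CURRENT_REGION = 'WEST';    -- set by one program...

    SELECT CUSTNO, CUSTNAME                  -- ...referenced by another
    FROM   MYSCHEMA.CUSTOMER
    WHERE  REGION = MYSCHEMA.CURRENT_REGION;

    -- Declare a NOT LOGGED temporary table (new in DB2 11) to avoid
    -- logging overhead for scratch data that never needs to be recovered.
    DECLARE GLOBAL TEMPORARY TABLE SESSION.WORK_CUST
      (CUSTNO  INTEGER,
       BALANCE DECIMAL(11,2))
      NOT LOGGED;
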
The BIND and REBIND enhancements made in DB2 11 are important to note here, too. Since BIND and REBIND span application programming and database administration, I’ll talk about them here at the end of the Programming Features section, right before we move on to talk about DBA features.

The first new capability is the addition of the APREUSE(WARN) parameter. Before we learn about the new feature, let’s backtrack for a moment to talk about the current (DB2 10) capabilities of the APREUSE parameter. There are currently two options:
  • APREUSE(NONE): DB2 will not try to reuse previous access paths for statements in the package. (default value)
  • APREUSE(ERROR): DB2 tries to reuse previous access paths for SQL statements in the package. If the access paths cannot be reused, the operation fails and no new package is created.

So you can either not try to reuse or try to reuse, and if you can’t reuse when you try to, you fail. Obviously, a third, more palatable choice was needed. And DB2 11 adds this third option.
  • APREUSE(WARN): DB2 tries to reuse previous access paths for SQL statements in the package, but the bind or rebind is not prevented when they cannot be reused. Instead, DB2 generates a new access path for that SQL statement.
So you can think of APREUSE(ERROR) as functioning on a package boundary, whereas APREUSE(WARN) functions on a statement boundary.

DBA and Other Technical Features

There are also a slew of new in-depth technical and DBA-related features in DB2 11. Probably the most important, and one that impacts developers too, is transparent archiving using DB2’s temporal capabilities first introduced in DB2 10.

Basically, if you know how to set up SYSTEM time temporal tables, setting up transparent archiving will be a breeze. You create both the table and the archive table and then associate the two. This is done by means of the ENABLE ARCHIVE USE clause. DB2 is aware of the connection between the operational table and the archive table, so any data that is deleted will be moved to the archive table.

Unlike SYSTEM time, only deleted data is moved to the archive table. There is a new built-in global variable, SYSIBMADM.MOVE_TO_ARCHIVE, that controls whether deleted data is archived, so you can DELETE data without archiving it should you need to do so.
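
Here is a minimal sketch of how the pieces fit together, using made-up table and database names; the exact column definitions and options you choose will, of course, vary.

    -- The operational table and the archive table must have matching columns.
    CREATE TABLE PROD.ORDERS
      (ORDER_NO   INTEGER NOT NULL,
       ORDER_DATE DATE,
       STATUS     CHAR(1))
      IN DBPROD.TSORDERS;

    CREATE TABLE PROD.ORDERS_ARCH
      (ORDER_NO   INTEGER NOT NULL,
       ORDER_DATE DATE,
       STATUS     CHAR(1))
      IN DBPROD.TSORDARC;

    -- Associate the two tables to enable transparent archiving.
    ALTER TABLE PROD.ORDERS
      ENABLE ARCHIVE USE PROD.ORDERS_ARCH;

    -- The built-in global variable controls whether deleted rows are moved
    -- to the archive table ('Y' or 'E') or simply deleted ('N').
    SET SYSIBMADM.MOVE_TO_ARCHIVE = 'Y';
    DELETE FROM PROD.ORDERS WHERE STATUS = 'X';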

Of course, there are more details to learn about this capability, but remember, we are just touching on the highlights today!

Another notable feature that will interest many DBAs is the ability to use SQL to query more DB2 Directory tables. The list of DB2 Directory tables which now can be accessed via SQL includes:
  • SYSIBM.DBDR
  • SYSIBM.SCTR
  • SYSIBM.SPTR
  • SYSIBM.SYSLGRNX
  • SYSIBM.SYSUTIL
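
For example, you can now investigate utility status or log range information with simple queries instead of having to run utilities or format dumps. A couple of trivial, hedged examples (consult the documentation for the column layouts):

    -- What utilities are currently known to DB2?
    SELECT * FROM SYSIBM.SYSUTIL;

    -- How many log range entries have accumulated?
    SELECT COUNT(*) FROM SYSIBM.SYSLGRNX;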

Another regular area of improvement for new DB2 versions is enhanced IBM DB2 Utilities, and DB2 11 is no exception to the rule. DB2 11 brings the following improvements:

  • REORG – automated mapping tables (where DB2 takes care of the allocation and removal of the mapping table during a SHRLEVEL CHANGE reorganization), online support for REORG REBALANCE, automatic cleanup of empty partitions for PBG table spaces, LISTPARTS for controlling parallelism, and improved switch phase processing.
  • RUNSTATS – additional zIIP processing, RESET ACCESSPATH capability to reset existing statistics, and improved inline statistics gathering in other utilities.
  • LOAD – additional zIIP processing, multiple partitions can be loaded in parallel using a single SYSREC and support for extended RBA LRSN.
  • REPAIR – new REPAIR CATALOG capability to find and correct for discrepancies between the DB2 Catalog and database objects.
  • DSNACCOX – performance improvements

Additionally, there is a new command to externalize Real Time Statistics. You can use ACCESS DATABASE … MODE(STATS) instead of stopping and starting a database object or forcing a system checkpoint to externalize RTS.

DB2 11 also delivers a bevy of new security-related enhancements, including:
  • Better coordination between DB2 and RACF, including new installation parameters (AUTHEXIT_CHECK and AUTHEXIT_CACHEREFRESH) and the ability for DB2 to capture event notifications from RACF
  • New PROGAUTH bind plan option to ensure the program is authorized to use the plan.
  • The ability to create MASKs and PERMISSIONs on archive tables and archive-enabled tables (a quick example follows this list)
  • Column masking restrictions are removed for GROUP BY and DISTINCT processing
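
As a quick, hypothetical illustration of the mask enhancement (the table, column, and RACF group names are made up):

    -- Mask the SALARY column on an archive table so that only members of
    -- the PAYROLL group see real values; everyone else sees zero.
    CREATE MASK PROD.SALARY_MASK ON PROD.EMP_ARCH
      FOR COLUMN SALARY
      RETURN CASE
               WHEN VERIFY_GROUP_FOR_USER(SESSION_USER, 'PAYROLL') = 1
                 THEN SALARY
               ELSE 0
             END
      ENABLE;

    -- Masks do not take effect until column access control is activated.
    ALTER TABLE PROD.EMP_ARCH ACTIVATE COLUMN ACCESS CONTROL;
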
Online schema changes are still being introduced in new versions of DB2, and DB2 11 offers up some nice functionality in this realm. Perhaps the most interesting new capability is DROP COLUMN. Dropping a column from an existing table has always been a difficult task requiring dropping and recreating the table (and all related objects and security), so most DBAs just left unused and unneeded columns in the table. This can cause confusion and data integrity issues if the columns are used by programs and end users. Now, DROP COLUMN can be used (as long as the table is in a UTS). Of course, there are some other restrictions on its use, but this capability may help many DBAs clean up unused columns in DB2 tables (a quick example follows).
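
A minimal sketch, with hypothetical names (the drop is a pending change that is materialized by a subsequent REORG, and it will fail if the column is referenced by views, indexes, and so on):

    -- Drop an obsolete column from a table in a universal table space.
    ALTER TABLE PROD.CUSTOMER
      DROP COLUMN OLD_FAX_NO RESTRICT;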

An additional online schema change capability in DB2 11 is support for online altering of limit keys, which enables DBAs to change the limit keys for a partitioned table space without impacting data availability. A quick sketch follows.
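
Again, a hypothetical example; depending on the table space type, the change is either immediate or pending until the affected partitions are reorganized:

    -- Change the limit key (high value) for partition 3 of a
    -- range-partitioned table.
    ALTER TABLE PROD.ORDERS
      ALTER PARTITION 3 ENDING AT (499999);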

Finally, in terms of online schema change, we have an improvement to operational administration for deferred schema changes. DB2 11 provides improved recovery for deferred schema changes. With DB2 10, when a REORG begins to materialize pending changes, it is no longer possible to perform a recovery to a prior point in time. DB2 11 removes this restriction, allowing recovery to any valid prior point.

In terms of Buffer Pool enhancements, DB2 11 offers up the new 2GB frame size for very large BP requirements.

In terms of Data Sharing enhancements, DB2 11 offers faster CASTOUT, improved RESTART LIGHT capability, and automatic recovery of all pages in LPL during a DB2 restart.

Analytics and Big Data Features

There are also a lot of features added to DB2 11 to support Big Data and analytical processing. Probably the biggest is the ability to support Hadoop access. If you don’t know what Hadoop is, this is not the place to learn about that. Instead, check out this link.

Anyway, DB2 11 can be used to enable applications to easily and efficiently access Hadoop data sources. This is done via the generic table UDF capability in DB2 11. Using this feature, you can create a user-defined function whose output table has a variable shape.

This capability allows access to BigInsights, which is IBM’s Hadoop-based platform for Big Data. As such, you can use JSON to access Hadoop data via DB2 using the UDF supplied by IBM BigInsights.

DB2 11 also adds new SQL analytical extensions, including:
  • GROUPING SETS can be used for GROUP BY operations to enable multiple grouping clauses to be specified in a single statement.
  • ROLLUP can be used to aggregate values along a dimension hierarchy. In addition to aggregation along the dimensions a grand total is produced. Multiple ROLLUPs can be coded in a single query to produce multidimensional hierarchies in a result set.
  • CUBE can be used to aggregate data based on columns from multiple dimensions. You can think of it like a cross tabulation.
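
A short sketch of what these look like in a query, using a hypothetical sales table:

    -- GROUPING SETS: subtotals by region, subtotals by product, and a
    -- grand total, all in a single pass.
    SELECT REGION, PRODUCT, SUM(AMOUNT) AS TOTAL_AMT
    FROM   PROD.SALES
    GROUP BY GROUPING SETS ((REGION), (PRODUCT), ());

    -- ROLLUP: totals at each level of the REGION, PRODUCT hierarchy,
    -- plus a grand total.
    SELECT REGION, PRODUCT, SUM(AMOUNT) AS TOTAL_AMT
    FROM   PROD.SALES
    GROUP BY ROLLUP (REGION, PRODUCT);

    -- CUBE: totals for every combination of REGION and PRODUCT
    -- (a cross-tabulation).
    SELECT REGION, PRODUCT, SUM(AMOUNT) AS TOTAL_AMT
    FROM   PROD.SALES
    GROUP BY CUBE (REGION, PRODUCT);
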
And finally, a new version (V3) of the IBM DB2 Analytics Accelerator (IDAA) is part of the mix, too. IDAA V3 brings about improvements such as:
  • The ability to store 1.3 PB of data
  • Change Data Capture support to capture changes to DB2 data and propagate them to IDAA as they happen
  • Additional SQL function support for IDAA queries (including SUBSTRING, among others, and additional OLAP functions).
  • Work Load Manager integration

Other "Stuff"

Of course, there are additional features and functionality being introduced with DB2 11 for z/OS. A blog entry of this nature on the day of GA cannot exhaustively cover everything. That being said, two additional areas are worth noting.
  • Extended log record addressing – increases the size of the RBA and LRSN from 6 bytes to 10 bytes. This avoids the outage that is required if the amount of log records accumulated exhausts the capability of DB2 to create new RBAs or LRSNs. To move to the new extended log record addressing requires converting your BSDSs.
  • DRDA enhancements – including improved client info properties, new FORCE option to cancel distributed threads, and multiple performance related improvements.

Summary

DB2 11 for z/OS brings with it a bevy of interesting and useful new features. They run the gamut from development to administration to performance to integration with Big Data. Now that DB2 11 is out in the field and available for organizations to start using, the time has come for all DB2 users to take some time to learn what DB2 11 can do.

Wednesday, June 01, 2011

Mainframe Specialty Processors

Anyone who uses an IBM System z mainframe has probably heard about zIIPs, zAAPs, and other specialty processors. But maybe you haven't yet done any real investigation into what they are, what they do, and why they exist. So, with that in mind, let's take a brief journey into the world of specialty processors in today's blog entry!

Over the course of the past decade or so, IBM has introduced several different types of specialty processors. The basic idea of a specialty processor is that it sits alongside the main CPUs, and specific types of "special" workload are shuttled to the specialty processor to be run there, instead of on the primary CPU complex. Why is this useful or interesting to mainframe customers? Well, specialty processor workload is not subject to IBM (and most ISV) licensing charges... and, as any mainframer knows, the cost of software rises as capacity on the mainframe rises. But if capacity can be redirected to a specialty processor, then software license charges do not accrue -- at least for that workload.

And for VWLC customers, shuttling workload to a specialty processor can reduce the rolling four hour average and thereby decrease your monthly IBM software license bill.

Another benefit of the specialty processors is that they can be cheaper to acquire than standard CPUs.

But specialty processors can only run certain types of workloads. There are four types of specialty processors:

  • ICF: Internal Coupling Facility - used for redirecting coupling facility cycles in a data sharing environment.
  • IFL: Integrated Facility for Linux - used for processing zLinux workload on an IBM mainframe.
  • zAAP: Application Assist Processor - used for Java workloads.
  • zIIP: Integrated Information Processor - used for processing certain distributed database workloads.

When you activate any of these processors, some percentage of that type of workload can be redirected off of the main CP onto the specialty processor... but not 100% of the workload. It can be frustrating, particularly with the zIIP, to determine exactly what is redirected exactly when and exactly how much of it. In general, distributed DB2 for z/OS workload and XML processing can be redirected to zIIP processors.

Additionally, to run on a zIIP, the workload must run under an enclave SRB. So, code written to execute under a TCB will usually be unable to execute under an SRB without major changes. If you didn't understand that sentence, don't worry about it too much. Basically, IBM has enabled certain types of (mostly DB2) workload to run on zIIPs, and ISVs have enabled some of their code to run on zIIPs, too. If you are interested, more details about zIIPs can be found at this link.

Another interesting tidbit is that zAAP-eligible workloads can be run on zIIPs with IBM's zAAP on zIIP support. This can be a boon to some shops that only have zIIPs and no zAAPs. Now, with zAAP on zIIP support, you can use zIIP processors for both Java and distributed DB2 workloads. The combined eligible TCB and enclave SRB workloads might make the acquisition of a zIIP cost effective. This capability also provides more value for customers having only zIIP processors by making Java- and XML-based workloads eligible to run on existing zIIPs.

To take advantage of zAAP on zIIP, you need to be running z/OS V1.11 (or z/OS V1.9 or V1.10 with the PTFs for APAR OA27495) on a z9, z10, or z196 server.

Keep in mind that the terms for specialty processors do not change. You can only have 1 zAAP and 1 zIIP for each general purpose processor. So, even if you have zAAP on zIIP configured, the chip is still a zIIP and you cannot have any more than 1 per general purpose processor.

The Bottom Line

The bottom line is that even though it can take some studying and research to understand their benefit and functionality, specialty processors can help to reduce the cost of mainframe computing... and that is a good thing!

What is an Enclave?

If you are a DB2 professional dealing with distributed workload… or if you are enabling zIIP specialty processors… chances are you’ve heard the term “enclave” or “enclave SRB.” But just what is an enclave?

An enclave is a construct that represents a transaction or unit of work. Enclaves are a method of managing mainframe transactions for non-traditional workloads. You can think of an enclave as an anchor point for resource accumulation regardless of where the transaction is executing.

With traditional workloads it is relatively easy to map the resources consumed to the actual transaction doing the consumption. But with non-traditional workloads – web transactions, distributed processing, etc. – it is more difficult because the transaction can span platforms. Enclaves are used to overcome this difficulty by correlating closely to the end user’s view of the transaction.

So even though a non-traditional transaction can comprise multiple “pieces” spanning many server address spaces, and can share those address spaces with other transactions, the enclave gives you more effective control over the non-traditional workload.

If you are interested in more details on enclaves and how they are managed, read through Enclaves – Managing Business Transactions from IBM’s RMF Newsletter.

Wednesday, January 20, 2010

On Specialty Processors

Today's blog entry is kind of a meta entry.

As many of my readers know I write two blogs, this one that is predominantly about DB2 for z/OS, and the Data Management Today blog, that focuses on data-related issues, database management, and industry trends.

Well, I've written a series of entries on my other blog about specialty processors (zIIPs, zAAPs, etc.) and related issues. Since DB2 for z/OS folks should find that information useful and relevant, I thought I'd write an entry pointing my readers here to the pertinent specialty processor entries at my Data Management Today blog, so here goes...

The first entry I'd like to call your attention to is titled Specialty Processors on the Mainframe. This piece is basically an introduction to the different types of specialty processors (zIIP, zAAP, IFL, ICF) and what they can be used for. It is a good place to start if you are new to specialty processors or are looking for an update.

The second entry worth a peek is titled simply zAAP on zIIP. As you may or may not know, IBM delivered the capability for zAAP work to run on a zIIP (with certain conditions). This blog entry provides a brief synopsis of the August 18, 2009 z/OS V1.11 announcement that introduced the capability to enable zAAP-eligible workloads to run on zIIPs.

Next up is What is an Enclave? If you are working with zIIPs you have probably heard the term Enclave SRB. And if you are doing any type of distributed workload you've probably heard about enclaves, too. This blog entry is for those who are new to the term, or are confused about it. It offers an explanatory definition of the term "enclave" and points you on to additional reference material for those interested.

My most recent post over there (January 19, 2010), titled What is Generosity Factor?, has been a popular one. This blog entry delves into the generosity factor imposed upon zIIP workload, including definitions of generosity factor, qualified and eligible work, and a discussion of what it implies for ISV products.

Hope you find this material worthwhile... and thanks for your continued support of my blogs.