Friday, February 24, 2017

The DB2 12 for z/OS Blog Series - Part 5: Enhancements for Managing Dynamic and Static SQL

Most modern application development is done using dynamic SQL. But some features work only with static SQL and others only with dynamic SQL. DB2 12 for z/OS delivers functionality that minimizes the difference between static and dynamic SQL.

Dynamic plan stability brings the plan stability feature of static SQL to dynamic SQL. With plan stability for static SQL, you can use the PLANMGMT parameter of REBIND to save old access paths, which can be switched back into use if the new access paths prove inefficient for any reason.
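For example (the collection and package names here are purely illustrative), a package could be rebound with extended plan management and then switched back if its new access paths regress:

  REBIND PACKAGE(MYCOLL.MYPKG) PLANMGMT(EXTENDED)

  REBIND PACKAGE(MYCOLL.MYPKG) SWITCH(PREVIOUS)

The first REBIND saves the prior copies of the access paths; the second backs out to the previous copy.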

DB2 12 introduces dynamic plan stability, which is a little different but should prove to be quite useful. You can choose to stabilize dynamic SQL access paths by storing them in the DB2 system catalog. DB2 will not just automatically start to capture all dynamic access paths. There are options for selecting which queries to stabilize, so you do not have to store all dynamic SQL. This is controlled by the command:

 -START DYNQUERYCAPTURE

When a dynamic statement is run, DB2 will look in the dynamic statement cache (DSC) first. If the statement is not found there, DB2 will look in the system catalog before resorting to a full prepare. This is particularly useful when statements are flushed from the DSC and a prepare would otherwise be required.

You can group a set of stabilized SQL statements under a label known as a stabilization group. This makes it easier for DBAs to track and manage stabilized queries.
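As a rough sketch of how that looks (STBLGRP and THRESHOLD are the keywords as I understand them from the command reference, and the group name is made up), you might capture frequently executed queries into a named group like this:

  -START DYNQUERYCAPTURE STBLGRP(PAYROLL) THRESHOLD(10)

The idea is that only dynamic statements executed at least the threshold number of times get stabilized into the PAYROLL group.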


So dynamic plan stability can make your dynamic SQL more static. But there is another new DB2 12 capability that can make your static SQL more dynamic: static Resource Limit Facility (RLF) support. The RLF, heretofore, could only be used to govern dynamic SQL statements. RLF tables, maintained by DBAs, contain limits that ensure dynamic SQL statements do not consume too many resources, such as CPU, I/O, and locks. This prevents improperly tested SQL or poor resource planning from disrupting performance.

But dynamic SQL is not the only type of SQL that can cause performance issues; static SQL transactions can benefit from the same controls. DB2 12 extends the RLF to support static SQL statements, thereby making it easier to prevent problem applications from dominating your system’s resource consumption.

Specifically, you can set up reactive governing for static SQL statements by adding new rows in resource limit facility tables. Static SQL statements will be governed by rows where RLFFUNC='A' (for DSNRLSTxx tables) and RLFFUNC='B' (for DSNRLMTxx tables).
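To make that a bit more concrete, here is a minimal sketch of such a row (the RLST table name, authorization ID, collection, and package are all invented; the column names follow the standard resource limit table layout):

  INSERT INTO SYSADM.DSNRLST01
         (RLFFUNC, AUTHID, RLFCOLLN, RLFPKG, ASUTIME)
  VALUES ('A', 'BATCHID', 'PAYROLL_COLL', 'PAYROLL_PKG', 100000);

This would reactively limit the named static package to roughly 100,000 service units; the limits take effect when the resource limit facility is started (or restarted) with the -START RLIMIT command.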

You can control whether resource limits apply to dynamic only, static only or all SQL statements using the DSNZPARM RLFENABLE. Acceptable values are DYNAMIC, STATIC, or ALL and the default is DYNAMIC. Furthermore, you can specify the default resource limit actions for static SQL statements by setting two DSNZPARMS:
  • RLFERRSTC for local statements
  • RLFERRDSTC for remote statements


In each case, the acceptable values are NOLIMIT, NORUN, or a number between 1 and 500000. Use NOLIMIT to let any static SQL statement that does not correspond to a row in the resource limit table run freely. Alternatively, NORUN indicates that any static SQL statement that does not correspond to a row in the resource limit table is prevented from running. Finally, if you specify a number between 1 and 500000, that number is used as the default resource limit, in service units. If the limit is exceeded, the SQL statement is terminated.

If you need more details on setting up resource limits in general, you can find that information in the IBM DB2 manuals.

But the bottom line is that DB2 12 gives users more and better options for managing both their dynamic and static SQL performance.  And that is definitely a good thing!

Monday, February 13, 2017

The DB2 12 for z/OS Blog Series - Part 4: Real Time Statistics History

For those uninitiated, Real Time Statistics (RTS) are a series of statistics that are automatically maintained by DB2 and can be used by DBAs (and tools) to schedule maintenance tasks like REORG and COPY.

Prior to the introduction of RTS, the only way to gather statistics about DB2 database structures was by running the RUNSTATS utility. RUNSTATS collects statistical information about DB2 database objects and stores this data in the DB2 Catalog. There are two types of statistics collected by RUNSTATS: data used by the optimizer to formulate efficient access paths and data used by DBAs to determine when maintenance tasks should be run. The optimization statistics are still required, and therefore, so is RUNSTATS. The maintenance statistics, on the other hand, can be ignored and replaced by RTS.

RTS runs in the background and automatically updates statistics in two system catalog tables (one for table space data and one for index data) as your DB2 data is modified. This differs from RUNSTATS, which is a hands-on administrative process. RTS is hands-off.
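For example, a DBA (or a monitoring tool) might use a query like the following to spot table spaces with heavy change activity since their last REORG; this is just a minimal sketch and the database name is illustrative:

  SELECT DBNAME, NAME, TOTALROWS,
         REORGINSERTS + REORGUPDATES + REORGDELETES AS CHANGES_SINCE_REORG
    FROM SYSIBM.SYSTABLESPACESTATS
   WHERE DBNAME = 'MYDB'
   ORDER BY CHANGES_SINCE_REORG DESC;
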
So what is new with RTS in DB2 12 for z/OS?

The most significant enhancement is the ability to maintain RTS history using DB2’s system-time temporal capabilities. When history is enabled, both SYSIBM.SYSINDEXSPACESTATS and SYSIBM.SYSTABLESPACESTATS get a companion history table, with "_H" at the end of its name, that has the same columns with the same data types. But the temporal relationship is not automatically enabled; you have to choose to do so.

To enable the temporal relationship between a history table and its associated catalog table you must issue an ALTER statement to “hook up” the relationship. Here are the DDL statements for each of the RTS tables:

  ALTER TABLE SYSIBM.SYSINDEXSPACESTATS
  ADD VERSIONING
  USE HISTORY TABLE SYSIBM.SYSIXSPACESTATS_H;

  ALTER TABLE SYSIBM.SYSTABLESPACESTATS
  ADD VERSIONING
  USE HISTORY TABLE SYSIBM.SYSTABSPACESTATS_H;

Before turning on RTS history you should take care to build a process for maintaining, consolidating, and purging historical information. Remember that each time the STATSINT interval is reached, new RTS data will be written, causing the old information to be written to the system-time temporal history table. That history table will grow over time, so you will want a plan for managing that growth. Perhaps you will want to aggregate the data daily, weekly, or monthly; or maybe you will just want to purge the data after a period of time. At any rate, your strategy for managing this data should be worked out before you start collecting RTS history.

Why would you want to record historical RTS information? Well, such details can be useful for analyzing activity and predicting future requirements. With historical RTS you can see trends and might be able to improve performance, maintenance tasks, and service to your end users. And, of course, this means that you can write SQL time travel queries against the RTS system catalog tables.
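For example (a minimal sketch, using an arbitrary look-back period and an illustrative database name), once versioning is enabled you can ask what the statistics looked like a week ago using standard time travel syntax:

  SELECT DBNAME, NAME, TOTALROWS, EXTENTS
    FROM SYSIBM.SYSTABLESPACESTATS
         FOR SYSTEM_TIME AS OF CURRENT TIMESTAMP - 7 DAYS
   WHERE DBNAME = 'MYDB';

DB2 transparently retrieves the older rows from the history table as needed.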

You can always turn off historical RTS collection by severing the temporal relationship. This is accomplished by issuing ALTER TABLE specifying the DROP VERSIONING clause on the appropriate table. For example:

  ALTER TABLE SYSIBM.SYSTABLESPACESTATS
  DROP VERSIONING;

At any rate, the ability to use DB2’s temporal capabilities to capture RTS history is a powerful new feature of DB2 12 for z/OS that DBAs and performance analysts will want to take advantage of.


One other DB2 12 change to RTS worth noting is the addition of the GETPAGES column to both the table space and index space statistics. This column records the number of GETPAGE requests for the object since the last time REORG was run (or since the object was created).

Tuesday, February 07, 2017

The DB2 12 for z/OS Blog Series - Part 3: Temporal Improvements

Temporal tables and time-travel query support were added in DB2 10 for z/OS. With temporal tables, a time period is attached to the data to indicate when it was valid or changed in the database. A traditional database stores data implied to be valid at the current point in time; it does not track the past or future states of the data. Temporal support makes it possible to store different database states and to query the data as of those different states. That means different DDL to support temporal data, as well as different SQL syntax to query it.

Using built-in DB2 temporal capabilities is much easier than coding for it yourself, but as with most new features there were some additional tweaks that customers asked for. DB2 12 for z/OS delivers on making it easier for organizations to adopt and use temporal DB2 data and queries by supporting:
  • temporal referential constraints, 
  • more flexible time periods, and 
  • logical transactions.


Let’s examine each of these areas in a bit more depth. First up: temporal referential constraints. Prior to DB2 12, implementing temporal referential integrity required triggers or stored procedures. With DB2 12, you can define a temporal referential constraint for a temporal table that contains a BUSINESS_TIME period by specifying the PERIOD BUSINESS_TIME clause in the definition of the constraint. This should improve data integrity for temporal tables and increase temporal uptake among DB2 customers.
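Here is a minimal sketch of what such a constraint might look like, assuming an illustrative child table POLICY_COVERAGE and parent table POLICY that both carry a BUSINESS_TIME period:

  ALTER TABLE POLICY_COVERAGE
    ADD CONSTRAINT FK_POLICY
    FOREIGN KEY (POLICY_ID, PERIOD BUSINESS_TIME)
    REFERENCES POLICY (POLICY_ID, PERIOD BUSINESS_TIME);

For this to work, the parent table must have a primary key (or unique constraint) defined with BUSINESS_TIME WITHOUT OVERLAPS.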

The second new temporal feature is improved flexibility for defining the application period in temporal tables. The application period is defined with two date/time columns, one specifying the beginning of the period and the other specifying the end of the period. Prior to DB2 12, the beginning value of a period had to be inclusive, and the end value had to be exclusive. So for a period beginning at 2017-01-01 and ending 2019-10-01, 2017-01-01 is part of the period but 2019-10-01 is not part of the period. This type of period is called an inclusive-exclusive period.
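Continuing the illustrative POLICY example from above, a typical application period with the traditional inclusive-exclusive semantics would be defined something like this (the columns are, again, made up):

  CREATE TABLE POLICY
    (POLICY_ID  INTEGER      NOT NULL,
     PREMIUM    DECIMAL(9,2) NOT NULL,
     BUS_START  DATE         NOT NULL,
     BUS_END    DATE         NOT NULL,
     PERIOD BUSINESS_TIME (BUS_START, BUS_END),
     PRIMARY KEY (POLICY_ID, BUSINESS_TIME WITHOUT OVERLAPS));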

In DB2 12, you can create an application-period temporal table with a BUSINESS_TIME period that is inclusive-inclusive. That means that the end value is considered to be part of the period, instead of outside the period definition.

Generally speaking, the inclusive-exclusive definition is preferred (and is the default) because it complies with the SQL standard. However, if your data already exists as inclusive-inclusive, or it makes more sense to your business users, then DB2 12 support for inclusive-inclusive will make it easier for your organization to use DB2 temporal tables and time travel queries.

Finally, DB2 12 offers temporal logical transactions on system temporal tables (that is, those using SYSTEM_TIME periods). With this new feature, DB2 supports logical units of temporal work that are not determined by COMMIT and ROLLBACK. That means that values for row-begin and row-end columns are determined by applications based on a built-in global variable that you can set.
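As I understand the DB2 12 documentation, the built-in global variables involved are SYSIBM.TEMPORAL_LOGICAL_TRANSACTIONS (to enable the behavior) and SYSIBM.TEMPORAL_LOGICAL_TRANSACTION_TIME (to supply the value recorded in the row-begin columns). A minimal sketch:

  SET SYSIBM.TEMPORAL_LOGICAL_TRANSACTIONS = 1;
  SET SYSIBM.TEMPORAL_LOGICAL_TRANSACTION_TIME = CURRENT TIMESTAMP;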


So overall, DB2 12 for z/OS makes using temporal tables easier and more flexible with these three compelling new features.

Wednesday, February 01, 2017

The DB2 12 for z/OS Blog Series - Part 2: Advanced Triggers

As we continue our series on new functionality in DB2 12 for z/OS, today we take a look at the improvements delivered for triggers.

Before we dive into the new capabilities, let's briefly review what triggers are. Triggers are event-driven specialized procedures that are stored in, and executed by, the DBMS. Each trigger is attached to a single, specified table. Triggers can be thought of as an advanced form of "rule" or "constraint" written using procedural logic. A trigger cannot be directly called or executed; it is automatically executed (or "fired") by the DBMS as the result of a modification being made to the associated table. Once a trigger is created, it is always executed when its "firing" event occurs (update, insert, delete, etc.). Triggers are generally used to promote data integrity and make the database self-maintaining.

So far, so good. But what does DB2 12 provide us that we did not have in prior releases?

Well, now we have a new type of trigger called an advanced trigger. All this really means is that the trigger supports new capabilities, which I will outline momentarily. Triggers created before you have moved to function level 500 (new terminology for DB2 12 that roughly equates to new-function mode, or NFM, in earlier releases) are referred to as basic triggers.

So what makes a trigger advanced? Simply put, support for additional development and usability capabilities. The following features are supported only with advanced triggers:

  • You can define and reference SQL variables in the trigger
  • More types of SQL statements can be used, including dynamic SQL statements
  • SQL PL control statements can be used in the trigger
  • It is possible to reference global variables, and to assign values to global variables
  • You can explicitly specify options, including BIND options
  • Comments are supported in SQL statements
  • You can include definitions for multiple versions of the trigger
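To make that concrete, here is a rough sketch of an advanced trigger that uses an SQL variable and an IF control statement (the tables, columns, and threshold are invented for illustration; running something like this through SPUFI requires an alternate statement terminator for the semicolons inside the body):

  CREATE TRIGGER SALARY_AUDIT_TRG
    AFTER UPDATE OF SALARY ON EMP
    REFERENCING OLD AS O NEW AS N
    FOR EACH ROW
  BEGIN
    -- an SQL variable declared in the trigger body
    DECLARE V_PCT DECIMAL(7,2);
    SET V_PCT = (N.SALARY - O.SALARY) * 100 / O.SALARY;
    -- SQL PL control logic: only audit large increases
    IF V_PCT > 10 THEN
      INSERT INTO SALARY_AUDIT (EMPNO, OLD_SALARY, NEW_SALARY, PCT_CHANGE, CHANGED_AT)
        VALUES (N.EMPNO, O.SALARY, N.SALARY, V_PCT, CURRENT TIMESTAMP);
    END IF;
  END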

These are all good things. But there are some additional changes that you need to be aware of with advanced triggers. All transition variables are nullable in advanced triggers. And debugging options are available. Perhaps the most significant change though is the support for a more robust ALTER TRIGGER statement that can be used to change trigger options or regenerate the trigger body. Only a very rudimentary ALTER capability is available for basic triggers.

Another nice capability for advanced triggers is that CREATE TRIGGER now provides the OR REPLACE clause. This lets the developer use one CREATE statement to define a new trigger or update an existing trigger, depending on whether the trigger already exists. The OR REPLACE option can also be used with a CREATE TRIGGER statement to define a new version of a trigger, or to replace an existing version of a trigger.


A nice chart of the behavioral differences between basic and advanced triggers can be found in the IBM documentation, if you are interested in additional details.

The bottom line is that for shops that rely on triggers, or that are looking to use more triggers, the set of new functionality offered in DB2 12 for z/OS greatly expands the usefulness and usability of triggers.