Monday, February 13, 2017

The DB2 12 for z/OS Blog Series - Part 4: Real Time Statistics History

For those unfamiliar with Real Time Statistics (RTS), they are a set of statistics that are automatically maintained by DB2 and can be used by DBAs (and tools) to schedule maintenance tasks like REORG and COPY.

Prior to the introduction of RTS, the only way to gather statistics about DB2 database structures was by running the RUNSTATS utility. RUNSTATS collects statistical information about DB2 database objects and stores this data in the DB2 Catalog. There are two types of statistics collected by RUNSTATS: data used by the optimizer to formulate efficient access paths and data used by DBAs to determine when maintenance tasks should be run. The optimization statistics are still required, and therefore, so is RUNSTATS. The maintenance statistics, on the other hand, can be ignored and replaced by RTS.

RTS runs in the background and automatically updates statistics in two system catalog tables (one for table space data and one for index data) as your DB2 data is modified. This differs from RUNSTATS, which is a hands-on administrative process. RTS is hands-off.
So what is new with RTS in DB2 12 for z/OS?

The most significant enhancement is the ability to maintain RTS history using DB2’s system-time temporal capabilities. When history is enabled, both SYSIBM.SYSINDEXSPACESTATS and SYSIBM.SYSTABLESPACESTATS will have a companion history table, named with "_H" on the end, that has the same columns with the same data types. But the temporal relationship is not automatically enabled; you have to choose to enable it.

To enable the temporal relationship between a history table and its associated catalog table, you must issue an ALTER statement to “hook up” the relationship. Here are the DDL statements for each of the RTS tables:

  ALTER TABLE SYSIBM.SYSINDEXSPACESTATS
  ADD VERSIONING
  USE HISTORY TABLE SYSIBM.SYSIXSPACESTATS_H;

  ALTER TABLE SYSIBM.SYSTABLESPACESTATS
  ADD VERSIONING
  USE HISTORY TABLE SYSIBM.SYSTABSPACESTATS_H;

Before turning on RTS history you should take care to build a process for maintaining, consolidating, and purging historical information. Remember that each time the STATSINT interval is reached, new RTS data will be written, causing the old information to be written to the system-time temporal history table. That history table will grow over time, so you will want to have a plan for how to manage that growth. Perhaps you will want to aggregate the data daily, or weekly, or monthly. Or maybe you will just want to purge the data after a period of time. At any rate, your strategy for managing this data should be worked out before you decide to start collecting RTS history.
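
As one possible sketch of a purge approach (the 90-day retention is arbitrary, and this keys off the UPDATESTATSTIME column that the history tables inherit from the base tables rather than the temporal row-end column, so adapt it to your own standards):

  -- Sketch only: purge history rows older than 90 days, keyed on UPDATESTATSTIME
  DELETE FROM SYSIBM.SYSTABSPACESTATS_H
  WHERE UPDATESTATSTIME < CURRENT TIMESTAMP - 90 DAYS;

  DELETE FROM SYSIBM.SYSIXSPACESTATS_H
  WHERE UPDATESTATSTIME < CURRENT TIMESTAMP - 90 DAYS;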

Why would you want to record historical RTS information? Well, such details can be useful for analyzing activity and predicting future requirements. With historical RTS you can see trends and might be able to improve performance, maintenance tasks, and service to your end users. And, of course, this means that you can write SQL time travel queries against the RTS system catalog tables.
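
For instance, a time travel query against the table space statistics might look something like this sketch (the database and table space names are placeholders, and the columns selected are just a sampling):

  -- Review how a table space changed during January 2017 (placeholders: MYDB, MYTS)
  SELECT DBNAME, NAME, PARTITION, TOTALROWS, NACTIVE, EXTENTS, UPDATESTATSTIME
  FROM SYSIBM.SYSTABLESPACESTATS
    FOR SYSTEM_TIME FROM TIMESTAMP('2017-01-01-00.00.00')
                      TO TIMESTAMP('2017-02-01-00.00.00')
  WHERE DBNAME = 'MYDB'
    AND NAME = 'MYTS'
  ORDER BY PARTITION, UPDATESTATSTIME;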

You can always turn off historical RTS collection by severing the temporal relationship. This is accomplished by issuing ALTER TABLE specifying the DROP VERSIONING clause on the appropriate table. For example:

  ALTER TABLE SYSIBM.SYSTABLESPACESTATS
  DROP VERSIONING;

At any rate, the ability to use DB2’s temporal capabilities to capture RTS history is a powerful new feature of DB2 12 for z/OS that DBAs and performance analysts will want to take advantage of.


One other DB2 12 change to RTS worth noting is the addition of the GETPAGES column to both the table space and index space stats. This column records the number of GETPAGE requests for the object since the last time REORG was run (or since the object was created).
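
For example, a quick (hypothetical) query to see which table spaces in a given database have the most getpage activity since their last REORG might look like this:

  SELECT DBNAME, NAME, PARTITION, GETPAGES
  FROM SYSIBM.SYSTABLESPACESTATS
  WHERE DBNAME = 'MYDB'          -- placeholder database name
  ORDER BY GETPAGES DESC;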

Tuesday, February 07, 2017

The DB2 12 for z/OS Blog Series - Part 3: Temporal Improvements

Temporal tables and time-travel query support were added in DB2 10 for z/OS. With temporal tables, a time period is attached to the data to indicate when it was valid or changed in the database. A traditional database stores data implied to be valid at the current point-in-time; it does not track the past or future states of the data. Temporal support makes it possible to store different database states and to query the data as of those different states. That means different DDL to support temporal data, as well as different SQL syntax to query it.

Using built-in DB2 temporal capabilities is much easier than coding for it yourself, but as with most new features there were some additional tweaks that customers asked for. DB2 12 for z/OS delivers on making it easier for organizations to adopt and use temporal DB2 data and queries by supporting:
  • temporal referential constraints, 
  • more flexible time periods, and 
  • logical transactions.


Let’s examine each of these areas in a bit more depth. First up, let’s take a look at temporal referential constraints. Prior to DB2 12, temporal referential integrity required triggers or stored procedures to implement. But with DB2 12, you can define a temporal referential constraint for a temporal table that contains a BUSINESS_TIME period by specifying the PERIOD BUSINESS_TIME clause in the definition of the constraint. This should improve data integrity for temporal tables and increase temporal uptake for DB2 customers.
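
As a rough sketch, assuming hypothetical POLICY and POLICY_DETAIL tables that already have BUSINESS_TIME periods defined (and a unique key on the parent specifying BUSINESS_TIME WITHOUT OVERLAPS), the constraint might be coded something like this:

  -- Hypothetical tables; the key point is PERIOD BUSINESS_TIME in the constraint
  ALTER TABLE POLICY_DETAIL
  ADD CONSTRAINT FK_POLICY
  FOREIGN KEY (POLICY_ID, PERIOD BUSINESS_TIME)
  REFERENCES POLICY (POLICY_ID, PERIOD BUSINESS_TIME);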

The second new temporal feature is improved flexibility for defining the application period in temporal tables. The application period is defined with two date/time columns, one specifying the beginning of the period and the other specifying the end of the period. Prior to DB2 12, the beginning value of a period had to be inclusive, and the end value had to be exclusive. So for a period beginning at 2017-01-01 and ending 2019-10-01, 2017-01-01 is part of the period but 2019-10-01 is not part of the period. This type of period is called an inclusive-exclusive period.

In DB2 12, you can create an application-period temporal table with a BUSINESS_TIME period that is inclusive-inclusive. That means that the end value is considered to be part of the period, instead of outside the period definition.

Generally speaking, the inclusive-exclusive definition is preferred (and is the default) because it complies with the SQL standard. However, if your data already exists as inclusive-inclusive, or it makes more sense to your business users, then DB2 12 support for inclusive-inclusive will make it easier for your organization to use DB2 temporal tables and time travel queries.

Finally, DB2 12 offers temporal logical transactions on system temporal tables (that is, those using SYSTEM_TIME periods). With this new feature, DB2 supports logical units of temporal work that are not determined by COMMIT and ROLLBACK. That means that values for row-begin and row-end columns are determined by applications based on a built-in global variable that you can set.
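
If memory serves, the built-in global variables involved are SYSIBM.TEMPORAL_LOGICAL_TRANSACTIONS and SYSIBM.TEMPORAL_LOGICAL_TRANSACTION_TIME; a rough sketch of using them follows (verify the names and details against the DB2 12 documentation before relying on them):

  -- Sketch only: enable temporal logical transactions for this application process
  SET SYSIBM.TEMPORAL_LOGICAL_TRANSACTIONS = 1;
  -- Establish the value DB2 will use for the row-begin/row-end columns
  SET SYSIBM.TEMPORAL_LOGICAL_TRANSACTION_TIME = CURRENT TIMESTAMP;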


So overall, DB2 12 for z/OS makes using temporal tables easier and more flexible with these three compelling new features.

Wednesday, February 01, 2017

The DB2 12 for z/OS Blog Series - Part 2: Advanced Triggers

As we continue our series on new functionality in DB2 12 for z/OS, today we take a look at the improvements delivered for triggers.

Before we dive into the new capabilities, let's briefly review what triggers are. Triggers are event-driven specialized procedures that are stored in, and executed by, the DBMS. Each trigger is attached to a single, specified table. Triggers can be thought of as an advanced form of "rule" or "constraint" written using procedural logic. A trigger cannot be directly called or executed; it is automatically executed (or "fired") by the DBMS as the result of a modification being made to the associated table. Once a trigger is created it is always executed when its "firing" event occurs (update, insert, delete, etc.). Triggers are generally used to promote data integrity and make the database self-maintaining.

So far, so good. But what does DB2 12 provide us that we did not have in prior releases?

Well, now we have a new type of trigger called an advanced trigger. All this really means is that the trigger supports new capabilities that I will outline momentarily. Triggers created before you have moved to function level 500 (new terminology for DB2 12 that roughly equates to NFM in earlier releases) will be referred to as basic triggers.

So what makes a trigger advanced? Simply put, support for additional development and usability capabilities. The following features are supported only with advanced triggers:

  • You can define and reference SQL variables in the trigger
  • More types of SQL statements can be used, including dynamic SQL statements
  • SQL PL control statements can be used in the trigger
  • It is possible to reference global variables, and to assign values to global variables
  • You can explicitly specify options, including BIND options
  • Comments are supported in SQL statements
  • You can include definitions for multiple versions of the trigger

These are all good things. But there are some additional changes that you need to be aware of with advanced triggers. All transition variables are nullable in advanced triggers. And debugging options are available. Perhaps the most significant change, though, is the support for a more robust ALTER TRIGGER statement that can be used to change trigger options or regenerate the trigger body. Only a very rudimentary ALTER capability is available for basic triggers.

Another nice capability for advanced triggers is that CREATE TRIGGER now provides the OR REPLACE clause. This lets the developer use one CREATE statement to define a trigger or update an existing trigger, depending on whether the trigger already exists. The OR REPLACE option can also be used with a CREATE TRIGGER statement to define a new version of a trigger, or to replace an existing version of a trigger.
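
To put a few of these capabilities together, here is a sketch of an advanced trigger (the EMP and SALARY_AUDIT tables, and the 10 percent threshold, are purely hypothetical):

  CREATE OR REPLACE TRIGGER SALARY_AUDIT_TRG
    AFTER UPDATE OF SALARY ON EMP
    REFERENCING OLD AS O NEW AS N
    FOR EACH ROW
  BEGIN
    -- SQL variables and SQL PL control statements are only allowed in advanced triggers
    DECLARE V_PCT DECIMAL(7,2) DEFAULT 0;
    IF O.SALARY > 0 THEN
      SET V_PCT = (N.SALARY - O.SALARY) * 100 / O.SALARY;
    END IF;
    IF V_PCT > 10 THEN
      INSERT INTO SALARY_AUDIT (EMPNO, OLD_SALARY, NEW_SALARY, PCT_CHANGE)
        VALUES (N.EMPNO, O.SALARY, N.SALARY, V_PCT);
    END IF;
  END

Note that because the trigger body contains semicolons, you would need to specify a different SQL statement terminator when creating it with a tool like SPUFI or DSNTEP2.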


A nice chart of the behavioral differences between basic and advanced triggers can be found here, if you are interested in additional details.

The bottom line is that for shops that rely on triggers, or that are looking to use more triggers, the set of new functionality offered in DB2 12 for z/OS greatly expands the usefulness and usability of triggers.

Tuesday, January 17, 2017

The DB2 12 for z/OS Blog Series - Part 1: SQL Pagination

Today’s blog post kicks off a new series of posts that examine new features and functionality delivered with the latest and greatest new version of DB2 for z/OS – Version 12.

We’ll begin with a nice new feature that should make it easier for programmers writing queries where the program needs to be able to deal with a subset of results: for example, to show the first 25 rows, then the next 25, and so on. This is a frequent requirement for the mobile and web applications that are so common these days.

This separation of rows into pieces can now be accomplished quite easily in DB2 12 with the new OFFSET syntax. OFFSET is specified as a clause of the SELECT statement and it is used to specify the number of rows in the result table that will be skipped before any rows are retrieved.

The offset clause is simple to code. You just need to determine the number of rows that you want to skip and code that in the clause. For example, to skip 5 rows you would code OFFSET 5 ROWS. Simple, right? Well, conceptually, yes, but in practice you have to be careful.

First of all, you must know where/when you can code an OFFSET clause. OFFSET can be specified on a subselect or fullselect that is the outermost fullselect in a prepared SQL statement or a DECLARE CURSOR statement. OFFSET is also permissible in a SELECT INTO statement. However, you cannot code an OFFSET clause in a view definition, an MQT, the RETURN statement of an SQL table function, in row permissions or column masks, or in the outermost fullselect for a sensitive dynamic cursor. A further restriction is that the statement that includes the OFFSET clause cannot contain an expression that is not deterministic or that has an external action.

The second, and I think more interesting, aspect to consider when using the OFFSET clause is that your result set must be predictable to ensure that you are retrieving useful data. As we all should know, the access path chosen by the optimizer can alter the order in which rows are returned. If you want to access rows in sets of 25, for example, then you want to make sure that each successive set does not contain any rows from previous sets, and does not omit any rows in between sets.

So how do we do this? The best approach is to code up an ORDER BY clause that specifies columns that uniquely identify each row in the result table. If there are duplicates, then there is no way to verify that you are getting the data that you want. In other words, the order of the rows is not deterministic. The data being skipped will not be predictable and it is highly likely that you will not be accessing all of the data that you want (or perhaps even accessing the same data twice).

It is also important to understand that if the OFFSET number of rows is greater than the number of rows in the intermediate result table, you will get an empty result.

So let’s look at a quick example. Suppose we are looking for employees earning more than $50000. Further, we want to skip the first 10 rows of the EMP table, and grab the next 10. This SQL should do the trick:

SELECT *
FROM EMP
WHERE SALARY > 50000.00
ORDER BY EMPNO
OFFSET 10 ROWS
FETCH FIRST 10 ROWS ONLY;

The OFFSET 10 ROWS will cause the first 10 qualifying rows to be skipped. The FETCH clause gets the next 10 qualifying rows (after skipping the first 10).
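
To retrieve the following set of 10 rows, you would simply bump up the offset. For example:

SELECT *
FROM EMP
WHERE SALARY > 50000.00
ORDER BY EMPNO
OFFSET 20 ROWS
FETCH FIRST 10 ROWS ONLY;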

But OFFSET is not the only new feature to help developers with SQL pagination. Also available with DB2 12 is data-dependent pagination, which uses row value expressions in a basic predicate. That sounds more complicated than it really is, so let’s look at an example:

We can code a WHERE clause like this to search for rows with a name greater than mine:

WHERE (LASTNAME, FIRSTNAME) > ('MULLINS', 'CRAIG')

This is easier than what we previously had to do (before DB2 12), which was to break the WHERE clause up as follows:

WHERE (LASTNAME = 'MULLINS' AND FIRSTNAME > 'CRAIG')
OR (LASTNAME > 'MULLINS')


The bottom line is that application coding for pagination becomes a lot easier in DB2 12 for z/OS…

Thursday, December 22, 2016

Happy Holidays 2016

It is that time of year again. When we all take some time to pause and spend time with our family, friends and loved ones. We take time off of work to celebrate and reflect on the past year... and to welcome the new year.

No matter what holiday tradition you celebrate, I wish you an enjoyable holiday season. Happy holidays, that is!



And we'll meet back here again in January 2017 to talk more about DB2 stuff!

Happy Holidays!