Wednesday, July 31, 2013

Compression Becoming More Important in the Age of Big Data

DBAs and database professionals have been aware of the pros and cons of compressing data for years. The traditional argument goes something like this: with compression you can store more data in less space, but at the cost of additional CPU to compress the data upon insertion (and modification) and to decompress it upon reading. Over time, the benefits of compression have grown as compression algorithms became more robust, hardware-assist chips became available to speed up compression, and the distributed model of computing made transmitting data across networks a critical piece of the business transaction (and transmitting compressed data is more efficient than transmitting uncompressed data).
IBM has significantly improved compression in DB2 for z/OS over the years. In the early days of mainframe DB2, no compression capability came with DB2 out-of-the-box -- the only mechanism for compressing data was via an exit routine (EDITPROC). Many software vendors developed and sold compression routines for DB2. Eventually, IBM began shipping a sample compression routine with DB2. And then in DB2 Version 3 (1993) hardware-assisted compression was introduced. Using the hardware assist, the CPU used by DB2 compression is minimal and the cons list gets a little shorter.
Indeed, one piece of advice that I give to most shops when I consult for them is that they probably need to look at compressing more data than they already are. Compressed data can improve performance these days because, in many cases, you can fit more rows per page, so scans and sequential processes can read more data with the same number of I/Os. Of course, you should use the DSN1COMP utility to estimate the amount of savings that can accrue via compression before compressing any existing data.
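For example, assuming an existing table space with the hypothetical name MYDB.MYTS, enabling compression is a simple ALTER; the existing rows are not compressed until a compression dictionary is built, which happens at the next REORG (or LOAD REPLACE):

ALTER TABLESPACE MYDB.MYTS COMPRESS YES;

REORG TABLESPACE MYDB.MYTS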
Eventually, in DB2 9, we even got index compression capability (using different technology than data compression, of course). At any rate, compressing data on DB2 for z/OS is no longer the “only-if-I-have-to” task that it once was.
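For reference, enabling index compression looks much like enabling data compression; for example, on a hypothetical index named XCUST01 (remember that a compressed index must be assigned to a buffer pool larger than 4K):

ALTER INDEX XCUST01 COMPRESS YES;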
Then along comes the Big Data phenomenon where increasingly large data sets need to be stored and analyzed. Big Data is typified by data sets that are so large and complex that traditional tools and database systems are ill-suited to process them. Clearly, compressing such data could be advantageous… but is it possible to process and compress such large volumes of data?
New alternatives to traditional systems are becoming available that offer more efficient resource usage based on principles of compressed sensing and other techniques. One example of this new technology is IBM’s BLU Acceleration, which is included in DB2 10.5 for Linux, Unix, and Windows. One feature of BLU Acceleration is extended compression, which eliminates the need for indexes and aggregation, and which operates directly on compressed data, thereby eliminating the CPU time that would otherwise be required to decompress it. Advanced encoding maximizes compression while preserving the order of encoding, so compressed data can be analyzed quickly without decompressing it. It is an impressive technology, and no changes are required to your existing SQL statements.
IBM reports that some clients using DB2 10.5 for LUW with BLU Acceleration have achieved compression rates 10 times greater than uncompressed tables.
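For the curious, creating a BLU (column-organized) table in DB2 10.5 for LUW is just a matter of adding a clause to the DDL. Here is a minimal sketch using a hypothetical SALES table; most shops would first configure the instance for analytic workloads (typically by setting DB2_WORKLOAD=ANALYTICS) so that column organization becomes the default:

CREATE TABLE SALES
 (SALE_DATE   DATE,
  STORE_ID    INTEGER,
  AMOUNT      DECIMAL(11,2))
ORGANIZE BY COLUMN;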
Of course, BLU Acceleration is much more than compression (it combines in-memory, columnar, and compression technologies), but for the purposes of today’s blog entry we won’t delve deeper into the technology. If you are interested in a little bit more on BLU, read my high-level overview in my coverage of this year’s IDUG DB2 Technical Conference.

So compression is becoming cool… who’d have thought that back in the 1980s when compression was something we only did when we absolutely had to?

Friday, July 26, 2013

Top Ten Most Common DB2 Performance Problems


  1. PEBCAK 

    The number one cause of DB2 performance problems today, as always, is Problem Exists Between Chair And Keyboard!
     
  2. Poorly coded SQL

    Many performance problems can be traced back to inappropriately coded SQL. Code it correctly from the beginning and tune what is already out there.
         
  3. Improper indexing

    Optimize performance via indexing by workload, not by object.  
     
  4. Bad program design

    Coding DB2 SQL for performance in your application programs from the outset can eliminate many future problems.

  5. Bachelor programming syndrome

    Yes, the dreaded "Fear of Committing" can cause performance problems due to concurrency issues.
         
  6. Improperly defined buffer pools

    Defining effective buffer pools for your DB2 workload is important. There are many things you can do to identify the proper settings and sizing of your DB2 buffer pools.
         
  7. Index / table space needs to be reorganized

    Reorganization of indexes and table spaces can improve performance. Follow the Five R's to assure optimal DB2 application performance.
     
  8. Improperly designed database structures

    Designing database structures correctly -- from the beginning -- is the way to go for efficiency and efficacy.
     
  9. Copied code syndrome

    Code that is copied from another program frequently does more than the new program requires -- extra columns, extra rows, extra logic -- so tailor copied SQL to the task at hand instead of reusing it blindly.

  10. RUNSTATS not up-to-date (or not even run)

    How can you expect the DB2 Optimizer to do its thing on your SQL without statistics about your data and environment? Again, the Five R's!
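    For the record, a minimal RUNSTATS utility control statement might look something like this (DBNAME and TSNAME are hypothetical placeholders for your database and table space names):

    RUNSTATS TABLESPACE DBNAME.TSNAME TABLE(ALL) INDEX(ALL)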

And if a Top Ten list does not provide enough detail for you, then splurge for a copy of my book: DB2 Developer's Guide, 6th edition. Recently updated for DB2 10 for z/OS, the book delivers over 1600 pages full of DB2 tips, tricks, guidelines, and details...  It rates 5 stars on Amazon!

Friday, July 19, 2013

DB2 Locking, Part 17: In Conclusion

Today's blog post concludes our multi-part series on DB2 for z/OS locking and concurrency. We have touched on a great many aspects of locking in this series. Such an in-depth, technical topic as DB2 locking can be difficult to master, but doing so can deliver a clearer understanding of how DB2 operates and how your programming and design decisions impact not only your application, but also the entire DB2 subsystem.

A Couple Additional Locking Ideas and Thoughts

Before concluding this series, here are a few more guidelines and thoughts on DB2 locking and concurrency:

  • It is a good idea to use clustering to encourage DB2 to maintain data that is accessed together on the same page. If you use page locking, fewer locks are required to access multiple rows if the rows are clustered on the same page or pages. You can also use larger page sizes to control the amount of data locked for page locking.

  • Consider using the free space parameters to influence locking. If you increase free space, fewer rows are stored on a single page. Therefore, fewer rows are locked by a single page lock. This approach can decrease contention. However, it consumes additional disk storage, and it can also decrease the performance of table space scans (because more pages with fewer rows must be read). Additionally, keep in mind that as data is added to the table the free space will decrease (because the new data is using it). As such, locking issues may become more prevalent.
  • You can also decrease the number of rows per page using the MAXROWS option of the CREATE TABLESPACE statement. The fewer rows per page, the less intrusive page locking will be because fewer rows will be impacted by a page lock. This approach is probably better than the free space approach (previous bullet) because new data will not impact the number of rows per page (see the sketch following this list).

  • Design your application programs with locking considerations in mind. This is the number one thing to remember to increase concurrency and minimize the impact of locking on DB2 application and system performance. You can minimize the effect of locking through proper application program design. This means:
  • Limiting the number of rows that are accessed by coding predicates to filter unwanted rows
  • Requesting only the data (rows and columns) that you actually need to perform your business processes
  • Performing modifications as close to the end of the unit of work as possible
  • And remember to avoid bachelor programming syndrome (see Part 9).
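To illustrate the free space and MAXROWS bullets above, here is a minimal sketch using a hypothetical table space named MYDB.MYTS. Remember that these changes do not affect existing data until the table space is reorganized (or reloaded):

ALTER TABLESPACE MYDB.MYTS PCTFREE 25 FREEPAGE 15;

ALTER TABLESPACE MYDB.MYTS MAXROWS 20;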

Summary

Of course, there are probably many more hints, tips, and guidelines for developing DB2 databases and applications with concurrency in mind, but I think a 17 part series is sufficient for my blog. If you want more details on concurrency (or any other aspect of DB2 for z/OS) might I recommend the latest edition of my book -- DB2 Developer's Guide, 6th edition.

And be sure to come back and review this series on locking if you get confused as you work to maximize the concurrency of your DB2 queries, transactions, and programs. 

Finally, as a service to my readers, this post includes a directory/index to the 16 separate posts that make up this series.

Index of Blog Posts on DB2 Locking


Monday, July 15, 2013

DB2 Locking, Part 16: Skipping Uncommitted Inserts

DB2 10 for z/OS introduces a new technique for concurrency. As we have discussed previously in this series on DB2 locking, one of the most troublesome tasks for DB2 performance analysts is reducing timeouts and lock suspensions.

The CONCURRENTACCESSRESOLUTION parameter can be used to specify a concurrent access resolution option to use for statements in a package when binding your application program.
There are two options. The first, USECURRENTLYCOMMITTED, indicates that DB2 can use the currently committed version of the data for applicable scans when data is in the process of being updated or deleted. Rows that are in the process of being inserted can be skipped. This clause applies only when the isolation level in effect is Cursor Stability (CS) or Read Stability (RS); it is ignored for other isolation levels.

The second option is WAITFOROUTCOME, which indicates that applicable scans must wait for a COMMIT or ROLLBACK operation to complete when data is in the process of being updated or deleted. Rows that are in the process of being inserted are not skipped.

The default behavior is WAITFOROUTCOME. Instead, if you choose to specify USECURRENTLYCOMMITTED DB2 will ignore rows that are in the process of being inserted and use only currently committed rows. This might be desirable for highly concurrent web transactions or to mimic the application concurrency settings or behavior of another DBMS.
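For example, here is a sketch of a BIND PACKAGE command specifying this behavior, using a hypothetical collection and program name (only the relevant options are shown):

BIND PACKAGE(COLL1) MEMBER(PROG1) -
     ISOLATION(CS)                -
     CONCURRENTACCESSRESOLUTION(USECURRENTLYCOMMITTED)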

Another option at your disposal is to specify that uncommitted inserts are to be skipped at the subsystem level (using option 19 on panel DSNTIPB during DB2 installation or by assembling a new DSNZPARM). The SKIPUNCI subsystem parameter specifies whether statements ignore a row that was inserted by another transaction if the row has not yet been detected as committed. A newly inserted row can be detected as committed only after the lock held on the row has been released.

There are two options for this parameter:
  • Specifying YES for SKIPUNCI causes DB2 to behave as though the newly inserted row has not yet arrived; the row is skipped until the lock held on it is released.
  • SKIPUNCI(NO) indicates that DB2 will wait for the inserted row to be committed or rolled back. It then processes the row if the insert commits, or it moves on to find another row if the insert is rolled back. If a transaction performs one or more inserts, and then spawns a second transaction, specify NO for SKIP UNCOMM INSERTS if the first transaction needs the second transaction to wait for the outcome of the inserts. This is the default value.
By using either of these two methods to skip uncommitted inserts you can improve concurrent access to data as you tune your application’s locking and concurrency requirements.

Tuesday, July 09, 2013

DB2 Locking, Part 15: Tackling Timeout Troubles

Many shops battle with locking problems and, frequently, the cause of poor performance can be traced back to locking -- more specifically, to lock timeouts. When you experience a timeout, it means that another process holds a lock on the data that you are trying to modify.
You may experience timeout troubles as the dreaded -911 SQLCODE. Timeouts are different than most performance issues because the job or user receiving the -911 SQLCODE is usually not the source of the problem. Instead, the jobs causing the problem typically continue to run just fine, bringing about problems for everybody else!
The primary factors to examine when looking to resolve timeout issues are:
  • The lock size parameter that was specified at table space creation time.
  • The duration of the locks established by the BIND strategies. Lock duration is determined by a combination of the ACQUIRE and RELEASE strategies for the table space locks and the ISOLATION parameter for the page locks.

  • The use of LOCK TABLE statements in any programs (See DB2 Locking Part 14).
  • The scope of the commit in the application code.
  • The manner in which the table space was started (RW or RO).
Most of the time, resource contention and timeouts are due to long-running programs that are not issuing COMMITs…or perhaps are not committing their work frequently enough. Issuing a COMMIT ends a unit of recovery and hardens all database modifications made during that unit of recovery.
So be sure to verify that all of your batch processes -- especially any that run concurrently with other workload (but really all batch processes) -- have a COMMIT strategy. This means that your programs should issue a COMMIT after processing a set number of inserts, updates, and deletes. A COMMIT tells DB2 to make the changes permanent and releases the locks. A good rule of thumb is to strive for having no lock held for more than five seconds.
A reasonable approach is to set a counter that is incremented after every modification. Then check it, and when it exceeds a predefined threshold -- say 25 or 50 or 100 modifications -- issue a COMMIT. You should make the threshold an input parameter so that you can change it as the workload in your system changes. For example, make it 25 when concurrent activity is high, but ramp it up to 100 or higher when it is low. Failure to issue COMMITs will result in timeouts, as well as possibly deadlocks and lock escalation.
For a more in-depth discussion on the importance of issuing COMMITs in your application programs review DB2 Locking Part 9.
Another practical approach for reducing lock duration is to make data modifications as close to the end of the unit of work as possible. Look at all of your programs, both batch and online, and try to save the data modification statements to as close to the COMMIT as you can. By saving the data modification until right before you issue a COMMIT, you reduce the overall average lock duration. This should result in reducing contention and therefore, the number of timeouts.
Some additional suggestions to maximize concurrency and reduce timeouts include:
  • Use WHERE CURRENT OF CURSOR to perform UPDATEs and DELETEs in your programs (see the sketch following this list).
  • Ensure that you have created your table spaces with the appropriate LOCKSIZE (usually PAGE, but sometimes perhaps ROW).
  • If you have tables that are static, consider starting their table space as RO instead of RW. Doing so can enable DB2 to utilize table locking instead of page or row locking for those tables.
  • Limit the scope of SQL statements so that a statement locks 150 or fewer pages on a table for each unit of work that is performed.
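To illustrate the first bullet, here is a minimal sketch of the SQL as it might be coded in an application program, using a hypothetical EMP table and host variables (the OPEN/FETCH logic between the two statements is omitted):

DECLARE CSR1 CURSOR FOR
    SELECT EMPNO, SALARY
    FROM   EMP
    WHERE  WORKDEPT = :HV-DEPT
    FOR UPDATE OF SALARY;

UPDATE EMP
SET    SALARY = :HV-NEW-SALARY
WHERE  CURRENT OF CSR1;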

Finally, if you want to investigate timeout details, be sure to examine the statistics trace class 3 and IFCID 0196 for timeouts (IFCID 0172 is for deadlocks).

Saturday, July 06, 2013

DB2 Locking, Part 14: Using the LOCK TABLE Statement

We continue the series on DB2 locking today with a look at the LOCK TABLE statement.

You can issue the LOCK TABLE statement in your application programs to raise the lock granularity to the table (actually, table space) level. Doing so means that you will not need to take lower level locks (at the page or row level, whichever is in effect for the table space in question). Issuing a LOCK TABLE in a program can make sense in certain, specific circumstances. Consider using the LOCK TABLE statement to control the efficiency of locking in programs that will issue many page/row lock requests when there are no concurrent requests for the same data. Issuing a LOCK TABLE, at times, can be a reasonable alternative to using an ISOLATION level of RR or RS when a large percentage of a table's rows or pages will be modified.

If your environment can withstand the concurrency hit caused by LOCK TABLE you can gain performance by reducing locking activity. Taking a larger granularity lock at the table(space) level instead of multiple page or row locks will improve the performance of your program, albeit at the expense of concurrent activity to the data. If you wish to avoid modifying the program, you can START the table space in read-only mode (RO) and achieve similar results (no locks are taken because the data cannot be modified). This can be accomplished using a command like so:

-START DATABASE(DBNAME) SPACENAM(SPNAME) ACCESS(RO)

Of course, if you go that route you will have to make sure that you restart the table space for read-write activity (RW) after the program finishes. This can be unwieldy to implement.
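The corresponding command to restart the table space for normal read-write processing is:

-START DATABASE(DBNAME) SPACENAM(SPNAME) ACCESS(RW)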

If, instead, you wish to use the LOCK TABLE approach, there are two types of LOCK TABLE requests. The LOCK TABLE...IN SHARE MODE statement acquires an S-lock on the table specified in the statement. This locking strategy effectively eliminates the possibility of concurrent modification programs running while the LOCK TABLE is in effect. The S-lock is obtained on the table space for tables contained in non-segmented table spaces. This is important to understand, especially if you have multi-table table spaces.

The LOCK TABLE...IN EXCLUSIVE MODE statement acquires an X-lock on the table specified in the statement. All concurrent processing is suspended until the X-lock is released. Again, for non-segmented table spaces, the X-lock is obtained on the table space, not the table.

In both cases, you can specify the PART parameter to indicate that only a specific partition is to be locked. For example, to lock only the third partition of a partitioned table space, you can issue: LOCK TABLE...PART 3 IN EXCLUSIVE MODE.
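For example, here are minimal sketches of each form, using a hypothetical table named EMP:

LOCK TABLE EMP IN SHARE MODE;

LOCK TABLE EMP IN EXCLUSIVE MODE;

LOCK TABLE EMP PART 3 IN EXCLUSIVE MODE;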
The table locks acquired as a result of the LOCK TABLE statement are held until the next COMMIT point unless RELEASE(DEALLOCATE) was specified for the plan issuing the LOCK TABLE statement. In that situation, the lock is held until the program terminates. That means, for RELEASE(COMMIT) programs, you will need to issue the LOCK TABLE again after each COMMIT or processing will revert to page/row locking.

Also, keep in mind that the lock will not take effect until the statement executes, even if ACQUIRE(ALLOCATE) was coded at BIND time. 

Monday, July 01, 2013

DB2 Locking, Part 13: Optimistic Locking

Continuing our series on DB2 locking, let's look into a relatively recent development -- optimistic locking...

IBM added improvements for optimistic locking techniques in DB2 9 for z/OS.  What is optimistic locking? Sometimes referred to as optimistic concurrency control, optimistic locking is basically just what it sounds like. We are optimists and think that usually we will be the only ones with interest in the data. In other words, when optimistic locking is implemented you are assuming that most of the time there will be no other programs that are interested in the page of data that you are planning to modify.

Of course, even in the most optimistic world there will be exceptions, so optimistic locking does not assume that there will never be any concurrent processes that need to access your page(s). Basically, with optimistic locking you can improve performance by minimizing locking. So how do we do that?

When an application uses optimistic locking, locks are obtained immediately before a read operation and then released immediately. Update locks are obtained immediately before an update operation and held until the end of the transaction. Optimistic locking uses the RID (Record IDentifier) and a row change timestamp to test whether data has been changed by another transaction since the last read operation.

DB2 knows when a row was changed and therefore he (I always tend to make DB2 masculine, sorry ladies) can ensure data integrity even as he minimizes the duration of locks. With optimistic locking, DB2 releases the page (or row) locks immediately after a read operation. And if you are using row locks, DB2 releases the row lock after each FETCH, taking a new lock on a row only for a positioned update or a positioned delete.

Careful readers will have noticed that I talked about a “row change timestamp” but you may not have heard that expression before. DB2 V9 added support for automatically generated timestamp columns and if you wish to implement optimistic locking you will need to create (or alter) your tables to have a row change timestamp column, defined as follows:

NOT NULL GENERATED ALWAYS
FOR EACH ROW ON UPDATE
AS ROW CHANGE TIMESTAMP

or

NOT NULL GENERATED BY DEFAULT
FOR EACH ROW ON UPDATE
AS ROW CHANGE TIMESTAMP

For tables having a row change timestamp column, DB2 automatically populates and maintains the timestamp values for each row. Notice how the syntax is similar to the syntax used for other automatically generated DB2 values, such as sequences. DB2 will automatically generate the timestamp value for each row when the row is inserted, and modify the timestamp for each row when any column in that row is updated.

When you add a ROW CHANGE TIMESTAMP column to an existing table, the initial values for existing rows are not immediately populated. Instead, DB2 places the table space in an advisory-REORG pending state. When you reorganize the table space, DB2 will generate the values for the ROW CHANGE TIMESTAMP column for all rows (and, of course, remove the advisory-REORG pending status).

OK, but how does this implement optimistic locking? Well, you can use this new column as a condition for making an UPDATE, by specifying it in your WHERE clause. Let’s walk thru a couple of examples.

First of all, when a table contains a ROW CHANGE TIMESTAMP you can use it to find out when its rows were modified. Let’s use the following table as an example:

CREATE TABLE CUSTOMER
 (CUSTNO           CHAR(8)   NOT NULL,
  CUST_INFOCHANGE  TIMESTAMP NOT NULL GENERATED ALWAYS
                   FOR EACH ROW ON UPDATE
                   AS ROW CHANGE TIMESTAMP,
  CUST_NAME        VARCHAR(50),
  CUST_ADDRESS     VARCHAR(100),
  CUST_CITY        CHAR(20),
  CUST_STATE       CHAR(2),
  CUST_ZIP         CHAR(9),
  CUST_PHONE       CHAR(10),

  PRIMARY KEY (CUSTNO))

Now that the table is defined with the ROW CHANGE TIMESTAMP we can use it in our programs and queries to determine change information about the data. For example, if we want to find all of the customer rows that were changed in the past week (i.e., the last 7 days) we could run the following query:

SELECT CUSTNO, CUST_NAME
FROM   CUSTOMER
WHERE  ROW CHANGE TIMESTAMP FOR CUSTOMER <=
       CURRENT TIMESTAMP
AND    ROW CHANGE TIMESTAMP FOR CUSTOMER >=
       CURRENT TIMESTAMP - 7 DAYS;

But what would happen if you issued a statement like this against a table that was altered to include a ROW CHANGE TIMESTAMP? For example, what if we created the CUSTOMER table as shown but without the CUST_INFOCHANGE column, populated the table with data, and then altered the table to include the CUST_INFOCHANGE column? In this case, DB2 will use the time the page was last modified. So the results will not be exactly correct because the query would return every row on each qualifying page (because at least one row on the page changed). This is why it is important to clear up the advisory-REORG pending state as soon as possible after adding the ROW CHANGE TIMESTAMP.

This is all well and good, and you can probably see the value of having this automagically changing timestamp in some of your tables, but where is the optimistic locking part? Well, for programs that use updateable static scrollable cursors DB2 can use optimistic locking as long as the program is bound specifying ISOLATION(CS). If you have this situation, DB2 will deploy optimistic locking to reduce the duration of locks between consecutive FETCH operations and between fetch operations and subsequent positioned UPDATE or DELETE operations.

Without optimistic locking, the lock taken at the first FETCH is held until the next FETCH. The lock taken at the last FETCH is held until COMMIT, ROLLBACK, or the end of transaction.

With optimistic locking, the scenario changes significantly. When the application requests a FETCH to position the cursor on a row, DB2 locks that row, executes the FETCH and releases the lock. When the application requests a positioned UPDATE or DELETE on the row, DB2 locks the row and then re-evaluates the predicate to ensure that the row still qualifies for the result table.

Optimistic locking itself will not happen without some effort on your part. Your application must have a ROW CHANGE TIMESTAMP and it must be selected first. Then, during a modification, a predicate needs to be added as a condition to tell whether the row has been modified or not. The static scrollable cursor uses the optimistic locking technique automatically. DB2 cannot use optimistic concurrency control for dynamic scrollable cursors. With dynamic scrollable cursors, the most recently fetched row or page from the base table remains locked to maintain position for a positioned UPDATE or DELETE.
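Here is a minimal sketch of that pattern using the CUSTOMER table from above. The customer number and the :HV-RCT host variable (which holds the row change timestamp returned by the SELECT) are hypothetical:

SELECT CUST_NAME, CUST_PHONE,
       ROW CHANGE TIMESTAMP FOR CUSTOMER
FROM   CUSTOMER
WHERE  CUSTNO = '00001234';

UPDATE CUSTOMER
SET    CUST_PHONE = '5125551212'
WHERE  CUSTNO = '00001234'
AND    ROW CHANGE TIMESTAMP FOR CUSTOMER = :HV-RCT;

If the UPDATE finds no row (SQLCODE +100), another transaction changed the row after it was read; the application can re-read the row and retry the modification.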


So, if you have not taken a look at which applications might benefit from optimistic locking techniques since your shop migrated to DB2 9 for z/OS, now is the time to determine which applications could take advantage of optimistic locking -- and then add the ROW CHANGE TIMESTAMP to the appropriate tables.