Thursday, March 22, 2012

Working With Variable Data


One of the key design issues that is addressed in most DB2 applications is how to implement character data that varies in size from row to row. The basic trade-off is “ease-of-use and performance” versus “storage requirements.” It is possible to save storage space by using variable columns instead of placing small amounts of data in a large fixed space. Each variable column carries a 2-byte overhead, however, for storing the length of the data. Additionally, variable columns tend to increase CPU usage and can cause the update process to become inefficient. When a variable column is updated with a larger value, the row becomes larger; if not enough space is available to store the row, it must be moved to another page. This makes the update and any subsequent retrieval slower.

Follow these rules when defining variable character columns:
  • Avoid variable columns if sufficient disk space is available to store the data using fixed columns. The general rule is: disk is cheap! Trading the cost of disk for more efficient development and subsequent performance is often an easy decision to make.
  • Though this may be obvious, do not define a variable column when the data does not vary from row to row.
  • Do not define a variable column if its maximum length is less than 30 bytes. Furthermore, do not define a variable column if its maximum length is within 10 bytes of the average length of the column. Adhering to these two rules will cause you to choose VARCHAR data types only when they can potentially provide enough DASD savings to offset other costs.
  • Consider redefining variable columns by placing multiple rows of fixed length columns in another table or by shortening the columns and placing the overflow in another table.

If, after following these guidelines, VARCHAR columns need to be implemented, go ahead and do so. However, it is wise to continuously re-assess the decision to use variable character data.

So how can you monitor the effectiveness of variable columns? Well, it is possible to run simple SQL queries against your data to determine the effectiveness of using VARCHAR for a column instead of CHAR. Consider, for example, the PROJNAME column of the DSN81010.PROJ table. It is defined as VARCHAR(24).

To gauge whether VARCHAR is appropriate for this particular column, consider issuing the following SQL query:

     SELECT    COL_LGTH, COUNT(*)
     FROM      (SELECT LENGTH(PROJNAME) AS COL_LGTH
                FROM   DSN81010.PROJ
               ) AS T
     GROUP BY  COL_LGTH
     ORDER BY  COL_LGTH;


This query will produce a report listing the lengths (in this case, from 1 to 24, excluding those lengths which do not occur) and the number of times that each length occurs in the table.  These results can be analyzed to determine the range of lengths stored within the variable column. 
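
If you are mainly interested in the shortest, longest, and average lengths, rather than the full distribution, a simpler aggregate query (a sketch along the same lines) will do:

     SELECT  MIN(LENGTH(PROJNAME)),
             MAX(LENGTH(PROJNAME)),
             AVG(LENGTH(PROJNAME))
     FROM    DSN81010.PROJ;

This is also a handy way to check the rule of thumb mentioned earlier: if the maximum length is within 10 bytes of the average length, VARCHAR is probably not worthwhile.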

If you are not concerned about this level of detail, the following query can be used instead to summarize the space characteristics of the variable column in question:

    SELECT  24*COUNT(*),
            SUM(2+LENGTH(PROJNAME)),
            24*COUNT(*)-SUM(2+LENGTH(PROJNAME)),
            24,
            AVG(2+LENGTH(PROJNAME)),
            24-AVG(2+LENGTH(PROJNAME))
    FROM    DSN81010.PROJ;



This query produces a report similar to the one shown below:

SPACE      SPACE        TOTAL   AVG.       AVG.         AVG.
USED AS    USED AS      SPACE   SPACE AS   SPACE AS     SPACE
CHAR(24)   VARCHAR(24)  SAVED   CHAR(24)   VARCHAR(24)  SAVED
--------   -----------  ------  ---------  -----------  -----
158058        96515     61543     24          16          8



The following table defines each of the individual columns calculated by this query:

Definition                         Calculation
---------------------------------  -----------------------------------
Space Used As CHAR(24)             24*COUNT(*)
Space Used As VARCHAR(24)          SUM(2+LENGTH(PROJNAME))
Total Space Saved Using VARCHAR    24*COUNT(*)-SUM(2+LENGTH(PROJNAME))
Avg. Space Used As CHAR(24)        24
Avg. Space Used As VARCHAR(24)     AVG(2+LENGTH(PROJNAME))
Avg. Space Saved Using VARCHAR     24-AVG(2+LENGTH(PROJNAME))

The query can be modified to be used for any VARCHAR-defined column. The constant 24 can be changed to indicate the maximum length of the variable column as defined in the DDL. Using these tools, you can better judge the actual disk space savings accruing as a result of VARCHAR usage.
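
For example, to analyze the LASTNAME column of DSN81010.EMP (defined as VARCHAR(15) in the IBM sample database), simply substitute the column name and its maximum length:

     SELECT  15*COUNT(*),
             SUM(2+LENGTH(LASTNAME)),
             15*COUNT(*)-SUM(2+LENGTH(LASTNAME)),
             15,
             AVG(2+LENGTH(LASTNAME)),
             15-AVG(2+LENGTH(LASTNAME))
     FROM    DSN81010.EMP;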

Synopsis

There are sound reasons for using variable data types within DB2 databases. Yet it is important to remember that business conditions change, and what may have been a sound reason for using VARCHAR data in the past may no longer be sound. Use the queries in this article to determine whether it is still appropriate to use VARCHAR data. In general, do not use VARCHAR for small columns or for columns whose length does not vary considerably.

Sunday, March 04, 2012

Fetching Multiple Rows

When you need to retrieve multiple rows, consider deploying a multi-row fetch to transfer more than one row using a single FETCH statement. This capability was added as of DB2 Version 8.

A multi-row FETCH retrieves multiple rows at one time into column arrays in your application program. By fetching multiple rows at once, your request can become more efficient, especially for distributed requests. The performance improvement using multi-row FETCH in general depends on several factors, such as whether the request is distributed, the number of rows to be fetched, the complexity of the SELECT statement, and the number of columns being fetched.

Nevertheless, using multi-row FETCH (even in local environments) can improve performance, with a significant reduction in CPU time possible. Tests conducted by IBM have shown performance gains of between 25% and 40% when processing 10 rows per SQL statement in programs that process a considerable number of rows. With such significant gains possible, why hasn't everyone moved to multi-row FETCH? Well, perhaps because it requires programming changes. A multi-row FETCH requires a cursor defined with rowset positioning. A rowset is a group of rows that are operated on as a set, and a rowset cursor enables your program to retrieve more than one row with a single FETCH statement.

To use this feature, you must DECLARE your cursor WITH ROWSET POSITIONING. For example:

EXEC SQL
  DECLARE SAMPCURS CURSOR
  WITH ROWSET POSITIONING
  FOR
  SELECT DEPTNO
  FROM   DSN81010.DEPT
END-EXEC.

Furthermore, to use a multi-row fetch you must define the appropriate structures to receive the data: an array of host variables into which the fetched rows can be placed. Each column fetched requires its own host variable array. Be sure to match the array size to the rowset size. With the appropriate setup coded, FETCH statements can be written to retrieve more than a single row from the result set. For example:

FETCH NEXT ROWSET FROM SAMPCURS
  FOR 5 ROWS
  INTO :HOSTVAR-ARRAY;

As you can see, a rowset FETCH is nearly identical to a single-row FETCH, except for two additional clauses: ROWSET and FOR n ROWS. The ROWSET keyword specifies that the orientation of this cursor is rowset positioning (instead of single row). The FOR n ROWS clause specifies the size of the rowset to be returned. The maximum rowset size is 32,767.
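
To make the setup concrete, here is a minimal COBOL sketch. The names (DEPTNO-ARRAY, ROWS-FETCHED) are illustrative, DEPTNO is assumed to be CHAR(3) as in the sample DSN81010.DEPT table, and GET DIAGNOSTICS is used because the final rowset may contain fewer rows than requested:

*    ONE HOST VARIABLE ARRAY PER FETCHED COLUMN; THE OCCURS
*    COUNT MATCHES THE ROWSET SIZE (FOR 5 ROWS)
 01  DEPTNO-ARRAY       PIC X(3) OCCURS 5 TIMES.
 01  ROWS-FETCHED       PIC S9(9) COMP.

     EXEC SQL
       FETCH NEXT ROWSET FROM SAMPCURS
         FOR 5 ROWS
         INTO :DEPTNO-ARRAY
     END-EXEC.

*    DETERMINE HOW MANY ROWS THE FETCH ACTUALLY RETURNED
     EXEC SQL
       GET DIAGNOSTICS :ROWS-FETCHED = ROW_COUNT
     END-EXEC.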

Rowset cursors are very useful when you need to retrieve many rows or large amounts of data in distributed systems. By retrieving multiple rows with a single FETCH, multiple trips between the application and the database can be eliminated, thereby improving network performance.

To learn more about multi-row FETCH consider attending my upcoming webinar on the topic. This presentation will introduce and define multi-row FETCH, how to use it, and the performance implications of doing so. The presentation will also touch upon multi-row UPDATE. And it will introduce the new SoftBase Attach Facility MRF Feature, which allows you to implement multi-row FETCH without coding changes. To attend, sign up at this link: https://www1.gotomeeting.com/register/754473201

Tuesday, February 28, 2012

Identifying Unused Indexes

Did you know that DB2 V9 added a new column to the Real Time Statistics to help identify unused indexes?

The LASTUSED column in the SYSINDEXSPACESTATS table contains a date indicating the last time the index was used. Any time the index is used to satisfy a SELECT, FETCH, searched UPDATE, searched DELETE, or to enforce a referential constraint, the date is updated.

This helps to solve the problem of determining whether or not an index is being used. Standard operating advice is to DROP or delete anything that is not going to be used. But trying to determine whether something is actually used or not can be difficult at times.

You could always query your PLAN_TABLEs or the plan and package dependency tables in the DB2 Catalog for static SQL. But what about dynamic SQL? That is more difficult. Now, as of DB2 V9, you can simply query the LASTUSED column to see when the index was last used. The LASTUSED date is by partition. So, for a partitioned index, the last used date of each partition in the index should be checked.
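
For example, a query along these lines (a sketch; adjust the one-year threshold to suit your shop's processing cycles) will list index partitions that have not been used in over a year:

     SELECT   CREATOR, NAME, PARTITION, LASTUSED
     FROM     SYSIBM.SYSINDEXSPACESTATS
     WHERE    LASTUSED < CURRENT DATE - 1 YEAR
     ORDER BY LASTUSED;

Note that a LASTUSED value of 0001-01-01 (the default) generally means the index has not been used at all since real time statistics began tracking it.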

Of course, you will have to give it some time, because an index might support a rarely used query. Most shops have queries and programs that run only quarterly, or even annually, but that are nevertheless very important. You wouldn't want to drop the indexes supporting those queries just because they run infrequently; when they do run, they matter.

Examine the LASTUSED column over time to determine which indexes are truly not being used, and then DROP the unused indexes.

Thursday, February 16, 2012

Update on DB2 Developer's Guide, 6th edition

I know a lot of my readers are waiting on the updated edition of my book, DB2 Developer's Guide, so I thought I'd post a short update on the progress. The technical edits are over and production will be starting soon. The book is now scheduled for publication in early May 2012 and can be pre-ordered on Amazon.com.



The book has been completely updated and is now up-to-date with DB2 10 for z/OS. Just think of the things that have been added to DB2 since the last time the book was updated: Universal table spaces, pureXML, SECADM, hashes, new data types, INSTEAD OF triggers, temporal support, and much, much more.

Consider pre-ordering a copy today so you'll get it as soon as it comes off the presses!

Thursday, January 26, 2012

A Forced Tour of Duty


Mainframe developers are well aware of the security, scalability, and reliability of mainframe computer systems and applications. Unfortunately, though, the bulk of new programmers and IT personnel are not mainframe literate. This should change. But maybe not for the reasons you are thinking.

Yes, I am a mainframe bigot. I readily admit that. In my humble opinion there is no finer platform for mission critical software development than the good ol’ mainframe. And that is why every new programmer should have to work a tour of duty on mainframe systems and applications after graduating from college.

Why would I recommend such a thing? Well, it is because of the robust system management processes and procedures which are in place and working extremely well within every mainframe shop in the world. This is simply not the case for Windows, Unix, and other platforms. By working on mainframe systems newbies will learn the correct IT discipline for managing mission critical software.

What do I mean by that? How about a couple of examples: It should not be an acceptable practice to just insert a CD and indiscriminately install software onto a production machine. Mainframe systems have well-documented and enforced change management procedures that need to be followed before any software is installed into a production environment.

Nor should it be acceptable to just flip the switch and reboot the server. Mainframe systems have safeguards against such practices. And mainframes rarely, if ever, need to be restarted because the system is hung or because of a software glitch. Or put in words PC dudes can understand: there is no mainframe “blue screen of death.” Indeed, months, sometimes years, can go by without having to power down and re-IPL the mainframe.

And don’t even think about trying to get around security protocols. In mainframe shops there is an entire group of people in the operations department responsible for protecting and securing mainframe systems, applications, and data. Security should not be the afterthought that it is in the Windows world.

Ever wonder why there are no mainframe viruses? A properly secured operating system and environment make such a beast extremely unlikely. And with much of the world’s most important and sensitive data residing on mainframes, don’t you think the hackers out there would just love to crack into those mainframes more frequently?

Project planning, configuration management, capacity planning, job scheduling and automation, storage management, database administration, operations management, and so on – all are managed and required in every mainframe site I’ve ever been involved with. When no mainframe is involved many of these things are afterthoughts, if they’re even thought of at all.

Growing up in a PC world is a big part of the problem. Although there may be many things to snark about with regard to personal computers, one of the biggest is that they were never designed to be used the way that mainframes are used. Yet we call a sufficiently “pumped-up” PC a server – and then try to treat it like we treat mainframes. Oh, we may turn it on its side and tape a piece of paper on it bearing a phrase like “Do Not Shut Off – This is the Production Server”… but that is a far cry from the glass house that we’ve built to nourish and feed the mainframe environment.

Now to be fair, strides are being made to improve the infrastructure and best practices for managing distributed systems. Some organizations have built an infrastructure around their distributed applications that rivals the mainframe glass house. But this is more the exception than the rule. With time, of course, the policies, practices, and procedures for managing distributed systems will improve to mainframe levels.

But the bottom line is that today’s distributed systems – that is, Linux, Unix, and Windows-based systems – typically do not deliver the stability, availability, security, or performance of mainframe systems. As such, a forced tour of duty supporting or developing applications for a mainframe would do every IT professional a whole world of good.