Tuesday, November 28, 2006

On DB2 Naming Conventions

What's in a name? Establishing and enforcing naming conventions is often one of the first duties to be tackled when implementing new software. Adequate thought and preparation are required for such a task to be successful. How much effort should be expended in the creation of appropriate database naming standards? Are current industry standards acceptable?

Shakespeare, many, many years ago, may have said it best when he wrote: "What's in a name? That which we call a rose by any other name would smell as sweet." But, if that is true, then why do those of us in IT spend so much time and effort developing and enforcing naming standards? Maybe what something is called is not quite so trivial a matter after all!

Well, we know what Shakespeare was trying to convey: the name by which we call something has no effect upon the actual object. Calling a desk a lamp will not turn it into a lamp: it is still a desk. Let's not forget this.

In today's blog, though, I want to offer a basic approach to database naming standards, and DB2 for z/OS naming standards, in particular.

Tables and Their Cousins

In today's SQL DBMSs, the primary access point for data is the "table." A table consists of multiple rows, each with a fixed number of defined columns. But there are several table alternatives that behave just like tables. For example, DB2 allows the following:

  • ALIAS - an alternative name that can be used in SQL statements to refer to a table or a view in the same or a remote DB2 subsystem.
  • SYNONYM - an alternative name that can be used in SQL statements to refer to a table or a view in the same DB2 subsystem. Synonyms are accessible only by the synonym owner.
  • VIEW - an alternative representation of data from one or more tables or views.

These three alternative means of access are similar in one way: they all present data by means of values in rows and columns. An end user need not know whether he or she is querying a table, an alias, a synonym, or a view -- the results are the same, namely data represented by values in rows and columns.

So why do some shops impose different naming standards on views than they do on tables? It makes sense to use the exact same naming convention for tables, views, aliases, and synonyms. These four database objects all logically refer to the same thing - a representation of data in terms of columns and rows.

Why would you want to implement different naming conventions for each of these objects? Let's examine the pros and cons. Consider your current table naming conventions. If your shop is typical, you will find a convention that is similar to this:

Format: aaa_dddddddddddddd

Example: ORD_ITEM

Here aaa is a three-character application identifier and the remainder is a free-form alphanumeric description of the table. In the example we have the three-character identifier ORD (representing, perhaps, "order entry"), followed by an underscore and ITEM. So this is the "item" table in the order entry system.
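
To make the convention concrete, here is a quick sketch of DDL that follows it; the columns and the database and table space names are hypothetical:

CREATE TABLE ORD_ITEM
(ITEM_NO INTEGER NOT NULL,
ITEM_DESC CHAR(40) NOT NULL)
IN ORDDB.ORDTS;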

If your standards are significantly different, pause for a moment to ask yourself why. The format shown is almost an industry standard for table naming. You most surely do not want to force every DB2 table to begin with a T (or have a strategically embedded T within the table name). The name of a DB2 table should accurately and succinctly convey the contents of the data it contains, and the naming convention shown above accomplishes this.


So this brings us to our second naming convention recommendation: avoid embedding a 'T', or any other character, into table names to indicate that the object is a table. Likewise, indicator characters should be avoided for any other table-like object (i.e. aliases, synonyms, and views). Although most shops avoid embedding a 'T' in the table name, many of these same shops do embed a character into alias, synonym, and especially view names. The primary reason given is that the character makes it easy to determine what type of object is being accessed just by looking at the name.

There are two reasons why this is a bad idea: the first is semantic, the second a flexibility issue. In semantic terms, an object's name need only identify the object, not the object's type. Consider the following arguments: How are people named? Usually one can ascertain someone's gender simply by knowing their name, but would you banish all males named Chris, Pat, or Terry? Or maybe all females named Chris, Pat, and Terry? After all, men and women are different. Shouldn't we make sure that all men's names are differentiated from women's names? Maybe we should start all men's names with an 'M' and all women's names with a 'W'? If we did, we'd sure have a lot of Marks and Wendys, wouldn't we? The point here is that context enables us to differentiate men from women, when it is necessary. The same can be said of database objects.

How about another example: how are COBOL program variables named? Do you name your 01, 05, and 77 level variable names differently in your COBOL programs? For example, do all 01 levels start with 'O' (for one), all 05 levels start with 'F', and all 77 levels start with 'S'? No? Why not? Isn't this the same as forcing views to start with V (or having a strategically embedded V within the name)?

What about the naming of pets? Say I have a dog, a cat, and a bird. Now, I wouldn't want to get them confused, so I'll make sure that I start all of my dog names with a D, cat names with a C, and bird names with a B. So, I'll feed C_FELIX in the morning, take D_ROVER for a walk after work, and make sure I cover B_TWEETY's cage before I go to bed. Sounds ridiculous, doesn't it?

The whole point of this tirade is that if we don't manufacture hokey names in the real world, why would we want to do it with our DB2 objects? There is really no reason to embed special characters into DB2 object names to differentiate them from one another. It is very practical and desirable to name DB2 objects in a consistent manner, but that consistent manner should be well thought out and should utilize the system to its fullest capacity wherever possible.

The second reason for this rule is flexibility. Say, for example, that a table is significantly altered, dropped, or renamed. If views are not constrained by rigid naming conventions requiring an embedded 'V' in the name, then a view can be constructed that resembles the way the table used to look, and that view can be given the same name as the old table. This increases system flexibility. Most users don't care whether they are using a table, view, synonym, or alias. They simply want the data. And, in a relational database, tables, views, synonyms, and aliases all logically appear identical to the end user: as rows and columns.
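
To illustrate, here is a rough sketch using hypothetical names. Suppose the MKT_CONTACT table must be redesigned and rebuilt as MKT_CONTACT_NEW. A view named for the old table preserves the original interface, and existing queries continue to work unchanged:

CREATE VIEW MKT_CONTACT AS
SELECT CONTACT_NAME, CONTACT_PHONE
FROM MKT_CONTACT_NEW;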

Although it is true that there are certain operations that cannot be performed on certain types of views, the users who need to know this will generally be sophisticated users. For example, very few shops allow end users to update any table they want using QMF, SPUFI, or some other tool. Updates, deletions, and insertions (the operations which are not available to some views) are generally coded into application programs and scheduled in batch or executed on-line in transactions. The end user does need to query tables dynamically. Now you tell me, which name will your typical end user remember more readily when he needs to access his marketing contacts: MKT_CONTACT or VMKTCT01?

Further Arguments For Indicators

Some folks believe they have very valid reasons for embedding an object type indicator character into database objects - view names, in particular. Let's examine these arguments.

Point: Embedding a 'V' into our view names enables our DBAs to quickly determine which objects are views and which are tables.

Counterpoint: Many of these shops do not embed a 'T' into table names, but feel that a 'V' in view names is necessary so that the DBA can more easily discern views from tables. Yet rarely do these shops force an 'S' into synonym names or an 'A' into alias names. Even if they do, it is usually overkill. Any good DBA already knows which objects are tables and which are views, and if he or she doesn't, a simple query against the system catalog will clarify the matter. For example, in DB2, this query will list all table-like objects:

SELECT NAME, CREATOR, 'TABLE'
FROM SYSIBM.SYSTABLES
WHERE TYPE = 'T'
UNION ALL
SELECT NAME, CREATOR, 'ALIAS'
FROM SYSIBM.SYSTABLES
WHERE TYPE = 'A'
UNION ALL
SELECT NAME, CREATOR, 'SYNONYM'
FROM SYSIBM.SYSSYNONYMS
UNION ALL
SELECT NAME, CREATOR, 'VIEW'
FROM SYSIBM.SYSTABLES
WHERE TYPE = 'V'
ORDER BY 3, 1;

Point: It is necessary to name views differently so that users understand that they are working with a view, on which not all operations can be performed.

Counterpoint: All operations can be performed on some views, but not all operations can be performed on all tables! What if the user does not have the authority to perform the operation? For example, what is the difference, from the user's perspective, between accessing a non-updateable view and accessing a table on which only the SELECT privilege has been granted?

Use It or Lose It

Another common problem with database naming conventions is unnecessary size restrictions. Using DB2 as an example, most objects can have a name up to 18 characters long (even more after moving to DB2 V8). But, in many instances, shops establish naming standards that do not utilize all of the characters available. This is usually unwise, which brings us to our third recommendation: Unless a compelling reason exists, ensure that your standards allow for the length of database object names to utilize every character available.

Here is a list of maximum and recommended lengths for names of each DB2 object:


------------------------------------------------------------------------
DB2 Object         Max Length (V7)   Max Length (V8)   Recommended Length
----------------   ---------------   ---------------   ------------------
STOGROUP                  8               128                128
Database                  8                 8                  8
Tablespace                8                 8                  8
Table                    18               128                128
View                     18               128                128
Alias                    18               128                128
Synonym                  18               128                128
Column                   18                30                 30
Check Constraint         18               128                128
Ref. Constraint           8               128                128
Index                    18               128                  8
------------------------------------------------------------------------

Notice that, except for indexes, the recommended length is equal to the maximum length for each object. Why are indexes singled out in DB2? This is an example of a compelling reason to bypass the general recommendation. Developers can explicitly name DB2 indexes, but they cannot explicitly name DB2 index spaces. Yet, every DB2 index requires an index space - and an index space name. The index space name will be implicitly generated by DB2 from the index name. If the index name is 8 characters or less in length, then the index space name will be the same as the index name. However, if the index name is longer than 8 characters, DB2 will use an internal, proprietary algorithm to generate a unique, 8-byte index space name. As this name cannot be determined prior to index creation, it is wise to limit the length of index names to 8 characters. Also, since the only folks who will be interested in the index space name are DBAs, there is no reason to make the names more descriptive for general end users... and DBAs are used to dealing with cryptic names (some of them even like it).
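
To see where DB2 has already generated index space names that differ from the index names, a query along these lines should do the trick; SYSIBM.SYSINDEXES records the index space name in its INDEXSPACE column:

SELECT CREATOR, NAME, INDEXSPACE
FROM SYSIBM.SYSINDEXES
WHERE NAME <> INDEXSPACE
ORDER BY CREATOR, NAME;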

This is a good example of the maxim that there are exceptions to every rule.

Another exception: users of DB2 V8 may want to impose V7 length restrictions on their objects. You might want to do this to discourage verbose names (128 bytes is a BIG extension over 18), or because you have applications that cannot easily use or display such wordy object or column names.

Embedded Meaning

One final troublesome naming convention used by some shops is embedding specialized meaning into database object names. The name of an object should reflect what that object is or represents. However, it should not attempt to define the object and describe all of its metadata.

With this in mind, it is time for another recommendation: Do not embed specialized meaning into database object names.

Let's examine this in more detail by means of an example. Some shops enforce DB2 index naming conventions where the type of index is embedded in the index name. Consider the following example:

Format: Xaaaaayz

Example: XORDITCU

Here X is a constant, aaaaa is a unique five-character description (perhaps derived from the table name), y is an indicator for clustering (either C for clustered or N for nonclustered), and z is an indicator for index type (P for primary key, F for foreign key, U for unique, and N for non-unique).

So, the example is a unique, clustering index on the ORD_ITEM table. Note two potential problem areas with this standard:

  • An embedded X identifies this object as an index.

  • Embedded meaning in the form of indicators detailing the type of index.

The embedded indicator character 'X', although unnecessary, is not as evil as indicator characters embedded in table-like object names. Indexes are not explicitly accessed by users, so obscure or difficult-to-remember naming conventions are not as big a problem. The same arguments hold true for tablespace names. In fact, indicator characters may actually be helpful here to ensure that tablespaces and indexes are never named the same: within the same database, DB2 does not permit a tablespace to have the same name as any index space, and if such a clash is attempted DB2 uses a name-generation algorithm to enforce uniqueness. So, if you must use indicator characters in database names, use them only in objects that are never explicitly accessed by end users.

The second potential problem area poses quite a bit of trouble. Consider the following cases which would cause the embedded meaning in the index name to be incorrect:

  • The primary key is dropped.

  • A foreign key is dropped.

  • The index is altered from non-unique to unique (or vice versa) using a database alteration tool.

In each of these cases we would have to rename the index, or we would have indexes with embedded characters that did not accurately depict the metadata for the index. Both are undesirable.

And there are additional problems to consider with this naming convention, as well. What if an index is defined for a foreign key, but is also unique? Should we use an 'F' or a 'U'? Or do we need another character?

The naming convention also crams in whether the index is clustering ('C') or not ('N'). This is not a good idea either, because misconceptions can occur. For example, in DB2, if no clustering index is explicitly defined, DB2 will use the first index created as a clustering index. Should this index be named with an embedded 'C' or not? And again, what happens if we switch clustering indexes? We would have to rename indexes.
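
Rather than encoding clustering in the name, let the catalog answer the question. SYSIBM.SYSINDEXES records both whether an index was defined as clustering (CLUSTERING) and whether the data is actually in clustering order (CLUSTERED), so a simple query - sketched here with parameter markers for the table creator and name - makes the embedded 'C' unnecessary:

SELECT NAME, CLUSTERING, CLUSTERED
FROM SYSIBM.SYSINDEXES
WHERE TBCREATOR = ?
AND TBNAME = ?;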

Let's look at one final example from the real world to better illustrate why it is a bad idea to embed specialized meaning into database object names. Consider what would happen if we named corporations based upon what they produce. When IBM began, they produced typewriters. If we named corporations like some of us name database objects, the company could have been named based upon the fact that they manufactured typewriters way back when. So IBM might be called TIBM (the T is for typewriters).

Now guess what, TIBM doesn't make typewriters any longer. What should we do? Rename the company or live with a name that is no longer relevant? Similar problems ensue with database object names over time.

Synopsis

Naming conventions evoke a lot of heated discussion. Everybody has their opinion as to what is the best method for naming database objects. Remember, though, that it is best to keep an open mind.

It is just like that old country novelty hit "A Boy Named Sue." Johnny Cash may have been upset that his father gave him a girl's name, but that was before he knew why. In a similar manner, I hope that this blog topic caused you to think about naming conventions from a different perspective. If so, I will consider it a success.

Tuesday, November 21, 2006

Character Versus Numeric Data Types

Most DBAs have faced the situation where one of their applications requires a four-byte code that is used to identify products, accounts, or some other business object, and all of the codes are numeric and will stay that way. But, for reporting purposes, users or developers wish the codes to print out with leading zeroes. So, the users request that the column be defined as CHAR(4) to ensure that leading zeroes are always shown. But what are the drawbacks, if any, to doing this?

Well, there are drawbacks! Without proper edit checks, INSERTs and UPDATEs could place invalid alphabetic characters into the alphanumeric column. This can be a very valid concern if ad hoc data modifications are permitted. Although it is uncommon to allow ad hoc modifications to production databases, data problems can still occur if all of the proper edit checks are not coded into every program that can modify the data. But let's assume (big assumption) that proper edit checks are coded and will never be bypassed. This removes the data integrity question.

There is another problem, though, that is related to performance and filter factors. Consider the possible number of values that a CHAR(4) column and a SMALLINT column can assume. Even if programmatic edit checks are coded for each, DB2 is not aware of these and assumes that all combinations of characters are permitted. DB2 uses base 37 math when it determines access paths for character columns, under the assumption that 26 alphabetic letters, 10 numeric digits, and a space will be used. This adds up to 37 possible characters. For a four-byte character column there are 37**4, or 1,874,161, possible values.

A SMALLINT column can range from -32,768 to 32,767, producing 65,536 possible small integer values. The drawback here is that negative codes and/or five-digit codes could be entered; neither conforms to the four-digit maximum. However, if we adhere to our proper edit check assumption, the data integrity problems will be avoided here, as well.

DB2 will use the HIGH2KEY and LOW2KEY values to calculate filter factors. For character columns, the range between HIGH2KEY and LOW2KEY is larger than it is for numeric columns because there are more total values. The filter factor will be larger for the numeric data type than for the character data type, which may influence DB2 to choose a different access path. For this reason, favor the SMALLINT definition over CHAR(4).
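
You can inspect the values DB2 uses for these calculations yourself. Here is a sketch of such a catalog query, using parameter markers for the table creator and name:

SELECT NAME, COLTYPE, COLCARDF, LOW2KEY, HIGH2KEY
FROM SYSIBM.SYSCOLUMNS
WHERE TBCREATOR = ?
AND TBNAME = ?;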

The leading zeroes problem can be solved using other methods. It is not necessary to store the data in the same format that users wish to display it. For example, when using QMF, you can ensure that leading zeroes are shown in reports by using the "J" edit code. Other report writers offer similar functionality. And report programs can be coded to display leading zeroes easily enough by moving the host variables to appropriate display fields.
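
SQL itself can also supply the leading zeroes at retrieval time. The DIGITS function returns a numeric value as a fixed-length character string padded with leading zeroes. A sketch, assuming a hypothetical SMALLINT column ACCT_CODE in a table ACCT_TABLE (DIGITS of a SMALLINT yields five digits, so SUBSTR trims the result to four):

SELECT SUBSTR(DIGITS(ACCT_CODE), 2, 4) AS ACCT_DISPLAY
FROM ACCT_TABLE;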

In general, it is wise to choose a data type which is closest to the domain for the column. If the column is to store numeric data, favor choosing a numeric data type: SMALLINT, INTEGER, DECIMAL, or floating point. Same goes for temporal data (that is, choose DATE, TIME, or TIMESTAMP instead of a character or numeric data type). In addition, always be sure to code appropriate edit checks to ensure data integrity - but remember, fewer need to be coded if you choose the correct data type because DB2 automatically prohibits data that does not conform to the data type for each column.

Wednesday, November 08, 2006

DB2 Access Paths and Change Management Procedures

No one working as a DB2 DBA or performance analyst would deny that one of the most important components of assuring efficient DB2 performance is making sure that DB2 access paths are appropriate for your DB2 programs. Binding programs with EXPLAIN(YES) specified is important to ensure that we know what access paths DB2 has chosen for each SQL statement. Without the information that EXPLAIN puts in the PLAN_TABLE we would be "flying blind."

Anyway, programs need to be rebound periodically to ensure that DB2 has formulated access paths based on up-to-date statistics and to ensure that you are taking advantage of all the latest and greatest DB2 optimizer features. Whether you are implementing changes into your DB2 applications, upgrading to a new version of DB2, or simply trying to achieve optimum performance for existing application plans and packages, an exhaustive and thorough REBIND management process is a necessity.

But not every organization does what is necessary to keep access paths up-to-date with the current state of their data. Oh, there are always reasons given as to why the acknowledged “best practice” of REORG/RUNSTATS/REBIND is not followed religiously. But these reasons are not always reasonable - especially when there are approaches to overcome them.

But let's approach this subject from another perspective: that of change management procedures. In a mainframe shop, everything we do is tightly controlled. If we make even a minor change to an application program, that program is thoroughly tested before it ever reaches a production environment. The program progresses through unit testing, QA testing, volume testing, and so on. As developers, we do a good job of testing a change to minimize the risk that the change might have unintended consequences.

We do the same type of due diligence with most other changes in the mainframe world. Database changes are planned and thoroughly tested. Many shops use change manager software to ensure that database changes are implemented appropriately and in a controlled manner. System software (e.g. CICS, WebSphere, etc.), including subsystem and DB2 changes, are all subject to strict change control procedures. This is done to minimize disruption to the production work being conducted by our business folks.

But, if you think about it, there is one exception to this tight change control environment: Binds and Rebinds are typically done in the production environments without the benefit of oversight or prior testing. This lack of change control results in unpredictable performance impacts. In most shops, programs are moved to production and bound there. Indeed, we are at the mercy of the DB2 optimizer, which generates access paths on the fly when we Bind or Rebind our programs. Any issues with inefficient access paths are then dealt with in a reactive mode. That is, problems are addressed after the fact.

One of the biggest reasons for not implementing strict change control processes for access paths is the lack of built-in methods for ensuring access path change control discipline. Let’s face it, manually evaluating thousands of packages and tens of thousands of SQL statements can be quite impractical. But there are things that can be done to help alleviate this problem.

Rebinding does not always produce DB2 performance improvements - and in some cases rebinding can cause DB2 performance to degrade. Typically, 75% to 90% of all rebinds are unnecessary. Bind ImpactExpert (from NEON Enterprise Software) manages the bind and rebind processes to assure optimal application performance by checking what the access path changes will be before you rebind, and then rebinding only those programs whose performance would improve. How does it do this?

Well, Bind ImpactExpert helps to eliminate the unpredictable results of rebinds that DBAs experience daily. By filtering out the rebinds that are likely to have a negative impact on DB2 performance, Bind ImpactExpert guarantees improved or consistently performing DB2 applications.

And one of the most popular features of Bind ImpactExpert is its ability to eliminate the unpredictability of rebinds when moving between DB2 releases. The EarlyPrecheck feature assists you in moving to new DB2 releases by allowing you to evaluate and correct access path problems for both dynamic and static SQL prior to an actual DB2 version upgrade. You can perform access path evaluation weeks or months prior to the migration date to allow for in-depth analysis and pre-emptive correction of access path issues.

Bind ImpactExpert can also stop unnecessary binds. It does this by determining whether a DBRM contains changed SQL and skipping the bind step for those that do not. This process eliminates unnecessary binds, optimizes and accelerates the compile procedure, reduces CPU usage, and reduces locks on the DB2 catalog.

All forward-thinking organizations should adopt a liberal Rebind process to ensure optimal access paths based on up-to-date statistics. Keeping abreast of data changes and making sure that your programs are optimized for the current state of the data is the best approach. This means regular executions of RUNSTATS, REORG, and Rebind.
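
Mechanically, the trio boils down to two utility control statements followed by a DSN subcommand. Here is a rough sketch, with hypothetical database, table space, and collection names:

REORG TABLESPACE ORDDB.ORDTS
RUNSTATS TABLESPACE ORDDB.ORDTS TABLE(ALL) INDEX(ALL)
REBIND PACKAGE(ORDCOLL.*)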

So, if you are not rebinding your programs on a regular basis because you are afraid of degrading a few access paths, it is time to take a look at Bind ImpactExpert. If you never Rebind, not only are you forgoing better access paths due to data changes but you are also forgoing better access paths due to changes to DB2 itself. I mean, why would you want to penalize every DB2 program in your subsystem for fear that a program or two may have a few degraded access paths? Especially when NEON Enterprise Software offers an automated solution to the problem...

The bottom line is this: failing to keep your access paths aligned with your data is a sure recipe for declining DB2 application performance.

Monday, November 06, 2006

Try Out the XML Capability of DB2 9 for Free

Are you aware that there is a version of DB2 that you can use free of charge? It is called DB2 Express-C and it is basically IBM's way of removing price as an obstacle to trying out and using DB2. Think of it as a way to use DB2 just like you would use an open source DBMS (except you don't get the source code).

According to IBM: DB2 Express-C is a version of DB2 Express Edition (DB2 Express) for the community. DB2 Express-C is a no-charge data server for use in development and deployment of applications including: XML, C/C++, Java, .NET, PHP, and more. DB2 Express-C can be run on up to 2 dual-core CPU servers, with up to 4 GB of memory, any storage system setup and with no restrictions on database size or any other artificial restrictions.

So, if you are wondering what it means for DB2 to support pureXML, you might want to download DB2 Express-C and try it out for yourself. DB2 Express-C can run on AIX, HP-UX, Linux, Solaris, and Windows.

Now why would I be writing about a LUW product on a z/OS blog? Well, DB2 9 for LUW and DB2 9 for z/OS both support pureXML in the same way. So even if you are a DB2 for z/OS user, getting familiar with the XML support in DB2 Express-C can prepare you to help plan for how you might want to use XML in DB2 9 for z/OS when it becomes available.

Monday, October 30, 2006

New DB2 for iSeries Redbooks

I don't usually cover DB2 on the iSeries (you may still be calling it the DB2/400 or DB2 for the AS/400). But this week saw the release of two new redbooks on DB2 for iSeries, so I thought I'd mention them briefly here.

The first new redbook covers stored procedures, triggers, and user-defined functions, and it is called, appropriately enough, Stored Procedures, Triggers, and User-Defined Functions on DB2 Universal Database for iSeries. It covers some of the announced features for stored procedures, triggers, and UDFs in V5R1, V5R2, V5R3, and V5R4, and it offers suggestions, guidelines, and practical examples on how to effectively develop DB2 for iSeries stored procedures, triggers, and UDFs.

The Ins and Outs of XML and DB2 for i5/OS is the second of the new DB2 for iSeries redbooks. As you might guess, this redbook focuses on XML. It covers the challenges of representing XML hierarchies in a relational DBMS and provides an in-depth explanation of the most popular approaches to bridging the hierarchy/relational dilemma.

Also, if you are interested in XML and DB2 you might also want to check out the recently-published DB2 9 pureXML Guide, which covers pureXML in DB2 for LUW.

Friday, October 27, 2006

IBM Second in Software Sales

The San Francisco Chronicle reported today that IBM is officially the second largest software provider, bypassing former number two Oracle, but still behind number one Microsoft:

In its most recent quarter, IBM software generated $4.4 billion in revenue, and it's the company's main profit generator, with an 85 percent profit margin. Thirty-seven percent of IBM's profit comes from software. In 2005, it sold $15.8 billion worth of software, topping the $14.4 billion that Oracle posted in fiscal 2006, which ended June 30.

I find this news interesting. Oftentimes IBM is excluded from software company discussions because it also sells hardware and services, so it is not always viewed as an ISV (independent software vendor). But I always thought this was unfair to IBM. After all, Oracle sells services, too. And Microsoft sells hardware, too - I mean, what is an XBOX if not hardware?

So give IBM their due... they are number two!

Thursday, October 26, 2006

DB2 9 for z/OS Article

Just a short blog post here to point everyone to an article I wrote for DB2 Magazine offering a high-level overview of what is coming with DB2 9 for z/OS. Note that there is no "V" before the "9" - that is courtesy of IBM marketing...

Anyway, the article is titled DB2 9 for z/OS Roars to Life. Check it out for a short introductory article covering the highlights of the next version of DB2 for z/OS, including coverage of XML, online schema evolution advances, new query capabilities, security and performance improvements, and a discussion of what features will be removed from DB2 9.

Monday, October 23, 2006

Check out New IBM Information Management Blog Site

IBM has launched a new blog site for Information Management. The site offers multiple blogs written by IBM product managers, technical architects, marketing managers and strategists. These folks will blog about issues and trends in information management... as well as IBM products that support information management.

Recent posts cover last week's IBM Information on Demand conference, Informix 4-GL, and the impact of unstructured information on business intelligence. I'm sure I'll be checking this site out regularly for tidbits of news on what IBM is up to in terms of managing data and information.

Sunday, October 22, 2006

DB2 9 pureXML Guide

Another new redbook that should be of interest to DB2 for z/OS users is the DB2 9 pureXML Guide. Although this redbook covers the pureXML feature from a DB2 LUW perspective, pureXML will also be a feature of DB2 9 for z/OS when it becomes generally available next year (2007).

The redbook discusses the pureXML data store, hybrid database design and administration. It also describes XML schema, industry standards, and how to manage XML schemas. Other features and examples covered include SQL/XML, XPath, and XQuery...

XQuery is the only one of these features that will not be included in DB2 9 for z/OS...

Saturday, October 21, 2006

DB2 for z/OS: Data Sharing in a Nutshell

Just a short blog entry this week to let everyone know about a new IBM redbook titled "DB2 for z/OS: Data Sharing in a Nutshell."

This redbook is an essential read for anyone who is interested in expanding their knowledge of DB2's usage of data sharing. According to the redbook website, "(t)his IBM Redbook is meant to facilitate your journey towards data sharing by providing a cookbook approach to the main tasks in enabling data sharing and workload balancing. It does not have all the answers, because it is a brief summary of a large field of knowledge, but it contains the key elements and it points you in the right direction to get more details. Throughout this document we assume that your sysplex environment is set up and a DB2 subsystem exists at a currently supported level."

This redbook offers 8 chapters highlighting how to implement and tune data sharing for optimum DB2 performance and availability. Download the manual (SG24-7322) for free from the IBM Redbooks website.

Tuesday, October 10, 2006

IDUG Europe 2006 Synopsis

For those folks who do not follow my other blogs, I just wanted to make sure that you saw the series of posts I ran summarizing the European IDUG conference last week in Vienna, Austria. Here are the links for the four posts:

If, after reading about the just-completed European IDUG, you want to attend an IDUG event yourself, here are links to IDUG's upcoming conferences:

Hopefully I'll see you at an IDUG conference next year!

Wednesday, October 04, 2006

IBM Plans Initiative to Keep Mainframes Humming Along

Just read this story on Yahoo news about IBM's effort to simplify programming for mainframe computers. The story talks about IBM's ongoing efforts to keep the mainframe relevant and usable. Even though it uses a lot of condescending terms, referring to mainframes as "warhorses" and "dinosaurs," the article is worth a quick read if you are a mainframe proponent.

The best news is that IBM plans to spend $100 million over the next five years to simplify mainframe operations and management. Now that is good news, indeed.

Sunday, September 24, 2006

Sequence Objects and Identity Columns

When designing DB2 databases a frequently heard request is for a column that contains sequentially generated numbers. For example, each row has a counter associated with it. When a new row is inserted, the counter should be incremented by one for the new row. This way, each new DB2 row has a unique “row number” associated with it. Until recently such a design was difficult to deliver.

Without sequence objects or identity columns an application program can implement similar functionality, but usually not in a manner that performs adequately as database usage scales. A common technique is to maintain a one-row table that contains the sequence number in a single column (SEQ_NBR in the example below). Each transaction locks that table, increments the number, and then commits the change to unlock the table. In this scenario only one transaction at a time can increment the sequence number. A variation uses something like this:

SELECT MAX(SEQ_NBR) + 1
FROM ONEROW_TABLE
WITH RR;

The result is the next highest number to be used. This value is used by the application and ONEROW_TABLE must be updated with the incremented value. Performance bottlenecks will occur with this method when a lot of concurrent usage is required.

But now DB2 offers two methods of automatically generating sequential numbers for a column:

  • Identity columns
  • SEQUENCE objects

Identity Columns

Identity columns were formally added to DB2 as of Version 7, but were actually available as of the DB2 Version 6 refresh. The identity property is applied to a DB2 column using the IDENTITY parameter. A column thusly defined will cause DB2 to automatically generate a sequential value for that column when a row is added to the table. For example, identity columns might be used to generate primary key values or a value that somewhat mimics Oracle’s row number capability. Using identity columns helps to avoid some of the concurrency and performance problems that can occur when application programs are used to populate sequential values for a “counter” column.

When inserting data into a table that uses an identity column, the program or user will not provide a value for the identity column. Instead, DB2 automatically generates the appropriate value to be inserted.

Only one identity column can be defined per DB2 table. Additionally, the data type of the column must be SMALLINT, INTEGER, or DECIMAL with a zero scale, that is DECIMAL(n,0). The data type also can be a user-defined DISTINCT type based on one of these numeric data types. The designer has control over the starting point for the generated sequential values, and the number by which the count is incremented.

An example creating a table with an identity column follows:

CREATE TABLE EXAMPLE
(ID_COL INTEGER NOT NULL
GENERATED ALWAYS AS IDENTITY
START WITH 100
INCREMENT BY 10 ...);

In this example, the identity column is named ID_COL. The first value stored in the column will be 100 and subsequent INSERTs will add 10 to the last value. So the identity column values generated will be 100, 110, 120, 130, and so on.

Note, too, that each identity column has a property associated with it assigned using the GENERATED parameter. This parameter indicates how DB2 generates values for the column. You must specify GENERATED if the column is to be considered an identity column or the data type of the column is a ROWID. This means that DB2 must be permitted to generate values for all identity columns. There are two options for the GENERATED parameter: ALWAYS and BY DEFAULT.

GENERATED ALWAYS indicates that DB2 will always generate a value for the column when a row is inserted into the table. You will usually specify ALWAYS for your identity columns unless you are using data propagation.

GENERATED BY DEFAULT indicates that DB2 will generate a value for the column when a row is inserted into the table unless a value is specified. So, if you want to be able to insert an explicit value into an identity column you must specify GENERATED BY DEFAULT.
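
For example, given the EXAMPLE table above (and assuming a hypothetical second column named DESCRIPTION), the first INSERT below works with either option, while the second would be valid only if ID_COL were defined as GENERATED BY DEFAULT:

INSERT INTO EXAMPLE (DESCRIPTION)
VALUES ('DB2 GENERATES THE ID');

INSERT INTO EXAMPLE (ID_COL, DESCRIPTION)
VALUES (500, 'EXPLICIT ID - BY DEFAULT ONLY');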

Additionally, you can specify what to do when the maximum value is hit. Specifying the CYCLE keyword will cause DB2 to begin generating values from the minimum value all over again. Of course, this can cause duplicate values to be generated and should only be used when uniqueness is not a requirement.

Actually, the only way to ensure uniqueness of your identity columns is to create a unique index on the column. The IDENTITY property alone will not guarantee uniqueness.

Sometimes it is necessary to retrieve the value of an identity column immediately after it is inserted. For example, if you are using identity columns for primary key generation you may need to retrieve the value to provide the foreign key of a child table row that is to be inserted after the primary key is generated. DB2 provides the IDENTITY_VAL_LOCAL() function that can be used to retrieve the value of an identity column after insertion. For example, you can run the following statement immediately after the INSERT statement that sets the identity value:

VALUES IDENTITY_VAL_LOCAL() INTO :IVAR;

The host variable IVAR will contain the value of the identity column.
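
For example, here is a sketch of the parent/child scenario using hypothetical ORDER_HDR and ORDER_LINE tables, where ORDER_NO is the parent's identity column:

INSERT INTO ORDER_HDR (CUST_NO)
VALUES (:CUSTNO);

VALUES IDENTITY_VAL_LOCAL() INTO :ORDERNO;

INSERT INTO ORDER_LINE (ORDER_NO, ITEM_NO)
VALUES (:ORDERNO, :ITEMNO);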

Problems with Identity Columns

Identity columns can be useful, depending on your specific needs, but the problems that accompany identity columns are numerous. Some of these problems include:


  • Handling the loading of data into a table with an identity column defined as GENERATED BY DEFAULT. The next identity value to be assigned by DB2 may not be the correct value that should be generated. This can be especially troublesome in a testing environment.
  • LOAD INTO PART x is not allowed if an identity column is part of the partitioning index.
  • What about environments that require regular loading and reloading (REPLACE) for testing? The identity column will not necessarily hold the same values for the same rows from test to test.
  • Prior to V8, it was not possible to change the GENERATED parameter (such as from GENERATED BY DEFAULT to GENERATED ALWAYS).
  • The IDENTITY_VAL_LOCAL() function returns the value used for the last insert to the identity column, but it works only after a singleton INSERT. This means you cannot rely on this function with INSERT INTO ... SELECT FROM or LOAD.
  • When the maximum value is reached for the identity column, DB2 will cycle back to the beginning and begin reassigning values - which might not be the desired approach.

If you can live with these caveats, then identity columns might be useful to your applications. In general, though, these "problems" make identity columns a very niche solution. IBM intends to rectify some of these problems over time in upcoming versions of DB2.

SEQUENCE Objects

But remember, DB2 has two methods of automatically generating sequential numbers. The first method is to define an identity column for the table; the second is to create a SEQUENCE object. A SEQUENCE object is a separate structure that generates sequential numbers.

New to DB2 V8, a SEQUENCE is a database object specifically created to generate sequential values. So, using a SEQUENCE object requires the creation of a database object; using an identity column does not.

A SEQUENCE object is created using the CREATE SEQUENCE statement.

When the SEQUENCE object is created it can be used by applications to “grab” a next sequential value for use in a table. SEQUENCE objects are ideal for generating sequential, unique numeric key values. A sequence can be accessed and incremented by many applications concurrently without the hot spots and performance degradation associated with other methods of generating sequential values.

Sequences are designed for efficiency and to be used by many users at the same time without causing performance problems. Multiple users can concurrently and efficiently access SEQUENCE objects because DB2 does not wait for a transaction to COMMIT before allowing the sequence to be incremented again by another transaction.

An example creating a SEQUENCE object follows:

CREATE SEQUENCE ACTNO_SEQ
AS SMALLINT
START WITH 1
INCREMENT BY 1
NO MAXVALUE
NO CYCLE
CACHE 10;

This creates the SEQUENCE object named ACTNO_SEQ. Now it can be used to generate a new sequential value, for example:

INSERT INTO DSN8810.ACT (ACTNO, ACTKWD, ACTDESC)
VALUES
(NEXT VALUE FOR ACTNO_SEQ, 'TEST', 'Test activity');


The NEXT VALUE FOR clause is known as a sequence expression. Coding the sequence expression causes DB2 to use the named SEQUENCE object to automatically generate the next value. You can use a sequence expression to request the previous value that was generated. For example:

SELECT PREVIOUS VALUE FOR ACTNO_SEQ
INTO :IVAR
FROM DSN8810.ACT;


As you can see, sequence expressions are not limited to INSERT statements, but can be used in UPDATE and SELECT statements, too.

Caution: If you specify the NEXT VALUE FOR clause more than once in the same SQL statement DB2 will return the same value for each NEXT VALUE FOR specification.

SEQUENCE Object Parameters

Similar to identity columns, a SEQUENCE object has parameters to control the starting point for the generated sequential values and the number by which the count is incremented. You can also specify the data type to be generated (the default is INTEGER), as well as a minimum value (MINVALUE) and a maximum value (MAXVALUE) if you wish to have further control over the values than is provided by the data type chosen.

Again, as with identity columns, you can specify how the SEQUENCE should handle running out of values when the maximum value is hit. Specifying the CYCLE keyword will cause the SEQUENCE object to wrap around and begin generating values from the minimum value all over again.

A final consideration for SEQUENCE objects is caching. Sequence values can be cached in memory to facilitate better performance. The size of the cache specifies the number of sequence values that DB2 will pre-allocate in memory. In the previous example CACHE 10 indicates that ten sequence values will be generated and stored in memory for subsequent use. Of course, you can turn off caching by specifying NO CACHE. With caching turned off each new request for a sequence number will cause I/O to the DB2 Catalog (SYSIBM.SYSSEQUENCES) to generate the next sequential value.

SEQUENCE Object Guidelines

DB2 does not wait for an application that has incremented a sequence to commit before allowing the sequence to be incremented again by another application. Applications can use one sequence for many tables, or create multiple sequences, one for each table requiring generated key values. In either case, the applications control the relationship between the sequences and the tables.

The name of the SEQUENCE object indicates that we are going to use it to generate activity numbers (ACTNO), but its usage is not limited to that. Of course, failure to control the use of a SEQUENCE object can result in gaps in the sequential values. For example, if we use the ACTNO_SEQ object to generate a number for a different column, the next time we use it for ACTNO there will be a gap where we generated that number.


Other scenarios can cause gaps in a SEQUENCE, too. For example, issuing a ROLLBACK after acquiring a sequence number will not roll back the value of the sequence generator - so that value is lost. A DB2 failure can also cause gaps because cached sequence values will be lost.

Please note, too, that when sequences were introduced in non-mainframe DB2, syntax was supported that did not conform to the SQL standard. This non-standard syntax is supported on the mainframe as well:

  • NEXTVAL can be used in place of NEXT VALUE; and
  • PREVVAL can be used in place of PREVIOUS VALUE.
Choosing Between IDENTITY and SEQUENCE

Although both identity columns and SEQUENCE objects are useful for generating incremental numeric values, you will be confronted with situations where you will have to choose between the two. Consider the following criteria for when to use one instead of the other. Identity columns are useful when:

  • Only one column in a table requires automatically generated values
  • Each row requires a separate value
  • An automatic generator is desired for a primary key of a table
  • The LOAD utility is not used to load data into the table
  • The process of generating a new value is tied closely to inserting into a table, regardless of how the insert happens

SEQUENCE objects are useful when:

  • Values generated from one sequence are to be stored in more than one table
  • More than one column per table requires automatically generated values (multiple values may be generated for each row using the same sequence or more than one sequence)
  • The process of generating a new value is independent of any reference to a table

Unlike SEQUENCE objects, which are more flexible, identity columns must adhere to several rigid requirements. For example, an IDENTITY column is always defined on a single table, and each table can have at most one IDENTITY column. Furthermore, when you create an IDENTITY column, the data type for that column must be numeric; not so for sequences. If you used a SEQUENCE object to generate a value, you could put that generated value into a CHAR column, for example. Finally, when defining an IDENTITY column you cannot specify the DEFAULT clause, and the column is implicitly defined as NOT NULL. Remember, DB2 automatically generates the IDENTITY column's value, so default values and nulls are not useful concepts.
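
For example, here is a sketch of that flexibility, assuming a hypothetical REF_CODE table with a character column; the sequence generates a number and CHAR converts it for storage:

INSERT INTO REF_CODE (CODE_VALUE)
VALUES (CHAR(NEXT VALUE FOR ACTNO_SEQ));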

Summary

Both identity columns and SEQUENCE objects can be used to automatically generate sequential values for DB2 columns. Prior to Version 8, identity columns were your only option. Once you move to V8, however, SEQUENCE objects provide more flexibility and are easier to use than identity columns.

Happy sequential value generation with DB2!

Friday, September 15, 2006

The Path to an Executable DB2 Program

Here is another Q+A exchange that I thought might be useful to share with everyone here on the blog:

QUESTION: Could you please explain the difference between a package and a plan, the process of precompilation, compilation and running a COBOL DB2 program as well as a COBOL CICS DB2 program?

ANSWER: Well, I'll hit the highlights in response to your question but if you really want an in-depth answer then you should read the DB2 manuals (or a book like DB2 Developer's Guide).

What is a plan?

A plan is an executable module containing the access path logic produced by the DB2 optimizer. It can be composed of one or more DBRMs and packages. Before a DB2 for z/OS program (with static SQL) can be run, it must have a plan associated with it.

Plans are created by the BIND command. The plan is stored in the DB2 directory and accessed when its program is run. Information about the plan is stored in the DB2 catalog.

What is a package?

A package is a single, bound DBRM with optimized access paths. By using packages, the table access logic is "packaged" at a lower level of granularity than a plan -- at the package (or program) level.

To execute a package, you first must include it in the package list of a plan (usually; there are some exceptions, such as triggers and user-defined functions). In general, packages are not directly executed; they are only indirectly executed when the plan in which they are contained executes -- as discussed previously, UDFs and triggers are exceptions to this rule. A plan can consist of one or more DBRMs, one or more packages, or a combination of packages and DBRMs.
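
For example, the binds for a simple batch application might be sketched as follows; the collection, member, and plan names are hypothetical:

BIND PACKAGE(ORDCOLL) MEMBER(ORDPGM1) VALIDATE(BIND) ISOLATION(CS)

BIND PLAN(ORDPLAN) PKLIST(ORDCOLL.*) ACTION(REPLACE)
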
What is program preparation?

A DB2 application program must go through a process known as program preparation before it can run successfully. Program preparation requires a series of code preprocessors that, when executed in the proper sequence, create an executable load module and a DB2 application plan. Both the executable load module and the application plan are required before any DB2 program can be run, whether batch or online. CICS programs require an additional preprocessing step.

The first step is precompilation. DB2 programs must be parsed and modified before normal compilation. The DB2 precompiler performs this task. At a high level, the precompiler basically searches for SQL, strips it out of the program and puts it into a DBRM. The output of precompilation is a DBRM with SQL and a modified source program.

The DBRM is bound using the BIND command, which is a type of compiler for SQL statements. In general, BIND reads SQL statements from DBRMs and produces a mechanism to access data as directed by the SQL statements being bound.

The modified source is compiled using the language compiler of your choice. The compiled source then is link-edited into an executable load module. The appropriate DB2 host language interface module also must be included in the link-edit step. This interface module is based on the environment (TSO, CICS, or IMS/TM) in which the program will execute.

At the end of this series of steps, if all goes well, you will have an executable DB2 program.

Wednesday, September 06, 2006

Help for SAP Shops Using DB2 for z/OS

Just a quick blog post today to alert folks to a relatively new blog focusing entirely on SAP usage with DB2 for z/OS. According to the blogger, Omer Brandis:

Whether you have already implemented SAP on DB2 for z/OS, or are just seriously considering it, this is the blog for you. This blog will deal with real issues from the point of view of a true real-world professional, discussing the good, bad, and ugly of SAP on DB2 for z/OS.

Recent posts have covered offsite recovery, reorganizing SAP OFFICE, and hidden REORG jobs. If you use SAP and DB2 on the mainframe, be sure to check back in on this blog on a regular basis (no pun intended).

Friday, September 01, 2006

The Two Biggest DB2 Performance Things

DB2 performance is one of those perennial topics that people just can't seem to get enough of. I guess that is because the performance of applications is one of the bigger issues that end users complain about. And DBAs can be heroes if they can resolve performance problems quickly. It also could be that performance problems are so ubiquitous because people keep on making the same design and coding mistakes...

With all of that in mind, let's take a look at what I think are the two biggest things you need to get control of to keep DB2 and SQL performance in check.

(1) Keep DB2 statistics up-to-date

Without the statistics stored in the DB2 system catalog, the optimizer will have a difficult time optimizing anything. These statistics provide the optimizer with information pertinent to the state of the tables that the SQL statement being optimized will access. The types of statistical information stored in the system catalog include:
  • Information about tables, including the total number of rows, information about compression, and total number of pages;
  • Information about columns, including number of discrete values for the column and the distribution range of values stored in the column;
  • Information about table spaces, including the number of active pages;
  • Current status of the index, including whether an index exists, the organization of the index (number of leaf pages and number of levels), the number of discrete values for the index key, and whether the index is clustered;
  • Information about the table space and index node groups or partitions.
Statistics populate the DB2 system catalog when you execute the RUNSTATS utility. Generally, this utility is invoked in batch jobs. Be sure to work with your DBA to ensure you accumulate statistics at the appropriate time, especially in a production environment.
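
If you are unsure how current your statistics are, the catalog itself can tell you. Here is a sketch of such a query (the database name is hypothetical); STATSTIME records when RUNSTATS last updated each row:

SELECT NAME, CARDF, NPAGES, STATSTIME
FROM SYSIBM.SYSTABLES
WHERE DBNAME = 'ORDDB'
AND TYPE = 'T'
ORDER BY STATSTIME;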

(2) Build appropriate indexes

Perhaps the most important thing you can do to assure optimal DB2 application performance is create correct indexes for your tables based on the queries your applications use. Of course, this is easier said than done. But we can start with some basics. For example, consider this SQL statement:

SELECT LASTNAME, SALARY
FROM EMP
WHERE EMPNO = '000010'
AND DEPTNO = 'D01' ;


What index or indexes would make sense for this simple query? First, think about all the possible indexes that you could create. Your first short list probably looks something like this:

  • Index1 on EMPNO
  • Index2 on DEPTNO
  • Index3 on EMPNO and DEPTNO
This is a good start, and Index3 is probably the best of the lot. It lets DB2 use the index to immediately look up the row or rows that satisfy the two simple predicates in the WHERE clause. Of course, if you already have a lot of indexes on the EMP table, you might want to examine the impact of creating yet another index on the table. Factors to consider include:

Modification impact: DB2 will automatically maintain every index you create. This means every INSERT and every DELETE to this table will insert and delete not just from the table, but also from its indexes. And if you UPDATE the value of a column that is in an index, you also update the index. So, indexes speed the process of retrieval but slow down modification.

Columns in the existing indexes: If an index already exists on EMPNO or DEPTNO, it might not be wise to create another index on the combination. However, it might make sense to change the other index to add the missing column. But not always, because the order of the columns in the index can make a big difference depending on the query. For example, consider this query:

SELECT LASTNAME, SALARY
FROM EMP
WHERE EMPNO = '000010'
AND DEPTNO > 'D01';

In this case, EMPNO should be listed first in the index. And DEPTNO should be listed second, allowing DB2 to do a direct index lookup on the first column (EMPNO) and then a scan on the second (DEPTNO) for the greater-than.

Furthermore, if indexes already exist for both columns (one for EMPNO and one for DEPTNO), DB2 can use them both to satisfy this query so creating another index might not be necessary.

Importance of this particular query: The more important the query, the more you might want to tune by index creation. If you are coding a query that the CIO will run every day, you want to make sure it delivers optimal performance. So building indexes for that particular query is important. On the other hand, a query for a clerk might not necessarily be weighted as high, so that query might have to make do with the indexes that already exist. Of course, the decision depends on the application's importance to the business, not just on the user's importance.

Index design involves much more than I have covered so far. For example, you might consider index overloading to achieve index-only access. If all the data that a SQL query asks for is contained in the index, DB2 might be able to satisfy the request using only the index. Consider our previous SQL statement. We asked for LASTNAME and SALARY, given information about EMPNO and DEPTNO. And we also started by creating an index on the EMPNO and DEPTNO columns. If we include LASTNAME and SALARY in the index as well, we never need to access the EMP table because all the data we need exists in the index. This technique can significantly improve performance because it cuts down on the number of I/O requests.
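
For our sample query, an overloaded index might be sketched like this; the index name is hypothetical, and remember that the wider the index, the more expensive it is to maintain:

CREATE INDEX XEMP2
ON EMP (EMPNO, DEPTNO, LASTNAME, SALARY);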

Keep in mind that making every query an index-only access is not prudent or even possible. You should save this technique for particularly troublesome or important SQL statements.

Summary

If you are just embarking on your journey into the wonderful world of DB2 performance management, please start with the two items covered in this short blog entry. I've just scratched the surface of both areas, and you can benefit from additional research and education in both statistics gathering and index design. And if you are a long-time DB2 professional, it can't hurt to bone up on these topics either. You might learn about some newer DB2 feature or function that you haven't used yet, or maybe just strengthen what you already know.

So what do you think? Are these the two most important DB2 performance things, or do you think something else is more important? Post your comments below...

Thursday, August 31, 2006

Accessing Partitioned Data

One area that tends to confuse some DB2 developers until they gain experience is how DB2 partitioning works. A common question I get goes something like this: “If a table is in a partitioned table space -- say, four partitions, for example -- can I process the table in batch by running four instances of the batch program in parallel, one against each partition? What do I code to do this?”

Well, the short and sweet answer to this question is “Yes, you can run four instances of a batch program in parallel if you so desire.” But there is a nuance here that might be missed. The question lurking beneath the question is this: “How can I make sure that each batch program accesses only the data in its one partition?”

To do this requires some programming work. The program will need to have a means of identifying which partition it should run against. So, you might code the program to accept an input parameter of 1, 2, 3, or 4. The program would read the parameter and translate it into the key range of values that should be read by the program. This is the LIMITKEY value for the particular partition as found in the partitioning index. You can retrieve this value from the DB2 catalog using the following query:

SELECT PARTITION, LIMITKEY
FROM SYSIBM.SYSINDEXPART
WHERE IXNAME = ?
AND IXCREATOR = ?
ORDER BY PARTITION;

Supply the index name and creator and this query will return the partition number and LIMITKEY for each partition. (If you include this query in the program, you probably will want to include the PARTITION column in the WHERE clause and return only a single row.) The LIMITKEY is the high key value for that partition. Using this information, you will need to write the queries in the program such that only values from the partition being processed are retrieved. As long as the program adheres to that key range, it should process data only from the one partition that holds that data.
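For example, the cursor in the program might be coded along these lines -- a sketch only, with hypothetical table, column, and host variable names (for the first partition you would drop the low-key predicate):

DECLARE CSR1 CURSOR FOR
  SELECT COL1, COL2
  FROM MYAPP.MYTABLE
  WHERE KEYCOL > :LOWKEY
  AND KEYCOL <= :HIGHKEY;

Here :HIGHKEY would contain the LIMITKEY for the partition being processed, and :LOWKEY the LIMITKEY of the preceding partition.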

Of course, none of this is necessary to get DB2 to operate in parallel. The best approach uses DB2 query parallelism because it minimizes the amount of programming work and has the potential to maximize performance. To signal DB2 to turn on parallelism, BIND your program specifying the DEGREE(ANY) parameter. Then DB2 will choose the degree of parallelism for the batch program. You will only need to run the program once (instead of four times as in our example); DB2 will figure out how many tasks it can run in parallel for each query in the program. And there is no need to modify the program at all! This is far simpler than any alternative approach because DB2 handles breaking up the work into parallel tasks for you.
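For example, the BIND might look something like this sketch (the plan and member names are hypothetical):

BIND PLAN(BATCHPLN) MEMBER(BATCHPGM) DEGREE(ANY)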

Tuesday, August 29, 2006

How are Indexes Being Used?

In keeping with my promise to periodically post blog entries based on questions I have received, here is another one that I have been asked:

If I have five indexes on a table, what is the best way to determine if all, some or none of the indexes are being used?

Here is the answer I sent:

The best approach would be to make sure you have run EXPLAIN on all of your production plans and packages. Then examine the PLAN_TABLE output for those indexes. For example:

SELECT *
FROM my.PLAN_TABLE
WHERE ACCESSNAME IN (put your list of indexes here);
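You can also turn the question around and look for indexes that never show up in any EXPLAINed access path. Here is a sketch of that approach (the table creator and PLAN_TABLE owner are hypothetical, and the ACCESSTYPE list covers the common index access types):

SELECT I.NAME
FROM SYSIBM.SYSINDEXES I
WHERE I.TBCREATOR = 'MYAPP'
AND I.NAME NOT IN
    (SELECT P.ACCESSNAME
     FROM my.PLAN_TABLE P
     WHERE P.ACCESSTYPE IN ('I', 'I1', 'N', 'MX')
     AND P.ACCESSNAME IS NOT NULL);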

Of course, this will not show the dynamic SQL that uses any of these indexes. There are tools on the market that can help you to gather SQL usage statistics and access path information for both static and dynamic SQL statements.

You might consider acquiring one of these tools if you use a lot of dynamic SQL. One such offering is NEON Enterprise Software’s SQL Performance Expert solution.

Thursday, August 24, 2006

VARCHAR versus Compression

A couple of days ago I posted a blurb giving advice on using variable character columns in DB2. After thinking about the topic a little bit more, I decided to post a follow-on topic: namely, comparing the use of VARCHAR to DB2 compression.

Even though these are two entirely different "things," they are each probably done for similar reasons - to save disk storage. VARCHAR does this by adjusting the size of the column to fit the actual length of the text being stored; compression does this by running rows of data through an algorithm to minimize their length. For those interested in the details of compression, I refer you to Willie Favero's excellent blog, where he has written a three-part series on the topic -- here are the links: part one, part two, and part three.

So, what advice can I give on comparing the two? Well, you might want to consider forgoing variable columns and turning on compression instead. Variable columns always add overhead: there is a two-byte prefix for every VARCHAR column to store its length. If instead you use CHAR and turn on compression, you no longer need the extra two bytes per variable column per row.
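Compression is specified at the table space level. A minimal sketch, with hypothetical database and table space names:

ALTER TABLESPACE MYDB.MYTS COMPRESS YES;

Keep in mind that existing rows are not compressed until the compression dictionary is built, which typically happens the next time you run LOAD or REORG against the table space.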

Also, keep in mind that compression reduces the size of the entire row. So not only will you be compressing the CHAR column (that used to be VARCHAR), but you will also give DB2 the opportunity to compress every other column in that row.

All in all, that means that compression can return better disk storage savings than variable columns, and all without the programmatic overhead of having to calculate and store the two-byte prefix for each previously variable column.

Of course, I don't want to give the impression that this should always be done... (remember the DBA's maxim: almost never say "always" or "never"). And there are additional things to consider, such as:
  • Compression adds a compression dictionary to the table space, so a compressed table space can actually be larger than a non-compressed one (if the table space is very small to begin with).
  • Compression requires additional CPU cycles to compress and decompress the data as it is inserted, modified, and read (of course, I/O can decrease because smaller rows fit more on each page, so the added CPU cost can be offset by improved I/O).

This is just an additional "thing to consider" when you are building your DB2 databases and trying to decide whether you should use VARCHAR or CHAR...

Monday, August 21, 2006

IBM Mainframes - Not Just for Big Shops Any More

Just a quick blog today to point you to an interesting article in the latest issue of IBM Systems Magazine - Mainframe Edition. The article, titled A New System for a New Market, points out that the System z9 Business Class (z9 BC) platform, the latest mainframe in IBM's product line announced in April 2006, is suitable for the small and medium business (SMB) space.

This offering brings high performance and scalability to the SMB market at a very reasonable cost (around $100k). With specialty engines that can be added (IFL, zIIP and zAAP), again at a reasonable cost, it looks like the IBM mainframe will not only remain viable for large shops, but it could expand out into smaller ones, too.

So, as most mainframe aficionados know, the mainframe is not dead. In fact, it may actually be able to grow with the new features and affordability being built into IBM's new mainframes.

Sunday, August 20, 2006

Advice on Using Variable Character Columns in DB2

One of the long-standing, troubling questions in DB2-land is when to use VARCHAR versus CHAR. The high-level advice is to use VARCHAR instead of CHAR for larger columns whose lengths vary considerably from row to row. Basically, VARCHAR should be used to save space in the database when your values are truly variable.

In other words, if you have a 10-byte column, it is probably not a good idea to make it variable... unless, of course, 90% of the values are only one or two bytes, in which case it might make some sense. Have you gotten the idea here that I'm not going to give any hard and fast rules? I hope so, because I won't -- just high-level guidance.

Another situation: say you have an 80-byte column where values range from 10 bytes to the full 80 bytes... and more than 50% of them are less than 60 bytes. Well, that sounds like a possible candidate for VARCHAR to me.

Of course, there are other considerations. Java programmers tend to prefer variable character columns because Java does not have a native fixed-length character data type.

For traditional programming languages though, CHAR is preferred because VARCHAR requires additional programmatic handling (to set the length of each column when inserting or modifying the data).

OK, so what if you are trying to determine whether the appropriate decision was made when VARCHAR was chosen instead of CHAR? You can use information from the DB2 Catalog to get a handle on the actual sizes of each VARCHAR column.

Using views and SQL it is possible to develop a report showing the lengths of the variable column values. First, determine which VARCHAR column you need information about. For the purposes of this example, let's examine the NAME column of SYSIBM.SYSTABLES. This column is defined as VARCHAR(18). Create a view that returns the length of the NAME column for every row, for example:

CREATE VIEW LENGTH_INFO
(COL_LGTH)
AS
SELECT LENGTH(NAME)
FROM SYSIBM.SYSTABLES;

Then, issue the following query using SPUFI to produce a report detailing the LENGTH and number of occurrences for that length:

SELECT COL_LGTH, COUNT(*)
FROM LENGTH_INFO
GROUP BY COL_LGTH
ORDER BY COL_LGTH;

This query will produce a report listing the lengths (in this case, from 1 to 18, excluding those lengths which do not occur) and the number of times that each length occurs in the table. These results can be analyzed to determine the range of lengths stored within the variable column. If you are not concerned about this level of detail, the following query can be used instead to summarize the space characteristics of the variable column in question:

SELECT 18*COUNT(*),
SUM(2+LENGTH(NAME)),
18*COUNT(*)-SUM(2+LENGTH(NAME)),
18,
AVG(2+LENGTH(NAME)),
18-AVG(2+LENGTH(NAME))
FROM SYSIBM.SYSTABLES;

The constant 18 will need to be changed in the query to indicate the maximum length of the variable column as defined in the DDL. This query will produce a report such as the one shown below:

SPACE      SPACE        TOTAL    AVERAGE    AVERAGE      AVERAGE
USED AS    USED AS      SPACE    SPACE AS   SPACE AS     SPACE
CHAR(18)   VARCHAR(18)  SAVED    CHAR(18)   VARCHAR(18)  SAVED
---------  -----------  -------  ---------  -----------  -------
   158058        96515    61543         18           10        8



This information can then be analyzed to determine if the appropriate decision was made when VARCHAR was chosen. (Of course, the values returned will differ based on your environment and the column(s) that you choose to analyze.) Also, keep in mind that the VARCHAR figures in this report account for the two-byte prefix stored by DB2 for variable-length columns.

I hope this high-level overview with advice on when to use VARCHAR versus CHAR has been helpful. If you have your own guidelines or queries that you use please feel free to post a comment to this blog and share them with everyone.



NOTE: You could skip the creation of the VIEW in the above query and just use a nested table expression (aka in-line view) instead.
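Here is a sketch of what the detail query might look like using a nested table expression instead of the view:

SELECT COL_LGTH, COUNT(*)
FROM (SELECT LENGTH(NAME) AS COL_LGTH
      FROM SYSIBM.SYSTABLES) AS T
GROUP BY COL_LGTH
ORDER BY COL_LGTH;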

Thursday, August 17, 2006

Greatest Software Ever?

I just stumbled across a very interesting article this afternoon and thought I'd share it with everybody through my blog. The article, published in InformationWeek, is titled What's The Greatest Software Ever Written? And isn't that an intriguing question?

Well, I read through the article and, other than a few quibbles here and there, I'd have to say that the author did a good job of assembling his list. He spends quite a bit of time talking about the IBM System/360 project -- and well he should. This was one of the first truly huge software projects, and it set the bar for what is expected of an operating system. It also gave us one of the best books ever written on software development: The Mythical Man-Month. Written by Fred Brooks, the manager in charge of the OS/360 project, this book outlines many of the truisms about software development that we acknowledge even today -- more than 40 years later. If you work in IT and you haven't read The Mythical Man-Month, you really should buy a copy and read it immediately. Anyway, this blog isn't about that book, so let's move on.

I won't spoil it here and publish the list of greatest software - you will have to click on the link for the article and read it yourself (the actual list doesn't start until page 3 of the article, but don't just click right over to that page, read the whole thing).

Suffice it to say, several IBM projects make the list (I'm kinda partial to what came in at #2 -- it would've been my #1, actually). And I think perhaps that VisiCalc belongs on the list instead of the spreadsheet software that is listed -- I mean, Dan Bricklin invented the entire spreadsheet category of software when Software Arts created VisiCalc back in the late 1970s.

But the article is a good read, and I'm sure it is almost impossible to publish a list like this without causing disagreement -- perhaps that is its intent anyway. So take a moment and click over to the article and give it a read. And feel free to share your thoughts on it here by posting a comment or two.

Thursday, August 10, 2006

SHARE Travelers Take Heed

With the upcoming SHARE conference in Baltimore next week, there are sure to be many of you out there who will be traveling to the nation's capital region over the weekend. As you prepare to travel, be sure to factor in additional time at the airport due to the latest TSA warning.

Basically, in response to a recently thwarted terrorist plot in the UK, the threat level has been raised to High (or Orange) for all commercial aviation operating in or destined for the United States. That means the lines will be longer and the searches more thorough going through security at the airport.

Additionally, please read the TSA announcement and heed what it is saying. I am referring specifically to this: "Due to the nature of the threat revealed by this investigation, we are prohibiting any liquids, including beverages, hair gels, and lotions from being carried on the airplane." Please, for everyone's sake, leave your liquids at home:
  • You can get a drink after you pass through security.
  • Every hotel provides shampoo, conditioner, and lotion for free, so you don't need to bring them.
  • If you absolutely have to have your favorite brand, or some gel or spray, pack it in your checked bags.
And yes, please check your dang luggage! Although I am sometimes amused by idiots trying to jam a huge bag into the overhead bin, it becomes less amusing after a two-hour amble through security. If you have a large bag, check it!

And I'll see you all in Baltimore.

Wednesday, August 09, 2006

Where exactly is a DB2 plan stored?

The title of this posting is a question I received awhile ago. As I promised earlier on this blog, I will periodically post the answers I have given to e-mailed questions. So, here goes:

A DB2 "plan" is stored in the DB2 Directory and information about the plan is stored in the DB2 Catalog.

The DB2 Directory table that contains actual plans is SYSIBM.SCT02 (and SYSIBM.SPT01 contains actual packages). The plan is stored as an internal structure called a cursor table; packages are stored as package tables. As DB2 runs application programs, it loads the cursor tables for plans and package tables for packages from the DB2 Directory tables into the EDM Pool. This is where the "access path" code that determines how to get the actual DB2 data resides.

There is also metadata about plans and packages that you might find useful. This information includes data about the state, privileges, isolation level, release specification, and so on. The DB2 Catalog contains information about plans in the following tables:

  • SYSIBM.SYSDBRM
  • SYSIBM.SYSPLAN
  • SYSIBM.SYSPLANAUTH
  • SYSIBM.SYSPLANDEP
  • SYSIBM.SYSSTMT

And, the DB2 Catalog contains information about packages in the following tables:

  • SYSIBM.SYSPACKAGE
  • SYSIBM.SYSPACKAUTH
  • SYSIBM.SYSPACKDEP
  • SYSIBM.SYSPACKLIST
  • SYSIBM.SYSPACKSTMT
  • SYSIBM.SYSPKSYSTEM
  • SYSIBM.SYSPLSYSTEM
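
For example, a quick way to browse this metadata is to query the catalog directly. A simple sketch:

SELECT NAME, CREATOR
FROM SYSIBM.SYSPLAN
ORDER BY NAME;

A similar query against SYSIBM.SYSPACKAGE (using COLLID and NAME) would list your packages.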

Tuesday, August 08, 2006

Mainframe Weekly: A new mainframe-focused blog

Mainframe Weekly is a new blog featuring the insights of Trevor Eddolls. Trevor is an editor who has worked for Xephon for some time. Xephon publishes those "Update" journals - you know the ones, DB2 Update, CICS Update, etc. The ones that are full of content and don't accept any ads.

I've had the pleasure of writing for DB2 Update and working with Trevor for a number of years now, and I look forward to regularly reading his new blog. Recent entries there have covered CICS, DB2 and Viper, and storage technology.

Do yourself a favor and be sure to check in on Trevor's blog on a regular basis.

Monday, August 07, 2006

Upcoming SHARE Conference

Just a quick posting to remind everyone that the twice-yearly SHARE conference is coming up next week in Baltimore, MD. For those of you who don't know about SHARE: the conference has been offering education to IT professionals, improving the value of IT to businesses, and advancing careers since 1955 -- the longest history of any IT user group.

SHARE in Baltimore runs from August 13-18, 2006 and, as usual, there will be a plethora of useful and educational sessions on multiple IT topics. According to the folks at SHARE there will be over 800 technical sessions to choose from. Session topics range from implementation of new software releases to in-depth looks at what the latest new technologies can do for your business.

I'll be there, and will be delivering two presentations on DB2. On Monday, August 14, (3:00 - 4:00 PM) I'll give an hour of DB2 for z/OS tips and techniques... and then on Tuesday, August 15, (8:00 - 9:00 AM) I'll be giving an overview of DB2 for z/OS performance tuning.

Hope to see you there!

Wednesday, August 02, 2006

New Mainframe Redbook

IBM published a new mainframe redbook this week that everyone who works with mainframe technology should download and have handy. It is titled Introduction to the New Mainframe: z/OS Basics and it offers a very nice high-level overview of mainframe technology circa 2006.

Of course, if you are a long-time mainframer much of this redbook may be too high level for you. But it will be just the right thing to have at your disposal when someone drops by to ask a basic question. Just e-mail them a copy of the book.

And even long-time experts still might want to at least leaf through it. It is divided into four broad sections, as follows:
  • Part 1. Introduction to z/OS and the mainframe
  • Part 2. Application programming on z/OS
  • Part 3. Online workloads for z/OS
  • Part 4. System programming on z/OS
So maybe you are a crackerjack programmer, but would like to know more about system programming. Maybe your online knowledge is skimpy. Or maybe you just want a nice overview of everything. Then this book is just the ticket!

Here is a brief synopsis out of the Preface of the redbook:

This IBM Redbook provides students of information systems technology with the background knowledge and skills necessary to begin using the basic facilities of a mainframe computer. It is the first in a planned series of textbooks designed to introduce students to mainframe concepts and help prepare them for a career in large systems computing.

This textbook can also be used as a prerequisite for courses in advanced topics or for internships and special studies. It is not intended to be a complete text covering all aspects of mainframe operation, nor is it a reference book that discusses every feature and option of the mainframe facilities.

Others who will benefit from this course include experienced data processing professionals who have worked with non-mainframe platforms, or who are familiar with some aspects of the mainframe but want to become knowledgeable with other facilities and benefits of the mainframe environment.

So download Introduction to the New Mainframe: z/OS Basics today...

Monday, July 31, 2006

Network World: Mainframes Still Relevant

Just finished reading a great new article at NetworkWorld.com called Working on mainframes not just for old fogies. Well, that is sure good to know. I may be getting older, but I sure don't think of myself as one of those "old fogies" yet... and I still work on mainframes.

The article talks about the continuing relevance of mainframe computing in a novel way -- by talking to several young mainframe newbies. Yes, there are twenty-somethings out there working on mainframes; they are just hard to find. And the article makes the point that all of us in the mainframe world know -- we need more young 'uns to learn the mainframe.

The article goes on to point out some interesting mainframe statistics from the industry analysts. According to Gartner, "large mainframe users have been increasing their mainframe environments for years." They say that installed MIPS will continue to grow at a CAGR of 15 to 20 percent through 2009. And the analysts at IDC seem to agree, with 2006 MIPS shipments up 14.2 percent.

If you work on mainframes be sure to click over and give the article a read. To me, anyway, it is invigorating to hear about young people embracing the mainframe. And the more younger people who learn about mainframe computing, the stronger the platform becomes... and that is good, too.

Friday, July 28, 2006

IBM Software Support Toolbar

It seems like everyone under the sun is offering a toolbar to add to the Internet Explorer browser. Google has one, Yahoo has one, and so do countless others. Now IBM has one, too.

The new IBM Software Support Toolbar plugs into your browser to allow you to search IBM's Support knowledge base using keywords, specific error codes or exact phrases. You can also use it to search or browse for product downloads including Fix Packs, Updates, Patches, etc. So if you have trouble remembering how to get to the IBM site for support, or just want a quicker way to get there, the IBM Software Support Toolbar could prove itself to be quite helpful.

The toolbar even allows you to seek specific results for a specific brand. So you can scroll to the "Information Management" brand and select from that sub-menu. That way you'd only be looking for the "database stuff"...

Wednesday, July 26, 2006

How and when to use DB2 scrollable cursors

As I've mentioned here before, sometimes I will use this blog to post answers to questions that have been sent to me. The question being answered today is this: I have a SELECT query that returns 1000 rows, and I want to display only rows 200 through 500. Could you please let me know how to code this after the WHERE clause? I know how to get the first 100 rows using the FETCH FIRST 100 ROWS ONLY clause.

If you are using DB2 V7 or higher, consider using scrollable cursors. With scrollable cursors, you can move directly to the rows you want without having to FETCH every other row returned by the cursor.

In DB2 V7, scrollable cursors require you to use declared temporary tables, another new feature of DB2 Version 7. If you are using DB2 V8, dynamic scrollable cursors are available and thus temporary tables are not required.

In V7, DB2 uses a declared temporary table to hold and maintain the data returned by a scrollable cursor. Scrollable cursors allow developers to move through the results of a query in multiple ways. The following key words are supported when fetching data from a scrollable cursor:
  • NEXT - will FETCH the next row, the same way that the pre-V7 FETCH statement functioned
  • PRIOR - will FETCH the previous row
  • FIRST - will FETCH the first row in the results set
  • LAST - will FETCH the last row in the results set
  • CURRENT - will re-FETCH the current row from the result set
  • BEFORE - positions the cursor before the first row of the results set
  • AFTER - positions the cursor after the last row of the results set
  • ABSOLUTE n - will FETCH the row that is n rows away from the first row in the results set
  • RELATIVE n - will FETCH the row that is n rows away from the current cursor position

For both ABSOLUTE and RELATIVE, the number n must be an integer. It can be either a positive or a negative number and it can be represented as a numeric constant or as a host variable. All of the FETCH options for scrollable cursors also reposition the cursor before fetching the data. For example, let's consider your problem of a cursor that returns 1000 rows, but you only want rows 200 through 500.

Consider the following cursor logic:

DECLARE csr1 SENSITIVE STATIC SCROLL CURSOR
FOR SELECT FIRSTNME, LASTNAME
FROM DSN8710.EMP
ORDER BY LASTNAME
FETCH FIRST 1000 ROWS ONLY;

OPEN csr1;

FETCH ABSOLUTE 200 FROM csr1 INTO :FN, :LN;

I used the FETCH FIRST 1000 ROWS ONLY clause to ensure that no more than 1,000 rows were returned. This clause is, of course, optional (and if not specified, DB2 will not limit the result set returned by the cursor). Then I open the cursor and FETCH row 200. This positions the cursor on row 200 of the result set, which has just been fetched. After that, all you need is a loop that issues FETCH NEXT 300 times, and you will have retrieved only rows 200 through 500.
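In skeleton form, the fetch logic boils down to this (the FETCH NEXT statement would be wrapped in a program loop executed 300 times):

FETCH ABSOLUTE 200 FROM csr1 INTO :FN, :LN;

FETCH NEXT FROM csr1 INTO :FN, :LN;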

Basically, scrollable cursors reduce the amount of time and effort required to move backward and forward through the results of SQL queries. But as helpful as scrollable cursors are, do not make every cursor a scrollable cursor. Scrollable cursors require substantially more overhead than a traditional, non-scrollable cursor. Analyze the requirements of your applications and deploy scrollable cursors only where it makes sense to do so. Also, be sure to discuss this with your DBAs before implementing as there will probably be some setup work required of the DBA group to facilitate this solution.

Good luck...

Tuesday, July 25, 2006

Free DB2 Statistics Health Check Software

SEGUS, Inc. is offering free software for checking the health of the statistics in your DB2 Catalog.

Statistics HealthCheck for DB2 z/OS enables you to quickly and effectively analyze and judge the quality of your DB2 Catalog statistics. Checking the health of your DB2 subsystem is especially important considering the heightened sensitivity of DB2 V8 and V9 to bad statistics.

The extensive rule system used by Statistics HealthCheck is based on IBM’s own recommendations for maintaining good statistics. Using a violation system, Statistics HealthCheck pinpoints precisely those objects whose statistics could benefit from a run of the RUNSTATS utility -- or that otherwise require statistics housekeeping.

Statistics HealthCheck is particularly useful as a prerequisite to Bind ImpactExpert during a V8 migration. First, Statistics HealthCheck identifies the RUNSTATS executions you need. Then, Bind ImpactExpert provides REBIND insurance to further guarantee consistent and improved access paths.

With Statistics HealthCheck, you know exactly what is “wrong” with your statistics so that you can proactively correct any problems that might negatively affect DB2 subsystem performance.

And come on, it is free after all. What do you have to lose by downloading it and trying it on your DB2 subsystems today?

--------------

Just a quick note to explain that since the links in this blog posting were no longer valid, I modified them. This offering is now being provided by SEGUS, and it is still a free download. The statistics health check software can be downloaded at the following link:

http://www.segus.com/index.php/productdetails/index/en_product_014_Statistics_HealthCheck