Wednesday, February 20, 2008

Let's Hear It for COBOL!

I know that this blog is supposed to be primarily about DB2, but I like to sneak in mainframe-related topics from time to time. And I read a great article today in Computerworld that I want to share with you. The article, titled Confession of a COBOL Programmer, talks up the need for COBOL coders. Just as with other mainframe technologies, as the baby boomers retire there is an insufficient supply of new programmers available to step in and continue the care and feeding of the COBOL legacy.

I've written briefly about COBOL before, in my Data Management Today blog. COBOL is still all over the place and in no danger of dying off. According to the Computerworld article, 75% of the world's business data is still processed in COBOL, and about 90% of all financial transactions are in COBOL.

Yet there is a lingering perception "out there" that COBOL is dead (or at least dying). And as far as graduating seniors and new programmers are concerned, COBOL ain't cool! New programmers don't want to learn it and most universities don't teach it in their computer science or information science curricula. Just like the mainframe (which is also alive and well), COBOL is being ignored, and a big problem is developing.

Analysts at Gartner estimate that there are 180 billion lines of COBOL code in existence and about 90,000 COBOL programmers. To convert all of that to something else "each programmer will require 100,000 hours to complete the conversion of 2 million lines. That works out to 12,500 eight-hour workdays. If we figure 250 workdays per year (though it’s unlikely any Cobol programmers are settling for just two weeks of vacation per year), these guys should be done in 50 years."

Who knows, when I retire (sometime in the far-off future) maybe I'll hang up a shingle and offer my services as a COBOL coder... after all, that is what I started out doing right out of college (all those years ago)...

Tuesday, February 05, 2008

Intelligent Automatic Reoptimization? [DB2 9 for z/OS]

Most seasoned SQL programmers know that when you use host variables, DB2 may not always come up with the optimal access path at bind time. Without knowing the actual values of the host variables, the optimizer has to make educated guesses about how best to satisfy the SQL request.

Furthermore, we know that we can guide DB2 on how best to approach this situation using the REOPT parameter of the BIND command. Prior to DB2 V9, there were three options for REOPT (sample BIND commands follow the list):

REOPT(NONE) – DB2 will not reoptimize SQL at run time.

REOPT(ALWAYS) – DB2 will prepare SQL statements again at run time when the host variable values are known. This enables the DB2 optimizer to formulate the query execution plan using the actual host variable values, which can result in better performing access paths.

REOPT(ONCE) – DB2 will prepare SQL statements only once, using the first set of host variable values, no matter how many times the statement is executed by the program. The access path is stored in the Dynamic Statement Cache (DSC) and will be used for all subsequent executions of the same SQL statement. REOPT(ONCE) only applies to dynamic SQL statements and is ignored if you use it with static SQL statements. This option was introduced in DB2 V8.
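
For illustration, here is how those options might be specified when binding a package. Consider this a sketch: the collection and member names (COLL1, PROG1) are hypothetical, but REOPT itself is a standard option of BIND and REBIND, for both packages and plans.

  BIND PACKAGE(COLL1) MEMBER(PROG1) REOPT(NONE)
  BIND PACKAGE(COLL1) MEMBER(PROG1) REOPT(ALWAYS)
  BIND PACKAGE(COLL1) MEMBER(PROG1) REOPT(ONCE)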

What is New in V9?

DB2 9 for z/OS introduces a new REOPT option: REOPT(AUTO). The idea behind REOPT(AUTO) is to come up with the optimal access path in the minimum number of prepares.

The basic premise of REOPT(AUTO) is to re-optimize only when host variable values change. Using this option, DB2 will examine the host variable values and will generate new access paths only when host variable values change and DB2 has not already generated an access path for those values.

REOPT(AUTO) only applies to dynamic statements that can be cached.
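
If you want to try this behavior for an existing package after moving to V9, the change is just a rebind. Again, this is only a sketch with a hypothetical collection and package name:

  REBIND PACKAGE(COLL1.PROG1) REOPT(AUTO)

Because REOPT(AUTO) applies only to cacheable dynamic statements, any static SQL in the package keeps its bind-time access paths.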

After migrating to DB2 9, consider re-evaluating programs bound specifying REOPT(ALWAYS) and REOPT(NONE). In many cases, switching from REOPT(ALWAYS) to REOPT(AUTO) can improve performance; and REOPT(AUTO) can bring re-optimization to programs that were bound REOPT(NONE) only out of fear that too-frequent re-optimization would cause a performance hit.

Consider specifying REOPT(AUTO) for SQL statements that can sometimes take a relatively long time to execute, depending on the values of the parameter markers. This is especially true when the parameter markers refer to non-uniformly distributed data that is joined to other tables.
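
As an illustration, consider a query like the following (the tables and columns are made up for the example), where the parameter marker refers to a skewed column:

  SELECT C.CUSTNO, C.CUSTNAME, O.ORDERNO, O.ORDER_AMT
  FROM   CUSTOMER C
         INNER JOIN ORDERS O ON O.CUSTNO = C.CUSTNO
  WHERE  C.STATE = ?     -- STATE values are non-uniformly distributed

If, say, 40% of the customers are in TX but only a handful are in VT, the best access path for STATE = 'TX' can be quite different from the best path for STATE = 'VT'. With REOPT(AUTO), DB2 can re-prepare when the value changes to one it has not already built an access path for, instead of re-preparing on every execution the way REOPT(ALWAYS) does.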

Friday, January 25, 2008

Get Control of Access Path Changes for Dynamic SQL when Migrating to a New Version of DB2

Are you making plans to migrate to a new version of DB2? Do you know what impact the access path changes for dynamic SQL will have on performance? When migrating to a new DB2 version, access path changes for dynamic SQL are unpredictable – as is the impact those changes will have on application performance. Learn how you can use Bind ImpactExpert to eliminate the unwanted surprises in version migration by performing a “precheck” on dynamic SQL access path changes.

This webinar will be presented by me (Craig Mullins) and Joe Brockert, Sr. Software Consultant for NEON Enterprise Software. We'll discuss the issues associated with dynamic SQL during a DB2 migration and offer a live demo of Bind ImpactExpert. Join us to see the solution that provides predictability in access path changes.

Enroll by clicking on this link.

Wednesday, January 16, 2008

Q+A: Locking

I get questions e-mailed to me all the time. Although I try to read and answer them all, sometimes I don't. I hope those whom I've ignored over the years will forgive me, but I can't always answer everything (not enough time/energy) and sometimes things get lost or fall through the cracks.

Anyway, at times I will take a question I get and blog about it in Q+A format. Today is one of those days!

The question was: I want to perform a retry on an INSERT under DB2 for z/OS when I get a deadlock/timeout. The -911 causes a rollback automatically. Is there a ZPARM or other method of turning this off? I am inserting millions of rows and do not want a rollback to the last commit point.

Here is my answer:

Well, first of all, let me recommend that you minimize the size of your unit of work. If you are inserting millions of rows without a COMMIT you are likely causing locking issues in your environment. Every page you have locked while waiting for your millions of inserts to finish is unavailable to any other user of the table (assuming page locking). That means no one else can read any data on those pages until your unit of work is committed, and any other user running at the same time who tries to get to a page you have modified will be getting -911s, too.

That being said, you can control whether or not the work is rolled back automatically in CICS (on a thread basis) using an RDO parameter (or the RCT if you are on an ancient CICS). The parameter is called ROLBE (RCT) or DROLLBACK (RDO). If it is set to YES, a CICS SYNCPOINT ROLLBACK is issued and a -911 SQLCODE is returned to the program. If NO is coded, a CICS SYNCPOINT ROLLBACK is not issued and the SQLCODE is set to -913. You will then have to programmatically issue either a COMMIT or a ROLLBACK for the unit of work.
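
In RDO terms, the definition might look something like this sketch (the entry, group, and transaction names are made up; verify the attributes against your CICS level):

  CEDA DEFINE DB2ENTRY(MYENTRY) GROUP(MYGROUP) TRANSID(TRN1) DROLLBACK(NO)

With DROLLBACK(NO), the program gets the -913 and owns the decision of whether to COMMIT or ROLLBACK.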

In a batch environment you will need to code your programs to periodically issue COMMITs after so many modifications (or using some other method like a timer or loop counter). There is no method I am aware of to automatically control this behavior outside of looking into a third party product (for example, Softbase Checkpoint Restart, and others).
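
To make that concrete, here is a minimal COBOL sketch of the commit-interval-plus-retry pattern. Everything in it is hypothetical (the table, the host variables, the 1,000-row commit interval, the 3-try retry limit), and REPOSITION-AFTER-ROLLBACK is a placeholder: because the -911 has already rolled you back to the last commit point, a real program needs checkpoint/restart logic to reposition its input before retrying.

       WORKING-STORAGE SECTION.
       01  WS-UNCOMMITTED      PIC S9(9) COMP VALUE +0.
       01  WS-RETRIES          PIC S9(4) COMP VALUE +0.
       01  WS-COL1             PIC X(10).
       01  WS-COL2             PIC S9(9) COMP.
      *    SQLCODE comes from: EXEC SQL INCLUDE SQLCA END-EXEC.

       INSERT-A-ROW.
           MOVE +0 TO WS-RETRIES.
           PERFORM WITH TEST AFTER
                   UNTIL SQLCODE = 0 OR WS-RETRIES > 3
               EXEC SQL
                   INSERT INTO MY_TABLE (COL1, COL2)
                          VALUES (:WS-COL1, :WS-COL2)
               END-EXEC
               EVALUATE TRUE
                   WHEN SQLCODE = 0
      *                COMMIT every 1000 rows to keep the unit of
      *                work small and release the locks being held
                       ADD +1 TO WS-UNCOMMITTED
                       IF WS-UNCOMMITTED >= 1000
                           EXEC SQL COMMIT END-EXEC
                           MOVE +0 TO WS-UNCOMMITTED
                       END-IF
                   WHEN SQLCODE = -911
      *                DB2 already rolled back to the last commit;
      *                reposition the input, then retry the insert
                       ADD +1 TO WS-RETRIES
                       MOVE +0 TO WS-UNCOMMITTED
                       PERFORM REPOSITION-AFTER-ROLLBACK
                   WHEN OTHER
                       PERFORM ABEND-WITH-SQL-ERROR
               END-EVALUATE
           END-PERFORM.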

Blog Tagged

I have been blog tagged by Willie Favero. Don't worry, it's not as painful as it sounds!

Basically, blog-tagging is a game, of sorts, that has been crawling its way through the blogosphere for a while now. The way it works is that when you are tagged by another blogger, you have to write a blog posting about yourself revealing 8 things that others might not know... and then tag 8 other bloggers.

So here goes:
  1. I am an avid music fan. At last count, I have 5,281 CDs and albums (yes, I still have records). I know exactly how many I have because, geek that I am, I keep a list of them in a FileMaker database that I sync up with my Treo. I need that list on my Treo because, without it, I have been known to buy a CD I already own.
  2. I currently live in Texas, but I was born and raised in Pittsburgh, PA. Go Steelers (we'll get 'em next year)! My Mom, my brother, and his family still live in the Pittsburgh area and I get back to visit them at least once a year.
  3. I've also lived in the Chicago area. When people ask how I like it in Texas after living up North for so long, I tell 'em "I like it. I basically traded snow for humidity, and you don't have to shovel humidity!"
  4. I've written two books - DB2 Developer's Guide and Database Administration: The Complete Guide to Practices & Procedures... and I'm working on co-authoring another one on DB2 performance.
  5. I'm married, and I met my wife while working at PLATINUM technology. Remember them? A lot of good things happened during my days at PLATINUM! In fact...
  6. At one point, I used to write those monthly DB2 tips you DB2 people used to get in the mail from PLATINUM.
  7. I currently write four different columns for industry publications, as well as several blogs.
  8. I own a dog, an English springer spaniel named Jerry... I call him my Jerry Springer spaniel... and two cockatiels.

OK, I guess that means I now have to tag eight others. Willie beat me to the punch on a lot of my favorite DB2 bloggers, though. So I'll tag Peter Armstrong, Chris Foot, Chris Eaton, Trevor Eddolls, Dave Moore, Phil Nelson, Fred Sobotka, and Ralph Wilson.

You folks are now "it"...