Monday, October 26, 2015

An Update from IBM Insight 2015

The annual IBM Insight conference is being held this week in Las Vegas, as usual at the Mandalay Bay Resort and Conference Center. And I have the good fortune to be in attendance.

If you follow me on Twitter (http://www.twitter.com/craigmullins) you know that I am live tweeting many of the sessions I attend at the conference. But for those of you who are not following along on Twitter, or who just prefer a summary, here’s a quick overview of Monday’s highlights.

The day kicked off with the general session keynote, which was delivered by a combination of Jake Porway (of DataKind), IBMers, and IBM customers. The theme of the keynote was analytics and cognitive computing. In my opinion, the emphasis of the event has shifted from the data itself to what is being done with the data… in other words, the applications. And that is interesting, because the data is there to support the applications, right? Still, I’m a data bigot from way back, so it was a bit app-heavy for me.

That said, there were some insightful moments delivered during the keynote. Bob Picciano, Senior VP of IBM Analytics, kicked things off by informing the audience that true insight comes from exploiting existing data, dark data, and IoT data with agility driven by the cloud. That’s a lot of buzzwords, but it makes sense! And then executives from Whirlpool, Coca-Cola, and Twitter were brought out to talk about how they derive insight from their data and systems.

Perhaps the most interesting portion of the general session was the Watson discussion, led by Mike Rhodin, Senior VP, IBM Watson. We learned more about cognitive systems and how they get more valuable over time as they learn, which makes them unique in the world of computers and IT. IBM shared that there are more than 50 core technologies used by IBM Watson to deliver its cognitive computing capabilities. It was exciting to learn about systems that reason, never stop learning and drive more value over time.
Additional apps were discussed that taught us about the various ways to choose wine, that nobody starts thinking about ice cream early in the week, and that when the weather changes, women buy warmer clothes while men buy beer and chips. You kind of had to be there, I guess!

Of course, this was only the first session of a long day. Additional highlights of the day included a high-level overview of the recently announced (but not yet released) DB2 12 for z/OS, the features that should be in a next generation database, and Gartner’s take on the state of big data and analytics. Let’s briefly address each of these one by one.
First up is DB2 12, which will be available in the ESP (early support program) in March 2016. There are a lot of nice new features coming in this new version. We’ll see a lot more in-memory capabilities, which will speed up queries and processing. Buffer pools can be up to 16 TB, even though today’s z systems can support only 10 TB – IBM is planning for the future with that one!

And we’ll continue to see the blurring of the lines between static and dynamic SQL. How? Well, we’ll get RLF (Resource Limit Facility) controls for static SQL and plan stability for dynamic SQL in DB2 12. IBM claims that we’ll be able to achieve up to 360 million transactions per hour with DB2 12 for z/OS using a RESTful web API. Pretty impressive.

And there will be no skip-level migration for DB2 12... you have to migrate through DB2 11 to get to 12.

OK, what about the features that should be in a next generation database? According to IBM a next gen DBMS should:
  • Deliver advanced in-memory technology
  • Be fast for both transactional and analytic workloads
  • Provide scalable performance
  • Be available, reliable, resilient, and secure
  • Be simple, intelligent and agile
  • And be easy to deploy and cloud-ready

Sounds about right to me!

And finally, let’s briefly take a look at some of the Gartner observations on big data and analytics. The Gartner presentation was delivered by Donald Feinberg, a long-time Gartner analyst on the topic of data and database systems. First of all, Feinberg rejects the term “big data,” saying there is no such thing. It is all just data. He went on to claim that “big data” is perhaps the most ambiguous term out there, yet it is also the most searched term at Gartner!

Feinberg also rejected the term data lake, saying “There is no such thing as a data lake, it is just a place to dump data.” He warned that the lake will come back to bite organizations in a few years if they do not take the time to manage, transform, secure (and so on) the data in it, turning it into a data reservoir instead.

He also made the very interesting observation that BI/analytics has been the number 1 priority for CIOs on Gartner’s annual CIO priorities survey for 8 years running. But if analytics were really that much of a priority, why haven’t they gotten it done yet?

Of course, a lot more happened at IBM Insight today – and many more things were discussed. But I don’t want this blog post to become too unwieldy, so that is all I’m going to cover for now.

I’ll try to cover more later in the week as the conference progresses.


Monday, October 12, 2015

OK, So How Does the CURRENTDATA Parm Work?

There seems to be a LOT of confusion out there in DB2-land about the CURRENTDATA parameter, so I thought it might be helpful to blog about the subject.

So first things first: CURRENTDATA is a BIND parameter that can be specified when you bind or rebind your plans and packages. There are two options that can be specified: YES and NO. The default is NO.
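For reference, here is a quick sketch of how the parameter might be specified at bind time (the collection and package names are made up for illustration):

REBIND PACKAGE(COLL1.PROGA) ISOLATION(CS) CURRENTDATA(NO)
REBIND PACKAGE(COLL1.PROGB) ISOLATION(CS) CURRENTDATA(YES)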

But what does it do?

Well, the manuals tell us that CURRENTDATA indicates whether data currency is required for read-only and ambiguous cursors when the ISOLATION(CS) option is used. But that is not very clear... except that CURRENTDATA really only applies with an isolation level of cursor stability.

The general idea here is to enable the developer to exert a level of control by choosing between high performance or up-to-date data. Sure, in a perfect world, everybody would choose both. But in the real world we deal with tradeoffs and the CURRENTDATA parameter allows us to tell DB2 which of these options to prefer.

CURRENTDATA (NO)

The default, CURRENTDATA(NO), is the performance choice, and probably the best choice for most applications. When NO is chosen, you are telling DB2 that completely up-to-date data currency is not required for read-only and ambiguous cursors. That means that DB2 can perform block fetching for distributed, ambiguous cursors.

And that is probably what you want. However, if your coding practices are not pristine and your program attempts to DELETE WHERE CURRENT OF against an ambiguous cursor, DB2 will return a negative SQLCODE.

What is an Ambiguous Cursor?

At this point you might be asking “What is an ambiguous cursor?” Generally speaking, an ambiguous cursor is one where DB2 cannot tell whether your program intends to use the cursor for updating or deleting. You can avoid ambiguous cursors by coding the FOR READ ONLY or FOR UPDATE OF clause on your cursors to inform DB2 of the program’s future intent for the data being accessed.
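For example, here is a minimal sketch using the familiar EMP sample table (the cursor names are, of course, made up):

-- Unambiguous read-only cursor: DB2 knows it can block fetch
DECLARE CSR1 CURSOR FOR
    SELECT EMPNO, LASTNAME
    FROM   EMP
    FOR READ ONLY;

-- Unambiguous updatable cursor: DB2 knows changes are coming
DECLARE CSR2 CURSOR FOR
    SELECT EMPNO, LASTNAME, SALARY
    FROM   EMP
    FOR UPDATE OF SALARY;

-- Ambiguous cursor: neither clause is coded, so DB2 cannot be sure
DECLARE CSR3 CURSOR FOR
    SELECT EMPNO, LASTNAME
    FROM   EMP;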

CURRENTDATA (YES)

What about CURRENTDATA(YES)? So far, we have discussed only the NO option, but you can also specify YES. Doing so indicates that the data should be current for read-only and ambiguous cursors. That means DB2 will acquire page (or row) locks to ensure the currency of the data, and block fetching for distributed, ambiguous cursors will be curtailed.

Summary

Hopefully this short blog post helps to clear up a muddy topic. Of course, there is a lot more to DB2 locking and performance, and I do not want to make it seem like this is the only thing you need to know about the tradeoffs between data currency and performance.


But at least one parameter might be a little bit clearer to you today than it was yesterday…

Tuesday, October 06, 2015

Sorting by Day of the Week in DB2

Today's blog will walk you through a sorting "trick" that you can use when you are dealing with days of the week. A relatively common problem involves sorting a column that contains a day-of-the-week value not in alphabetical order, but in the order the days appear in the week...

Assume that you have a table containing transactions, or some other type of interesting facts. The table has a CHAR(3) column containing the name of the day on which the transaction happened; let’s call this column DAY_NAME. Now, let’s further assume that we want to write queries against this table that orders the results by DAY_NAME. We’d want Sunday first, followed by Monday, Tuesday, Wednesday, and so on. How can this be done?

Well, if we write the first query that comes to mind, the results will obviously be sorted improperly:

SELECT   DAY_NAME, COL1, COL2 . . .
FROM     TXN_TABLE
ORDER BY DAY_NAME;

The results from this query would be ordered alphabetically; in other words:
  1. FRI
  2. MON
  3. SAT
  4. SUN
  5. THU
  6. TUE
  7. WED

And it would be rare to want this type of data sorted in that order, right? The more common need would be to sort the data the way it appears on the calendar.

One solution would be to design the table with an additional numeric or alphabetic column that would allow us to sort that data properly. By this I mean that we could add a DAY_NUM column that would be 1 for Sunday, 2 for Monday, and so on. Then we could SELECT DAY_NAME but sort by DAY_NUM.

But this is a bad fix. Not only does it require a database design change, this "fix" also introduces the possibility for the DAY_NUM value and the DAY_NAME value to get out of sync... unless we are very careful, or perhaps do not allow DAY_NUM to be changed other than via an INSERT trigger that automatically populates the correct number. But requiring a trigger adds even more complexity to this "fix," which really should indicate to us that it is not a very good proposal.
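To make that complexity concrete, here is a rough sketch of what such a trigger might look like (the trigger name is hypothetical, and the exact syntax varies a bit by DB2 version):

CREATE TRIGGER SET_DAY_NUM
    NO CASCADE BEFORE INSERT ON TXN_TABLE
    REFERENCING NEW AS N
    FOR EACH ROW MODE DB2SQL
    SET N.DAY_NUM = CASE N.DAY_NAME
                         WHEN 'SUN' THEN 1
                         WHEN 'MON' THEN 2
                         WHEN 'TUE' THEN 3
                         WHEN 'WED' THEN 4
                         WHEN 'THU' THEN 5
                         WHEN 'FRI' THEN 6
                         WHEN 'SAT' THEN 7
                    END;

That is a fair amount of extra machinery just to keep one redundant column honest.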

A better solution uses just SQL and requires no change to the database structures. All you need is an understanding of SQL and SQL functions – in this case, the LOCATE function. Here is the SQL:

SELECT   DAY_NAME, COL1, COL2 . . .
FROM     TXN_TABLE
ORDER BY LOCATE(DAY_NAME, 'SUNMONTUEWEDTHUFRISAT');

The trick here is to understand how the LOCATE function works: LOCATE(search-string, source-string) returns the starting position of the first occurrence of the search string within the source string. So, in our example, LOCATE finds the position of the DAY_NAME value within the string 'SUNMONTUEWEDTHUFRISAT' and returns the integer value of that position. (Be careful with the argument order: the value being searched for comes first. Reversing the arguments would search for the 21-character string inside the 3-character day name, which always returns zero.)

So, if DAY_NAME is WED, the LOCATE function in the above SQL statement returns 10... and so on. To summarize, Sunday returns 1, Monday 4, Tuesday 7, Wednesday 10, Thursday 13, Friday 16, and Saturday 19. This means that our results would be in the order we require.

(Note: Some other database management systems have a function similar to LOCATE called INSTR.) 

Of course, you can go one step further if you’d like. Some queries may need to actually return the day of week. You can use the same technique, with a twist, to return the day-of-week number given only the day’s name. To turn the LOCATE result into the appropriate day-of-the-week number (that is, a value of 1 through 7), we divide by three, use the INT function to keep only the integer portion of the result, and then add one:

INT(LOCATE(DAY_NAME, 'SUNMONTUEWEDTHUFRISAT') / 3) + 1


Let’s use our previous example of Wednesday again. The LOCATE function returns the value 10. So, INT(10/3) = 3 and add 1 to get 4. And sure enough, Wednesday is the fourth day of the week. 
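Putting it all together, a query along these lines (again using our hypothetical TXN_TABLE) would return each day name along with its calendar position, sorted properly:

SELECT   DAY_NAME,
         INT(LOCATE(DAY_NAME, 'SUNMONTUEWEDTHUFRISAT') / 3) + 1 AS DAY_NUM,
         COL1, COL2 . . .
FROM     TXN_TABLE
ORDER BY DAY_NUM;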

You can monkey with the code samples here to modify the results to your liking. For example, if you prefer Monday to be the first day of the week and Sunday the last day of the week, simply change the text string in the LOCATE function as follows:

LOCATE(DAY_NAME, 'MONTUEWEDTHUFRISATSUN')

My intent with this blog posting is actually twofold. Firstly, I wanted to share a method of sorting days-of-the-week... but secondly, I hope to also have piqued your interest in using SQL functions to achieve all types of different processing requirements. DB2 offers quite a lot of functions and it seems to me that many programmers fail to use functions to their fullest.

So if you are ever face-to-face with a difficult query request, step back and take a moment to consider how DB2 functions might be able to help!

Thursday, October 01, 2015

Understanding DB2 SELECT Syntax

Most DB2 programmers think they know how to correctly code simple SQL SELECT statements. And they usually are correct, as long as you keep that adjective “simple” in the assertion. When the statement requires more than SELECT...FROM...WHERE, though, problems can ensue.

One of the biggest SELECT-related problems encountered by DB2 users is syntax. To paraphrase Mark Twain, sometimes what people think they know just ain’t so.

How can you find the proper syntax for SELECT statements? The DB2 SQL Reference manual contains all of the syntax for DB2 SQL, but query syntax is separated from the rest of the language. Typically, users go to Chapter 5 of the SQL Reference, which contains syntax diagrams, semantic descriptions, rules, and examples of the use of DB2 SQL statements. SELECT INTO is there, but SELECT is not in this section. Well, actually, there is a placeholder page that refers the reader back to Chapter 4, which contains the detailed syntax information and usage details for DB2 queries using SELECT.


Another potentially confusing aspect of DB2 SELECT is the breakdown of SELECT into three sections: fullselect, subselect, and select-statement. This causes many developers to confuse which query options are available to the SELECT statements they want to code.

Let's take a look at each of these three sections:

First up is the select-statement, which is the form of a query that can be directly specified in a DECLARE CURSOR statement, or prepared and then referenced in a DECLARE CURSOR statement. It is the thing most people think of when they think of SELECT in all its glory. If so desired, it can be issued interactively using SPUFI. The select-statement consists of a fullselect and any of the following optional clauses: WITH (common table expression), update, read-only, optimize-for, isolation, queryno, and SKIP LOCKED DATA.
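To make that more concrete, here is a hedged sketch of a select-statement exercising several of those optional clauses (using the familiar EMP sample table; the values are made up):

SELECT   EMPNO, LASTNAME
FROM     EMP
WHERE    WORKDEPT = 'D11'
ORDER BY LASTNAME
FOR READ ONLY            -- read-only clause
OPTIMIZE FOR 10 ROWS     -- optimize-for clause
WITH CS                  -- isolation clause
QUERYNO 13               -- queryno clause
SKIP LOCKED DATA;        -- applies with CS or RS isolation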

A CTE, or common table expression, defines a result table with a table-identifier that can be referenced in any FROM clause of the fullselect that follows. Multiple CTEs can be specified following a single WITH keyword. Each specified CTE can also be referenced by name in the FROM clause of subsequent common table expressions.
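For example, here is a simple CTE sketch (again borrowing sample table names):

WITH DEPT_TOTAL (WORKDEPT, TOTAL_SAL) AS
  (SELECT WORKDEPT, SUM(SALARY)
   FROM   EMP
   GROUP BY WORKDEPT)
SELECT WORKDEPT, TOTAL_SAL
FROM   DEPT_TOTAL
WHERE  TOTAL_SAL > 500000;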

The next component is a fullselect, which can be part of a select-statement, a CREATE VIEW statement, a materialized query table, a temporary table, or an INSERT statement. Basically, a fullselect specifies a result table. A fullselect consists of at least a subselect, possibly connected to another subselect via UNION, EXCEPT, or INTERSECT. And ever since DB2 Version 9 for z/OS, you can apply either or both of the ORDER BY and FETCH FIRST clauses. Prior to V9, this sometimes confused folks as they tried to put a FETCH FIRST n ROWS clause or an ORDER BY in a view or as part of an INSERT. That was not allowed! But it is now.
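So, for example, a hypothetical view like this one, which would have been rejected prior to V9, is now legal:

CREATE VIEW TOP_EARNERS AS
  SELECT EMPNO, LASTNAME, SALARY
  FROM   EMP
  ORDER BY SALARY DESC
  FETCH FIRST 10 ROWS ONLY;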

However, a fullselect does not allow any of the following clauses: FOR FETCH ONLY, FOR UPDATE OF, OPTIMIZE FOR, WITH, QUERYNO, and SKIP LOCKED DATA. A fullselect specifies a result table – and none of these aforementioned clauses apply.
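Which means, for example, that a hypothetical view definition like this one will fail, because the isolation clause belongs to the select-statement, not to the fullselect inside the view:

CREATE VIEW V_EMP AS
  SELECT EMPNO, LASTNAME
  FROM   EMP
  WITH UR;     -- invalid: WITH (isolation) is not part of a fullselect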

This sometimes confuses folks. I recently had a conversation with a guy who swore that at one point he created a view using the WITH UR clause and that it worked. It didn’t when we spoke and I’m sure it never did.

Finally, a subselect is a component of the fullselect. A subselect specifies a result table derived from the result of its first FROM clause. The derivation can be described as a sequence of operations in which the result of each operation is input for the next.

I know, this can all seem to be a bit confusing. But think of it this way: in a subselect you specify the FROM to get the tables, the WHERE to get the conditions, GROUP BY to get aggregation, HAVING to get the conditions on the aggregated data, and the SELECT clause to get the actual columns. In a fullselect you add in the UNION to combine subselects and other fullselects. Finally, you add on any optional clauses (as specified earlier) to get the select-statement.
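Here is one small annotated subselect to drive the point home (sample table names again):

SELECT   WORKDEPT, AVG(SALARY) AS AVG_SAL    -- the actual columns
FROM     EMP                                 -- the tables
WHERE    JOB <> 'PRES'                       -- conditions on the rows
GROUP BY WORKDEPT                            -- aggregation
HAVING   AVG(SALARY) > 30000;                -- conditions on aggregated data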

Now what could be any easier?

Actually, it is not super easy. And if you add in some of the newer SQL capabilities, like OLAP functions or temporal time travel query clauses, it gets even more complicated.


I guess the bottom line is that you really should make sure you have the SQL Reference (mentioned above) handy if you are doing anything other than simple selecting… because you’ll probably need it.