Monday, November 25, 2013

DBA Rules of Thumb - Part 3 (Share)

Knowledge transfer is an important part of being a good DBA - both transferring your knowledge to others and being open to receiving knowledge from them.

So the third DBA rule of thumb is this: Share Your Knowledge!

The more you learn as a DBA, the more you should try to share what you know with other DBAs. Local database user groups typically meet quarterly or monthly to discuss aspects of database management systems. Healthy local scenes exist for DB2, SQL Server, and Oracle: be sure to attend these sessions to learn what your peers are doing.

And when you have some good experiences to share, put together a presentation yourself and help out your peers. Sometimes you can learn far more by presenting at these events than by simply attending because the attendees will likely seek you out to discuss their experiences or question your approach. Technicians appreciate hearing from folks in similar situations... and they will be more likely to share what they have learned once you share your knowledge.

After participating in your local user group you might want to try your hand at speaking at (or at least attending) one of the major database industry conferences. There are conferences for each of the Big Three DBMS vendors (IBM, Oracle, and Microsoft), as well as conferences focusing on data management, data warehousing, and industry trends (Big Data, NoSQL), among others. Keep an eye on these events at The Database Site's database conference page.

Another avenue for sharing your knowledge is using one of the many online database forums. Web portals and web-based publications are constantly seeking out content for their web sites. Working to put together a tip or article for these sites helps you organize your thoughts and document your experiences. And you can garner some exposure with your peers by doing so because most web sites list the author of these tips. Sometimes having this type of exposure can help you to land that next coveted job. Or just help you to build your peer network.

Finally, if you have the time, consider publishing your experiences with one of the database-related print magazines. Doing so will take more time than publishing on the web, but it can bring additional exposure. And, of course, some of the journals will pay you for your material.

But the best reason of all to share your knowledge is because you want others to share their knowledge and experiences with you. Only if everyone cooperates by sharing what they know will we be able to maintain the community of DBAs who are willing and eager to provide assistance.


Here are some valuable links for regional and worldwide database user groups:


Monday, November 18, 2013

DBA Rules of Thumb - Part 2 (Automate)

Why should you do DBA processes by hand if you can automate them? Anything you can do probably can be done better by the computer – if it is programmed to do it properly. And once a task is automated, you save yourself valuable time. And that time can be spent tackling other problems, learning about new features and functionality, or training others.


Furthermore, don’t reinvent the wheel. Someone, somewhere, at some time may have already solved the problem you currently are attempting to solve. Look to the web for sites that allow you to download and share scripts. Or, if you have the budget, look to purchase DBA tools from ISVs. There are a lot of good tools out there, available from multiple vendors, that can greatly simplify the task of database administration. Automating performance management, change management, backup and recovery, and other tasks can help to reduce the amount of time, effort, and human error involved in managing database systems.
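
To make the script idea a bit more concrete, here is a minimal sketch (in Python) of the kind of nightly health-check job a DBA might put together, or download and adapt. Everything specific in it is an assumption for illustration: get_tablespace_stats() stands in for whatever catalog query your DBMS supports, and the threshold, e-mail addresses, and mail relay are made up.

# Minimal sketch of a nightly health-check script (not production code).
# get_tablespace_stats() is a hypothetical placeholder; in practice it would
# query your DBMS catalog for space statistics.
import smtplib
from email.message import EmailMessage

FRAG_THRESHOLD = 20.0  # alert when fragmentation exceeds 20 percent (assumed value)

def get_tablespace_stats():
    """Placeholder: return (tablespace_name, pct_fragmented) pairs from the catalog."""
    return [("TSCUST01", 12.5), ("TSORDR01", 31.2)]  # sample data for illustration

def main():
    problems = [(ts, pct) for ts, pct in get_tablespace_stats() if pct > FRAG_THRESHOLD]
    if not problems:
        return
    body = "\n".join(f"{ts}: {pct:.1f}% fragmented" for ts, pct in problems)
    msg = EmailMessage()
    msg["Subject"] = "Nightly health check: tablespaces over threshold"
    msg["From"] = "dba-robot@example.com"   # assumed addresses
    msg["To"] = "dba-team@example.com"
    msg.set_content(body)
    with smtplib.SMTP("mailhost.example.com") as smtp:  # assumed mail relay
        smtp.send_message(msg)

if __name__ == "__main__":
    main()

Whether you write something like this yourself or buy a tool that does it better, the payoff is the same: the check runs every night whether or not you remember to do it.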

Of course, you can take the automation idea too far. There has been a lot of talk and vendor hype lately about self-managing database systems. For years now, pundits and polls have been asking when automation will make the DBA job obsolete. The correct answer is "never" - or, at least, not any time soon.

There are many reasons why DBAs are not on the fast path to extinction. Self-managing database systems are indeed a laudable goal, but we are very far away from a “lights-out” DBMS environment. Yes, little-by-little and step-by-step, database maintenance and performance management are being improved, simplified, and automated. But you know what? DBAs will not be automated out of existence in my lifetime – and probably not in your children’s lifetime either.
Many of the self-managing features require using the built-in tools from the DBMS vendor, but many organizations prefer to use heterogeneous solutions that can administer databases from multiple vendors (Oracle, DB2, SQL Server, MySQL, etc.) all from a single console. Most of these tools have had self-managing features for years and yet they did not make the DBA obsolete.

And let’s face it, a lot of the DBMS vendors’ claims are more hyperbole than fact. Some self-managing features are announced years before they become generally available in the DBMS. All vendor claims to the contrary, no database today is truly 100% self-managing. Every database needs some amount of DBA management – even when today’s best self-management features are being used.

What about the future? Well, things will get better – and probably more costly. You don’t think the DBMS vendors are building this self-management technology for free, do you? But let’s remove cost from the equation for a moment. What can a self-managing database actually manage?

Most performance management solutions allow you to set performance thresholds. A threshold allows you to set up a trigger that says something like “When x% of a table’s pages contain chained rows or fragmentation, schedule a reorganization.” But these thresholds are only as good as the variables you set and the actions you define to be taken when the threshold is tripped. Some software is bordering on intelligent; that is, it “knows” what to do and when to do it, and it may even be able to learn from past actions and results. The more intelligence that can be built into a self-managing system, the better the results typically will be.

But who among us currently trusts software to work like a grizzled veteran DBA? The management software should be configurable such that it alerts the DBA as to what action it wants to take. The DBA can review the action and give a “thumbs up” or “thumbs down” before the corrective measure is applied. In this way, the software can earn the DBA’s respect and trust. When the DBA trusts the software, he can turn it on so that it self-manages “on the fly” without DBA intervention. But today, in most cases, a DBA is required to set up the thresholds, as well as to ensure their ongoing viability.
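
To put the “thumbs up or thumbs down” idea into something tangible, here is a minimal sketch in Python. Every name in it (check_fragmentation, request_approval, queue_reorg) is hypothetical; a real tool would plug in catalog queries, its own alerting mechanism, and a job scheduler. The point is the shape of the workflow: the software detects and recommends, the DBA decides.

# Minimal sketch of threshold-driven automation with a DBA approval step.
# All names below are hypothetical placeholders for illustration only.
REORG_THRESHOLD_PCT = 10.0  # reorganize when more than 10% of pages are disorganized (assumed)

def check_fragmentation(table):
    """Placeholder: return the percentage of pages with chained/relocated rows."""
    return 14.2  # sample value for illustration

def request_approval(action, detail):
    """Placeholder: alert the DBA (email, ticket, console) and wait for a decision."""
    answer = input(f"Proposed action: {action} ({detail}). Approve? [y/n] ")
    return answer.strip().lower() == "y"

def queue_reorg(table):
    """Placeholder: submit the reorganization job via your utility of choice."""
    print(f"REORG queued for {table}")

def evaluate(table):
    pct = check_fragmentation(table)
    if pct <= REORG_THRESHOLD_PCT:
        return
    # Advisory mode: the software recommends, the DBA decides.
    if request_approval("REORG", f"{table} is {pct:.1f}% disorganized"):
        queue_reorg(table)

evaluate("CUSTOMER_TABLE")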

Of course, not all DBA duties can be self-managed by software. Most self-management claims are made for performance management, but what about change management? The DBMS cannot somehow read the mind of its user and add a new column or index, or change a data type or length. This non-trivial activity requires a skilled DBA to analyze the database structures, develop the modifications, and deploy the proper scripts or tools to implement the change. Of course, software can help simplify the process, but software cannot replace the DBA.

Furthermore, database backup and recovery will need to be guided by the trained eye of a DBA. Perhaps the DBMS can become savvy enough to schedule a backup when a system process occurs that requires it. Maybe the DBMS of the future will automatically schedule a backup when enough data changes. But sometimes backups are made for other reasons: to propagate changes from one system to another, to build test beds, as part of program testing, and so on. A skilled professional is needed to build the proper backup scripts, run them appropriately, and test the backup files for accuracy. And what about recovery? How can a damaged database know it needs to be recovered? Because the database is damaged, any self-managed recovery it might attempt is automatically rendered suspect. Here again, we need the wisdom and knowledge of the DBA.

And there are many other DBA duties that cannot be completely automated. Because each company is different, the DBMS must be customized using configuration parameters. Of course, you can opt to use the DBMS “as is,” right out-of-the-box. But a knowledgeable DBA can configure the DBMS so that it runs appropriately for their organization. Problem diagnosis is another tricky subject. Not every problem is readily solvable by developers using just the Messages and Codes manual and a help desk. What happens with particularly thorny problems if the DBA is not around to help?

Of course, the pure, heads-down systems DBA may (no, let's say should) become a thing of the past. Instead, the modern DBA will need to understand multiple DBMS products, not just one. DBAs furthermore must have knowledge of the business impact of each database under their care (more details here). And DBAs will need better knowledge of logical database design and data modeling – because it will advance their understanding of the meaning of the data in their databases.

Finally, keep in mind that we didn't automate people out of existence when we automated HR or finance. Finance and HR professionals are doing their jobs more efficiently and effectively, and they have the ability to deliver a better product in their field. That's the goal of automation. So, as we automate portions of the DBA’s job, we'll have more efficient data professionals managing data more proficiently.


This blog entry started out as a call to automate, but I guess it kinda veered off into an extended dialogue on what can, and cannot, be accomplished with automation. I guess the bottom line is this... Automation is key to successful, smooth-running databases and applications... but don't get too carried away by the concept.


I hope you found the ideas here to be useful... and feel free to add your own thoughts and comments below! 



Wednesday, November 13, 2013

DBA Rules of Thumb - Part 1

Over the years I have gathered, written, and assimilated multiple collections of general rules of the road that apply to the management discipline of Database Administration (DBA). With that in mind, I thought it would be a good idea to share some of these Rules of Thumb (or ROTs) with you in a series of entries to my blog.

Now even though this is a DB2-focused blog, the ROTs that I will be covering here are generally applicable to all DBMSs and database professionals.

The theme for this series of posts is that database administration is a very technical discipline. There is a lot to know and a lot to learn. But just as important as technical acumen is the ability to carry oneself properly and to embrace the job appropriately. DBAs are, or at least should be, very visible politically within the organization. As such, DBAs should be armed with the proper attitude and knowledge before attempting to practice the discipline of database administration.

Today's blog entry offers up an introduction, to let you know what is coming. But I also will share with you the first Rule of Thumb... which is

#1 -- Write Down Everything


During the course of performing your job as a DBA, you are likely to encounter many challenging tasks and time-consuming problems. Be sure to document the processes you use to resolve problems and overcome challenges. Such documentation can be very valuable should you encounter the same, or a similar, problem in the future. It is better to read your notes than to try to recreate a scenario from memory.

Think about it like this... aren't we always encouraging developers to document their code? Well, you should be documenting your DBA procedures and practices, too!

And in Future Posts...

In subsequent posts over the course of the next few weeks I will post some basic guidelines to help you become a well-rounded, respected, and professional DBA.

I encourage your feedback along the way. Feel free to share your thoughts and Rules of Thumb -- and to agree or disagree with those I share.

Wednesday, November 06, 2013

IBM Information on Demand 2013, Wednesday

Today's blog entry from Las Vegas covering this year's IOD conference will be my final installment on the 2013 event.

The highlight for Wednesday, for me anyway, was delivering my presentation to a crowded room of over a hundred folks who were interested in hearing about cost optimization and DB2 for z/OS. The presentation was kind of broken down into two sections. The first discussed subcapacity pricing and Variable Workload License Charges (VWLC). IBM offers VWLC for many of its popular software offerings, including DB2 for z/OS. What that means is that you receive a monthly bill from IBM based on usage. But the mechanics of exactly how that occurs are not widely known. So I covered how this works, including a discussion of MSUs, Defined Capacity, the rolling four hour average (R4H), and the IBM SCRT (Sub Capacity Reporting Tool).

Basically, with VWLC your MSU usage is tracked and reported by LPAR. You are charged based on the maximum rolling four hour (R4H) average MSU usage. R4H averages are calculated each hour, for each LPAR, for the month. Then you are charged by product based on the LPARs it runs in. All of this information is collected and reported to IBM using the SCRT (Sub Capacity Reporting Tool). It uses the SMF 70-1 and SMF 89-1 / 89-2 records. So you pay for what you use, sort of. You actually pay based on LPAR usage. Consider, for example, if you have DB2 and CICS both in a single LPAR, but DB2 is only minimally used and CICS is used a lot. Since they are both in the LPAR you’d be charged for the same amount of usage for both. But it is still better than being charged based on the usage of your entire CEC, right?
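
To illustrate the arithmetic (with invented MSU figures, not anyone's actual bill), here is a rough sketch of how a rolling four hour average might be computed from hourly readings for a single LPAR; the monthly charge is then driven by the peak R4H value:

# Rough illustration of the rolling four hour (R4H) average calculation.
# The hourly MSU figures below are invented for the example; SCRT works from
# SMF data, but the arithmetic is the same idea: average each 4-hour window
# and bill on the monthly peak.
hourly_msu = [120, 150, 300, 420, 380, 200, 90, 80]  # sample hourly MSU usage for one LPAR

r4h_averages = []
for i in range(len(hourly_msu)):
    window = hourly_msu[max(0, i - 3): i + 1]   # current hour plus up to 3 prior hours
    r4h_averages.append(sum(window) / len(window))

print([round(x, 1) for x in r4h_averages])
# -> [120.0, 135.0, 190.0, 247.5, 312.5, 325.0, 272.5, 187.5]
print("Peak R4H average:", max(r4h_averages))   # 325.0 -> the basis for the monthly charge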

I then moved along to talk about tuning ideas with cost optimization in mind, including targeting monthly peaks, SQL tuning, using Defined Capacity to extend a batch window, and some out-of-the-box ideas.

I also spent some time today wandering through the Expo Center where IBM and many other vendors were talking about and demoing their latest and greatest technology. And I picked up some of the usual assortment of t-shirts, pins and other tchotchkes.

And I also attended a session called Fun With SQL that was, indeed, fun... but also pointed out how difficult it can be to code SQL on the fly in front of a room full of people!

Overall, this year's IOD was another successful conference. IOD is unmatched in my opinion in terms of the overall experience including education, entertainment, product news, meeting up with and talking to folks I haven't seen in a while, and generating leads for consulting engagements. Of course, with 13,000+ attendees the conference can be overwhelming, but that means there is always something of interest going on throughout the day. And by the time Wednesday rolls around, most people are starting to get tired, me included.

Of course, I still have tonight and tomorrow morning before heading back home... so I may still post another little something later in the week once I've had time to digest everything a little bit more.

In the interim, if you'd like other people's opinions and coverage of IOD, check out the blogs on the IOD hub at http://www.ibmbigdatahub.com/IOD/2013/blogs.

But for now, thanks IBM, for throwing another fantastic conference focusing on my life's work passion -- data!

IBM Information on Demand 2013, Tuesday

The second day of the IBM IOD conference began like the first, with a general session attended by most of the folks at the event. The theme of today's general session was Big Data and Analytics in Action. And Jake Porway was back to host the festivities.

The general session kicked off talking about democratizing analytics, which requires putting the right tools in people's hands when and where they want to use them. And also the necessity of analytics becoming a part of everything we do.

These points were driven home by David Becker of the Pew Charitable Trusts when he took the stage with IBM's Jeff Jonas, Chief Scientist and IBM Fellow. Becker spoke about the data challenges involved in maintaining accurate voting rolls. He talked about more than 12 million outdated records across 7 US states. Other issues mentioned by Becker included deceased people still registered to vote, people registered in multiple states, and the biggest issue, 51 million citizens not even registered.

Then Jonas told the story of how Becker invited him to attend some Pew meetings because he had heard about Jonas' data analytics expertise. After sitting through the first meeting Jonas immediately recognized the problem as being all about context. Jonas offered up a solution to match voter records with DMV records instead of relying on manual modifications.

The system built upon this idea is named ERIC, short for the Electronic Registration Information Center. And Pew has been wowed by the results. ERIC has helped to identify over 5 million eligible voters in seven states. The system was able to find voters who had moved, those who had not yet registered, and those who had passed away.

"Data finds data," Jonas said. If you've heard him speak in the past, you've probably heard him say that before, too! He also promoted the G2 engine that he built and mentioned that it is now part of IBM SPSS Modeler.

This particular portion of the general session was the highlight for me. But during this session IBMers also talked about Project NEO (the next generation of data discovery in the cloud), IBM Concert (delivering insight and cognitive collaboration), and what Watson has been up to.

I followed up the general session by attending a pitch on Big Data and System z delivered by Stephen O'Grady of RedMonk and IBM's Dan Wardman. Stephen started off the session and made a couple of statements that were music to my ears. First, "Data doesn't always have to be big to lead to better decisions." Yes! I've been saying this for the past couple of years.

And he also made the observation that since data is more readily available, businesses should be able to move toward evidence-based decision-making. And that is a good thing. Because if instead we are making gut decisions or using our intuition, the decisions simply cannot be as good as those based on facts. And he backed it up with this fact: organizations using analytics are 2.2x more likely to outperform their industry peers.

O'Grady also offered up some Big Data statistics that are worth taking a look at --> here

And then Wardman followed up with IBM's System z information management initiatives and how they tie into big data analytics. He led off by stating that IBM's customers are most interested in transactional data instead of social data for their Big Data projects. Which led him to posit that analytics and decision engines need to exist where the transactional data exists -- and that is on the mainframe!

Even though the traditional model moves data for analytics processing, IBM is working on running analytics against the data in place, without moving it. And that can speed up Big Data projects for mainframe users.

But my coverage of Tuesday at IOD would not be complete without mentioning the great concert sponsored by Rocket Software. Fun. performed and they rocked the joint. It is not often that you get to see such a new, young, and popular band at an industry conference. So kudos to IBM and Rocket for keeping things fresh and delivering high quality entertainment. The band performed all three of their big hits ("Carry On", "We Are Young", and "Some Nights"), as well as a bevy of other great songs, including a nifty cover of the Stones' "You Can't Always Get What You Want."

All in all, a great day of education, networking, and entertainment. But what will Wednesday hold? Well, for one thing, my presentation on Understanding The Rolling 4 Hour Average and Tuning DB2 to Control Costs.

So be sure to stop by the blog again tomorrow for coverage of IOD Day Three!