Monday, September 22, 2014

Rules for an Effective DB2 Monitoring Strategy

DB2, like relational databases in general, has a reputation for being (relatively) easy for users to understand: users specify what data to retrieve, not how to retrieve it. The layer of complexity removed from the users, however, had to go somewhere: into the code of DB2 itself. And that means you sometimes have to dig into the technical details of the DB2 optimizer, or other arcane internals, to uncover performance issues.

DB2 also has a reputation as a large consumer of resources, a reputation it owes largely to its complexity. Because DB2 performance analysts must understand and monitor this complexity, they require an array of performance monitoring tools and techniques.

But I do not want to get into all of the potential tools and techniques in today's short post. I plan to talk about the various types of DB2 performance monitoring solutions that are available in upcoming posts.

Instead, today's post just covers the high-level components of an effective DB2 performance management strategy... which include the following:
  • Scheduled batch performance reports on the recent performance of DB2 applications and the DB2 subsystem; a history of these reports would be useful, too.
  • An online monitor that runs whenever DB2 runs, enabling quick diagnosis of performance problems as they occur.
  • Online monitors for all teleprocessing environments in which DB2 transactions execute (for example, CICS, IMS/TM, or TSO).
  • A monitoring solution that can track and report on dynamic distributed traffic.
  • End-to-end transaction monitoring capability, sometimes called Application Performance Management.
  • SQL query monitoring and explain analysis.
  • Regular monitoring of z/OS for memory use and VTAM for network use.
  • Scheduled reports from the DB2 Catalog and queries run against the Real Time Statistics (RTS) tables (a sample query follows this list).
  • Access to the DB2 DSNMSTR address space to review console messages.
  • Use of the DB2 -DISPLAY command to view the status of databases, threads, and utility execution (an example follows this list).
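
To make a couple of those bullets concrete, here is a minimal sketch of an RTS report, assuming DB2 9 or later (where the Real Time Statistics tables reside in the DB2 Catalog as SYSIBM.SYSTABLESPACESTATS); the column names shown are standard, but verify them against the documentation for your version. It lists table spaces whose last full image copy predates their last REORG, a common sign that a fresh copy should be scheduled:

  SELECT DBNAME, NAME, PARTITION, TOTALROWS, EXTENTS,
         REORGLASTTIME, COPYLASTTIME
    FROM SYSIBM.SYSTABLESPACESTATS
   WHERE COPYLASTTIME IS NULL             -- never copied, or
      OR COPYLASTTIME < REORGLASTTIME     -- not copied since the last REORG
   ORDER BY DBNAME, NAME;

And the -DISPLAY bullet can be as simple as issuing a command such as -DISPLAY DATABASE(DSNDB04) SPACENAM(*) LIMIT(*) to check the status of the table spaces and indexes in a database (DSNDB04 is just the default database here; substitute your own).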


As I mentioned, I will cover the various types of performance tools and product offerings in upcoming posts. But for now, if you are interested in uncovering more information about third-party performance tools, take a look at this link on my web site.

Wednesday, September 17, 2014

Plan to Attend IBM Insight 2014

The IBM Insight conference is just around the corner, and if you care about DB2, Big Data, analytics, data warehousing, or, really, anything at all about enterprise data, then Las Vegas is the place to be the week of October 26 through 30, 2014.

But what is IBM Insight? Well, you may remember it as the IBM Information on Demand conference, or IOD for short. Yes, IBM has renamed the conference yet again. I'm sure a lot of you can remember when there were a bunch of different "Technical Conferences" like the IMS Tech Conference or the DB2 Tech Conference. Those all (and several others) got rolled up into IOD... which is now IBM Insight.


IBM claims to have changed the name because "It's no longer just about information; it's what you can do with the information."  And that kinda makes some sense... or at least it does to me!

With that bit of confusion out of the way, why should you attend the IBM Insight 2014 conference? Well, there is something for everybody in a data-related profession: over 1,500 presentations that run the gamut from DB2 to IMS to Cognos to BI to Big Data to analytics to... well, you get the idea. And there are five fast track groups that highlight several of IBM's important initiatives, covering:

  • Watson and Cognitive Computing
  • Cloud
  • Security
  • Infrastructure
  • Mobile and Social Engagement



And with over 13,000 attendees the opportunity to network with your peers is unmatched! Add to that the over 350 exhibitors at the Expo hall and you will be able to view, review, and examine all kinds of interesting software to help you manage your enterprise data.


Not to mention the fact that I'll be presenting there on the Top 10 DB2 Things You Need to Know (Wednesday, October 29, at 3:00 pm). Well, OK, I guess I mentioned it.

So what are you waiting for? Register today... there is still time to get the early bird discount of $300 off (it expires September 19, 2014)!

Saturday, September 13, 2014

Submit an Abstract for IDUG NA 2015 in Philadelphia

Yes, it is time to start thinking about next year's IDUG DB2 Tech Conference already, especially if you are hoping to deliver a presentation there. The conference will be in the Philadelphia area in 2015, a first for IDUG... well, actually, it will be held at the Radisson Hotel Valley Forge in King of Prussia, PA, but that might as well be Philadelphia. I was born and raised in Pittsburgh, and we always thought that entire side of the state might as well be New Jersey, so it is all the same to me!

The conference information can be found at this link and you can either follow the Call for Presentations link at that page, or click here to submit your abstract.

Now why should you consider speaking at IDUG? If you have spoken in the past, I'm sure you are wondering why somebody would even ask such a question. First of all, if you are accepted as a speaker, you get a free conference pass. And everybody can appreciate the benefit of some free education. But by putting together a presentation and preparing to speak in front of your peers, you will learn more than you think! Sometimes the "teacher" learns more than the "students"... if you have never done it before, give it a try. Sure, it can be scary at first, but don't let that stop you. Learning how to present and speak in public can, and will, further your career!

Think about it: the number one fear of most people is public speaking... feared even more than death! You know what that means? At a funeral, most people would rather be in the coffin than delivering the eulogy. That's just nuts!

And by going to IDUG you'll get a chance to network with IBMers, gold consultants, IBM Champions, DBAs, programmers, and more. Trust me... you don't want to miss out on this opportunity.

Thursday, September 04, 2014

The Importance of SLAs and RTOs

Assuring optimal performance is one of the most frequently occurring tasks for DB2 DBAs. Being able to assess the effectiveness and performance of various and sundry aspects of your DB2 systems and applications is one of the most important things that a DBA must be able to do. This can include online transaction response time evaluation, sizing of the batch window and determining whether it is sufficient for the workload, end-to-end response time management of distributed workload, and so on. 

But in order to accurately gauge the effectiveness of your current environment and setup, Service Level Agreements, or SLAs, are needed. SLAs derive from the practice of service-level management (SLM), which is the "disciplined, proactive methodology and procedures used to ensure that adequate levels of service are delivered to all IT users in accordance with business priorities and at acceptable cost."

In order to effectively manage service levels, a business must prioritize its applications and identify the amount of time, effort, and capital that can be expended to deliver service for those applications.

A service level is a measure of operational behavior. SLM ensures that applications behave accordingly by applying resources to those applications based on their importance to the organization. Depending on the needs of the organization, SLM can focus on availability, performance, or both. In terms of availability, the service level might be defined as “99.95 percent uptime from 9:00 a.m. to 10:00 p.m. on weekdays.” Of course, a service level can be more specific, stating that “average response time for transactions will be 2 seconds or less for workloads of 500 or fewer users.”
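
To make that first number concrete, here is the back-of-the-envelope arithmetic (a rough sketch, assuming about 21 weekdays in a month):

  13 hours/day x 21 weekdays = 273 hours of scheduled uptime per month
  273 hours x 60 minutes     = 16,380 minutes
  16,380 minutes x 0.0005    = roughly 8 minutes

So a 99.95 percent commitment for that 9:00 a.m. to 10:00 p.m. weekday window allows only about 8 minutes of downtime per month; anything beyond that breaches the agreement.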

For an SLA to be successful, all parties involved must agree on stated objectives for availability and performance. The end users must be satisfied with the performance of their applications, and the DBAs and technicians must be content with their ability to manage the system to the objectives. Compromise is essential to reach a useful SLA.

In practice, though, many organizations do not institutionalize SLM. When new applications are delivered, there may be vague requirements and promises of subsecond response time, but the prioritization and budgeting required to assure such service levels are rarely tackled (unless, perhaps, the IT function is outsourced). It never ceases to amaze me how often SLAs simply do not exist. I always ask for them whenever I am asked to help track down performance issues or to assess the performance of a DB2 environment.

Let's face it, if you do not have an established agreement for how something should perform, and what the organization is willing to pay to achieve that performance, then how can you know whether or not things are operating efficiently enough? The simple answer is: you cannot.

It may be possible for a system assessment to offer up general advice on areas where performance gains can be achieved. But in such cases -- where SLAs are non-existent -- you cannot really deliver guidance on whether the effort to remediate the "problem areas" is worthwhile. Without SLAs in place you simply do not know if current levels of performance are meeting agreed-upon service levels, because there are no agreed-upon service levels (and, no, "subsecond response time" is NOT a service level!). Additionally, you cannot know what level of spend is appropriate for any additional effort needed to achieve the potential performance, because no budget has been agreed upon.

Another potential problem is the context of the service being discussed. Most IT professionals view service levels on an element-by-element basis. In other words, the DBA views performance based on the DBMS, the SysAdmin views performance based on the operating system or the transaction processing system, and so on. SLM properly views service for an entire application. However, it can be difficult to assign responsibility within the typical IT structure. IT usually operates as a group of silos that do not work together very well. Frequently, the application teams operate independently from the DBAs, who operate independently from the SAs, and so on.

To achieve end-to-end SLM, these silos need to be broken down. The various departments within the IT infrastructure need to communicate effectively and cooperate with one another. Failing this, end-to-end SLM will be difficult to implement.

The bottom line is that the development of SLAs for your batch windows, your transactions, and your business processes is a best practice that should be implemented at every DB2 shop (indeed, you can remove DB2 from that last sentence and it is still true).

Without SLAs, how will the DBA and the end users know whether an application is performing adequately? Not every application can, or needs to, deliver subsecond response time. Without an SLA, business users and DBAs may have different expectations, resulting in unsatisfied business executives and frustrated DBAs—not a good situation.

With SLAs in place, DBAs can adjust resources by applying them to the most mission-critical applications as defined in the SLA. Costs will be controlled and capital will be expended on the portions of the business that are most important to the business. Without SLAs in place, an acceptable performance environment will be ever elusive. Think about it: without an SLA in place, if the end user calls up and complains to the DBA about poor performance, there is no way to measure the veracity of the claim or to gauge the possibility of improvement within the allotted budget.

Recovery Time Objectives (RTOs)

Additionally, the effectiveness of backup and recovery should be a concern to all DB2 DBAs. This requires that RTOs (Recovery Time Objectives) be established. An RTO is basically an SLA for the recovery of your database objects. Without RTOs, it is difficult (if not impossible) to gauge the state of recoverability and the efficacy of image copies being taken. 

Each database object should have an RTO assigned to it. The RTO needs to take into account the same type of things that an SLA considers. In other words, the business must prioritize its applications, DBAs must map database objects to the applications, and together they must identify the amount of time, effort, and capital that can be expended to assure the minimization of downtime for those applications.

Again, we are measuring operational behavior. The RTO ensures that, when problems occur requiring database recovery, the application outage is limited to what has been defined as tolerable for the business (in terms of uptime and cost to provide that uptime).

Again, as with an SLA, for the RTO to be successful, all parties involved must agree on stated objectives for downtime and time to recovery. The end users must be satisfied with the potential duration of their application's downtime, and the DBAs and technicians must be content with their ability to recover the system to the objectives. And again, cost is a contributing factor. The RTO cannot simply be "I need my application up in 5 minutes and I can't spend any more money to do that," because that is not reasonable (or possible).

Without written RTOs, DBAs can provide due diligence to make sure that database objects are backed up and recoverable, but cannot really provide any guarantee in terms of how quickly the data can be recovered (or perhaps, to what point in time) when an outage occurs. Of course, the DBA can create and review backup policies and procedures to encourage a recoverable environment. But there won't be any way to ensure with any consistency that the backup plan can deliver the time-to-recovery needed by the business.
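
One way to begin gauging recoverability against your RTOs is to check when each table space last had a full image copy taken. Here is a minimal sketch (assuming your copies are recorded in the DB2 Catalog and that your release's SYSIBM.SYSCOPY includes the TIMESTAMP column; verify the details for your version):

  SELECT DBNAME, TSNAME, MAX(TIMESTAMP) AS LAST_FULL_COPY
    FROM SYSIBM.SYSCOPY
   WHERE ICTYPE = 'F'                 -- 'F' = full image copy
   GROUP BY DBNAME, TSNAME
   ORDER BY LAST_FULL_COPY;

Table spaces that sort to the top of this list, or that do not show up at all, are the ones most likely to miss their RTOs, because the older the copy, the more log records a recovery must apply.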

So why don't organizations create SLAs and RTOs in the regular course of business?

And if your organization does create SLAs and RTOs, please share with us how doing so became a standard at your shop...