
Tuesday, November 28, 2023

Top Ten Db2 Performance Tips - No. 10 Stay Up to Date

Keeping Db2 up to date is of paramount importance for ensuring the security, stability, and optimal performance of your database environment. For Db2 for z/OS this means formulating a plan for applying PTFs (Program Temporary Fixes) and coordinating your Db2 fixes with your system maintenance, ideally following IBM's RSU (Recommended Service Upgrade) preventive service philosophy. For Db2 for LUW it means keeping up with fix packs.
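
On z/OS, PTFs are applied using SMP/E. Here is a minimal sketch of checking RSU-recommended service; the target zone name (DB2TGT) is hypothetical, so substitute the zone for your own environment:

SET BOUNDARY(DB2TGT).   /* hypothetical target zone              */
APPLY SOURCEID(RSU*)    /* select only RSU-recommended PTFs      */
      GROUPEXTEND       /* pull in requisite fixes as needed     */
      CHECK.            /* trial run; reports what would install */

Running APPLY with the CHECK operand first reports what would be installed without changing anything; remove CHECK to actually apply the service.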

Why is it important to stay current with maintenance? And what are the benefits of doing so?

  • Security: Cyber threats and vulnerabilities are ever-evolving, and database systems are potential targets for malicious activities. By staying up to date with the latest patches and fixes, you can ensure that your Db2 systems are protected with the latest security enhancements. Patching known vulnerabilities helps protect sensitive data, prevent unauthorized access, and reduce the risk of security breaches.
  • Bug Fixes and Stability: Software systems, including Db2, are not immune to bugs and glitches. The release of patches and fixes addresses identified issues, improving the overall stability and reliability of the database system. Applying these updates helps mitigate performance issues, software crashes, or data inconsistencies that may arise from known bugs. Keeping your Db2 environment updated with the latest fixes contributes to a smoother and more reliable database experience.
  • Performance: IBM regularly releases performance enhancements for Db2. Applying the latest patches and fixes ensures that you benefit from these improvements, allowing your database to operate at its optimal efficiency. The updates may include query optimization, resource management enhancements, or other performance-related optimizations, helping to maximize the speed and responsiveness of your Db2 systems and applications.

Another important aspect of staying up to date is tracking and upgrading the version and level of your Db2 systems. IBM clearly provides product lifecycle information for Db2 (and all of its products), including Db2 for z/OS. The first thing to be aware of is that you always want to be running supported software.

If you run into issues or problems while running unsupported software, IBM will not provide support. When your software version reaches its EOS date and support is no longer available through standard Software S&S, you have the option to purchase Continuing Support (typically at a higher rate). It is usually more cost-effective and less risky to migrate to the new, supported version. Furthermore, documentation for unsupported versions of IBM software typically is not available in IBM Knowledge Center.

Therefore, it is wise to be watchful of the EOM and EOS announcements made for your current version of Db2. EOM is the End of Marketing date. When this date is reached you can no longer order that version of the software. IBM Db2 12 for z/OS reached its EOM date in June 2023. More important is the EOS, or End of Service, date. After this date is reached that version of the software is out of support. For Db2 12 for z/OS, EOS is scheduled for 2023-12-31 (at the time this blog is being published). At times, EOS and EOM dates may be revised due to customer and/or IBM needs.

I maintain links to the IBM product lifecycle pages on my web site at https://www.mullinsconsulting.com/db2.htm

Furthermore, as of Db2 12 for z/OS, it is also important to keep up to date with the function level of your subsystems. Instead of requiring new versions or releases to deliver new functionality, IBM now uses a continuous delivery model: rather than waiting years for a new version, you receive new capabilities incrementally as function levels. This means that you should also be tracking function levels, applying maintenance when needed to support them, and activating new function levels on a continuous basis. Failing to do so means that you will be behind the curve when an eventual new version is released.
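
As a quick sketch, you can check your current function level and activate a new one with Db2 commands; the specific level shown here (V12R1M510) is just an example, so use one appropriate for your code and catalog levels:

-DISPLAY GROUP DETAIL
-ACTIVATE FUNCTION LEVEL (V12R1M510)

The DISPLAY GROUP output shows the currently activated function level along with the highest level your maintenance can support, which tells you whether additional PTFs are needed before activation.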

To access the latest patches and fixes for Db2, IBM provides resources and support channels that you can leverage:

  • IBM Fix Central: IBM Fix Central is a centralized portal where you can search, download, and apply patches, fixes, and updates for various IBM software products, including Db2. It offers an intuitive interface to locate the required fixes based on specific product versions and platforms.
  • IBM Support Portal: The IBM Support Portal is a comprehensive resource for accessing product documentation and technical articles, and for obtaining support for IBM software. It provides a wealth of information on Db2, including links to downloads, useful articles, and official product documentation.
  • IBM Passport Advantage: If you have a valid software maintenance subscription or license agreement with IBM, you can access the latest patches, fixes, and updates through the IBM Passport Advantage website. This platform enables authorized users to download software updates and manage their entitlements.

You should also keep up to date by following useful online forums and user groups. IDUG offers a good central site for such groups, and useful Db2 information is also offered by SHARE. Both of these organizations offer vibrant community forums and discussion boards that can provide valuable insights into the latest information about Db2. Furthermore, community members often share their experiences, recommendations, and helpful tips for staying up to date with Db2 updates.

Finally, I have to promote my web site, Mullins Consulting, Inc., where I frequently post new information about Db2, as well as my articles and books on Db2.

Summing It All Up

In conclusion, staying up to date with the latest patches and fixes for Db2 is essential for maintaining a secure, stable, and high-performing database environment. By applying the latest updates, you benefit from enhanced security, bug fixes, stability improvements, and performance optimizations. Leveraging the resources available to you (such as those discussed above), and engaging with the Db2 community forums ensures that you have access to the most recent updates and support to keep your Db2 environment current and reliable.

Thursday, March 02, 2017

The DB2 12 for z/OS Blog Series – Part 6: Transferring Ownership of Database Objects

When a database object is created it is given a qualified two-part name. This applies to tables, indexes, table spaces, distinct types, functions, stored procedures, and triggers. The first part is the schema name (or the qualifier), which is either implicitly or explicitly specified. The default schema is the authorization ID of the owner of the plan or package. The second part is the name of the object. 

But things can get confusing. When an object is created, an authorization ID is assigned as the owner of the object. This may, or may not, be used as the schema qualifier for the object. The object owner implicitly inherits the privileges to reference and maintain the object, as well as to grant privileges on it.
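
For example, consider this hypothetical table:

CREATE TABLE PAYROLL.EMP
 (EMPNO CHAR(6) NOT NULL);

Here the schema (qualifier) is PAYROLL and the object name is EMP. Had the statement been issued without the explicit qualifier, the table would have been created in the default schema.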

Changing the owner of a database object used to be a difficult process. But DB2 12 for z/OS delivers a simple method of transferring the ownership of your database objects: the TRANSFER OWNERSHIP SQL statement. 

The primary purpose for this new feature is to make it easier to manage database objects that are owned by an employee who no longer works for your company. You can use TRANSFER OWNERSHIP to simply switch the ownership of the database objects to another employee. The new owner can be an authorization ID or a role. You can issue the statement interactively, or embed it in an application program. 

The TRANSFER OWNERSHIP statement does not change the schema of the transferred database object. 

In order to transfer ownership of a database object, you must either be the owner of the object or have SECADM authority. The basic syntax of the statement is as follows: 

TRANSFER OWNERSHIP OF object-name
 TO {USER authorization-name | 
     SESSION_USER | 
     ROLE role-name} 
 REVOKE PRIVILEGES 

Be careful if a package depends on the current owner's privileges. The dependent package will be invalidated unless the current owner has already been explicitly granted those privileges from another source prior to the ownership transfer. For example, if a dependent package requires the SELECT privilege on a table held by the current owner, transferring ownership of that table invalidates the package unless the current owner was explicitly granted SELECT on that table before the transfer.
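
One way to guard against such invalidation is to explicitly grant the needed privilege before transferring ownership. A quick sketch (the table name and the OLDOWNER authorization ID here are hypothetical):

GRANT SELECT ON TABLE TR_P.HIST TO OLDOWNER;

TRANSFER OWNERSHIP OF TABLE TR_P.HIST
 TO USER JOHNDOE
 REVOKE PRIVILEGES;

Because OLDOWNER now holds SELECT from an explicit GRANT, dependent packages that rely on that privilege are not invalidated by the transfer.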

Here is a quick example transferring the ownership of a specific index to a different user, in this case, JOHNDOE. 

TRANSFER OWNERSHIP OF INDEX TR_P.XHIST02
 TO USER JOHNDOE 
REVOKE PRIVILEGES;

Thursday, January 26, 2012

A Forced Tour of Duty


Mainframe developers are well aware of the security, scalability, and reliability of mainframe computer systems and applications. Unfortunately, though, the bulk of new programmers and IT personnel are not mainframe literate. This should change. But maybe not for the reasons you are thinking.

Yes, I am a mainframe bigot. I readily admit that. In my humble opinion there is no finer platform for mission critical software development than the good ol’ mainframe. And that is why every new programmer should have to work a tour of duty on mainframe systems and applications after graduating from college.

Why would I recommend such a thing? Well, it is because of the robust system management processes and procedures which are in place and working extremely well within every mainframe shop in the world. This is simply not the case for Windows, Unix, and other platforms. By working on mainframe systems newbies will learn the correct IT discipline for managing mission critical software.

What do I mean by that? How about a couple of examples: It should not be an acceptable practice to just insert a CD and indiscriminately install software onto a production machine. Mainframe systems have well-documented and enforced change management procedures that need to be followed before any software is installed into a production environment.

Nor should it be acceptable to just flip the switch and reboot the server. Mainframe systems have safeguards against such practices. And mainframes rarely, if ever, need to be restarted because the system is hung or because of a software glitch. Or put in words PC dudes can understand: there is no mainframe “blue screen of death.” Indeed, months, sometimes years, can go by without having to power down and re-IPL the mainframe.

And don’t even think about trying to get around security protocols. In mainframe shops there is an entire group of people in the operations department responsible for protecting and securing mainframe systems, applications, and data. Security should not be the afterthought that it is in the Windows world.

Ever wonder why there are no mainframe viruses? A properly secured operating system and environment make such a beast extremely unlikely. And with much of the world’s most important and sensitive data residing on mainframes, don’t you think the hackers out there would just love to crack into those mainframes more frequently?

Project planning, configuration management, capacity planning, job scheduling and automation, storage management, database administration, operations management, and so on – all are managed and required in every mainframe site I’ve ever been involved with. When no mainframe is involved many of these things are afterthoughts, if they’re even thought of at all.

Growing up in a PC world is a big part of the problem. Although there may be many things to snark about with regard to personal computers, one of the biggest is that they were never designed to be used the way that mainframes are used. Yet we call a sufficiently “pumped-up” PC a server – and then try to treat it like we treat mainframes. Oh, we may turn it on its side and tape a piece of paper on it bearing a phrase like “Do Not Shut Off – This is the Production Server”… but that is a far cry from the glass house that we’ve built to nourish and feed the mainframe environment.

Now to be fair, strides are being made to improve the infrastructure and best practices for managing distributed systems. Some organizations have built an infrastructure around their distributed applications that rivals the mainframe glass house. But this is more the exception than the rule. With time, of course, the policies, practices, and procedures for managing distributed systems will improve to mainframe levels.

But the bottom line is that today’s distributed systems – that is, Linux, Unix, and Windows-based systems – typically do not deliver the stability, availability, security, or performance of mainframe systems. As such, a forced tour of duty supporting or developing applications for a mainframe would do every IT professional a whole world of good.

Tuesday, September 30, 2008

A Perfect Storm?

There is something of a perfect storm brewing in the world of data today. The world is becoming more automated, more connected, more wireless, and more complex. The next wave of database administration is intelligent automation. I refer to this as implementing software scrubbing bubbles that “work hard, so you don’t have to.” (Remember that commercial!)

As more of the tasks required of DBAs become more automated, the DBA will be freed to expand into other areas. So one front on this storm is the autonomic computing initiatives that automate DBA tasks. At the same time, IT professionals are being asked to know more about the business instead of just knowing the technology. So DBAs need to understand the business purpose and definition of the data they manage, as well as the technological underpinnings of the DBMS. The driving force here is predominantly regulatory compliance. This second front of the perfect storm will cause DBAs to work more closely with metadata to drive database archiving, data auditing, and security to ensure their organization complies with regulations like Sarbanes-Oxley, HIPAA, and others.

Regarding the wireless aspect of things, pervasive devices (PDAs, handhelds, cell phones, etc.) will increasingly interact with database systems. DBAs will need to get involved there to ensure successful data synchronization. And database systems will work with disconnected data seamlessly, such as data generated by RFID tags.

Yet another big database trend is technology "suck." By that I mean the DBMS keeps expanding as it sucks up technologies and functions that previously required you to purchase separate software. Remember when the DBMS had no ETL or OLAP functionality? Those days are gone. This will continue as the DBMS adds capabilities to tackle more and more IT tasks.

Another trend impacting DBAs will be a change in some of their roles as more and more of the recent DBMS features actually start being used in more production systems.

The net result of this perfect storm of changes is that data professionals are absolutely being required to do more... sometimes with less (less time, less money, less staff, etc.).

If you know the technology but are then required to know the business, this is doing more – much more. But the technology, in many cases, is also expanding. For example, DB2 9 incorporates native XML. Most DBAs are not XML savvy, but increasingly they will have to learn more about XML as the DBMS technology expands. And this is just one example.

Additionally, data is growing at an ever-increasing rate. Every year the amount of data under management increases (some analysts peg the compound annual rate of data growth at 125%) and in many cases the number of DBAs to manage that growing data is not increasing, and indeed, could be decreasing.

And, budgetary limitations can cause DBAs to have to do more work, on more data, with fewer resources. When a company reduces budget but demands more work, automation is an absolute necessity. Turning work over to the computer can help (although it is unlikely to solve every administrative issue). Sometimes IT professionals fight against the very thing they excel in – that is, automating work. If you think about it, every computer program is written to automate someone’s work – the writer (word processing), the accountant (financials, payroll, spreadsheets), and so on. This automation did not put the workers whose work was automated out of a job; instead it made them more efficient. Yet, for some reason, there is a notion in the IT industry that automating IT tasks will eliminate jobs. You cannot automate a DBA out of existence – but you can make that DBA’s job more effective and efficient with DBA tools and autonomic computing.

And the sad truth of the matter is that there is still a LOT more that can, and should, be done in most companies. We can start with better automation of DBA tasks, but we shouldn't stop there!

Corporate governance is hot – that is, technologies to help companies comply with governmental regulations. Software to enable archiving for long-term data retention, auditing to determine who did what to which piece of data, and security to better protect data are all hot data technologies right now. But database security needs to improve, and technologies for securing and auditing data need to be more pervasively implemented.

Metadata is increasing in importance. As data professionals really begin to meld together technology and business, they find that metadata is imperative. But most organizations do not have a fully populated, up-to-date metadata repository that acts as a lexicon for business data.

And finally, something that isn’t nearly hot enough is data quality and integrity. Tools, processes, and database options that can be used to make data more accurate and reliable are not implemented appropriately with any regularity. So the data stored in our corporate databases is suspect. According to Thomas C. Redman, data quality guru, poor data quality costs the typical company at least ten percent (10%) of revenue. That is a significant cost! Data quality is generally bad in most organizations – and more needs to be done to address that problem.

With all of these thoughts in mind, are you prepared to weather this perfect storm?

Monday, August 06, 2007

Encryption [DB2 9 for z/OS]

DB2 V9 offers some encryption news, but we need to go back a version to start the story. You see, DB2 supports encryption in Version 8 through encryption functions that need to be explicitly coded in order to encrypt and decrypt data.

These functions (ENCRYPT and DECRYPT) allow you to encrypt and decrypt data at the column level. Because you can specify a different password for every row that you insert, you are encrypting data at the “cell” level in your tables. If you use these functions to encrypt your data, be sure to put some mechanism in place to manage the passwords that are used to encrypt the data. Without the password, there is absolutely no way to decrypt the data.

To assist you in remembering the password, you have the option to specify a hint (for the password) at the time you encrypt the data. The following SQL example shows an INSERT that encrypts the SSN (Social Security number) using a password and a hint:

INSERT INTO EMP (SSN)
VALUES(ENCRYPT('289-46-8832','TARZAN','? AND JANE'));

The password is “TARZAN” and the hint we’ve chosen to provide is “? AND JANE”… so the hint will prompt us to think of Tarzan as the companion of Jane.

In order to retrieve the encrypted data you will need to use one of the DECRYPT functions (in this case, DECRYPT_BIT), supplying the correct password. This is shown in the following SELECT statement:

SELECT DECRYPT_BIT(SSN,'TARZAN') AS SSN
FROM EMP;


If we fail to supply a password, or supply the wrong one, the data is returned in an encrypted format that is unreadable.

The result of encrypting data using the ENCRYPT function is VARCHAR FOR BIT DATA. The encryption algorithm is an internal algorithm. For those who care to know, it uses Triple DES cipher block chaining (CBC) with padding and the 128-bit secret key is derived from the password using an MD5 hash.

When defining columns to contain encrypted data the DBA must be involved because the data storage required is significantly different. The length of the column has to include the length of the non-encrypted data + 24 bytes + the number of bytes to the next 8-byte boundary + 32 bytes for the hint.
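
For example, the 11-byte SSN used above requires 11 bytes of data + 24 bytes of encryption overhead + 5 bytes to reach the next 8-byte boundary (16) + 32 bytes for the hint, or 72 bytes in all. So a sketch of the column definition (the table is simplified for illustration) might look like this:

CREATE TABLE EMP
 (SSN VARCHAR(72) FOR BIT DATA);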

OK, that is all V8 stuff and this series of blog postings is supposed to be about V9 functionality, right? So what about version 9? Well, DB2 9 for z/OS offers some nice improvements to encryption support. Firstly, DB2 can take advantage of encryption hardware advances.

CP Assist for Cryptographic Function, aka CPACF, is available on z990 hardware. CPACF can run on all the CPUs, but remember, this feature is available only on z990 and later machines, not the older z900. The z990 also introduces a PCIXCC card which is needed for the IBM Data Encryption Tool, but not for the DB2 encryption functions.

Note: The IBM Data Encryption Tool (available from IBM at an additional price) offers encryption for DB2 tables at the table level, whereas the encryption functions (free with DB2) offer encryption at the column level.

The CP Assist for Cryptographic Function delivers cryptographic support on every CP with Data Encryption Standard (DES), Triple DES (TDES), and Advanced Encryption Standard (AES)-128 bit data encryption/decryption, as well as Secure Hash Algorithm (SHA-1) and SHA-256 hashing. For a more detailed discussion of CPACF, associated technology and functionality, check out the following IBM redbook: IBM eServer zSeries 990 (z990) Cryptography Implementation (SG24-7070).

Basically, the net result is that the cost of encrypting DB2 data under V9 is reduced on the z990 hardware.

Additionally, IBM has added encryption support in the controllers of its storage devices.

Both the IBM TS1120 and IBM Ultrium 4 tape drives include data encryption capabilities within the drive itself. This support can allow you to avoid the need for host-based encryption of data or the use of specialized encryption appliances. In addition, IBM claims that the encryption does not significantly impact the performance of the drives, so there should be minimal to no impact on the batch processing window when encrypting in this manner.

So far, we’ve been talking about encryption for data at rest. But DB2 9 for z/OS also improves support for encryption of data in transit. DB2 9 supports the Secure Sockets Layer (SSL) protocol by implementing the z/OS Communications Server IP Application Transparent Transport Layer Security (AT-TLS) function. The z/OS V1R7 Communications Server for TCP/IP introduces the AT-TLS function in the TCP/IP stack for applications that require secure TCP/IP connections. AT-TLS performs transport layer security on behalf of the application, in this case DB2 for z/OS, by invoking the z/OS system SSL in the TCP layer of the TCP/IP stack. The z/OS system SSL provides support for TLS V1.0, SSL V3.0, and SSL V2.0 protocols.

So encryption of data over the wire is improved in z/OS 1.7. The Communications Server supports AT-TLS, which uses SSL data encryption. SSL encryption has been available on z/OS for a long time, but DB2 9 for z/OS now makes use of this facility and offers SSL encryption using a new secure port.

When acting as a requester, DB2 for z/OS can request a connection using the secure port of another DB2 subsystem. When acting as a server, and from within a trusted context (I’ll discuss trusted context in a later DB2portal blog entry), SSL encryption can be required for the connection.
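
As a sketch of how that secure port gets defined, the DDF statement of the DSNJU003 (change log inventory) utility accepts a SECPORT parameter alongside the standard DRDA port; the location name and port numbers below are hypothetical:

DDF LOCATION=MYDB2,PORT=446,SECPORT=448

Requesters that connect to the SECPORT are required to use SSL, while the standard PORT continues to accept unencrypted connections.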

So, little by little, better encryption support is being made available within the world of DB2 for z/OS.

Monday, April 09, 2007

New DB2 9 Security Redbook

Just a quick entry today to alert everyone that there is a new DB2 for z/OS redbook available on the topics of regulatory compliance, security, and audit. The redbook is titled Securing DB2 and Implementing MLS on z/OS and you can download it for free today over the web.

The redbook is 360 pages (including index) and covers the plethora of new security features in DB2 for z/OS. If you haven't looked at DB2's authorization and security functionality in a while there is much to learn... and this redbook will be very illuminating.

That's all for today.