In Part 1 of this article, I described the new statement-level monitoring feature in IBM DB2 10, highlighting how DB2 collects the data, what the data represents, and what this new feature can show developers and DBAs. This second part examines the more challenging task of how to code toward the feature.
Let me state upfront that the format of this article prevents me from including code that you can cut and paste directly into a running program. Although I will offer some hints on how to proceed, this article will not replace reading the manuals. On the other hand, if you have a go at this and encounter problems, contact me and I will be happy to help, no matter where your problems are!
To begin, you need to start a monitor trace, which makes DB2 collect this data for all packages bound in DB2 10 new-function mode (NFM). The data exists in storage for a given package version as long as that version is inside the environmental descriptor manager (EDM) pool, and during this time you can query the data from an Assembler program. When the package is ejected from the pool, you can tell DB2 to externalize the data to the system management facility (SMF) or generalized trace facility (GTF); but because packages should stay in the pool for a very long time, you will have to query the data on the fly anyway, so that is the focus of this article.
The data will be presented in instrumentation facility component identifier (IFCID) records similar to other information in DB2. The IFCID is 401 for static SQL. To be able to read these records on the fly, you have to write an instrumentation facility interface (IFI) program in Assembler to access it. This process might scare anyone who lacks Assembler skills, but most organizations will have a systems programmer on staff who can help you. IFI programming is documented in the application programming library in the DB2 Information Center.
To access the data using IFI, you have to establish a link between your program and a TRACE command with an OPx buffer—so your Assembler program has to start the trace that will signal DB2 to collect all this new data. The trace has to collect IFCID 401 for static SQL. (There is a similar IFCID for dynamic SQL, but this article looks only at static SQL to reduce complexity.) Be aware that the trace has to be started at a group level instead of at the individual IFCID 401. The group name covering the 401 is in fact 400, so your Assembler program should issue a START TRACE using IFI programming like the one shown in Figure 1.
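As a rough sketch of what such a call can look like (all labels below are my own, the exact trace class depends on your installation, and the parameter layouts must be verified against the IFI chapter of the manuals), the COMMAND function of DSNWLI can carry the START TRACE text; note that a buffer-information area is required when the command starts a trace to an OPx buffer:

```
*        Hedged sketch: start the IFCID 400 group trace through IFI.
*        Labels (IFCA, RETAREA, CMDAREA, BUFINFO) are illustrative.
         CALL  DSNWLI,(COMMAND,IFCA,RETAREA,CMDAREA,BUFINFO),VL
*
COMMAND  DC    CL8'COMMAND '       IFI function name
CMDAREA  DS    0F                  output area holding the command
         DC    AL2(CMDEND-CMDAREA) LL: total length incl. prefix
         DC    XL2'0000'           reserved
         DC    C'-START TRACE(MON) CLASS(30) IFCID(400) DEST(OPX)'
CMDEND   EQU   *
```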
After this command has been issued, DB2 starts the collection and you are then ready to query the data using an IFI READS (synchronous read) with appropriate checking of return codes, as shown in Figure 2.

Figure 2. Coding of a READS call
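A hedged sketch of the shape of such a READS call follows; the labels are my own, and the IFCA return-code fields come from the DSNDIFCA mapping, so check them against the macro:

```
*        Hedged sketch of a READS call asking for IFCID 401.
         CALL  DSNWLI,(READS,IFCA,RETAREA,IFCIDS,QUALS),VL
         CLC   IFCARC1,=F'0'       return code 1 in the IFCA
         BNE   RDERROR             nonzero: inspect IFCARC2 too
*
READS    DC    CL8'READS   '       IFI function name
IFCIDS   DS    0F                  IFCID list area
         DC    AL2(IFCIDE-IFCIDS)  LL: length of this area
         DC    XL2'0000'           reserved
         DC    AL2(401)            ask for IFCID 401
IFCIDE   EQU   *
```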
The second parameter (IFCA) contains return codes from DB2 that, if you are familiar with DB2 programming, are similar to the SQLCA. The third parameter points to a huge buffer you provide, into which the data will be returned. The last two parameters, IFCIDS and QUALS, include some filtering information that is clearly documented in the manuals.
You can’t just query the data in an endless loop unless you want to use all the available CPU resources on the box. Instead, you will probably query the data and then go into an OS wait for a specified time before you repeat the query. Fortunately, you can call an IBM z/OS® service that will delay your job with an OS wait. You can use the standard ILBOWAT0 (see Figure 3), or even better, the new 31-bit services CEE3DLY and CEEDLYM.
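For example, the delay step with CEE3DLY might be coded along these lines (a sketch; DLYSECS and FC are my own labels):

```
*        Hedged sketch: pause between READS calls using the
*        Language Environment service CEE3DLY.
         CALL  CEE3DLY,(DLYSECS,FC),VL
*
DLYSECS  DC    F'60'               delay interval in seconds
FC       DS    3F                  12-byte LE feedback code
```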
The next step is understanding how to find the data for the 401 record in the buffer that is pointed to in the READS call. This is where the coding becomes tricky.
Every 401 record is limited to 5,000 packages, so the buffer returned will probably consist of many 401 records chained together by pointers, each containing data for up to 5,000 packages. The challenge is to understand this chaining so that your Assembler program addresses every single set of package information mapped by the DSNDQW05 macro.
The IFCID records in DB2 are in a special format that can be confusing when you first see it. The data returned starts with a self-defining section that points to a product section (that defines which record you are dealing with) and a data section (that contains the actual 401 data). Schematically, it looks like Figure 4.
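As a sketch of the pointer chasing only, the code below assumes the usual IFI layout in which the self-defining section holds offsets to the other sections. The field names PRODOFF and DATAOFF are placeholders of my own, not the real mapping-macro names; verify every offset against DSNDQW05 before relying on this:

```
*        Hedged sketch: from the start of one 401 record, use the
*        self-defining section to reach the product section and
*        the first data section.
         LR    R5,R4               R4 -> start of this 401 record
         L     R6,PRODOFF(,R5)     offset of the product section
         AR    R6,R5               R6 -> product section
         L     R7,DATAOFF(,R5)     offset of the first data section
         AR    R7,R5               R7 -> 401 package data
```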
When you have built your program, it may be a good idea to start by dumping the storage after you have done your first successful READS-call—and then follow the pointers shown in Figure 4 to make your code correct.
When you can point to every 401 record after the READS call, you can take the fields one by one and format them. You can insert the values into your data warehouse or simply format them into a report. The only thing that could confuse you is that all timing fields are in the z/OS internal format called STCK. To get something meaningful out of these fields, they must be converted to a DB2 timestamp format. Fortunately, the z/OS service STCKCONV can be called from Assembler to do this for you. I will not go deeper into this service here, but if you have problems, contact me by email and I will be happy to assist.
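A hedged sketch of such a conversion with the STCKCONV macro follows; WORKSTCK and CONVOUT are my own labels, and the output area layout should be checked in the Assembler Services Reference:

```
*        Hedged sketch: convert an 8-byte STCK value from a 401
*        record into a printable date and time with STCKCONV.
         STCKCONV STCKVAL=WORKSTCK,CONVVAL=CONVOUT,                    X
               TIMETYPE=DEC,DATETYPE=YYYYMMDD
         LTR   R15,R15             check the macro return code
         BNZ   CONVERR             nonzero: conversion failed
*
WORKSTCK DS    D                   STCK value copied from the record
CONVOUT  DS    4F                  converted time and date output
```

From the converted fields you can then build a DB2 timestamp string for insertion into your warehouse tables.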
In conclusion, statement-level monitoring might initially appear more challenging than it actually is. If you have Assembler skills in house, you can capitalize on this new feature and gain benefits that you’ve probably been wanting for many years. Your investment in time and resources will be well worth it for your organization.
I look forward to your comments and questions.