
Four Keys to Reducing High Transaction Costs: Part 1

How a top US retailer supported 250,000 SQL queries per second with IBM DB2 for Linux, UNIX and Windows on Black Friday 2011

Black Friday—the day after Thanksgiving in the US—is the busiest day of the year for retailers. It is followed by Cyber Monday and several more days of peak activity. During this period, the performance of retailers’ websites is critical to profitability for the entire year. Several leading retailers selected IBM® Commerce Server and IBM DB2® for Linux, UNIX and Windows (LUW) to power their e-business engines, and I’ve been lucky enough to work with many of them.

Delivering outstanding transaction performance is essential—but as many of these retailers know, that performance can come at a high cost. How can retailers maximize performance while reducing the potentially high transaction costs?

There is a methodology for identifying areas for cost reduction that I have been teaching and prescribing to prominent organizations for several years. Companies that took this advice have achieved tremendous success—and while the focus here is on e-commerce, the principles are universal and can apply to all online transaction processing (OLTP) databases, including SAP, Siebel, PeopleSoft, and Manhattan Associates, plus homegrown applications and many others.


Execute within your means

Your DB2 LUW servers have a certain processing capacity, much like your personal budget. You have to live within your means, and your servers must operate within their capacities. To thrive within capacity constraints, you have to manage your costs. You obviously can't print money (that's illegal), and I don't advocate throwing money at more hardware either, because most performance problems cannot be cured with additional hardware at a reasonable cost.


Lower costs = more profit

This leads to the most fundamental point: you need to focus on the costs of execution inside DB2. Many people look at db2top, db2pd, or other monitoring tools and examine rates. Rates can be entertaining, but they are not very helpful because they vary widely with the number of users, the time of day, and business cycles.

Costs, on the other hand, are relatively constant in the absence of significant changes. It doesn't matter whether there are 100 users or 10,000: a product lookup transaction will execute a certain number of SQL statements, and those SQL statements will drive a certain number of logical and physical I/O operations while consuming CPU cycles. Although you can avoid physical I/O with more bufferpool and SORTHEAP memory, logical I/O to pages in memory still consumes precious CPU cycles.

How, then, can you lower CPU costs?


Finding the problem is half the problem

As the old saying goes, “A well-stated problem is a half-solved problem.” If you want OLTP e-commerce applications to run as fast as possible, you need an accurate answer to the pressing question “What costs the most?” Only after you understand where the high costs of execution occur can you start working to reduce them.


You can’t hide

Reducing costs involves making physical design changes. Specifically, indexes must be created, altered, or dropped according to the needs of the transaction workload. Don't try to hide your index design problems behind huge bufferpools; doing so will only exhaust your CPU capacity and may still leave you with locking problems. Transaction response times tend to degrade very rapidly once server CPU utilization exceeds 90 percent. CPU utilization needs to be managed, so the question becomes: which CPU costs should you try to avoid?


The keys to fighting high costs

One top US retailer I worked with focused on reducing costs by identifying four areas of potential trouble:

  • Hot spots: These are the database tables that suffer the highest read I/O cost. The challenge is to find out what portion of the database's total read I/O each table represents.
  • Pain points: To find pain points, look for the SQL statements whose aggregate cost across all executions is highest and that drive read I/O to the hot spot tables.
  • Troublemakers: These are the tables that have endured the highest write I/O. For each of these tables, what indexes are defined, and do any of them have low cardinalities?
  • Double trouble: Tables that suffer excessively from overflow accesses need to be reorganized.

By concentrating on these areas, the retailer achieved some very impressive results. Let’s examine each area individually.
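
Before digging into each area, it helps to know where the raw counters live. As a rough sketch only (assuming a DB2 level that ships the SYSIBMADM administrative views, where the "commits attempted" and "rollbacks attempted" monitor elements surface as COMMIT_SQL_STMTS and ROLLBACK_SQL_STMTS), queries along these lines pull the numbers behind all four areas; verify the view and column names against your own release:

  -- Per-table activity: read I/O (hot spots), write I/O (troublemakers),
  -- and overflow accesses (double trouble)
  SELECT TABSCHEMA, TABNAME, ROWS_READ, ROWS_WRITTEN, OVERFLOW_ACCESSES
  FROM   SYSIBMADM.SNAPTAB
  ORDER  BY ROWS_READ DESC;

  -- Transactions attempted since the monitor counters were last reset
  SELECT SUM(COMMIT_SQL_STMTS + ROLLBACK_SQL_STMTS) AS TOTAL_TXNS
  FROM   SYSIBMADM.SNAPDB;

  -- Aggregate read cost of each dynamic SQL statement (pain points)
  SELECT NUM_EXECUTIONS, ROWS_READ, STMT_TEXT
  FROM   SYSIBMADM.SNAPDYN_SQL
  ORDER  BY ROWS_READ DESC;

  -- Index cardinalities, for spotting low-cardinality indexes on the troublemakers
  SELECT TABSCHEMA, TABNAME, INDNAME, FULLKEYCARD
  FROM   SYSCAT.INDEXES
  ORDER  BY FULLKEYCARD;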


Hot spots

Since indexes are created on tables, and index design is the primary lever for reducing transaction processing costs, you must look at table I/O. To determine the average read I/O cost of each table per transaction, divide the table's ROWS_READ by (COMMITS_ATTEMPTED + ROLLBACKS_ATTEMPTED). Your e-commerce site has to perform exceptionally well during peak periods, so the read I/O cost for each table should be less than 10 rows per transaction. If it isn't, then you have either missing indexes or indexes that need improvement. You can also find the percentage of rows read for each table by dividing its ROWS_READ by the sum of ROWS_READ across all tables and multiplying by 100. If any table has a high percentage and a read I/O cost greater than 10, then you've got your “well-stated problem” and are halfway to the solution.
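
A minimal sketch of that arithmetic, assuming the same SYSIBMADM snapshot views as above (view and column names may differ on your release), could look like this:

  -- Read I/O cost per transaction and share of total rows read, per table
  WITH txn AS (
    SELECT SUM(COMMIT_SQL_STMTS + ROLLBACK_SQL_STMTS) AS total_txns
    FROM   SYSIBMADM.SNAPDB
  ),
  tab AS (
    SELECT TABSCHEMA, TABNAME, SUM(ROWS_READ) AS rows_read
    FROM   SYSIBMADM.SNAPTAB
    GROUP  BY TABSCHEMA, TABNAME
  )
  SELECT t.TABSCHEMA,
         t.TABNAME,
         DECIMAL(t.rows_read, 31, 2) / NULLIF(x.total_txns, 0)    AS read_cost_per_txn,
         DECIMAL(t.rows_read, 31, 2) * 100
           / NULLIF((SELECT SUM(rows_read) FROM tab), 0)          AS pct_of_total_reads
  FROM   tab t, txn x
  ORDER  BY read_cost_per_txn DESC
  FETCH FIRST 20 ROWS ONLY;

Tables that float to the top of this list with a read cost over 10 are your hot spots; they are where index design work pays off first.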

But what about the costs of pain points, troublemakers, and double trouble? Part 2 of this article will explore the other three key areas that can contribute to costs.


Scott Hayes

Scott Hayes is President & CEO of DBI Software, a company specializing in DB2 LUW performance. He is an IBM DB2 GOLD Consultant and IBM Information Management Champion, a frequent speaker at IDUG and regional DB2 User Group events, the host of The DB2Night Show™ Webinar series, and the author of the widely read DB2 LUW Performance Blogs. He has worked with DB2 on distributed platforms since V1, and he reads IBM documentation instead of novels for entertainment.