When a massive wave hits you, the only way to survive is to trust your instincts and ride it out. Organizations with hundreds of end users may find themselves in a similar situation when dealing with unacceptable application response times. A wave of unhappy users can arrive any time an application is deployed, especially if you do not trust your infrastructure and have not taken the time to run the relevant stress tests before deploying.
Recent trends and technical articles1 show that if a company chooses a relational database management system (RDBMS) without knowing how, or whether, that RDBMS can scale properly, issues may arise when user load increases. However, many software vendors (including Cisco Systems and SugarCRM) are now considering adding IBM® Informix® to their supported platforms because other RDBMSs offer insufficient performance and stability. In technical forums, questions about migrating various RDBMS implementations to Informix now appear more frequently.
Companies that run IBM Informix know that the product scales efficiently, but beyond the fact that their administrators have never seen CPUs burning on a stressed Informix server, no realistic statistics have been published in years. Why not? Was it because of competitors that wanted to “bury” Informix, a lack of actual performance data against competitors, or general marketplace indifference? The answer is irrelevant; what matters is determining how many terminal sessions of an average online transaction processing (OLTP) application the basic edition of IBM Informix can sustain.
The Transaction Processing Performance Council (TPC) is responsible for publishing the specifications, scenarios, and results of the recognized DBMS benchmark standards. The organization defines several types of DBMS benchmarks and ranks results according to criteria that are as close to real life as possible.
Current TPC benchmarks include TPC-C, TPC-E and TPC-H, but TPC-C is the most representative benchmark for OLTP activity. Although a number of open-source TPC-C runners can be found on the Internet, most of them are written in Java. The following test uses a runner developed by a team from Universidad de Valladolid, Spain, managed by Professor Diego Llanos, and adapted to run against IBM Informix.
Without an official copy of TPC-C from the TPC Council, and without their approval, this test cannot be validated as an official TPC-C benchmark. Nevertheless, all rules of the TPC-C benchmark were respected. Basically, this exercise involves running a stress test against Informix Innovator-C to note how many terminal sessions (that is, TPC-C users’ sessions running in a transaction monitor) can run concurrently on a single Informix server.
The project started with source code that could be adapted at little cost. The following steps were necessary to prepare for the benchmark test:
1) Install a server based on Linux (Fedora 14, kernel 126.96.36.199 x86_64) with the following specifications:
Note that this configuration costs less than EUR900.
2) Install and configure IBM Informix Innovator-C Edition on the Linux server. Innovator-C Edition is free, so there was no cost. The version chosen was 11.70 FC4.
3) Adapt the TPC-C application from Universidad de Valladolid, initially developed in ESQL/C for PostgreSQL, to run against IBM Informix. This was a relatively light task because the main modifications involved replacing PostgreSQL-specific mechanisms with their Informix equivalents. The initial database creation statements were also optimized to take advantage of RAW tables and prepared statements (see the sketch after this list). Again, the benchmark does not measure the database load phase, so the less time spent loading, the better.
4) Compile and debug the application.
5) Tune Informix.
6) Run the test, gradually increasing the load until system performance degrades, making sure each run passes according to TPC-C rules.
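As a rough illustration of the load-time optimizations described in step 3, the following ESQL/C sketch converts a table to RAW type for the bulk load and reuses a prepared INSERT statement. It is a minimal sketch only: the table and column names follow the TPC-C schema, the function name is invented for illustration, the code is not taken from the Universidad de Valladolid runner, and it assumes an already open connection to the benchmark database.

/*
 * Illustrative ESQL/C load routine: RAW table for non-logged bulk load,
 * plus a prepared INSERT that is executed repeatedly.
 * Assumes an open connection to the benchmark database.
 */
#include <stdio.h>

EXEC SQL include sqlca;

int load_warehouses(int count)
{
    EXEC SQL BEGIN DECLARE SECTION;
    int  w_id;
    char w_name[11];
    EXEC SQL END DECLARE SECTION;

    /* RAW tables are non-logged, which speeds up the initial load */
    EXEC SQL alter table warehouse type (raw);
    if (sqlca.sqlcode != 0)
        return -1;

    /* Prepare the INSERT once, then execute it for every row */
    EXEC SQL prepare ins_wh from
        "insert into warehouse (w_id, w_name) values (?, ?)";

    for (w_id = 1; w_id <= count; w_id++) {
        sprintf(w_name, "WH%08d", w_id);
        EXEC SQL execute ins_wh using :w_id, :w_name;
        if (sqlca.sqlcode != 0)
            return -1;
    }

    /* Switch back to a logged table type before the measured run */
    EXEC SQL alter table warehouse type (standard);
    return 0;
}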
Even with an unofficial test, these rules must be followed:
The test measures the response times of five typical OLTP transaction types. If fewer than 90 percent of the transactions of any type complete within the acceptable response-time limit, the test fails. For each transaction type, the report provides the minimum, maximum, average, and 90th-percentile response times, but it gives no details about the individual transactions that missed the limit. At the end, the test is granted a global result expressed in tpmC-UVA, which is the number of valid transactions executed per minute.
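To make these rules concrete, the following plain C sketch shows how the 90 percent criterion, the 90th-percentile response time, and a transactions-per-minute figure can be computed from recorded response times. It illustrates the pass rules described above and is not the code of the Universidad de Valladolid runner.

/*
 * Illustration of the pass rules: at least 90 percent of each transaction
 * type must finish within its response-time limit, and throughput is
 * reported as valid transactions per minute.
 */
#include <stdlib.h>

static int cmp_double(const void *a, const void *b)
{
    double d = *(const double *)a - *(const double *)b;
    return (d > 0) - (d < 0);
}

/* Returns 1 if a transaction type satisfies the 90 percent rule */
int passes_response_rule(const double *resp, int n, double limit_seconds)
{
    int i, within = 0;

    for (i = 0; i < n; i++)
        if (resp[i] <= limit_seconds)
            within++;

    return (double)within / n >= 0.90;
}

/* 90th-percentile response time, as reported in the result details */
double percentile_90(double *resp, int n)
{
    qsort(resp, n, sizeof(double), cmp_double);
    return resp[(int)(0.90 * (n - 1))];
}

/* Throughput in transactions per minute over the measurement interval */
double tpm(int valid_transactions, double interval_minutes)
{
    return valid_transactions / interval_minutes;
}

As a sanity check against the detailed results at the end of this article, 109931 valid transactions over the 180-minute measurement interval work out to roughly 610.7 per minute, which matches the reported 610.728 tpmC-UVA.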
Following step 6 of the action plan, the intent was to determine the breakpoint where the test would fail. The test started with 50 warehouses with 10 terminals per warehouse, for a total of 500 user terminals.
Run #1: These were the original performance parameters:
Result: Passed. tpmC-UVA: 590.208
A closer look at the result details showed enough margin to add more warehouses, even though the “bench” binary was already consuming 100 percent of a CPU, an issue that could not be addressed at this stage. In the overall vmstat output, user time averaged 38 percent, with a minimum of 31 percent and a maximum of 49 percent. I/O wait averaged 11 percent, with a minimum of 6 percent and a maximum of 25 percent. The original estimate was close, but there was still some headroom, so the number of warehouses was increased to 55.
Run #2: This configuration produced the optimum results:
Result: Passed. tpmC-UVA: 610.728
Informix Innovator-C still passed the test, but the server slowed down somewhat. The overall gain was only 20 tpmC for 50 more terminals, indicating that the next run would likely fail. The test also showed that checkpoints drive the I/O wait counter up significantly (as high as 50 percent); however, Informix transactions are not blocked by checkpoints, and the extra I/O wait had only a minor impact on the system as a whole.
Run #3: This configuration caused Informix Innovator-C to reach its inflection point:
Result: Failed. tpmC-UVA: 497.567
This result was expected. This configuration reached the limit of what Informix Innovator-C can handle.
The test successfully completed the 55-warehouse run at 10 terminals per warehouse, for a total of 550 terminal sessions, which is impressive given that both the application and the database server were hosted on the same machine.
To continue testing Informix Innovator-C, another, more advanced client-server configuration was used, but remarkably, the results were identical. While system monitoring on the database server showed only 15 to 20 percent average user CPU with 550 terminals, the testing revealed that the “bench” binary driving the whole benchmark could not tolerate a load of more than 550 terminal sessions, causing huge wait times for important transactions.
Also, the SQLTRACE output was double-checked to confirm that almost no query took more than 20 seconds to execute inside the database server. Fixing this issue may be the next phase of this project; alternatively, the same benchmark may be run through the official TPC Council.
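For reference, this kind of check can be reproduced with a small ESQL/C program that queries the syssqltrace pseudo-table in the sysmaster database, assuming SQL tracing has been enabled on the instance (for example through the SQLTRACE onconfig parameter). The snippet below is a sketch of that verification, not the exact query used during the test.

/*
 * Sketch: count traced statements whose run time exceeded 20 seconds.
 * Requires SQL tracing to be active; sql_runtime is expressed in seconds.
 */
#include <stdio.h>

EXEC SQL include sqlca;

int main(void)
{
    EXEC SQL BEGIN DECLARE SECTION;
    int slow_count = 0;
    EXEC SQL END DECLARE SECTION;

    EXEC SQL connect to 'sysmaster';

    EXEC SQL select count(*) into :slow_count
             from syssqltrace
             where sql_runtime > 20;

    if (sqlca.sqlcode == 0)
        printf("Statements slower than 20 seconds: %d\n", slow_count);

    EXEC SQL disconnect current;
    return 0;
}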
Finally, these tests show that IBM Informix Innovator-C Edition is an excellent choice for starting a departmental application deployment project, even with the somewhat limited hardware used for this benchmark test. Although not calculated here, the ratio of infrastructure cost to tpmC should be extremely competitive, all the more so considering the low cost of administering IBM Informix Innovator-C and the power and stability it offers on limited budgets. Although this test did not rely on a large infrastructure with enormous amounts of CPU, RAM, and disk arrays, it demonstrated the capabilities and scalability of IBM Informix database servers at the entry level, on a single server. What if the test had included the Informix Flexible Grid features, which are also available with the Innovator-C Edition? That’s a challenge for another day.
1 “A serious alternative to free RDBMS,” by Eric Vercelletto, September 30, 2011, http://en.vercelletto.com/2011/09/30/a-serious-alternative-to-free-rdbms
Test results accounting for Run #2, performed on 2012-02-23 at 19:17:31 using 55 warehouses.
Start of measurement interval: 45.016333 m
End of measurement interval: 225.016333 m
COMPUTED THROUGHPUT: 610.728 tpmC-uva using 55 warehouses.
252680 Transactions committed.
New-Order transactions:
109931 Transactions within measurement time (130117 Total).
Percentage of “well done” transactions: 94.221%
Response time (min/med/max/90th): 0.008 / 3.327 / 107.466 / 2.920
Percentage of rolled-back transactions: 0.967%
Average number of items per order: 14859.225 .
Percentage of remote items: 0.001% .
Think time (min/avg/max): 0.000 / 12.060 / 120.000
Payment transactions:
109824 Transactions within measurement time (130300 Total).
Percentage of “well done” transactions: 95.213%
Response time (min/med/max/90th): 0.001 / 2.664 / 107.702 /
Percentage of remote transactions: 14.105% .
Percentage of customers selected by C_ID: 39.337% .
Think time (min/avg/max): 0.000 / 12.038 / 120.000
Order-Status transactions:
10963 Transactions within measurement time (13012 Total).
Percentage of “well done” transactions: 95.457%
Response time (min/med/max/90th): 0.007 / 2.545 / 105.946 /
Percentage of clients chosen by C_ID: 39.770% .
Think time (min/avg/max): 0.000 / 10.096 / 93.000
Delivery transactions:
10982 Transactions within measurement time (13042 Total).
Percentage: 4.346%
Percentage of “well done” transactions: 96.767%
Response time (min/med/max/90th): 0.000 / 1.114 / 99.241 / 0.080
Percentage of execution time < 80s : 99.727%
Execution time min/avg/max: 0.023/2.518/101.781
No. of skipped districts: 0 .
Percentage of skipped districts: 0.000%.
Think time (min/avg/max): 0.000 / 5.038 / 47.000
Stock-Level transactions:
10980 Transactions within measurement time (13025 Total).
Percentage: 4.345%
Percentage of “well done” transactions: 97.304%
Response time (min/med/max/90th): 0.003 / 2.630 / 98.372 / 2.720
Think time (min/avg/max): 0.000 / 5.023 / 47.000
No vacuums executed.
>> TEST PASSED
The author wishes to thank Professor Diego R. Llanos from the Universidad de Valladolid, who allowed his work to be used for this test.