Pushing IBM Informix Innovator-C to its Limits

TPC-C-based stress testing on Linux

When a massive wave hits you, the only way to survive is to trust your instincts and ride it out. Organizations with hundreds of end users may find themselves in a similar situation when dealing with unacceptable application response times. A wave of unhappy users can happen any time an application is deployed—especially if you do not trust your infrastructure, and did not take the time to run the relevant stress tests before deploying.

Recent trends and technical articles1 show that if a company chooses a relational database management system (RDBMS) without knowing how, or whether, that RDBMS can scale properly, issues may arise as user load increases. However, many software vendors (including Cisco Systems and SugarCRM) are now considering adding IBM® Informix® to their supported platforms because other RDBMS vendors offer insufficient performance and stability. In technical forums, questions about migrating various RDBMS implementations to Informix now appear more frequently.

Companies that run IBM Informix know that the product scales efficiently, but beyond the fact that those same people have never seen CPUs burning on a stressed Informix server, no realistic statistics have been published in years. Why not? Was it because competitors wanted to “bury” Informix, a lack of actual performance data against competitors, or general marketplace indifference? The answer is irrelevant; what matters is determining how many terminal sessions of an average online transaction processing (OLTP) application the basic version of IBM Informix can sustain.

The Transaction Processing Performance Council (TPC) is responsible for publishing the specifications, scenarios and results of the industry-standard DBMS benchmarks. The organization defines several types of DBMS benchmarks and ranks systems according to criteria that are as close to real life as possible.

Current TPC benchmarks include TPC-C, TPC-E and TPC-H; TPC-C is the most representative benchmark for OLTP activity. Although a number of open-source TPC-C runners can be found on the Internet, most of them are written in Java. The following test uses a runner developed by a team at the Universidad de Valladolid, Spain, led by Professor Diego Llanos, and adapted to run against IBM Informix.


Objectives of the test

Without an official copy of TPC-C from the TPC Council, and without its approval, this test cannot be validated as an official TPC-C benchmark. Nevertheless, all rules of the TPC-C benchmark were respected. Basically, this exercise involves running a stress test against Informix Innovator-C to see how many terminal sessions (that is, TPC-C user sessions running in a transaction monitor) can run concurrently on a single Informix server.


Preliminary steps and selected configuration

The operation started with source code that could be adapted at little cost. The following steps were necessary to prepare for the benchmark test:

1)     Install a server based on Linux (Fedora 14, x86_64 kernel) with the following specifications:

  • One-socket Intel quad-core Q9400 (four 2.66 GHz cores)
  • 16 GB of DDR2 RAM
  • Four 500 GB SATA II, 7,200 rpm disk drives

Note that this configuration costs less than EUR 900.

2)     Install and configure IBM Informix Innovator-C Edition on the Linux server. Innovator-C Edition is free, so there was no cost. The version chosen was 11.70 FC4.

3)     Adapt the TPC-C application from the Universidad de Valladolid—initially developed in ESQL/C for PostgreSQL—to run against IBM Informix. This was a relatively light task because the main modifications consisted of replacing PostgreSQL-specific mechanisms with their Informix equivalents. The initial database creation statements were also optimized to take advantage of RAW tables and prepared statements; since the benchmark does not measure database load statements, the less time spent loading, the better.
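As an illustration of the RAW-table optimization mentioned above, a typical Informix fast-load pattern looks like the following sketch (the column list is abridged and is not the actual TPC-C DDL used in the test):

```sql
-- Create the table as RAW: a non-logging table type, so bulk inserts
-- avoid logical-log overhead during the initial database load.
CREATE RAW TABLE stock (
    s_i_id     INTEGER,
    s_w_id     INTEGER,
    s_quantity SMALLINT
    -- ... remaining TPC-C STOCK columns omitted here
);

-- ... bulk-load the rows (LOAD FROM, dbload, or prepared INSERTs) ...

-- Convert the table back to a logged, recoverable type before
-- creating indexes and running transactions.
ALTER TABLE stock TYPE(STANDARD);
```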

4)     Compile and debug the application.

5)     Tune Informix.

6)     Run the test, gradually increasing the stress until the system performance decreases. Following TPC-C rules, make sure the test passes.


Benchmark rules

Even with an unofficial test, these rules must be followed:

  • Do not modify the database schema. The TPC-C database contains nine tables, each of which has a defined structure, identified indexes and integrity constraints. You must not modify, add or drop any of them.
  • Do not modify table cardinality. There are strict rules about table cardinality; for instance, one warehouse hosts 100,000 items, one district contains 3,000 customers, and so on.
  • Do not modify the transactions’ application code. TPC-C uses five different transactions designed to reflect a typical OLTP application: New Order, Payment, Delivery, Order Status and Stock Level. The last one features a SELECT COUNT(DISTINCT) and WHERE clauses on non-indexed columns, which puts the database engine under some pressure.
  • Each transaction category has a maximum admissible response time. For each transaction class, at least 90 percent of the transactions must complete within that limit; otherwise, the test fails.
  • Each test has a ramp-up (or warm-up) time and a measure time (the interval during which performance is measured). The ramp-up time lets the server accommodate the increasing load so that the measure time runs while performance has stabilized.
  • The checkpoint interval has no precise rule, except that at least one checkpoint must be executed during the measure time. This matters little with Informix Innovator-C because checkpoints block only a very small number of transactions. This test used 15-minute intervals, which is still realistic for a real-life system.
  • There is no rule for disk implementation. All four SATA II 7,200 rpm disks on the same SATA controller were used, in an attempt to balance the location of each table and index as accurately as possible.
  • Informix Innovator-C limits shared memory to 2 GB for all the instances on the same machine. This test used the full 2 GB, leaving space for SHMVIRTSIZE, which is required by sorts and similar operations.
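For reference, the shared-memory cap and the 15-minute checkpoint interval described above map to onconfig parameters along these lines (the values are illustrative, not the exact tuning used for this test):

```
SHMTOTAL            2097152   # total shared memory capped at 2 GB (value in KB)
SHMVIRTSIZE         524288    # initial virtual segment, leaves room for sorts (KB)
CKPTINTVL           900       # checkpoint every 15 minutes (seconds)
RTO_SERVER_RESTART  0         # disabled, so CKPTINTVL drives checkpoint timing
```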


What will this stress test measure?

The test measures the response times of five typical OLTP transactions. If less than 90 percent of any transaction type returns a response time within the acceptable limit, the test fails. For each transaction type, the runner reports the minimum, maximum, average and 90th-percentile response times, but no details about the failed transactions. At the end, the test produces a global result expressed in tpmC-UVA, the number of valid transactions executed per minute.
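To make the metric concrete: tpmC-UVA is essentially the number of valid New Order transactions divided by the length of the measurement interval in minutes, and each transaction class is checked against the 90 percent rule. The sketch below illustrates that arithmetic with the figures from the 55-warehouse run reported later in this article (the real runner applies per-class response-time limits and additional validity checks):

```python
import math

def tpmc_uva(new_order_tx: int, measure_minutes: float) -> float:
    """Throughput in tpmC-UVA: valid New Order transactions per minute."""
    return new_order_tx / measure_minutes

def percentile_90(response_times):
    """90th-percentile response time, using the nearest-rank method."""
    ordered = sorted(response_times)
    rank = math.ceil(0.9 * len(ordered))  # 1-based nearest rank
    return ordered[rank - 1]

def passes_90_percent_rule(response_times, limit_seconds):
    """TPC-C rule: at least 90 percent of a class's transactions must
    complete within the class's maximum admissible response time."""
    within = sum(1 for t in response_times if t <= limit_seconds)
    return within / len(response_times) >= 0.90

# 55-warehouse run: measurement from minute 45 to minute 225 (180 minutes),
# 109931 New Order transactions committed within the interval.
print(round(tpmc_uva(109931, 180.0), 3))  # → 610.728
```

The 180-minute interval and the New Order count from the detailed results reproduce the 610.728 tpmC-UVA figure reported for run #2.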


Running the test

Following step 6 of the action plan, the intent was to determine the breaking point where the test would fail. The test started with 50 warehouses and 10 terminals per warehouse, for a total of 500 user terminals.

Run #1: These were the original performance parameters:

  • Number of warehouses: 50
  • Number of terminals per warehouse: 10
  • Warm-up time: 30 min
  • Measure time: 60 min

Result: Passed. tpmC-UVA: 590.208

Looking at the detailed results, there was a margin that allowed the addition of more warehouses—even though the “bench” binary showed 100 percent CPU usage, nothing could be done about that at this point. The general vmstat output averaged 38 percent user time, with a minimum of 31 percent and a maximum of 49 percent; I/O wait averaged 11 percent, with a minimum of 6 percent and a maximum of 25 percent. The original estimate was close, but some headroom remained, so the number of warehouses was increased to 55.
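The vmstat figures above were aggregated by hand; a small script along the following lines can do the same. The column positions assume the standard 17-column Linux vmstat layout (with the trailing st column), which varies between versions, and the sample values below are made up to mirror the averages quoted above:

```python
def cpu_stats(vmstat_lines):
    """Summarize the us (user time) and wa (I/O wait) columns of
    `vmstat <interval>` output as (average, minimum, maximum) tuples."""
    us_vals, wa_vals = [], []
    for line in vmstat_lines:
        fields = line.split()
        # Skip the two header lines: data lines start with the integer 'r' count.
        if len(fields) != 17 or not fields[0].isdigit():
            continue
        us_vals.append(int(fields[12]))  # us column
        wa_vals.append(int(fields[15]))  # wa column
    summarize = lambda v: (sum(v) / len(v), min(v), max(v))
    return summarize(us_vals), summarize(wa_vals)

# Hypothetical samples chosen to reproduce the averages quoted above.
SAMPLE = [
    "procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----",
    " r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st",
    " 1  0      0 100000  20000 300000    0    0    10    50  200  400 38  5 51  6  0",
    " 2  1      0  99000  20000 300000    0    0    20   100  300  500 31  6 38 25  0",
    " 1  0      0  98000  20000 300000    0    0     5    40  250  450 49  4 41  6  0",
    " 1  0      0  97000  20000 300000    0    0     8    45  220  420 34  5 54  7  0",
]

print(cpu_stats(SAMPLE))  # → ((38.0, 31, 49), (11.0, 6, 25))
```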

Run #2: This configuration produced the optimum results:

  • Number of warehouses: 55
  • Number of terminals per warehouse: 10
  • Warm-up time: 45 min
  • Measure time: 240 min

Result: Passed. tpmC-UVA: 610.728

Informix Innovator-C still passed the test, but the server slowed down somewhat. The overall gain was only 20 tpmC for 50 more terminals, indicating that the next run would likely fail. The test showed that checkpoints significantly affect the I/O wait system counter (up to 50 percent I/O wait); although Informix transactions are not blocked by checkpoints, the I/O wait still weighed on the whole system.

Run #3: This configuration caused Informix Innovator-C to reach its inflection point:

  • Number of warehouses: 60
  • Number of terminals per warehouse: 10
  • Warm-up time: 45 min
  • Measure time: 60 min

Result: Failed. tpmC-UVA: 497.567

This result was expected. This configuration reached the limit of what Informix Innovator-C can handle.



This test successfully achieved the 55-warehouse, 10-terminal run, for a total of 550 terminal sessions—impressive for a test hosting both the application and the database server on the same machine.

To continue testing Informix Innovator-C, another, more advanced client-server configuration was used—but remarkably, the results were identical. While system monitoring on the database server showed only 15–20 percent average user CPU with 550 terminals, this testing revealed that the “bench” binary that drives the whole benchmark could not tolerate a system load of more than 550 terminal sessions, causing huge wait times for important transactions.

Also, SQLTRACE was double-checked to ensure that almost no query exceeded a 20-second response time inside the database server. Fixing this issue may be the next phase of this project; alternatively, the same benchmark may be run through the official TPC Council.

Finally, these tests show that IBM Informix Innovator-C Edition is an excellent choice for starting a deployment project for departmental applications, even with the somewhat limited specifications used for this benchmark test. Although not calculated here, the ratio of infrastructure cost to tpmC should be extremely competitive—all the more so considering the low cost of administering IBM Informix Innovator-C and its combination of power and stability for limited budgets. Although this test did not use a large infrastructure with enormous amounts of CPU, RAM and disk arrays, it demonstrated the capabilities and scalability of IBM Informix database servers at the entry level, on a single server. What if the test included the Informix Flexible Grid features, which are also available with the Innovator-C Edition? That’s a challenge for another day.

1 “A serious alternative to free RDBMS,” by Eric Vercelletto, September 30, 2011.


Detailed results of the 55 warehouses – 10 terminals run

Test results accounting performed on 2012-02-23 at 19:17:31 using 55 warehouses.

Start of measurement interval: 45.016333 m

End of measurement interval: 225.016333 m

COMPUTED THROUGHPUT: 610.728 tpmC-uva using 55 warehouses.

252680 Transactions committed.



New Order transactions:

109931 Transactions within measurement time (130117 Total).
Percentage: 43.506%
Percentage of “well done” transactions: 94.221%
Response time (min/med/max/90th): 0.008 / 3.327 / 107.466 / 2.920
Percentage of rolled-back transactions: 0.967%
Average number of items per order: 14859.225
Percentage of remote items: 0.001%
Think time (min/avg/max): 0.000 / 12.060 / 120.000



Payment transactions:

109824 Transactions within measurement time (130300 Total).
Percentage: 43.464%
Percentage of “well done” transactions: 95.213%
Response time (min/med/max/90th): 0.001 / 2.664 / 107.702 /
Percentage of remote transactions: 14.105%
Percentage of customers selected by C_ID: 39.337%
Think time (min/avg/max): 0.000 / 12.038 / 120.000



Order Status transactions:

10963 Transactions within measurement time (13012 Total).
Percentage: 4.339%
Percentage of “well done” transactions: 95.457%
Response time (min/med/max/90th): 0.007 / 2.545 / 105.946 /
Percentage of clients chosen by C_ID: 39.770%
Think time (min/avg/max): 0.000 / 10.096 / 93.000



Delivery transactions:

10982 Transactions within measurement time (13042 Total).
Percentage: 4.346%
Percentage of “well done” transactions: 96.767%
Response time (min/med/max/90th): 0.000 / 1.114 / 99.241 / 0.080
Percentage of execution time < 80s: 99.727%
Execution time (min/avg/max): 0.023 / 2.518 / 101.781
No. of skipped districts: 0
Percentage of skipped districts: 0.000%
Think time (min/avg/max): 0.000 / 5.038 / 47.000



Stock Level transactions:

10980 Transactions within measurement time (13025 Total).
Percentage: 4.345%
Percentage of “well done” transactions: 97.304%
Response time (min/med/max/90th): 0.003 / 2.630 / 98.372 / 2.720
Think time (min/avg/max): 0.000 / 5.023 / 47.000


Longest checkpoints:

Start time | Elapsed time since test start (s) | Execution time (s)

No vacuums executed.



Analysis of the results as per TPC Council methodology

TPC Clause 5.6.1

TPC Clause 5.6.2

TPC Clause 5.6.3

TPC Clause 5.6.4

(The graphs required by these clauses are not reproduced here.)

The author wishes to thank Professor Diego R. Llanos from the Universidad de Valladolid, who allowed his work to be used for this test.

Eric Vercelletto

Eric Vercelletto is an international expert specializing in Informix technologies since 1986. He has been an Informix Software Consultant for more than 11 years, acting as technical support, trainer and strategic accounts technical consultant. He is general manager of Begooden IT Consulting, an IBM partner that provides services on IBM Informix implementation projects such as application and database design, implementation, QA, auditing, technical support and issue management, performance tuning and training.