By Leon Katsnelson
Traditionally, storage provisioning on the mainframe has been a slow and complex process. Apart from procurement and hardware installation, there are host activities for the Input/Output Definition File (IODF), SMS storage groups, and Automatic Class Selection (ACS) routines. IBM created zDAC to help the system programmer match the IODF to the storage controller, but that is only one aspect of the provisioning process. There are also considerations when DB2 System Point-in-Time Recovery is being utilized or Symmetrix Remote Data Facility (SRDF) and Peer-to-Peer Remote Copy (PPRC) are used for long distance array-based replication. The entire process is also mired in change control—which, while necessary, can be tedious and time-consuming.
The inevitable result of such a painful and extended process is that end users frequently request more storage than is immediately required so they can reduce the number of times they go to the well. Sometimes this extra space is simply never needed; worse, it is allocated but never written to, which is a hidden form of waste. For example, if a DB2 linear dataset is created with an 8 GB PRIQTY, that space is counted as utilized (according to all normal capacity planning and measurement tools) even if DB2 writes to only a fraction of the allocated dataset.
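To illustrate the hidden waste, here is a minimal Python sketch. It assumes PRIQTY is specified in KB (as it is for DB2 space allocation) and uses a hypothetical 1 GB of actually written data for illustration:

```python
def allocated_but_unused_gb(priqty_kb, used_gb):
    """Space charged by capacity-planning tools minus space DB2 actually wrote."""
    allocated_gb = priqty_kb / (1024 * 1024)  # PRIQTY is specified in KB
    return allocated_gb - used_gb

# An 8 GB PRIQTY (8 * 1024 * 1024 KB) with only 1 GB written (illustrative).
waste = allocated_but_unused_gb(8 * 1024 * 1024, 1.0)
print(f"{waste:.1f} GB allocated but never used")  # 7.0 GB
```

Capacity tools see the full 8 GB as consumed, so 7 GB of real disk is effectively stranded.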
After the storage has been fully provisioned, there are still challenges that can lead to poor application performance. Some Fibre Channel drives can be as large as 600 GB; at roughly 8.5 GB per MOD-9 (3390-9) volume, just think how many can be carved out of one of those!
The storage administrator must consider a number of questions.
One of the key factors that makes this activity more complex is the recent reduction in the IOPS density of Fibre Channel drives. Spinning disks have become much, much larger but have not gotten significantly faster, so they have been delivering IOPS at approximately the same rate over the past few years. Figure 1 illustrates this trend.

Figure 1. As spinning disks have become larger, I/O capability per GB has decreased at an alarming rate. (Source: Seagate.com)
IOPS density is the I/O capability of the drive (measured in I/Os per sec) divided by the size of the drive in gigabytes. It is clear from Figure 1 that four 146 GB Fibre Channel drives can deliver four times the I/Os that one 600 GB Fibre Channel drive can provide. The problem, however, is that 146 GB drives (and smaller) will be unavailable soon due to the availability of higher-density alternatives. This has been the trend over the last 15 years; smaller drives are being replaced by higher-capacity ones, forcing you to deploy DB2 systems on these very large actuators, and magnifying the possibility of contention between workloads sharing the same physical drives.
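The arithmetic behind this comparison can be sketched in a few lines of Python. The figure of roughly 150 IOPS per spinning 15K RPM drive is an illustrative assumption, not a number from the article; what matters is that per-drive IOPS stays roughly constant while capacity grows:

```python
def iops_density(drive_iops, drive_gb):
    """IOPS density: the drive's I/O capability (IOPS) divided by its size in GB."""
    return drive_iops / drive_gb

PER_DRIVE_IOPS = 150  # assumed per-drive figure for a 15K RPM FC drive

# One 600 GB drive vs. four 146 GB drives: same total capacity class.
print(iops_density(PER_DRIVE_IOPS, 600))      # 0.25 IOPS/GB
print(iops_density(PER_DRIVE_IOPS, 146))      # ~1.03 IOPS/GB per drive
print(4 * PER_DRIVE_IOPS, "vs", PER_DRIVE_IOPS)  # total IOPS: 600 vs 150
```

Because each spindle contributes about the same IOPS regardless of its size, the four smaller drives deliver four times the total I/O of the single large one, and roughly four times the IOPS density.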
Figure 1 shows that current 600 GB Fibre Channel drives are capable of about 0.25 IOPS/GB, far less than the slow 9 GB drive represented by the first bar in the chart. This reduction in IOPS density is an insidious trend that cannot continue if applications are to achieve their SLAs. A quantum change is required.
This article is continued in Part 2.