By Leon Katsnelson
What were you doing when the clock struck midnight on January 1, 2000? If you had a job managing enterprise systems, you were probably toasting yourself for having upgraded those systems in time for the big deadline—and I’m sure you deserved that glass of bubbly. After all, you prevented your company from getting stuck on legacy applications that might not have worked as of 1/1/2000.
Perhaps unwittingly, you also helped usher in a golden age of enterprise resource planning (ERP) software. In the late 1990s, many organizations made large-scale investments in ERP solutions in part to avoid potential Y2K problems with legacy systems.
Those initial ERP investments—though based on weak return on investment (ROI) justifications—are now generating an enormous ROI 10 to 12 years later. So if you implemented an ERP in the late 1990s, you averted the Y2K crisis and delivered ongoing benefits to your organization. But are you ready for Y2K Part Two?
All those gigantic ERP implementations set the stage for another crisis that’s now looming on the horizon. After companies purchased their new ERP systems, they transferred countless gigabytes of enterprise data into these systems and launched an unprecedented wave of data collection that continues today. Most major companies now store their data in massive relational databases that can be accessed by employees across the enterprise using standard systems.
Having centralized, accessible corporate data is a good thing. Data is the raw ingredient of analysis. Analysis drives strategic decisions. Strategic decisions lead to greater profits, heightened competitive advantage, and higher productivity.
But you can have too much of a good thing. After about 10 years of frenzied activity in moving data into ERP systems, IT departments are now dealing with the effects of what can best be described as data cholesterol.
Data cholesterol is a condition in which the excessive buildup of data leads to sluggishness across your production systems. It extends to nonproduction data copies and affects the way all data is managed. Just as too much cholesterol in the human body can lead to serious health problems, data cholesterol hinders the smooth functioning of enterprise systems. It causes slower response times to customer service requests and report queries. It prolongs testing and reporting. It ripples through everything you do in IT, forcing you to use more labor to support your infrastructure. And it could expose your corporation to needless litigation.
None of the leading enterprise software vendors have provided an easy way for customers to archive or purge their data. Because they focused on creating integrated repositories—that is, on making it easy to get the data in—these vendors did not consider that customers might not want to keep that data forever. And given the complexities of ERP data models and their referential integrity, it is very difficult to pull out data without breaking something.
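The referential-integrity problem can be sketched with a toy example. This is a minimal illustration, not any vendor's actual schema: the `orders` and `line_items` tables are hypothetical, and an in-memory SQLite database stands in for the ERP's relational store. The point it demonstrates is real, though: you cannot simply delete an old parent row while child rows still reference it, so any archiving process has to work bottom-up through the data model.

```python
import sqlite3

# Hypothetical two-table sketch of an ERP parent/child relationship.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # enforce referential integrity
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY)")
conn.execute("""CREATE TABLE line_items (
    id INTEGER PRIMARY KEY,
    order_id INTEGER NOT NULL REFERENCES orders(id))""")
conn.execute("INSERT INTO orders VALUES (1)")
conn.execute("INSERT INTO line_items VALUES (1, 1)")

# A naive purge of the parent row fails: a child row still references it.
try:
    conn.execute("DELETE FROM orders WHERE id = 1")
except sqlite3.IntegrityError as e:
    print("purge blocked:", e)

# An archiver must remove children first, then the parent,
# walking the dependency graph bottom-up.
conn.execute("DELETE FROM line_items WHERE order_id = 1")
conn.execute("DELETE FROM orders WHERE id = 1")
print("archived after child rows removed")
```

Real ERP data models chain hundreds of such relationships together, which is why "just delete the old rows" is not a workable archiving strategy.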
That’s why so many corporations now suffer from data cholesterol. Even midsized companies are amassing databases larger than one terabyte that are expanding at 30 to 70 percent each year. Databases of this size are far from ideal for several reasons, even though Moore’s Law keeps bringing down the price of hardware.
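To put those growth rates in perspective, here is a quick back-of-envelope compounding calculation. The 30 to 70 percent annual rates come from the figures above; the one-terabyte starting size and five-year horizon are illustrative assumptions.

```python
def size_after(start_tb: float, annual_growth: float, years: int) -> float:
    """Compound a database size over a number of years."""
    return start_tb * (1 + annual_growth) ** years

# A 1 TB database after five years at the cited growth rates:
for rate in (0.30, 0.50, 0.70):
    print(f"{rate:.0%}/yr: {size_after(1.0, rate, 5):.1f} TB")
# 30%/yr -> 3.7 TB, 50%/yr -> 7.6 TB, 70%/yr -> 14.2 TB
```

Even at the low end of the range, the database nearly quadruples in five years, which is why falling hardware prices alone do not solve the problem.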
The painful combination of tighter IT budgets, data cholesterol buildup, and strict regulatory requirements has driven savvy companies to begin focusing on how they manage their data. That’s why they’re using the principles of enterprise data management (EDM) as they implement their data governance functions. EDM focuses on creating accurate, consistent, and lean data content and integrating it into business applications. Today’s EDM solutions address the critical issues of data growth risk management; data privacy compliance; nimble test-data management; e-discovery; and application upgrades, migrations, and retirements—providing an effective way to avoid data cholesterol’s potentially dire adverse effects.
How is your company dealing with data cholesterol? Let us know in the comments.