By Leon Katsnelson
A few months ago, I wrote an article arguing in favor of controlled experimentation as a corporate strategy for learning about big data. This approach flies in the face of the common misconception that companies should only embrace mature technologies with clear ROI. This month, I’d like to examine another big data misconception: the myth that leveraging big data demands a big idea.
Sure, big ideas are fun. Some big ideas really do change the world, thankfully. But when you really dig into how big ideas are operationalized, it becomes clear that good old-fashioned hard work rules the day. I know this idea isn’t consistent with all the ill-informed hype—but unlike the hype, it happens to be true.
I was reminded of this recently by yet another LinkedIn exchange on a Forbes article by Bob Evans. In the article, Evans gushed about a piece that Constellation Research CEO Ray Wang published on a Harvard Business Review blog. The gist of the article is that big data opportunities fall into only three buckets.
Wang’s three opportunities are fine, but they also feed the hype that big ideas are the only place to start. This just isn’t true. In my experience, the pragmatic use cases are a much better place to start. I know it can be more interesting to focus on big ideas right out of the gate, but in most cases, the right opportunity is a modest and pragmatic one. Swinging for the fences the first time up is simply NOT a best practice. In fact, it goes directly against the project methodology I created based on all of IBM’s years of big data project work.
Going after a big idea as your big data starting point may work for a venture-funded firm whose whole existence is based on a swing-for-the-fences new product. But for the vast majority of enterprises, it is simply bad methodology. I’d also strongly suggest that the Network Monetizer idea is about to come under serious pressure from privacy considerations (more on that later this year).
I’m not saying that Wang’s three opportunity buckets are conceptually incorrect, but he is skipping over dozens of near-term better places to start. Sometimes business users just need to be able to run their reports faster—and there is nothing wrong with that. Perhaps you can make a case for differentiation as a place to start (provided your goal is to walk, not run, by simply understanding customer behavior rather than trying to comprehensively reinvent the customer experience and how the company functions).
But if reinventing the whole company with your first big data project is an iffy idea, where do you start? First, brush up on Fit for Purpose architectures. Then keep these guidelines in mind:
More on these ideas will follow in future columns. In the meantime, I’ve recorded several webcasts that cover these topics in an interactive format.
So what do you think? Does this all make sense? Do you have different or better ideas to propose? Let me know in the comments.
DB2 TechTalk: Deep Dive on BLU Acceleration in DB2 10.5, Super Analytics Super Easy
Thursday, May 30: 12:30 – 2:00 PM ET
Informix Chat with the Lab: Primary Storage Manager (PSM), a Parallel Backup Alternative to Ontape
Thursday, May 30: 11:30 AM – 1:00 PM ET
Big Data Seminar 2013, Featuring Krish Krishnan
June 14 in New York City
marcus evans Pharma Data Analytics Conference
July 10-11 in Philadelphia
IBM Smarter Content Summit 2013
Big Data at the Speed of Business
Broadcast event replay now available
Information on Demand 2013: Early Bird Registration Now Open
November 3-7 in Las Vegas