In Part 1 of this article, I highlighted the essential need for Hadoop standards. The Apache community should submit Hadoop to a formal standardization process under an industry forum, either an established group or one focused specifically on big data. Under such an effort, the Hadoop industry should define a reference framework for developing new Hadoop specifications, one that safeguards the community's ability to evolve the core code bases, innovate vigorously, and differentiate competitively in areas that don't jeopardize community-wide interoperability.
At minimum, the Hadoop industry reference framework would specify clear service layers by functional areas, with clear interfaces or abstractions to ensure interoperability across these layers. The core functional service layers should include:
The industry should develop the Hadoop reference framework to address the key big data use cases in which organizations are deploying this technology:
In addition, there should be standard industry performance benchmarks for Hadoop, addressing these use cases and their most characteristic workloads. The Hadoop market has matured to the point where users now have plenty of high-performance options, including IBM InfoSphere® BigInsights™. The core open-source Hadoop stack is common across most commercial solutions, including BigInsights. The core mapping and reducing functions are well defined and capable of considerable performance enhancement through proven approaches such as Adaptive MapReduce, which is at the heart of BigInsights. Customers increasingly use performance as a key criterion for comparing vendors' Hadoop offerings, and often rely on various sort benchmarks to guide their evaluations. Many are demanding that the industry adopt a clear, consensus approach to performance claims for core operations, including NameNode operations, HDFS reads and writes, MapReduce jobs (maps, reduces, sorts, shuffles, and merges), and compression/decompression.
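To make that point concrete, here is a minimal sketch, not a formal benchmark, of timing sequential HDFS write and read throughput with the standard Hadoop FileSystem API; the path, buffer size, and data volume are illustrative placeholders.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsThroughputSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();            // reads core-site.xml / hdfs-site.xml from the classpath
        FileSystem fs = FileSystem.get(conf);
        Path path = new Path("/tmp/benchmark-sample.dat");   // hypothetical test path

        byte[] buffer = new byte[1 << 20];                   // 1 MB buffer
        int megabytes = 256;                                 // total data volume to write and read back

        long start = System.nanoTime();
        try (FSDataOutputStream out = fs.create(path, true)) {
            for (int i = 0; i < megabytes; i++) {
                out.write(buffer);
            }
        }
        double writeSecs = (System.nanoTime() - start) / 1e9;

        start = System.nanoTime();
        try (FSDataInputStream in = fs.open(path)) {
            while (in.read(buffer) > 0) {
                // discard the data; only elapsed time matters here
            }
        }
        double readSecs = (System.nanoTime() - start) / 1e9;

        System.out.printf("write: %.1f MB/s, read: %.1f MB/s%n",
                megabytes / writeSecs, megabytes / readSecs);
        fs.delete(path, false);
    }
}
```

Even a toy harness like this shows why a consensus methodology matters: buffer sizes, replication settings, and file layout can all swing the reported numbers, which is exactly what a standard benchmark definition would pin down.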
Hadoop standards must play well in the sprawling tableau of both established and emerging big data technologies. The larger picture is that the enterprise data warehouse is evolving into a virtualized cloud ecosystem in which relational, columnar, and other database architectures will coexist in a pluggable big data storage layer alongside HDFS, HBase, Cassandra, graph databases, and other NoSQL platforms.
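As a small illustration of what a pluggable storage layer already looks like today, the sketch below uses Hadoop's existing FileSystem abstraction, which selects a back-end implementation from the URI scheme; the cluster address is a hypothetical placeholder.

```java
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class PluggableStorageSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // The same client code can target different storage back ends; Hadoop picks
        // the FileSystem implementation from the URI scheme. The HDFS host name below
        // is a hypothetical placeholder.
        FileSystem hdfs = FileSystem.get(URI.create("hdfs://namenode.example.com:8020/"), conf);
        FileSystem local = FileSystem.get(URI.create("file:///"), conf);

        for (FileSystem fs : new FileSystem[] { hdfs, local }) {
            for (FileStatus status : fs.listStatus(new Path("/"))) {
                System.out.println(fs.getUri() + " -> " + status.getPath());
            }
        }
    }
}
```

A standardized version of this idea, extended beyond file systems to HBase, Cassandra, and other NoSQL stores, is what a truly pluggable big data storage layer implies.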
Hadoop standards will form part of a broader, but still largely undefined, service-oriented virtualization architecture for inline analytics. Under this paradigm, developers will create inline analytic models that deploy to a dizzying range of clouds, event streams, file systems, databases, complex event processing platforms, and next-best-action platforms.
The Hadoop reference framework should be developed according to principles that preserve and extend interoperability with the growing range of other big data platforms in use, such as data warehousing, stream computing, in-memory, columnar, NoSQL, and graph databases.
In my opinion, Hadoop's pivotal specification in this larger evolution is MapReduce. Within the big data cosmos, MapReduce will be a major unifying development framework supported by many database and integration platforms. Currently, IBM supports MapReduce models both in its Hadoop offering, InfoSphere BigInsights, and in its stream-computing platform, InfoSphere Streams.
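For reference, the canonical word-count job from the standard Hadoop MapReduce tutorial, lightly commented below, shows the shape of the map and reduce functions that any engine claiming MapReduce support would need to honor.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // Map phase: emit (word, 1) for every token in the input split.
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Reduce phase: sum the counts emitted for each word after the shuffle and sort.
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);   // local pre-aggregation before the shuffle
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

The appeal of MapReduce as a unifying framework is that this developer-facing model, rather than any particular engine, is what multiple platforms can agree to support.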
Among the specific new Hadoop specifications that would benefit the entire market and facilitate cross-platform interoperability, a multi-language query abstraction layer would be a much-needed addition for the heterogeneous big data universe we're living in. Such a specification would virtualize the diverse, confusing range of query languages (HiveQL, CassandraQL, JAQL, SQOOP (SQL to Hadoop), SPARQL, and so on) in use within the Hadoop and NoSQL communities. A unified query abstraction layer would enable more flexible topologies of Hadoop and non-Hadoop platforms within a common big data architecture, reflecting the work of many early adopters who had to build custom integration code to support their environments.
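To illustrate what such an abstraction might look like, here is a purely hypothetical Java sketch; none of these types exist in any shipping product or proposed standard, and the dialect list is only illustrative.

```java
import java.util.Iterator;
import java.util.Map;
import java.util.Set;

/**
 * Hypothetical sketch of a multi-language query abstraction layer. The intent is to
 * show the shape of the idea, not a real API: one entry point that hides which
 * query dialect (and which back-end engine) actually services the request.
 */
public interface BigDataQuerySession extends AutoCloseable {

    /** Query dialects a given back end might accept; purely illustrative. */
    enum Dialect { HIVEQL, CQL, JAQL, SPARQL }

    /** Dialects this particular back end supports. */
    Set<Dialect> supportedDialects();

    /** Submit a query string in the given dialect; rows come back as generic field maps. */
    Iterator<Map<String, Object>> execute(Dialect dialect, String query) throws QueryException;

    /** Wraps whatever failure the underlying engine reports. */
    class QueryException extends Exception {
        public QueryException(String message, Throwable cause) {
            super(message, cause);
        }
    }
}
```

A specification along these lines would let an application swap a Hive-backed session for a Cassandra- or SPARQL-backed one without rewriting the calling code, replacing the custom integration glue that early adopters have had to build by hand.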
Who will take the first necessary step to move the Hadoop community toward more formal standardization? That’s a big open issue.
IBM Big Data, Integration and Governance 2013 Forums
Attend an event near you to learn how leading organizations are making sense of massive amounts and new types of information to create value
DB2 TechTalk: Deep Dive on BLU Acceleration in DB2 10.5, Super Analytics Super Easy
Thursday, May 30: 12:30 – 2:00 PM ET
Informix Chat with the Lab: Primary Storage Manager (PSM), a Parallel Backup Alternative to Ontape
Thursday, May 30: 11:30 AM – 1:00 PM ET
Big Data Executive Summit
June 7 (Dallas) and June 10 (San Francisco)
Big Data Seminar 2013, Featuring Krish Krishnan
June 14 in New York City
Hadoop Summit North America
marcus evans Pharma Data Analytics Conference
July 10-11 in Philadelphia
IBM Smarter Content Summit 2013
Big Data at the Speed of Business
Broadcast event replay now available
Information on Demand 2013: Early Bird Registration Now Open
November 3-7 in Las Vegas