In Part 1 of this article, I highlighted the essential need for Hadoop standards. The Apache community should submit Hadoop to a formal standardization process under an industry forum—either an established group or one that is specifically focused on big data. Under such an effort, the Hadoop industry should clarify the reference framework within which new Hadoop specifications are developed in a way that safeguards the community’s ability to evolve the core code bases, innovate vigorously, and differentiate competitively in areas that don’t jeopardize community-wide interoperability.
At minimum, the Hadoop industry reference framework would specify clear service layers by functional area, with well-defined interfaces or abstractions that ensure interoperability across these core functional service layers.
The industry should develop the Hadoop reference framework to address the key big data use cases in which organizations are deploying this technology.
In addition, there should be standard industry performance benchmarks for Hadoop, addressing these use cases and their most characteristic workloads. The Hadoop market has matured to the point where users now have plenty of high-performance options, including IBM InfoSphere® BigInsights™. The core open-source Hadoop stack is common across most commercial solutions, including BigInsights. The core mapping and reducing functions are well defined and capable of considerable performance enhancement, leveraging proven approaches such as Adaptive MapReduce, which is at the heart of BigInsights. Customers increasingly use performance as a key criterion to compare different vendors’ Hadoop offerings, and often rely on various sort benchmarks to guide their evaluations. Many are demanding that the industry adopt a clear, consensus approach to performance claims for core operations, including NameNode operations, HDFS reads/writes, MapReduce jobs (maps, reduces, sorts, shuffles, and merges), and compression/decompression.
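To make the benchmarking point concrete, here is a minimal sketch of the kind of micro-benchmark that underlies sort-based Hadoop comparisons (TeraSort-style tests): generate synthetic fixed-size keyed records, time a sort over them, and report throughput. The record format, sizes, and function names are illustrative assumptions, not part of any actual benchmark specification.

```python
import random
import string
import time

def make_records(n, key_len=10, value_len=90):
    """Generate n synthetic (key, value) records with random keys,
    roughly in the spirit of a TeraGen-style data generator."""
    rand = random.Random(42)  # fixed seed so runs are repeatable
    return [
        (
            "".join(rand.choices(string.ascii_letters, k=key_len)),
            "x" * value_len,
        )
        for _ in range(n)
    ]

def benchmark_sort(records):
    """Sort records by key; return (sorted records, elapsed seconds,
    records sorted per second)."""
    start = time.perf_counter()
    out = sorted(records, key=lambda r: r[0])
    elapsed = time.perf_counter() - start
    return out, elapsed, len(records) / elapsed

records = make_records(100_000)
out, elapsed, rps = benchmark_sort(records)
assert out[0][0] <= out[-1][0]  # sanity check: output is key-ordered
```

A real benchmark would of course run distributed, over terabytes, and under an agreed rule set; the value of a standard lies precisely in fixing those rules so vendors’ throughput numbers are comparable.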
Hadoop standards must play well in the sprawling tableau of both established and emerging big data technologies. The larger picture is that the enterprise data warehouse is evolving into a virtualized cloud ecosystem in which relational, columnar, and other database architectures will coexist in a pluggable big data storage layer alongside HDFS, HBase, Cassandra, graph databases, and other NoSQL platforms.
Hadoop standards will form part of a broader, but still largely undefined, service-oriented virtualization architecture for inline analytics. Under this paradigm, developers will create inline analytic models that deploy to a dizzying range of clouds, event streams, file systems, databases, complex event processing platforms, and next-best-action platforms.
The Hadoop reference framework should be developed according to principles that preserve and extend interoperability with the growing range of other big data platforms in use, such as data warehousing, stream computing, in-memory, columnar, NoSQL, and graph databases.
In my opinion, Hadoop’s pivotal specification in this larger evolution is MapReduce. Within the big data cosmos, MapReduce will be a major unifying development framework supported by many database and integration platforms. Currently, IBM supports MapReduce models both in its Hadoop offering, InfoSphere BigInsights, and in its stream-computing platform, InfoSphere Streams.
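The reason MapReduce travels so well across platforms is that its contract is tiny: a map function emitting key–value pairs, a framework-supplied shuffle that groups values by key, and a reduce function over each group. The following single-process Python sketch shows that contract with the canonical word-count example; the phase names are mine, not any platform’s API.

```python
from collections import defaultdict

def map_phase(records, map_fn):
    """Apply the map function to every input record, emitting (key, value) pairs."""
    for record in records:
        yield from map_fn(record)

def shuffle_phase(pairs):
    """Group emitted values by key, as the framework's shuffle/sort step would."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups, reduce_fn):
    """Apply the reduce function to each key's grouped values."""
    return {key: reduce_fn(key, values) for key, values in groups.items()}

# Word count: the canonical MapReduce example.
def word_map(line):
    for word in line.split():
        yield (word.lower(), 1)

def word_reduce(word, counts):
    return sum(counts)

lines = ["big data needs standards", "big data needs interoperability"]
result = reduce_phase(shuffle_phase(map_phase(lines, word_map)), word_reduce)
print(result["big"])  # 2
```

Because only the map and reduce functions are user-supplied, any platform that honors this contract, whether a Hadoop cluster or a streaming engine, can run the same logical model, which is exactly what makes MapReduce a plausible unifying specification.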
In terms of particular new Hadoop specifications that would benefit the entire market and facilitate cross-platform interoperability, a multi-language query abstraction layer would be a much-needed addition for the heterogeneous big data universe we now live in. Such a specification would virtualize the diverse, confusing range of query languages and interfaces—HiveQL, CassandraQL, Jaql, Sqoop (SQL to Hadoop), SPARQL, and so on—in use within the Hadoop and NoSQL communities. A unified query abstraction layer would enable more flexible topologies of Hadoop and non-Hadoop platforms within a common big data architecture, sparing future adopters the custom integration code that many early adopters had to build to support their environments.
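To illustrate what such an abstraction layer might look like, here is a hypothetical sketch in which one logical query object is rendered into different back-end dialects. Every class, function, and dialect-registry name here is an illustrative assumption of mine; no such standard API exists today, which is precisely the gap the column describes.

```python
# Hypothetical query abstraction layer: one logical query, many dialects.
class Query:
    def __init__(self, table, columns, limit=None):
        self.table = table
        self.columns = columns
        self.limit = limit

def to_hiveql(q):
    """Render the logical query as a HiveQL SELECT statement."""
    sql = f"SELECT {', '.join(q.columns)} FROM {q.table}"
    if q.limit:
        sql += f" LIMIT {q.limit}"
    return sql

def to_cql(q):
    """Render the logical query as Cassandra CQL (statement-terminated)."""
    cql = f"SELECT {', '.join(q.columns)} FROM {q.table}"
    if q.limit:
        cql += f" LIMIT {q.limit}"
    return cql + ";"

DIALECTS = {"hiveql": to_hiveql, "cql": to_cql}

def render(query, dialect):
    """Dispatch the logical query to the requested back-end dialect."""
    return DIALECTS[dialect](query)

q = Query("clickstream", ["user_id", "url"], limit=10)
print(render(q, "hiveql"))  # SELECT user_id, url FROM clickstream LIMIT 10
```

The design point is that applications target the logical layer once, and per-platform renderers absorb dialect differences—the same role a standardized query abstraction would play across Hadoop and NoSQL stores.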
Who will take the first necessary step to move the Hadoop community toward more formal standardization? That’s a big open issue.