17 March 2013
The Hadoop news stream has calmed down significantly since the numerous announcements surrounding Strata and Hadoop Summit, but we have a lot of interesting articles and two exciting project announcements this week. There were also a number of impassioned discussions about the role of enterprise vendors in Apache Hadoop, which could probably fill an entire issue. Rather than cataloging those links, though, I've included a single link to a compilation of articles and kept the bulk of this issue technical.
Thanks to everyone who has been spreading the word about Hadoop Weekly -- it passed 400 subscribers over the past week. As always, feel free to send me feedback about the content of the newsletter via Twitter @joecrobak or by reaching out to |redacted| (you might notice that this issue doesn't contain any event announcements -- I've found it difficult to keep on top of them -- but please let me know if you miss that section).
Hortonworks has partnered with Microsoft to bring Apache Hadoop to Windows, and they recently released a beta version of Hortonworks Data Platform with Windows support. In the process of building this support, an impressive number of features and improvements were added to Hadoop. A rundown of the most notable work is featured on the Hortonworks blog, and they have also published a tutorial for running Hadoop on Windows.
http://hortonworks.com/blog/hadoop-in-windows/ http://hortonworks.com/blog/installing-hadoop-on-windows/
Quantcast has used Hadoop to process data for the past several years. After identifying some inefficiencies in HDFS and MapReduce, they adopted Kosmos FS (KFS), evolved the codebase into Quantcast FS (QFS), and implemented a new sort algorithm, QuantSort. In this interview, the leader of Quantcast's R&D team and the creator of KFS describe the motivation and history of this work.
http://www.odbms.org/blog/2013/03/big-data-improving-hadoop-for-petascale-processing-at-quantcast/
There have been a lot of conversations about proprietary vs. open source Hadoop stacks. Alex Popescu highlights a key paragraph from the ReadWrite article that spurred much of the conversation, and links to a number of articles expressing viewpoints on both sides of the debate.
http://nosql.mypopescu.com/post/45243494967/proprietary-hadoop-is-a-losing-strategy
Monash Research has an update on YARN, including information on when it should become stable. The article also talks about Tez (the new framework being developed to speed up Hive and Pig execution), comparing its features to those of Spark.
http://www.dbms2.com/2013/03/11/hadoop-execution-enhancements/
"Running the Largest Hadoop DFS Cluster" is a presentation by Hairon Kuang of Facebook about operating their 100 PB HDFS cluster. In the talk, she covers a number of interesting topics, such as Facebook's highly-available NameNode, their use of NameNode federation, and raid encoding for more efficient data storage.
http://www.infoq.com/presentations/Hadoop-HDFS-Facebook
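For context on the RAID encoding point: the idea is to replace some of HDFS's 3x replication with parity blocks, so that a stripe of data blocks plus one (XOR) or more (Reed-Solomon) parity blocks can survive block loss at well under 3x storage. Here's a toy Java sketch of the XOR case -- an illustration of the principle, not Facebook's implementation:

    /** Toy XOR-parity demo: the storage-saving idea behind HDFS RAID. */
    public class XorParityDemo {
        public static void main(String[] args) {
            byte[] blockA = {1, 2, 3, 4};
            byte[] blockB = {9, 8, 7, 6};
            // One parity block protects the whole stripe against a single
            // block loss: 1.5x storage here vs. 3x for triplication.
            byte[] parity = new byte[blockA.length];
            for (int i = 0; i < parity.length; i++) {
                parity[i] = (byte) (blockA[i] ^ blockB[i]);
            }
            // If blockA is lost, XOR the survivors to rebuild it.
            byte[] rebuilt = new byte[parity.length];
            for (int i = 0; i < rebuilt.length; i++) {
                rebuilt[i] = (byte) (blockB[i] ^ parity[i]);
            }
            System.out.println(java.util.Arrays.equals(rebuilt, blockA)); // true
        }
    }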
WanDisco is offering free online training for Hadoop -- classes cover HDFS, MapReduce, and HBase.
http://blogs.wandisco.com/2013/03/04/wandisco-announces-free-online-hadoop-training-webinars/
Amazon and other cloud providers offer services for running Hadoop in the cloud, but there are often advantages to running and configuring your own cluster. In this post, the author details a number of useful techniques for running Hadoop in AWS, including generating "rack" information based upon availability zone (sketched after the link) and using spot instances for task trackers. Worth a read if you're running or thinking of running Hadoop in the cloud.
http://metabroadcast.com/blog/hadoop-in-the-cloud
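A bit more detail on the availability-zone trick: Hadoop's rack awareness is driven by a user-supplied topology script (topology.script.file.name in Hadoop 1.x, net.topology.script.file.name in 2.x) that the NameNode invokes with node addresses as arguments, expecting one rack path per line of output. Below is a minimal Java sketch of such a mapper that treats each EC2 availability zone as a rack; the /etc/hadoop/ip-to-az.properties lookup file is a hypothetical stand-in for however you record each instance's zone. In practice you'd call this class from a one-line shell wrapper, since Hadoop expects an executable script:

    import java.io.FileInputStream;
    import java.util.Properties;

    /** Maps node IPs to "racks" (EC2 availability zones) for Hadoop rack awareness. */
    public class AzTopology {
        public static void main(String[] args) throws Exception {
            // Hypothetical lookup file with lines like "10.0.1.12=us-east-1a",
            // maintained by whatever tool provisions the cluster.
            Properties ipToZone = new Properties();
            try (FileInputStream in = new FileInputStream("/etc/hadoop/ip-to-az.properties")) {
                ipToZone.load(in);
            }
            // Hadoop passes one or more addresses and expects one rack path
            // per address, in the same order.
            for (String node : args) {
                System.out.println("/" + ipToZone.getProperty(node, "default-rack"));
            }
        }
    }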
Many Hadoop clusters use gigabit Ethernet, and we're starting to see more adoption of 10-gigabit Ethernet. D.K. Panda from Ohio State has been evaluating the effect of InfiniBand and other high-performance networks on HDFS, MapReduce, and HBase workloads. He shares some preliminary results on small (<25 node) clusters in this presentation.
Tony Baer of Ovum provides a detailed look at Intel's Hadoop strategy. This is the first information I've heard about their software suite, which includes a deployment and configuration system to set up Intel's optimizations. In addition to optimizations, Intel is bringing hardware-based encryption to Hadoop, and they seem to hope to integrate this work upstream (see e.g. HADOOP-9331).
http://ovum.com/2013/03/12/intel-hardware-optimizes-hadoop/
Parquet is a new columnar format that's been jointly developed by Cloudera and Twitter (and Criteo). Unlike Trevni, the columnar storage format supported by Apache Avro, Parquet is not tied to a single serialization framework, and it will support Protobuf, Thrift, and Avro. Parquet also has a number of advanced features, such as per-column encoding (illustrated after the links below) and a file metadata footer. At this time, Parquet has MapReduce and Apache Pig bindings. Currently under development are Apache Hive SerDes, Cascading taps, and new types of data encoding (run-length, delta, etc).
https://twitter.com/squarecog/status/311503760423784448 http://blog.cloudera.com/blog/2013/03/introducing-parquet-columnar-storage-for-apache-hadoop/
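To see why per-column encoding is a big deal, note that values within a single column tend to be repetitive (a country or status column, say), so laying data out column-by-column lets runs of identical values collapse to almost nothing. The toy Java run-length encoder below illustrates the idea -- it is not Parquet's actual encoder:

    import java.util.ArrayList;
    import java.util.List;

    /** Toy run-length encoder showing why columnar layouts compress well. */
    public class RleDemo {
        /** Encodes a column as "value x runLength" tokens. */
        static List<String> encode(String[] column) {
            List<String> runs = new ArrayList<String>();
            int i = 0;
            while (i < column.length) {
                int start = i;
                while (i < column.length && column[i].equals(column[start])) {
                    i++;
                }
                runs.add(column[start] + " x " + (i - start));
            }
            return runs;
        }

        public static void main(String[] args) {
            // Seven values in a repetitive column collapse to three runs.
            String[] country = {"US", "US", "US", "US", "FR", "FR", "JP"};
            System.out.println(encode(country)); // [US x 4, FR x 2, JP x 1]
        }
    }

In a row-oriented layout those same values would be interleaved with every other field, and runs like this would rarely form.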
Kiji is an API and set of tools for writing applications atop HBase. The first component release (a few months ago) was KijiSchema, for creating tables in Kiji. This week, KijiMR was released, which is a set of MapReduce libraries for Kiji. KijiMR includes bulk importers, gatherers and producers for accessing and populating Kiji data via MapReduce, command-line tools, and several other features.
http://www.kiji.org/2013/03/14/announcing-kijimr/