Data Eng Weekly

Data Eng Weekly Issue #251

11 February 2018

Lots of great content this week—LinkedIn's Apache HDFS load testing tool, a few articles on Apache Flink, example microservices built on Apache Kafka, and details on replacing Kafka's usage of ZooKeeper with etcd. There are also a couple of posts on Hadoop 3.0, and a year-ender from Confluent.


Technical

This post gives a thorough overview of the architecture behind Apache Flink's incremental checkpointing feature, which provides resiliency for stateful stream processing. Flink keeps local state in RocksDB and tracks which sstables (the underlying files storing the data) need to be copied to stable storage to form a snapshot. There's quite a bit more to it, as the post describes.
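The core bookkeeping idea — upload only the sstables that stable storage hasn't seen yet, and reference the rest — can be illustrated with a toy sketch. This is not Flink's API; all names here are invented for illustration.

```python
# Toy illustration of incremental-checkpoint bookkeeping (not Flink code).

def plan_incremental_checkpoint(local_sstables, already_uploaded):
    """Split the current local sstable set into files that must be
    uploaded and files that can simply be referenced from earlier
    checkpoints."""
    to_upload = local_sstables - already_uploaded   # new or compacted files
    referenced = local_sstables & already_uploaded  # reuse prior uploads
    return to_upload, referenced

# The first checkpoint uploads everything; later ones only ship the delta.
cp1_upload, _ = plan_incremental_checkpoint({"a.sst", "b.sst"}, set())
uploaded = {"a.sst", "b.sst"}
# After a compaction, a.sst is gone and c.sst is new:
cp2_upload, cp2_ref = plan_incremental_checkpoint({"b.sst", "c.sst"}, uploaded)
```

In real Flink, the hard parts this sketch ignores — deciding when old uploads can be garbage-collected, and handling concurrent checkpoints — are exactly what the post digs into.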

This in-depth post compares ClickHouse, Druid, and Pinot, all open-source distributed OLAP storage engines. It describes the similarities between the systems (e.g. their storage and indexing strategies), compares performance characteristics, and highlights key differences in data ingestion, data replication, and query execution.

Hortonworks has two posts on new features in HDF 3.1. The first covers the NiFi registry, which facilitates versioning of NiFi flows when promoting them across environments. The second looks at the Kafka integration, which has some neat features like support for ingesting data from the edge using the MiNiFi C++ agent. It also integrates with Apache Ranger for security and Apache Ambari for monitoring.

Confluent's KSQL doesn't yet support runtime configuration of user-defined functions (UDFs), but it is possible to write a custom function and rebuild KSQL from source. This tutorial walks through the steps to do that.

This post walks through configuring Amazon EMR with Kerberos for authentication and using IAM roles to implement fine-grained access to data stored in Amazon S3. The demo also covers integration with Hue and Hive.

LinkedIn has written about how they test HDFS performance before upgrading versions using a load-testing tool called Dynamometer. It simulates a production load by bootstrapping from the NameNode's fsimage, running a large number of simulated DataNodes, and replaying real operations from the HDFS audit log. There are lots of interesting details and stories in the post, and Dynamometer is now available on GitHub.

The data Artisans blog has coverage of a number of upcoming features in Apache Flink. These include a new deployment model with better support for Kubernetes, work on speeding up failure recovery, improved network stack performance, support for the Swift filesystem, and several updates to Flink Streaming SQL.

This post details a talk from QCon New York on Netflix's stream processing system. Built on Apache Kafka, Apache Flink, Apache Mesos, and more, the system analyzes data from video playback/discovery events. The post describes some of the challenges that Netflix encountered (such as getting access to live data and JAR conflicts) and strategies implemented (such as caching of data and data recovery) along the way.

This example implements the event-sourcing/CQRS model with Apache Kafka Streams. It includes several diagrams to explain what the service is doing, and the code is on GitHub, including Docker image definitions that make it easy to try out locally.
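The pattern itself is simple to sketch without Kafka: commands append immutable events to a log, and read models are projections rebuilt by replaying that log. This plain-Python toy (all names invented for illustration, with a list standing in for the Kafka topic) shows the shape:

```python
# Minimal event-sourcing/CQRS sketch; a list plays the role of the
# append-only Kafka topic.

events = []  # the event log: the single source of truth

def handle_deposit(account, amount):
    """Command handler: validate, then append an event (never mutate state)."""
    if amount <= 0:
        raise ValueError("deposit must be positive")
    events.append({"type": "Deposited", "account": account, "amount": amount})

def balances_view(log):
    """Read model: a projection derived entirely by replaying the log."""
    balances = {}
    for e in log:
        if e["type"] == "Deposited":
            balances[e["account"]] = balances.get(e["account"], 0) + e["amount"]
    return balances

handle_deposit("alice", 100)
handle_deposit("alice", 50)
balances = balances_view(events)  # projection can be rebuilt at any time
```

In the Kafka Streams version, the log is a topic and the projection is a state store kept up to date by the streams topology rather than recomputed from scratch.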

MapR has the third part in its series on stream processing to predict flight delays. This part looks at Kafka and Spark Streaming. The implementation makes heavy use of JSON, which is natively supported by MapR-DB.

The folks at Banzai have a fork of Apache Kafka that uses etcd rather than Apache ZooKeeper. The post describes their motivation, the effort the replacement takes, and some issues they ran into along the way. They describe some simple testing, but you'd want to do quite a bit more before trying this in production.


Jobs

Hello Fresh: Change the way people eat forever. Work with our data technology to deliver healthy meals to millions of customers, with a cutting-edge tech stack (Hadoop, Kafka, Impala, PySpark, AWS, Airflow) and time for personal and engineering development. Click the link for more info on becoming a Data Engineer at Hello Fresh in Berlin!


News

Confluent has written a year-ender celebrating 2017 milestones for the company and Apache Kafka. Highlights include Confluent's growth, KSQL, Kafka 1.0, and exactly-once semantics in Kafka.

This post notes that Hadoop 3.0's erasure coding provides huge improvements in storage efficiency for Hadoop clusters at the expense of network and CPU. This changes how companies will have to plan capacity growth, and it may shift some of the cost calculations when it comes to on-prem vs. SaaS.
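The storage math is worth seeing concretely. With 3x replication, every logical byte costs three raw bytes; with a Reed-Solomon RS(6,3) layout (the scheme commonly cited for HDFS erasure coding), six data blocks carry three parity blocks. The figures below are illustrative, not from the post:

```python
# Back-of-the-envelope raw-storage math: replication vs. erasure coding.

def replication_raw(data_tb, copies=3):
    """Raw TB needed under n-way replication."""
    return data_tb * copies

def erasure_coded_raw(data_tb, data_blocks=6, parity_blocks=3):
    """Raw TB needed under Reed-Solomon RS(data, parity)."""
    return data_tb * (data_blocks + parity_blocks) / data_blocks

data = 100                      # TB of logical data
rep = replication_raw(data)     # 3x replication -> 300 TB raw
ec = erasure_coded_raw(data)    # RS(6,3) -> 150 TB raw, half the footprint
```

The flip side, as the post stresses, is that reconstructing a lost block under RS(6,3) means reading six blocks and doing parity math, rather than copying one replica — hence the network and CPU cost.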

Erasure coding isn't the only new feature in Hadoop 3. This post summarizes several other major features, like support for containers, support for multiple standby NameNodes, and intra-DataNode disk balancing.


Releases

Apache Atlas version 0.8.2 was released. Atlas provides data governance for Hadoop ecosystem projects, including Hive and Storm. The new release includes search and UI improvements, high availability fixes, and more.

Version 2.7.1 of Apache Lens, the unified analytics layer for Hadoop, Hive, JDBC-backed services, and more, was released. Major features include support for Java 8, improvements to per-user configuration in the job scheduler, cube segmentation, retries to recover from transient errors, and UNION support across fact tables. There are also a number of bug fixes.

Apache Knox, the REST API gateway to Hadoop, released version 1.0.0. It has a few new features since the 0.14.0 release, but the major changes are to the package namespace.


Events

Curated by Datadog



California

Processing 100 Billion Events a Day at GumGum (El Segundo) - Thursday, February 15

Building a Big Data Stack on Kubernetes (San Jose) - Thursday, February 15

North Carolina

Jeff Dutton from HPE Talking Domain-Driven Design, CQRS, and Event Sourcing (Raleigh) - Monday, February 12

New Jersey

Apache NiFi Mardi Gras (Princeton) - Tuesday, February 13

New York

Event Processing at Scale + Advocating for Continuous Improvement (New York) - Thursday, February 15


Canada

Building Multi-Region & Multi-Cloud Services with Kafka (Kanata) - Thursday, February 15


Sweden

Kafka Meetup with Confluent and Forefront Consulting (Stockholm) - Thursday, February 15

Apache Beam: Unified Batch and Stream Processing! (Stockholm) - Thursday, February 15


France

Human Talks (Montpellier) - Tuesday, February 13


Czech Republic

Peek at Avast Big Data Kitchen Tools (Prague) - Thursday, February 15


Israel

Running Microservices on Apache Kafka (Tel Aviv-Yafo) - Wednesday, February 14