Data Eng Weekly

Data Eng Weekly Issue #304

10 March 2019

With two weeks' worth of content to pull from, there are a lot of great articles in this issue. Topics covered include Apache Flink, Presto, FaunaDB, and Kafka. In news, there's a new public roadmap for Apache Flink and an article about the continued strength of the data engineering profession.


The Apache Flink blog has a post describing Flink's monitoring internals, key job- and JVM-specific metrics to monitor, alert conditions for those metrics, and what might trigger those alerts.

Teads writes about how they use Amazon Redshift to power internal analytics systems with relatively low latency (<500ms in most cases). They have a custom-built Analytics Service that pulls data from BigQuery, performs some enrichment, and publishes new data marts to Redshift. The post describes how they decided on Redshift for this use case over two other managed services (Bigtable and DynamoDB), and how they optimize Redshift for query latency and concurrency.

The Spring Framework has an integration with Apache Kafka that provides some good abstractions to eliminate boilerplate in a Java application. This post provides an overview of these features and how to handle errors, map records to listeners based on header values, and more.

Prefect shows how the primitives of their workflow engine provide a lot of functionality with a small amount of effort. Their post on using Prefect to implement a standup bot covers features of the upcoming system like its functional API, configuration (with first-class support for secrets), and execution logic.

If you've worked with big data long enough, you've probably run into a slow query caused by data skew. This post describes how to improve performance when joining skewed datasets by precomputing a bin id, which is added to the join constraints. There's a new Python package that implements the algorithm for PySpark.
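The core idea (sometimes called key salting) can be sketched in plain Python — the names below are illustrative and not the package's API: the small side of the join is replicated once per bin, the large (skewed) side is assigned a random bin id, and the join key becomes (key, bin), which spreads a hot key across partitions.

```python
import random

NUM_BINS = 8  # number of salt values; tune to the degree of skew

def salt_large(rows):
    """Assign each row of the skewed (large) side a random bin id."""
    return [{**r, "bin": random.randrange(NUM_BINS)} for r in rows]

def explode_small(rows):
    """Replicate each row of the small side once per bin id so every
    (key, bin) combination on the large side can find a match."""
    return [{**r, "bin": b} for r in rows for b in range(NUM_BINS)]

def salted_join(large, small):
    """Inner join on (key, bin) instead of key alone."""
    index = {(r["key"], r["bin"]): r for r in explode_small(small)}
    return [
        {**l, **index[(l["key"], l["bin"])]}
        for l in salt_large(large)
        if (l["key"], l["bin"]) in index
    ]
```

In PySpark the same trick adds a literal random bin column to the large DataFrame and a bin column exploded from a range to the small one, at the cost of replicating the small side NUM_BINS times.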

This article describes how serverless/functions-as-a-service (FaaS) fits with an event stream architecture, and the types of stream processing applications that work well with FaaS.

This tutorial has an interesting solution for enriching and indexing streaming data with the ELK (Elasticsearch/Logstash/Kibana) stack. By using Logstash's JDBC input plugin, they stream results out of Apache Kafka using the Presto query engine. Presto supports lots of backends, so it's easy to enrich that streaming data by joining it with data from other sources, like a table in MySQL (as demoed here).
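A Logstash pipeline along these lines captures the idea — the connection details, catalog names, and query are illustrative, not taken from the tutorial:

```
input {
  jdbc {
    # Presto's JDBC driver lets Logstash poll Kafka topics as SQL results
    jdbc_driver_library => "/opt/presto-jdbc.jar"
    jdbc_driver_class => "com.facebook.presto.jdbc.PrestoDriver"
    jdbc_connection_string => "jdbc:presto://presto-coordinator:8080/kafka/default"
    jdbc_user => "logstash"
    # Join the Kafka-backed table against a MySQL catalog for enrichment
    statement => "SELECT o.*, c.name FROM orders o JOIN mysql.shop.customers c ON o.customer_id = c.id"
    schedule => "* * * * *"
  }
}
output {
  elasticsearch { hosts => ["localhost:9200"] }
}
```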

AWS writes about the performance implications of the FileOutputCommitter (which renames files) for Amazon S3 (and other blob stores). They describe some of the performance improvements (built with the same strategy as the S3A file system committers) that they've made for Amazon EMR FS with Apache Spark and Apache Parquet. They have some benchmarks in the post—there are some significant speedups with these optimized committers.
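If memory serves from AWS's documentation, the optimized committer is toggled with a single Spark property on recent EMR releases — worth verifying the exact name and version against the post:

```
# Enable the EMRFS S3-optimized committer for Parquet writes
# (I believe this is available starting around EMR 5.19)
spark.sql.parquet.fs.optimized.committer.optimization-enabled=true
```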

The Kudu blog has a good post on building out a hybrid storage strategy with Apache Kudu and HDFS. The data can be queried as a unified data source in Apache Impala by creating a view that captures both data sources. The post describes a sliding window strategy for moving data from Kudu to (cheaper) HDFS storage as it ages.
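A rough sketch of the view-based approach — the table, column names, and cutoff date here are hypothetical, not the post's:

```sql
-- Recent, mutable data lives in Kudu; older data in Parquet on HDFS.
-- The boundary predicate is shifted as the window slides.
CREATE VIEW events_unified AS
  SELECT * FROM events_kudu WHERE event_date >= '2019-02-01'
  UNION ALL
  SELECT * FROM events_hdfs WHERE event_date < '2019-02-01';
```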

Jepsen writes about FaunaDB, which implements the Calvin protocol for distributed transactions. If you're not familiar with Calvin, there's a good introduction in the article. There are lots of details that are hard to summarize in a sentence or two, but I find it interesting to see what kinds of tests a distributed database can be taken through and what to learn from them.

Qubole describes the challenges of Spark Structured Streaming checkpointing to an object store (like Amazon S3), which shares some technical similarities with the above post on output committers. Qubole has an implementation that leverages some improvements in Spark 2.4.0 to avoid rename operations on these checkpoints with a blob store backend.

This post introduces a pattern for validating/evolving schemas of data when loading into BigQuery. There are several code samples and examples in the post.
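The heart of such a pattern is a schema diff that separates safe, additive changes from breaking ones. A minimal pure-Python sketch (the function and field representation are my own illustration, not the post's code):

```python
def schema_diff(existing, incoming):
    """Compare two schemas (lists of {"name", "type"} dicts) and report
    which incoming fields can be added and which conflict."""
    existing_by_name = {f["name"]: f["type"] for f in existing}
    additions, conflicts = [], []
    for field in incoming:
        known_type = existing_by_name.get(field["name"])
        if known_type is None:
            # Safe: BigQuery allows adding nullable columns to a table.
            additions.append(field)
        elif known_type != field["type"]:
            # Unsafe: a type change needs manual handling or a new table.
            conflicts.append(field)
    return additions, conflicts
```

With a real table, `existing` would come from the table's current schema (e.g. via the BigQuery client library) and `incoming` from the data being loaded; additions can be applied as a schema update before the load, while conflicts should fail fast.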

Qubole has an overview of resource groups in Presto, including an introduction to soft & hard limits and the key configuration parameters. The post also includes examples and scenarios.
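For flavor, a minimal resource-groups definition might look something like this — a sketch based on Presto's documented properties (a real file also needs selectors to map queries to groups):

```json
{
  "rootGroups": [
    {
      "name": "adhoc",
      "softMemoryLimit": "40%",
      "hardConcurrencyLimit": 10,
      "maxQueued": 100
    },
    {
      "name": "etl",
      "softMemoryLimit": "60%",
      "hardConcurrencyLimit": 5,
      "maxQueued": 50
    }
  ]
}
```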

Pravega has a deep dive into the implementation and architecture of the Segment Store, the data plane (used for appends, reads, etc.) in their streaming storage system.

ActionIQ writes about how they've implemented auto scaling for Luigi workers to improve throughput of their data pipelines. They use a clever strategy that adds workers based on the number of pending tasks and uses instance protection to ensure that a task completes before instance shutdown.

Version 4.0 of Apache Cassandra, which isn't yet released, has a new feature called virtual tables. Similar to the proc filesystem in Linux, it exposes system metrics through read-only tables that can be queried the same way as application data stored in Cassandra.
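For example, in the 4.0 development builds I've seen, the virtual tables live in a system_views keyspace and are read with ordinary CQL (table names may change before release):

```sql
-- Read-only virtual tables; no data is stored on disk for these.
SELECT * FROM system_views.clients;
SELECT * FROM system_views.thread_pools;
```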


Senior Data Engineer (Spark), N26, Berlin

Software Engineer - Data Platform, Fitbit, San Francisco, CA


The newly launched Data Council (formerly DataEngConf) is in just over a month in San Francisco (April 17-18th). They are offering subscribers a $200 discount with the code DataEngWeekly200.

Spark+AI Summit, which takes place in April, has a new data engineering track. This post looks at some of the talks from that track.

The Apache Flink project has published a roadmap, which looks at the upcoming improvement proposals and key Jira issues that they're tracking across several different areas.

Datanami notes that, according to some job board and career site data, demand for data engineers continues to be strong. In fact, by one metric, growth in Data Engineer job titles is outpacing that of Data Scientist.


Fluent Kafka Streams Test is a new library from bakdata for testing Apache Kafka Streams applications. It provides convenience functions/glue for inputs, processing, and outputs using a JUnit extension. Their introductory post provides a number of examples for different types of applications.

Version 1.6.4 of Apache Flink was released this week. The changes mostly fix bugs, but there are also some small improvements.

The 0.9.2 release of Debezium, the change data capture tool, has been announced. It includes some fixes for Postgres, MySQL, and SQL Server connectors as well as a few new features and dependency updates.

Apache Daffodil (incubating), which is a tool for converting between legacy binary/fixed-width formats and JSON/XML, had its 2.3.0 release. Daffodil implements conversion using the Data Format Description Language, which has specifications for lots of legacy data formats in industries like health care and finance (e.g. point-of-sale systems).

Version 1.0 of the KSQL JDBC driver has been released. In this version, all KSQL commands are supported via SQL queries over JDBC.


Curated by Datadog


California

Alluxio 2.0 Deep Dive + A Case of Real-Time Processing with Spark (San Mateo) - Thursday, March 14


Missouri

How Kafka Has Become the Nervous System of a Modern Data Architecture (Maryland Heights) - Thursday, March 14


Illinois

Apache Kafka: Optimizing Your Deployment (Chicago) - Thursday, March 14


Ohio

Cleveland Big Data Meetup (Mayfield Village) - Monday, March 11

North Carolina

Utilizing Kafka to Create a Streaming ETL Platform (Charlotte) - Tuesday, March 12

New York

Kafka at the New York Times and Datadog (New York) - Tuesday, March 12

How to Work with Kafka in Ruby (New York) - Tuesday, March 12


Massachusetts

Wayfair's Journey with Apache Kafka (Boston) - Tuesday, March 12

New Hampshire

Cloud Big Data, Data Science, and Machine Learning (Bedford) - Tuesday, March 12


United Kingdom

The Blueprint Series: Principles of Modern Data Architecture (London) - Thursday, March 14


Netherlands

Discussions Around O16N & Data Ingestion Tooling (Amsterdam) - Thursday, March 14


Germany

Our First Kafka Meetup in Nurnberg! (Nurnberg) - Wednesday, March 13

Apache Kafka & the IoT (Frankfurt) - Wednesday, March 13


Singapore

Apache Spark Test-Driven Development (Singapore) - Thursday, March 14


Thailand

Evolution of the Data Pipeline in Agoda (Bangkok) - Thursday, March 14


New Zealand

Open Banking + Event-Driven Microservices Using Apache Kafka (Auckland) - Wednesday, March 13

Writing Big Data Pipelines: The Apache Beam Project (Wellington) - Thursday, March 14

Links are provided for informational purposes and do not imply endorsement. All views expressed in this newsletter are my own and do not represent the opinions of current, former, or future employers.