Data Eng Weekly


Data Eng Weekly Issue #312

27 May 2019

I ended up taking last week off, so we have slightly more articles than the 10 per issue I've been targeting. There's lots of good stuff to catch up on, including several posts on data pipelines and data quality (from Amazon, Stitch Fix, and more) as well as pertinent posts on SQL debugging, PostgreSQL's vacuum, Kubernetes health checks, and deduplicating events.

Technical

This article is a great introduction to Online Event Processing (OLEP), an alternative to transactions for heterogeneous distributed systems. The authors also describe how OLEP can replace OLTP systems and how you can implement atomicity (though repeatable reads across downstream systems are not currently possible). Lots of good stuff in this well-written, easy-to-read article about how to build a system with a distributed log.

https://queue.acm.org/detail.cfm?id=3321612

Pylint-airflow is a new pylint plugin for finding (stylistic) errors in your Airflow code. It has a few checkers for now, covering things like unused XComs and mixed dependency directions.

https://blog.godatadriven.com/introducting-pylint-airflow

Apache Flink 1.7 introduced temporal tables, which store a time series. This post looks at the Flink APIs for joining a stream with a temporal table (you must also define a temporal table function) using the canonical example of currency exchange rates.

https://flink.apache.org/2019/05/14/temporal-tables.html
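To give a feel for the API, here's a rough Scala sketch of a temporal table function join along the lines of the post's currency-rate example; the table and field names (Orders, RatesHistory, o_proctime, and so on) are illustrative assumptions rather than the post's exact code.

    import org.apache.flink.table.api.Table
    import org.apache.flink.table.api.scala._
    import org.apache.flink.table.api.scala.StreamTableEnvironment

    // Assumes Orders and RatesHistory have already been registered as tables
    // with processing-time attributes (o_proctime / r_proctime).
    def ordersInEuro(tEnv: StreamTableEnvironment): Table = {
      val ratesHistory = tEnv.scan("RatesHistory")

      // The temporal table function returns, for a given point in time,
      // the latest rate per currency key.
      val rates = ratesHistory.createTemporalTableFunction('r_proctime, 'r_currency)
      tEnv.registerFunction("Rates", rates)

      // Join each order with the exchange rate that was valid at the order's time.
      tEnv.sqlQuery(
        """
          |SELECT o.o_id, o.o_amount * r.r_rate AS amount_eur
          |FROM Orders AS o,
          |     LATERAL TABLE (Rates(o.o_proctime)) AS r
          |WHERE o.o_currency = r.r_currency
          |""".stripMargin)
    }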

It hasn't been that long since Google wrote about using deep-learning models to create database indexes. A new paper applies a similar technique to detecting Quality of Service violations in distributed systems. Similar to distributed tracing systems like Zipkin and Jaeger, services and clients are instrumented to capture latencies. This data is then used to train a Deep Neural Network that can predict latency violations.

https://blog.acolyer.org/2019/05/15/seer/

Amazon has a post describing how they use Deequ, a Spark library, for testing the quality of data sets. Deequ, which is open source, has a lot of built-in constraint verification functions, like evaluating minimum/maximum/null values, testing for correlations, and checking approximate counts (using HLL++).

https://aws.amazon.com/blogs/big-data/test-data-quality-at-scale-with-deequ/
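As a taste of the API, here's a minimal Scala sketch of a Deequ verification; the DataFrame and column names are made up for illustration.

    import com.amazon.deequ.VerificationSuite
    import com.amazon.deequ.checks.{Check, CheckLevel, CheckStatus}
    import org.apache.spark.sql.DataFrame

    def verifyOrders(orders: DataFrame): Unit = {
      val result = VerificationSuite()
        .onData(orders)
        .addCheck(
          Check(CheckLevel.Error, "basic data quality")
            .isComplete("id")                                  // no null ids
            .isUnique("id")                                    // no duplicate ids
            .isNonNegative("price")                            // price >= 0
            .isContainedIn("country", Array("US", "DE", "NL")))
        .run()

      // In a real pipeline you would fail the job or alert on a failed check.
      if (result.status != CheckStatus.Success)
        println("Data quality checks failed")
    }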

A look at the new features of Apache Avro 1.9: a new version of Jackson, a switch to the Java 8 date/time API, and support for Zstandard compression. Lots of good stuff for a project that's at the heart of a lot of data platforms.

https://blog.godatadriven.com/apache-avro-1-9-release

A good walkthrough of debugging a slow SQL query for a major (>700x) improvement in performance. The author describes isolating the problem, finding a different solution, and verifying correctness.

https://parallelthoughts.xyz/2019/05/a-tale-of-query-optimization/

A look at the notion of a data mesh as the next evolution of data platforms, replacing data warehouses and data lakes. These monolithic data repositories suffer from the same problems as a monolithic services architecture (like tight coupling), and the data mesh is meant to be product-driven (with product owners for each data product) to improve agility. A common data infrastructure platform, to solve things like security and addressability of data (e.g., an S3 bucket or a Kafka topic), is also an important component. Lots of good ideas worth incorporating into your data platform.

https://martinfowler.com/articles/data-monolith-to-mesh.html

A look at your options for implementing a Kubernetes liveness probe (health check) for Kafka Streams (or any other JVM application that exposes useful metrics via JMX). The post also describes how to expose metrics for ingestion into Prometheus.

https://blog.softwaremill.com/whats-the-proper-kubernetes-health-check-for-a-kafka-streams-application-c9c00a112581
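One simple variant (not necessarily the approach the post recommends) is to expose a tiny HTTP endpoint that reports the KafkaStreams state and point the Kubernetes livenessProbe at it; the Scala sketch below assumes you already have a running KafkaStreams instance.

    import com.sun.net.httpserver.HttpServer
    import java.net.InetSocketAddress
    import org.apache.kafka.streams.KafkaStreams

    def startHealthEndpoint(streams: KafkaStreams, port: Int): HttpServer = {
      val server = HttpServer.create(new InetSocketAddress(port), 0)
      server.createContext("/health", exchange => {
        val state = streams.state()
        // RUNNING and REBALANCING count as alive; ERROR/NOT_RUNNING do not.
        val healthy = state == KafkaStreams.State.RUNNING ||
          state == KafkaStreams.State.REBALANCING
        val code = if (healthy) 200 else 503
        val body = state.toString.getBytes("UTF-8")
        exchange.sendResponseHeaders(code, body.length.toLong)
        exchange.getResponseBody.write(body)
        exchange.close()
      })
      server.start()
      server
    }

An httpGet livenessProbe against /health would then restart the pod once the endpoint starts returning 503.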

Stitch Fix shares several best practices for writing maintainable data pipelines. These include leveraging SQL, keeping SQL queries maintainable by using Common Table Expressions, using a workflow engine, and implementing data quality checks. Also notable: at Stitch Fix, data scientists own their pipelines end to end (including maintenance after launch).

https://multithreaded.stitchfix.com/blog/2019/05/21/maintainable-etls/
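For anyone who hasn't used Common Table Expressions, here's a small made-up example of the idea (run here through Spark SQL from Scala); the table and column names are invented.

    import org.apache.spark.sql.{DataFrame, SparkSession}

    // The CTE names the intermediate result, so the final SELECT reads
    // like prose instead of a nested subquery.
    def dailyRevenue(spark: SparkSession): DataFrame =
      spark.sql(
        """
          |WITH completed_orders AS (
          |  SELECT order_id, amount, order_date
          |  FROM orders
          |  WHERE status = 'completed'
          |)
          |SELECT order_date, SUM(amount) AS revenue
          |FROM completed_orders
          |GROUP BY order_date
          |""".stripMargin)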

This post has details on PostgreSQL's vacuum functionality, which is used to reclaim disk space and optimize the database. For large databases, this process can take many days. Using pg_stat_progress_vacuum to measure the status of the vacuum, the authors plot several visualizations of the vacuum progress. With these, they explain what's going on (at a high level) and include several links if you want to dive into even more detail.

http://dtrace.org/blogs/dap/2019/05/22/visualizing-postgresql-vacuum-progress/
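If you want to poke at the same view yourself, here's a small Scala/JDBC sketch that polls pg_stat_progress_vacuum (available since PostgreSQL 9.6); the connection details are placeholders.

    import java.sql.DriverManager

    def printVacuumProgress(url: String, user: String, password: String): Unit = {
      val conn = DriverManager.getConnection(url, user, password)
      try {
        val rs = conn.createStatement().executeQuery(
          """SELECT relid::regclass AS table_name, phase,
            |       heap_blks_total, heap_blks_scanned, heap_blks_vacuumed
            |FROM pg_stat_progress_vacuum""".stripMargin)
        while (rs.next()) {
          val total   = rs.getLong("heap_blks_total")
          val scanned = rs.getLong("heap_blks_scanned")
          val pct     = if (total > 0) 100.0 * scanned / total else 0.0
          println(s"${rs.getString("table_name")} phase=${rs.getString("phase")} " +
            f"heap scanned=$pct%.1f%%")
        }
      } finally conn.close()
    }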

A common approach to idempotent writes is to deduplicate based on a unique ID. This can be expensive, since every write triggers one or more read operations (depending on the database). This post describes how to use bloom filters for deduplication, which in this use case provided major speed improvements as well as impressive infrastructure cost savings from the improved efficiency.

https://amplitude.engineering/dedupe-events-at-scale-f9e416e46ca9
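As a rough illustration of the technique (not the post's actual implementation), here's a Scala sketch using Guava's BloomFilter; the event shape and sizing parameters are made up.

    import com.google.common.hash.{BloomFilter, Funnels}
    import java.nio.charset.StandardCharsets

    final case class Event(id: String, payload: String)

    // Sized for ~10M events with a 0.1% false-positive rate.
    class Deduplicator(expectedEvents: Long = 10000000L, fpp: Double = 0.001) {
      private val seen = BloomFilter.create[CharSequence](
        Funnels.stringFunnel(StandardCharsets.UTF_8), expectedEvents, fpp)

      // Returns true if the event has (probably) not been seen before.
      // A "might contain" hit can be a false positive, so that's the case
      // where you'd fall back to a database check rather than drop data.
      def shouldWrite(event: Event): Boolean =
        if (seen.mightContain(event.id)) {
          false
        } else {
          seen.put(event.id)
          true
        }
    }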

Events

Curated by Datadog ( http://www.datadog.com )

UNITED STATES

Washington

Running Flink Applications in Kinesis Analytics (Seattle) - Thursday, May 30
https://www.meetup.com/seattle-flink/events/260865206/

Texas

Kafka on Kubernetes: Does It Really Have to Be “The Hard Way”? (Plano) - Tuesday, May 28
https://www.meetup.com/Dallas-Kafka/events/261230884/

Florida

Splice Machine and Hadoop with Clearsense (Jacksonville) - Tuesday, May 28
https://www.meetup.com/jaxbigdata/events/261614417/

Massachusetts

May Meetup Night (Danvers) - Thursday, May 30
https://www.meetup.com/Boston-Apache-Spark-User-Group/events/259193570/

BRAZIL

Data & Analytics Meetup (Sao Paulo) - Tuesday, May 28
https://www.meetup.com/tangerines-data-analytics/events/261621800/

SPAIN

Tips and Tricks about Apache Kafka in the Cloud for Java Developers (Madrid) - Wednesday, May 29
https://www.meetup.com/Madrid-Kafka/events/261481919/

FRANCE

Apache Kafka: Optimizing Your Deployment (Paris) - Tuesday, May 28
https://www.meetup.com/Distributed-Data-Paris/events/261044220/

Scio: A Scala DSL for Beam + Spark Best Practices (Paris) - Tuesday, May 28
https://www.meetup.com/Paris-Data-Engineers/events/260034292/

NETHERLANDS

The 10th Apache Kafka Meetup (Utrecht) - Tuesday, May 28
https://www.meetup.com/Kafka-Meetup-Utrecht/events/260303497/

GERMANY

From Batches to Events (Stuttgart) - Wednesday, May 29
https://www.meetup.com/devops-stuttgart/events/261347123/

POLAND

Context Buddy + Streaming Pipelines Using Apache Kafka (Krakow) - Tuesday, May 28
https://www.meetup.com/Krakow-Scala-User-Group/events/260926054/

ISRAEL

Modern Big Data and Analytics Architecture Patterns on AWS (Tel Aviv-Yafo) - Tuesday, May 28
https://www.meetup.com/Big-Data-Analytics-Meetup/events/261221730/

Women in Big Data Israel: First Meetup! (Tel Aviv-Yafo) - Thursday, May 30
https://www.meetup.com/Women-in-Big-Data-Israel/events/260800288/

SINGAPORE

Apache Kafka Is More ACID Than Your Database (Singapore) - Wednesday, May 29
https://www.meetup.com/Singapore-Kafka-Meetup/events/261490148/

Traveloka: How We Run Cloud-Scale Apache Spark in Production Since 2017 (Singapore) - Wednesday, May 29
https://www.meetup.com/Spark-Singapore/events/261637175/

Links are provided for informational purposes and do not imply endorsement. All views expressed in this newsletter are my own and do not represent the opinions of current, former, or future employers.