13 January 2019
Several technical posts this week with advice on working with relational databases, Apache Airflow / ETL tools, and Apache Spark structured streaming. There are also posts on securing PII data in your data warehouse, the Kubernetes API, and performance improvements in CockroachDB. Finally, congrats to the data Artisans team on their acquisition and to the Apache Airflow team on graduating to a top-level project.
A good overview of design considerations to enable continuous delivery in a software project built on a relational database. Specifically, they describe an approach to supporting multiple versions of table definitions in your application code to minimize breaking changes.
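To make the pattern concrete, here is a minimal sketch of an expand-phase migration, using Python's built-in sqlite3 module and a hypothetical `users` table: the new column is added as nullable, so application code compiled against the old schema keeps working while new code starts using the column.

```python
import sqlite3

# Hypothetical schema read by two application versions at once.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('ada')")

# Expand phase: add the new column as nullable so the old application
# version, which never references it, keeps working unchanged.
conn.execute("ALTER TABLE users ADD COLUMN email TEXT")

# Old code path: still valid after the migration.
old = conn.execute("SELECT id, name FROM users").fetchall()

# New code path: writes and reads the new column.
conn.execute("UPDATE users SET email = 'ada@example.com' WHERE name = 'ada'")
new = conn.execute("SELECT name, email FROM users").fetchall()

print(old)  # [(1, 'ada')]
print(new)  # [('ada', 'ada@example.com')]
```

Only once every deployed application version ignores the old shape can the contract phase (dropping or renaming columns) run safely.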
Astronomer writes about how they use Apache Airflow for MetaRouter, their event-routing platform. Among other topics, they discuss their migration from DC/OS to Kubernetes for Airflow, which included a switch to the Celery executor.
An interesting look at the Kubernetes API for building a custom scheduler. While you'll likely not have a reason to implement your own scheduler, it does look like the Kubernetes API and deployment process make this easier than some other distributed systems.
A good introduction to the various stages of ETL, things to consider for each stage (e.g. auditing), types of data cleansing and transformation, common challenges (e.g. performance issues, data format changes), and more.
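As a toy illustration of those stages, this sketch runs a hypothetical feed through extract, cleanse/transform, and load steps, keeping a simple audit count of rejected rows along the way (the data and table names are invented for the example):

```python
import csv, io, sqlite3

# Extract: hypothetical raw feed with a malformed row and messy whitespace.
raw = "id,amount\n1, 10.5\n2,not-a-number\n3,7.25\n"
rows = list(csv.DictReader(io.StringIO(raw)))

# Transform/cleanse: trim fields, reject rows that fail validation,
# and keep a simple audit count of what was dropped.
clean, rejected = [], 0
for r in rows:
    try:
        clean.append((int(r["id"]), float(r["amount"].strip())))
    except ValueError:
        rejected += 1

# Load: write only the validated rows to the target table.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE facts (id INTEGER, amount REAL)")
db.executemany("INSERT INTO facts VALUES (?, ?)", clean)

total = db.execute("SELECT COUNT(*), SUM(amount) FROM facts").fetchone()
print(total, "rejected:", rejected)  # (2, 17.75) rejected: 1
```

Real pipelines add the concerns the article covers (auditing, retries, schema drift), but the extract/transform/load separation stays the same.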
An example implementation of role-based access control and database layout in Snowflake to isolate and mask PII data. The solution allows all users to query all the tables, but only some users (roles) to access data without masking.
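The masking half of that pattern can be sketched in plain Python (this is an illustration of the idea, not Snowflake syntax; the role name and masking rule are invented): every role may run the query, but only privileged roles see the raw PII value.

```python
# Illustrative sketch of role-based masking (not Snowflake syntax):
# all roles can query the row, but only privileged roles see raw PII.

PRIVILEGED_ROLES = {"pii_reader"}  # hypothetical role name

def mask_email(value: str, role: str) -> str:
    """Return the raw value for privileged roles, a masked form otherwise."""
    if role in PRIVILEGED_ROLES:
        return value
    _local, _, domain = value.partition("@")
    return "***@" + domain

row = {"user_id": 42, "email": "ada@example.com"}

analyst_view = mask_email(row["email"], "analyst")
pii_view = mask_email(row["email"], "pii_reader")
print(analyst_view)  # ***@example.com
print(pii_view)      # ada@example.com
```

In a warehouse, the same branch-on-role logic lives in the view or masking policy rather than in application code, so it applies uniformly to every query.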
Confluent has a multi-part tutorial for building an event-driven application on the Confluent Platform. The exercises cover activities like streaming joins, stateful operations, and enrichment with KSQL.
CockroachDB writes about transaction pipelining, which is a new feature in version 2.1 of their database. The implementation details are covered in the article (and hard to summarize), but they achieve the impressive feat of turning latency that scales linearly with the number of DML statements into a constant overhead.
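The shape of that claim can be shown with back-of-the-envelope arithmetic (the numbers here are invented, not CockroachDB benchmarks): without pipelining, each DML statement waits for a consensus round trip before the next one runs; with pipelining, the round trips overlap and roughly one round trip of overhead remains.

```python
# Back-of-the-envelope model of the latency claim (illustrative numbers).

RTT_MS = 5  # hypothetical consensus round-trip time, in milliseconds

def latency_without_pipelining(n_statements: int) -> int:
    # Each statement blocks on its own round trip: linear in n.
    return n_statements * RTT_MS

def latency_with_pipelining(n_statements: int) -> int:
    # Round trips overlap; roughly one round trip of overhead remains.
    return RTT_MS

for n in (1, 10, 100):
    print(n, latency_without_pipelining(n), latency_with_pipelining(n))
# 1 5 5
# 10 50 5
# 100 500 5
```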
An introduction and tutorial to Apache Airflow for data management. Even if you're already familiar with Airflow, you might want to read this one for the long-running airline + weather analogy.
A good list of common rules for building a reliable data platform. The last one is a meta-rule—to use all of your tools (Airflow and Spark among them) to automate as much implementation of the other rules as possible.
Apache Spark 2.4.0 includes the ability to select different watermarking strategies for joining streams. This post describes the semantics of the different strategies and has some code examples that demonstrate the difference.
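The core watermark semantics can be illustrated in plain Python (this is not the Spark API, just a sketch of the rule): the watermark trails the maximum event time seen so far by a fixed allowed lateness, and events whose timestamp falls behind the watermark are dropped as too late.

```python
# Plain-Python illustration of watermark semantics (not the Spark API):
# the watermark trails the max event time seen by a fixed delay, and
# events older than the watermark are dropped as too late.

DELAY = 10  # hypothetical allowed lateness, in seconds of event time

def process(events):
    max_seen, kept, dropped = float("-inf"), [], []
    for ts, payload in events:
        max_seen = max(max_seen, ts)
        watermark = max_seen - DELAY
        (kept if ts >= watermark else dropped).append(payload)
    return kept, dropped

kept, dropped = process([(100, "a"), (105, "b"), (92, "late"), (120, "c")])
print(kept, dropped)  # ['a', 'b', 'c'] ['late']
```

In a stream-stream join, Spark applies this rule per input to decide how long to buffer state for each side, which is exactly the trade-off the different strategies in the post tune.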
An introduction to efficiently loading data from Java using the non-standard LOAD DATA (MySQL) and COPY (Postgres) commands. For comparison, the author has also written about using JPA and JDBC to quickly insert data.
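COPY and LOAD DATA require a live Postgres or MySQL server, so as a stand-in this sketch demonstrates the general point behind them, that one batched load beats issuing inserts row by row, using Python's built-in sqlite3 module (the table and data are invented for the example):

```python
import sqlite3
import time

# Stand-in for COPY/LOAD DATA: compare row-by-row inserts against a
# single batched executemany call on an in-memory SQLite database.

rows = [(i, f"name-{i}") for i in range(5_000)]

def load(batched: bool) -> float:
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE people (id INTEGER, name TEXT)")
    t0 = time.perf_counter()
    if batched:
        db.executemany("INSERT INTO people VALUES (?, ?)", rows)
    else:
        for r in rows:
            db.execute("INSERT INTO people VALUES (?, ?)", r)
    db.commit()
    # Sanity check: both paths load every row.
    assert db.execute("SELECT COUNT(*) FROM people").fetchone()[0] == len(rows)
    return time.perf_counter() - t0

print(f"row-by-row: {load(False):.4f}s, batched: {load(True):.4f}s")
```

Server-side bulk commands like COPY push the same idea further by streaming the whole file past the SQL parsing and per-statement round trips entirely.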
"Architecting Modern Data Platforms" is a new book from O'Reilly. Covering Hadoop, it discusses tools and offers advice about infrastructure (e.g. compute and networking architecture), platform (e.g. integrating with an identity provider), and operating in the cloud.
Apache Airflow was announced as a top-level Apache Software Foundation project.
Alibaba has acquired data Artisans, the company founded by several members of the team behind Apache Flink. Datanami has more details on the history of data Artisans and what we can expect going forward.
A good post on the "Feynman trap" that often occurs when looking for patterns in big data.
The new Cloudera has started talking more about their plans for unifying the Hortonworks Data Platform and CDH distributions. The combined product will be called the Cloudera Data Platform, and existing releases will be supported through January 2022.
LiteCLI is a new CLI for SQLite with auto-complete and other user-friendly features.
Version 5.1 of the Databricks Runtime is out with Azure improvements, Databricks Delta improvements, and the ability to install and import Python libraries for particular notebooks.
Apache Flume had its first release in over a year. The version 1.9.0 release has a large number of updates and improvements, including support for newer versions of HBase and Kafka.
Amazon Web Services has announced Amazon DocumentDB, which is a MongoDB-compatible document database. It has some novel features, including 6x replication across 3 availability zones.
Apache HBase 2.1.2, which includes 70 bug fixes and improvements over the 2.1.1 release, was announced.