Data Eng Weekly


Hadoop Weekly Issue #197

18 December 2016

This week's issue is likely the last of 2016, and it's a good one. There's a post from Uber about their Kafka auditing system, a post on Blue Apron's Google BigQuery-based data platform, and a great interview about Apache Apex and Apache Flink. As always, if you come across any great articles, please send them my way!

Technical

Vessel is a new tool for writing MapReduce programs in Elixir (a language for the Erlang VM). It makes use of Hadoop Streaming and presents an API very similar to that of Hadoop's Java APIs.

https://zackehh.com/vessel-a-bridge-between-elixir-and-hadoop/
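Hadoop Streaming works with any language that can read stdin and write stdout, which is what makes bridges like Vessel possible. As a rough sketch of the contract Vessel wraps (in Python rather than Elixir, and with illustrative names), a streaming word count looks like this: the mapper emits tab-separated key/value lines, and the reducer receives them sorted by key.

```python
from itertools import groupby

def mapper(lines):
    """Map stage: emit a tab-separated (word, 1) pair per word.
    Under Hadoop Streaming this would read stdin and print to stdout."""
    for line in lines:
        for word in line.split():
            yield f"{word}\t1"

def reducer(lines):
    """Reduce stage: sum the counts for each word. Hadoop delivers the
    mapper output sorted by key, so lines with the same key are adjacent."""
    pairs = (line.split("\t") for line in lines)
    for word, group in groupby(pairs, key=lambda kv: kv[0]):
        yield f"{word}\t{sum(int(n) for _, n in group)}"
```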

Chaperone is Uber's system for auditing events as they pass through Kafka clusters. It's used to detect data loss, monitor end-to-end latency, and detect data duplication. Chaperone has an interesting architecture to audit each event exactly once and write the audit information back to Kafka for loading into monitoring dashboards and into a database.

https://eng.uber.com/chaperone/
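The core of an audit pipeline like this is counting events per time bucket at each stage and diffing the counts downstream. A minimal sketch of that idea (not Uber's implementation; the 10-minute bucket size and key shape are assumptions):

```python
from collections import Counter

BUCKET_SECONDS = 600  # assumed 10-minute audit windows

def audit_counts(events):
    """Count events per (host, time bucket). Comparing these counts
    between two Kafka tiers reveals loss (fewer events downstream)
    or duplication (more events downstream)."""
    counts = Counter()
    for host, timestamp in events:
        bucket = int(timestamp) // BUCKET_SECONDS * BUCKET_SECONDS
        counts[(host, bucket)] += 1
    return counts

def audit_diff(upstream, downstream):
    """Buckets whose counts disagree between two audit stages."""
    return {key: downstream.get(key, 0) - upstream[key]
            for key in upstream if downstream.get(key, 0) != upstream[key]}
```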

This post shows how to connect a MapR cluster with Apache Drill in Microsoft Azure to Azure Blob storage and an Azure SQL database.

https://www.mapr.com/blog/connecting-drill-enabled-mapr-cluster-azure-resources-part-2

The Cloudera blog has a post describing many different resource management knobs for Apache Impala (incubating). These include admission control, which limits resource utilization across the cluster, per-query memory limits, and dynamic pools. The article also has examples of various types of pools (e.g. analytics and real-time user facing) and recommended settings for max memory, default memory limit, max running queries, and queue timeout.

http://blog.cloudera.com/blog/2016/12/resource-management-for-apache-impala-incubating/
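To make the interplay of those limits concrete, here's a toy admission controller enforcing two of the knobs the post describes, a cap on concurrently running queries and a cap on aggregate pool memory (the class and its units are illustrative, not Impala's actual configuration):

```python
class AdmissionPool:
    """Toy resource pool: a query is admitted only if both the
    max-running-queries and max-memory limits allow it; otherwise
    it would wait in the queue (until the queue timeout)."""

    def __init__(self, max_running, max_mem_mb):
        self.max_running = max_running
        self.max_mem_mb = max_mem_mb
        self.running = 0
        self.used_mem_mb = 0

    def try_admit(self, mem_mb):
        """Admit the query if both limits allow it, else reject (queue)."""
        if (self.running < self.max_running
                and self.used_mem_mb + mem_mb <= self.max_mem_mb):
            self.running += 1
            self.used_mem_mb += mem_mb
            return True
        return False

    def release(self, mem_mb):
        """A query finished; free its slot and memory."""
        self.running -= 1
        self.used_mem_mb -= mem_mb
```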

Joins in streaming data have, until recently, been very difficult to implement. With higher-level APIs for streaming systems, like Apache Kafka Streams and Apache Beam, it's become much easier. To that end, Amazon Kinesis Analytics supports joins with static data in S3 and between streams. This post has tutorials for those two types of joins as well as an example of enriching data on an Amazon Kinesis stream by joining it with metadata in Amazon DynamoDB using AWS Lambda.

https://aws.amazon.com/blogs/big-data/joining-and-enriching-streaming-data-on-amazon-kinesis/
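The Lambda enrichment pattern from the post boils down to: decode each Kinesis record, look up matching metadata, and merge it into the payload. A hedged sketch of that handler logic (here the DynamoDB lookup is stubbed with a dict; `device_id` as the join key and the field names are assumptions, not from the AWS post — a real handler would call DynamoDB via boto3):

```python
import base64
import json

def enrich(event, metadata):
    """Decode each Kinesis record from a Lambda event, join it with
    per-device metadata, and return the enriched payloads. `metadata`
    stands in for GetItem calls against a DynamoDB table."""
    out = []
    for record in event["Records"]:
        # Kinesis record payloads arrive base64-encoded in the Lambda event.
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        payload.update(metadata.get(payload["device_id"], {}))
        out.append(payload)
    return out
```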

Blue Apron has written about their move from a Postgres-based data warehouse to Google BigQuery. They are using Apache Airflow (incubating) to coordinate dumps from Postgres into BigQuery and manage schema changes. To query by date partition, they use a nifty trick of creating tables with the date partition included in the table name.

https://bytes.blueapron.com/bigquery-delivers-for-blue-apron-9acef1c1b417
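With that naming trick, selecting a date range means enumerating the per-day table names to query. A small helper sketching the idea (the `prefix_YYYYMMDD` naming convention is an assumption about the scheme, not taken from Blue Apron's post):

```python
from datetime import date, timedelta

def sharded_tables(prefix, start, end):
    """Names of the per-day tables covering [start, end], assuming one
    table per day named <prefix>_YYYYMMDD."""
    day = start
    while day <= end:
        yield f"{prefix}_{day.strftime('%Y%m%d')}"
        day += timedelta(days=1)
```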

This post, by way of talking through the goals of microservices, introduces the data dichotomy: "Data systems are about exposing data. Services are about hiding it." Essentially, while services solve some problems, they introduce others (for example, a service's API may not support batch analytics queries). Apache Kafka and stream processing are one way to fight this dichotomy, by making streams and stateful processing easy.

https://www.confluent.io/blog/data-dichotomy-rethinking-the-way-we-treat-data-and-services/

If you want to get started with sparklyr, the dplyr-based R interface for Spark, Cloudera Director offers some automation to spin up a cluster in AWS. This post walks through the necessary steps.

http://blog.cloudera.com/blog/2016/12/automating-your-sparklyr-environment-with-cloudera-director/

Apache Spark 2.1 (which is not yet released) will speed up the planning phase of query execution by fetching only the metadata from the Hive metastore that is relevant to the partitions being queried. For large tables with many partitions, this can lead to a massive speedup. This post describes the changes and how they were implemented.

https://databricks.com/blog/2016/12/15/scalable-partition-handling-for-cloud-native-architecture-in-apache-spark-2-1.html
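As a rough sketch of why this helps (with a made-up toy metastore, not Spark's actual internals): with partition pruning, only partitions whose values satisfy the query's filter are fetched and listed, so planning cost scales with the matching partitions rather than with all of them.

```python
# Toy metastore: partition value (e.g. a ds date column) -> files in it
PARTITIONS = {
    "2016-12-16": ["ds=2016-12-16/part-0", "ds=2016-12-16/part-1"],
    "2016-12-17": ["ds=2016-12-17/part-0"],
    "2016-12-18": ["ds=2016-12-18/part-0"],
}

def files_to_scan(partitions, predicate):
    """Consult only partitions matching the query's filter; without
    pruning, every partition's metadata would be fetched up front."""
    return [f for value, files in partitions.items() if predicate(value)
            for f in files]
```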

News

Syncsort has a three-part interview with Ted Dunning on stream processing with Apache Flink and Apache Apex, how these compare to Apache Storm, and a bit about open-source communities. The first part has a great explanation of how Flink and Apex do snapshotting and how that relates to exactly-once delivery semantics.

http://blog.syncsort.com/2016/12/big-data/expert-interview-ted-dunning-part-1-of-mapr-on-advantages-and-use-cases-of-apache-flink-and-apex/

ZDNet has an article about how MapR is a rebel in the big data industry—by shipping proprietary instead of open-source systems. The other big competitors have a different stance: Cloudera is mostly open-source, and Hortonworks is 100% open-source. The article argues, though, that open-source isn't as big of a differentiator as it used to be, especially as there are lots of incompatibilities among open-source projects (the number of SQL engines is given as an example).

http://www.zdnet.com/article/mapr-a-rebel-with-a-cause/

Registration is now open for DataWorks Summit and Hadoop Summit, which takes place in Munich in April.

http://hortonworks.com/blog/dataworks-summit-hadoop-summit-registration-open/

Releases

Apache Phoenix 4.9.0 was released earlier this month. The new release adds support for atomic upsert and fixes over 40 bugs.

https://blogs.apache.org/phoenix/entry/announcing_phoenix_4_9_released

Version 2.1 of Hortonworks DataFlow, which is powered by Apache NiFi, Apache Kafka, and Apache Storm, has been released. The release adds better support for cloud services (including Amazon Kinesis, Microsoft Azure Blob storage, and Azure Data Lake), improves ease of use, and adds new access control features.

http://hortonworks.com/blog/announcing-availability-hortonworks-dataflow-hdf-2-1/

Events

Curated by Datadog ( http://www.datadog.com )

UNITED STATES

California

Machine Learning and the Future of Big Data (San Francisco) - Monday, December 19
https://www.meetup.com/Women-in-Big-Data-Meetup/events/236124073/

CHILE

Big Data Meetup 2016.4 (Las Condes) - Monday, December 19
https://www.meetup.com/Big-Data-Chile/events/235992573/

POLAND

Speak Spark SQL 2.0 for Much Better Performance (Wroclaw) - Monday, December 19
https://www.meetup.com/25th-Level-Code-Wroclaw/events/236089593/

HUNGARY

Data Christmas 2016 (Budapest) - Monday, December 19
https://www.meetup.com/Budapest-Spark-Meetup/events/235782518/

ISRAEL

The Case for Kudu from Cloudera (Tel Aviv) - Monday, December 19
https://www.meetup.com/HadoopIsrael/events/236063981/

View to Big Data and Real-Time Analytics (Tel Aviv) - Tuesday, December 20
https://www.meetup.com/Cutting-edge-tech-and-Big-Data-case-studies/events/236063437/