Data Eng Weekly


Hadoop Weekly Issue #247

14 January 2018

It was a busy week with lots of releases, including new versions of Apache NiFi and Lenses and two new open-source projects related to Kafka. In technical posts, there's coverage of the performance implications of Meltdown patching, Samza, Kafka, Spark on Kubernetes, Apache Flink, and StreamSets.

Technical

Three posts on the impact of patching for Meltdown. First, AppOptics has written about their experience with the effects of the Meltdown patches in AWS. They run Kafka and Cassandra clusters, and they saw some counterintuitive changes as a result of the changes in CPU efficiency. Second, The Last Pickle has a post on how Meltdown impacts Cassandra latency, and third, Databricks has written about the impact they've seen on Spark workloads in AWS.

https://blog.appoptics.com/visualizing-meltdown-aws/
http://thelastpickle.com/blog/2018/01/10/meltdown-impact-on-latency.html
https://databricks.com/blog/2018/01/13/meltdown-and-spectre-performance-impact-on-big-data-workloads-in-the-cloud.html

This post describes several of the new features of the recently released Samza 0.14.0, including the new SQL support that's built with Apache Calcite.

http://www.i-programmer.info/news/197-data-mining/11445-apache-samza-adds-sql.html

The MapR blog has a two-part post on using Apache Kafka and Apache Spark (the Streaming and ML APIs) to build a real-time flight delay prediction application. The posts include code on GitHub and an Apache Zeppelin notebook; a rough sketch of the Kafka-to-Spark read path follows the links below.

https://mapr.com/blog/fast-data-processing-pipeline-predicting-flight-delays-using-apache-apis-pt-1/
https://mapr.com/blog/fast-data-processing-pipeline-predicting-flight-delays-using-apache-apis-pt-2/
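
To give a sense of the Spark side of such a pipeline, here's a minimal sketch in Scala of consuming flight events from Kafka with Structured Streaming. It is not the code from the MapR posts: the broker address, topic name, and record schema are made-up placeholders, and the Spark ML scoring step is replaced with a console sink to keep the example self-contained.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions.from_json
    import org.apache.spark.sql.types._

    object FlightDelaySketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder.appName("flight-delay-sketch").getOrCreate()
        import spark.implicits._

        // Illustrative schema for incoming flight records.
        val schema = new StructType()
          .add("carrier", StringType)
          .add("origin", StringType)
          .add("dest", StringType)
          .add("depDelayMinutes", DoubleType)

        // Consume flight events from Kafka (broker and topic are placeholders;
        // requires the spark-sql-kafka-0-10 package on the classpath).
        val flights = spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "localhost:9092")
          .option("subscribe", "flight-events")
          .load()
          .selectExpr("CAST(value AS STRING) AS json")
          .select(from_json($"json", schema).as("flight"))
          .select("flight.*")

        // The MapR posts score each record with a Spark ML model at this point;
        // this sketch just prints the parsed stream.
        flights.writeStream
          .format("console")
          .outputMode("append")
          .start()
          .awaitTermination()
      }
    }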

In a follow-up to their post on running Spark on Kubernetes, this post adds instructions for deploying Apache Zeppelin inside a Kubernetes cluster. The Banzai Cloud team has published an image to Docker Hub and sample configs to GitHub to make the process easy.

https://banzaicloud.com/blog/zeppelin-spark-k8s-2/

The Google Cloud Platform blog argues that you should use a strongly consistent database whenever possible, because it makes application and business logic easier to implement. The post gives a high-level overview of Google Cloud Spanner and its "external consistency" guarantees, including a comparison to multi-master replication and a brief intro to Cloud Spanner's TrueTime.

https://cloudplatform.googleblog.com/2018/01/why-you-should-pick-strong-consistency-whenever-possible.html

This tutorial describes how to run TiDB, a MySQL-compatible distributed database inspired by Google's F1 and Spanner, on Kubernetes.

https://banzaicloud.com/blog/tidb-kubernetes/

If you've been looking to try out Apache Pulsar (incubating) to see what all the hype is about, there are Terraform and Ansible scripts to easily spin up a cluster in AWS. After a few setup steps, the automation builds a six-node, fully configured cluster.

http://pulsar.apache.org/docs/latest/deployment/aws-cluster/

The data Artisans blog has a post with tips for sizing an Apache Flink cluster (or, really, any distributed computing application) by estimating disk and network throughput. It walks through a practical example and the related formulas for a five-node cluster; a back-of-the-envelope version follows the link below.

https://data-artisans.com/blog/how-to-size-your-apache-flink-cluster-general-guidelines
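
To give a flavor of the arithmetic involved, here is a small sketch in the same spirit. All of the input numbers are made-up assumptions for illustration, not figures from the article.

    object FlinkSizingSketch {
      def main(args: Array[String]): Unit = {
        // Assumed workload (illustrative numbers only).
        val messagesPerSecond = 1000000L   // cluster-wide ingest rate
        val messageSizeBytes  = 2000L      // average record size
        val nodes             = 5          // machines in the cluster

        val ingestBytesPerSec = messagesPerSecond * messageSizeBytes  // 2 GB/s into the cluster
        val perNodeIngest     = ingestBytesPerSec / nodes             // ~400 MB/s per node

        // A keyBy/shuffle sends roughly every record over the network once more,
        // so budget per-node network capacity for ingest plus shuffle traffic.
        val perNodeNetwork = perNodeIngest * 2                        // ~800 MB/s per node

        println(f"per-node ingest:  ${perNodeIngest / 1e6}%.0f MB/s")
        println(f"per-node network: ${perNodeNetwork / 1e6}%.0f MB/s")
      }
    }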

This post describes how to use StreamSets to grab data from the Twitter API and copy it to a local file system for analysis.

https://streamsets.com/blog/streaming-data-twitter-analysis-spark/

News

Trafodion, which implements transactional SQL on Hadoop/HBase, has graduated from the Apache incubator to be a top-level project.

https://blogs.apache.org/foundation/entry/the-apache-software-foundation-announces27

InfoQ has published a new eMag on Streaming Architectures. Behind an email-wall, the content comes from contributors employed by Google, Confluent, AWS, and others. It clocks in at over 30 pages, with quite a bit of good material on Beam, Kafka, DynamoDB, Flink, and more.

https://www.infoq.com/minibooks/emag-streaming-architecture

Hortonworks CEO Rob Bearden has recapped 2017, looking at product releases and major partnerships.

https://hortonworks.com/blog/2017-year-review/

Kafka Summit 2018 takes place in London in April. The schedule has been announced—there are four keynotes (including one by Martin Fowler) and 30 sessions featuring speakers from many different types of companies. Early bird registration is available through January 26th.

https://www.confluent.io/blog/kafka-summit-london-2018-agenda-announced/

DZone has an article, based on a survey of over 20 companies, that provides a good overview of types (and specific examples) of use cases across the big data ecosystem, covering everything from analytics to real-time processing to machine learning.

https://dzone.com/articles/big-data-use-cases-3

Releases

Apache NiFi 1.5.0 was released, with improved support for Apache Kafka (processors for Kafka 1.0), integration with Apache Atlas for lineage, improvements to Kerberos handling, integration with the NiFi Registry to version/manage flow definitions, and more.

https://cwiki.apache.org/confluence/display/NIFI/Release+Notes#ReleaseNotes-Version1.5.0

There's a new reactive Scala client for Apache Pulsar, pulsar4s.

https://github.com/sksamuel/pulsar4s

Databricks has announced general availability of their Databricks Cache feature. It leverages SSDs and columnar compression to improve performance.

https://databricks.com/blog/2018/01/09/databricks-cache-boosts-apache-spark-performance.html

ShiftLeft has open-sourced a fork of Apache TinkerGraph that improves memory usage and implements strict schema validation. The announcement describes the major memory optimizations: with a schema in place, specialized object definitions replace generic key-value pairs, which otherwise burn a lot of memory on HashMap$Node objects and related bookkeeping. A small illustration of the idea follows the link.

https://blog.shiftleft.io/open-sourcing-our-specialized-tinkergraph-with-70-memory-reduction-and-strict-schema-validation-fa5cfb3dd82d
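
The memory argument is easiest to see side by side. This is an illustration of the idea rather than ShiftLeft's actual classes: a generic property map pays per-entry HashMap overhead and boxing, while a schema-specialized vertex stores the same data as plain fields on one small object.

    import scala.collection.mutable

    // Generic TinkerGraph-style vertex: every property lives in a HashMap,
    // paying for HashMap nodes, boxed values, and repeated key strings.
    final class GenericVertex(val properties: mutable.HashMap[String, Any])

    // Schema-specialized vertex: the schema fixes the properties up front,
    // so they become plain (unboxed) fields with no per-property overhead.
    final class MethodVertex(val name: String, val lineNumber: Int, val isStatic: Boolean)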

Kafka Webview is a web-based consumer UI for Kafka clusters. It handles common consumer tasks, such as seeking to particular offsets and using custom deserializers (an example deserializer follows the link). There's a Docker image on Docker Hub, making it easy to try out.

https://github.com/SourceLabOrg/kafka-webview
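
A custom deserializer for a tool like this is just an implementation of Kafka's Deserializer interface. Here's a trivial, purely illustrative example in Scala that upper-cases UTF-8 payloads.

    import java.nio.charset.StandardCharsets
    import java.util
    import org.apache.kafka.common.serialization.Deserializer

    // Packaged as a jar, a class like this can be plugged into a consumer UI
    // to render topic payloads however you like.
    class UpperCaseDeserializer extends Deserializer[String] {
      override def configure(configs: util.Map[String, _], isKey: Boolean): Unit = ()

      override def deserialize(topic: String, data: Array[Byte]): String =
        if (data == null) null
        else new String(data, StandardCharsets.UTF_8).toUpperCase

      override def close(): Unit = ()
    }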

Version 1.1 of the Lenses streaming platform for Apache Kafka was released. New features include a topology visualizer, Kubernetes support (including scaling Lenses SQL processors up and down), a ReduxJS web application library, improvements to Lenses SQL, and improved LDAP integration.

http://www.landoop.com/blog/2018/01/lenses-release/

The first release of Strimzi, which is a set of images and configuration templates for deploying Apache Kafka on Kubernetes/OpenShift, was announced.

https://www.redhat.com/archives/strimzi/2018-January/msg00006.html

A trio of security vulnerabilities in Apache Geode were disclosed. If you're running a version prior to 1.3.0, it's time to upgrade.

https://lists.apache.org/thread.html/666151062a3ba69af24f18331aa3112724144770c99dfce9277e9974@%3Cannounce.apache.org%3E
https://lists.apache.org/thread.html/0c0a0026369e8a6e1fec9a015a0e45c43049e7d0e84cdca5ff49a468@%3Cannounce.apache.org%3E
https://lists.apache.org/thread.html/65292b03f39031c8b1c9a8c9e863b714e201fe9ac58dcd4802d855ee@%3Cannounce.apache.org%3E

AWS Glue, AWS' ETL-as-a-service, now supports Scala as a scripting language. This post has an example of using it for some non-trivial ETL; a skeletal script showing the general shape follows the link.

https://aws.amazon.com/blogs/big-data/aws-glue-now-supports-scala-scripts/
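
For context, a Glue Scala script is essentially a Spark program bootstrapped with Glue's job utilities. The sketch below shows the general shape; the database, table, and column names are placeholders, and the exact Glue API calls should be checked against the AWS post and documentation.

    import com.amazonaws.services.glue.GlueContext
    import com.amazonaws.services.glue.util.{GlueArgParser, Job}
    import org.apache.spark.SparkContext
    import scala.collection.JavaConverters._

    object GlueScalaSketch {
      def main(sysArgs: Array[String]): Unit = {
        val glueContext = new GlueContext(new SparkContext())
        val args = GlueArgParser.getResolvedOptions(sysArgs, Seq("JOB_NAME").toArray)
        Job.init(args("JOB_NAME"), glueContext, args.asJava)

        // Read a table registered in the Glue Data Catalog (names are placeholders).
        val events = glueContext
          .getCatalogSource(database = "analytics", tableName = "raw_events")
          .getDynamicFrame()

        // DynamicFrames convert to plain Spark DataFrames, so regular Spark
        // transformations apply from here.
        events.toDF().groupBy("event_type").count().show()

        Job.commit()
      }
    }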

MapR has announced a new data governance tool that provides lineage tracking (and visualization) across data sets.

https://mapr.com/why-mapr/data-governance/

Events

Curated by Datadog ( http://www.datadog.com )

California

Airflow, Streaming, and More (San Francisco) - Wednesday, January 17
https://www.meetup.com/SF-Big-Analytics/events/246039241/

Replicating Data from MapR-DB with StreamSets Data Collector (Santa Clara) - Wednesday, January 17
https://www.meetup.com/SF-Bay-Area-Data-Ingest-Meetup/events/245764097/

Washington

Stream Processing with Flink at Alibaba and OfferUp (Bellevue) - Wednesday, January 17
https://www.meetup.com/seattle-apache-flink/events/246458117/

Seattle Apache Kafka Meetup (Bellevue) - Thursday, January 18
https://www.meetup.com/Seattle-Apache-Kafka-Meetup/events/245919802/

Iowa

Hadoop 101 (West Des Moines) - Thursday, January 18
https://www.meetup.com/Des-Moines-Big-Data-Meetup/events/246460167/

Illinois

Event-Driven Architecture Using Apache Kafka (Chicago) - Wednesday, January 17
https://www.meetup.com/IllinoisJUGChicago/events/245656277/

Wisconsin

Hortonworks Data Flow + A Tidy Text Analysis of the Simpsons in R (Green Bay) - Tuesday, January 16
https://www.meetup.com/BAMDataScience/events/245315322/

Georgia

Real-Time Stream Processing with Apache Storm (Roswell) - Tuesday, January 16
https://www.meetup.com/Atlanta-Hadoop-Users-Group/events/246041014/

CANADA

Big Data & Machine Learning Pipelines: A Tale of Lambdas, Kappas, and Pancakes (Vancouver) - Tuesday, January 16
https://www.meetup.com/Vancouver-Amazon-Web-Services-User-Group/events/245946651/

SWEDEN

Streaming Analytics Made Easy: Hortonworks DataFlow and Druid (Stockholm) - Thursday, January 18
https://www.meetup.com/stockholm-hug/events/245307881/

SPAIN

First Stream Processing Meetup (Barcelona) - Thursday, January 18
https://www.meetup.com/Barcelona-Stream-Processing-Meetup/events/245334826/

GERMANY

Apache Spark Hands-On Workshop (Nurnberg) - Monday, January 15
https://www.meetup.com/Nuernberg-Big-Data/events/246033597/

First Meeting in Karlsruhe (Karlsruhe) - Monday, January 15
https://www.meetup.com/Karlsruhe-Big-Data-Meetup/events/245885440/

POLAND

Online Incremental Learning on Streams (Krakow) - Tuesday, January 16
https://www.meetup.com/datakrk/events/246089702/

Developing Kafka Streams Applications (Warsaw) - Tuesday, January 16
https://www.meetup.com/WarsawScala/events/246606668/

ROMANIA

Spark v2.2 Workshop (Bucharest) - Friday, January 19
https://www.meetup.com/Bucharest-Data-Science-Meetup/events/246067161/