Apache Spark: An Introductory Workshop for Developers
This one-day workshop, led by Dean Wampler, Ph.D., is designed to teach developers how to implement data analytics using Apache Spark for Reactive applications. Apache Spark is a distributed computing system written in Scala, developed initially as a UC Berkeley research project for distributed data programming. It has grown in capabilities and recently became a top-level Apache project. In this workshop, developers will use hands-on exercises to learn the principles of Spark programming and idioms for specific problems, such as event stream processing, SQL-based analysis of structured data in files, integration with Reactive frameworks like Akka and with Hadoop and related tools, and advanced analytics such as machine learning and graph algorithms.
After participating in this workshop, you should:
Understand how Spark works
Understand how to use the Spark Scala API to implement various data analytics algorithms for offline (batch-mode) and event-streaming applications
Understand how to test and deploy Spark applications
Understand the basics of integrating Spark with Akka and Hadoop
Introduction - Why Spark?
Learning the Spark API
The Matrix API
The Inverted Index
Unstructured text analysis: NGrams
Real-time Event Streaming
Integration with Akka and Reactive Streams
SQL queries on structured data
Data formats - Text, Parquet, and Others
Integration with Hive
Integration with Hadoop and Other Tools
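To give a flavor of the hands-on exercises, here is a minimal sketch of the inverted index topic using Spark's Scala RDD API. The input path, output path, record format, and application name are illustrative assumptions, not the actual workshop materials.

```scala
import org.apache.spark.{SparkConf, SparkContext}

// A minimal inverted-index sketch in Spark's Scala API.
// Paths and the input record format ("docId\ttext") are assumed for illustration.
object InvertedIndex {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("Inverted Index").setMaster("local[*]")
    val sc   = new SparkContext(conf)

    sc.textFile("data/crawl")
      .map { line =>
        val Array(docId, text) = line.split("\t", 2)
        (docId, text)
      }
      .flatMap { case (docId, text) =>
        // Split on non-word characters, then count each (word, document) pair.
        text.split("""\W+""").map(word => ((word.toLowerCase, docId), 1))
      }
      .reduceByKey(_ + _)
      .map { case ((word, docId), count) => (word, (docId, count)) }
      .groupByKey() // word -> all (docId, count) pairs where it appears
      .saveAsTextFile("output/inverted-index")

    sc.stop()
  }
}
```

The same pattern of composing transformations (`map`, `flatMap`, `reduceByKey`) recurs throughout the exercises, from NGrams to streaming.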
Typesafe (Twitter: @Typesafe) is dedicated to helping developers build Reactive applications on the JVM. With the Typesafe Reactive Platform, you can create modern, event-driven applications that scale on multicore and cloud computing architectures. Typesafe Activator, a browser-based tool with reusable templates, makes it easy to get started with Play Framework, Akka and Scala. Backed by Greylock Partners, Shasta Ventures, Bain Capital Ventures and Juniper Networks, Typesafe is headquartered in San Francisco with offices in Switzerland and Sweden.