sparkle: Apache Spark applications in Haskell


sparkle [spär′kəl]: a library for writing resilient analytics applications in Haskell that scale to thousands of nodes, using Spark and the rest of the Apache ecosystem under the hood. See this blog post for the details.

This is an early tech preview, not production ready.

Getting started

The tl;dr using the hello app as an example on your local machine:

$ stack build hello
$ stack exec -- sparkle package sparkle-example-hello
$ stack exec -- spark-submit --master 'local[1]' --packages com.amazonaws:aws-java-sdk:1.11.253,org.apache.hadoop:hadoop-aws:2.7.2,com.google.guava:guava:23.0 sparkle-example-hello.jar

How to use

To run a Spark application, the process is as follows:

  1. create an application in the apps/ folder, in-repo or as a submodule;
  2. add your app to stack.yaml (see the sketch just after this list);
  3. build the app;
  4. package your app into a deployable JAR container;
  5. submit it to a local or cluster deployment of Spark.
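
For step 2, your app just needs to be listed as a local package in stack.yaml. A minimal sketch, assuming an app that lives in apps/hello like the in-repo hello example (adjust the path for your own app):

packages:
- '.'
- apps/hello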

If you run into issues, read the Troubleshooting section below first.

Build

Linux

Requirements

  • the Stack build tool (version 1.2 or above);
  • either the Nix package manager,
  • or OpenJDK, Gradle and Spark (version 1.6) installed from your distro.

To build:

$ stack build

You can optionally have Stack download Spark and Gradle into a local sandbox (using Nix) for reproducible builds. This is the recommended way to build sparkle. Alternatively, you’ll need these installed through your OS distribution’s package manager for the next steps (and you’ll need to tell Stack how to find the JVM header files and shared libraries).

To use Nix, set the following in your ~/.stack/config.yaml (or pass --nix to all Stack commands, see the Stack manual for more):

nix:
  enable: true
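
Alternatively, pass the flag explicitly on each invocation, e.g.:

$ stack --nix build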

Other platforms

sparkle is not directly supported on non-Linux operating systems (e.g. Mac OS X or Windows), but you can use Docker to build and run sparkle inside a Linux container on those platforms. First,

$ stack docker pull

Then, just add --docker as an argument to all Stack commands, e.g.

$ stack --docker build

By default, Stack uses the tweag/sparkle build and test Docker image, which includes everything that the Nix setup from the Linux section provides. See the Stack manual for how to modify the Docker settings.

Package

To package your app as a JAR directly consumable by Spark:

$ stack exec -- sparkle package <app-executable-name>

Submit

Finally, to run your application, for example locally:

$ stack exec -- spark-submit --master 'local[1]' <app-executable-name>.jar

The <app-executable-name> is any executable name as given in the .cabal file for your app. See apps in the apps/ folder for examples.
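
For reference, that executable name comes from an executable stanza in your app’s .cabal file. A minimal sketch (the stanza fields below are illustrative, not copied from an actual app):

executable sparkle-example-hello
  main-is:             Main.hs
  build-depends:       base, sparkle
  default-language:    Haskell2010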

See here for other options, including launching a whole cluster from scratch on EC2. This blog post shows you how to get started on the Databricks hosted platform and on Amazon’s Elastic MapReduce.

How it works

sparkle is a tool for creating self-contained Spark applications in Haskell. Spark applications are typically distributed as JAR files, so that’s what sparkle creates. We embed Haskell native object code as compiled by GHC in these JAR files, along with any shared library required by this object code to run. Spark dynamically loads this object code into its address space at runtime and interacts with it via the Java Native Interface (JNI).
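
Concretely, a sparkle application is an ordinary Haskell executable whose main drives Spark through the bindings in Control.Distributed.Spark. The following is only a sketch, assuming the binding names mentioned elsewhere in this document (newSparkConf, getOrCreateSparkContext, parallelize, mean) are re-exported by that module; it is not a verbatim copy of an app from the apps/ folder:

{-# LANGUAGE OverloadedStrings #-}
module Main where

import Control.Distributed.Spark

main :: IO ()
main = do
    -- Configure and obtain a SparkContext; this Haskell code runs inside
    -- the JVM process, loaded via JNI as described above.
    conf <- newSparkConf "hello sparkle"
    sc   <- getOrCreateSparkContext conf
    -- Distribute a few numbers across the cluster and compute their mean.
    rdd  <- parallelize sc [1, 2, 3, 4, 5 :: Double]
    mean rdd >>= print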

Troubleshooting

jvm library or header files not found

You’ll need to tell Stack where to find your local JVM installation. Something like the following in your ~/.stack/config.yaml should do the trick, but check that the paths match what’s on your system:

extra-include-dirs: [/usr/lib/jvm/java-7-openjdk-amd64/include]
extra-lib-dirs: [/usr/lib/jvm/java-7-openjdk-amd64/jre/lib/amd64/server]

Or use --nix: since it won’t use your globally installed JDK, it will have no trouble finding its own locally installed one.

Can’t build sparkle on OS X

OS X is not a supported platform for now. Several issues need to be resolved before sparkle works on OS X; they are tracked in this ticket.

Gradle <= 2.12 incompatible with JDK 9

If you’re using JDK 9, note that you’ll need to either downgrade to JDK 8 or update your Gradle version, since Gradle versions up to and including 2.12 are not compatible with JDK 9.

Anonymous classes in inline-java quasiquotes fail to deserialize

When using inline-java, it is recommended to use the Kryo serializer, which is currently not the default in Spark but is faster anyway. If you don’t use the Kryo serializer, objects of anonymous classes, which arise e.g. when using Java 8 function literals,

foo :: RDD Int -> IO (RDD Bool)
foo rdd = [java| $rdd.map((Integer x) -> x.equals(0)) |]

won’t be deserialized properly in multi-node setups. To avoid this problem, switch to the Kryo serializer by setting the following configuration properties in your SparkConf:

do conf <- newSparkConf "some spark app"
   confSet conf "spark.serializer" "org.apache.spark.serializer.KryoSerializer"
   confSet conf "spark.kryo.registrator" "io.tweag.sparkle.kryo.InlineJavaRegistrator"

See #104 for more details.

java.lang.UnsatisfiedLinkError: /tmp/sparkle-app…: failed to map segment from shared object

Sparkle unzips the Haskell binary program into a temporary location on the filesystem and then loads it from there. For loading to succeed, the temporary location must not be mounted with the noexec option. Alternatively, the temporary location can be changed with

spark-submit --driver-java-options="-Djava.io.tmpdir=..." \
             --conf "spark.executor.extraJavaOptions=-Djava.io.tmpdir=..."

java.io.IOException: No FileSystem for scheme: s3n

Spark 2.2 requires explicitly specifying extra JAR files to spark-submit in order to work with AWS. To work around this, pass the extra dependencies via the --packages argument when submitting the job:

spark-submit --packages com.amazonaws:aws-java-sdk:1.7.4,org.apache.hadoop:hadoop-aws:2.7.2,com.google.guava:guava:12.0

License

Copyright (c) 2015-2016 EURL Tweag.

All rights reserved.

sparkle is free software, and may be redistributed under the terms specified in the LICENSE file.

Sponsors

  • Tweag I/O
  • LeapYear

sparkle is maintained by Tweag I/O.

Have questions? Need help? Tweet at @tweagio.

Changes

Change Log

All notable changes to this project will be documented in this file.

The format is based on Keep a Changelog.

[Unreleased]

Added

Changed

[0.7.4] - 2018-02-28

Changed

  • Fixed dynamic linking of sparkle when library dependencies don’t set the RPATH to $ORIGIN. PR #139
  • Linker flags -rpath and -z origin are no longer necessary.
  • Build with jvm-batching.

[0.7.3] - 2018-02-06

Added

  • More RDD method bindings: randomSplit, mean, zipWithUniqueId, reduceByKey, subtractByKey.

Changed

  • Use inline-java for PairRDD bindings under the hood.
  • Updated sparkle to build with distributed-closure-0.4.0.

[0.7.2] - 2017-12-25

Added

  • More RDD method bindings: sortBy.

[0.7.1] - 2017-12-13

Fixed

  • Use StaticPointers for PairRDD as a workaround for GHC bug #14204 occurring when mapping over a PairRDD (see issue #119)

[0.7] - 2017-12-09

[0.6] - 2017-07-16

Added

  • Support shipping anonymous objects that appear in inline-java quasiquotes. You’ll need to configure your app to use the Kryo serializer for this to work. See FAQ in README. This fixes #104.

Changed

  • Move parallelize, textFile and binaryRecords to Context module.
  • Functions such as sample now use the choice library to describe the semantics of boolean arguments in their types.
  • Use inline-java for RDD bindings under the hood.

[0.5] - 2017-02-21

Added

  • Bind to expm1
  • Add bindings to dayofmonth, current_timestamp and current_date.
  • Add support for the dataframe condition expressions
  • Add bindings to withColumnRenamed, columns, printSchema, Column.expr.
  • Bind DataFrame distinct.
  • Add bindings for log and log1p for Columns.
  • Add binding to Column.cast.
  • Add bindings getList and array for columns.
  • Add bindings: schema for rows, Metadata type, javaRDD, range, Row getters and constructors, StructType constructors, createDataFrame, more DataType bindings.

Changed

  • Prevent Haskell exceptions from escaping apply.
  • Update sparkle to work with the latest jni, which uses ForeignPtr for Java references.
  • Move StructType and friends to modules StructField, DataType and Metadata.
  • Rename createRow, rowGet, rowSize, joinPairRDD to have the same names as the java methods.

[0.4]

Added

  • Support for reading/writing Parquet files.
  • More RDD method bindings: repartition, treeAggregate, binaryRecords, aggregateByKey, mapPartitions, mapPartitionsWithIndex.
  • More complete DataFrame support.
  • Intero support.
  • stack ghci support.
  • Support Template Haskell splices and ANN annotations that use sparkle code.

Changed

Fixed

  • More reliable initialization of embedded shared library.
  • Cleanup temporary files properly.

[0.3] - 2016-12-27

Added

  • Dockerfile to build sparkle.
  • Compatibility with singletons-2.2.
  • Add the identity Reify/Reflect instances.
  • Change JNI bindings to use new JNI.String type, instead of ByteString. This new type guarantees the invariants required by the JNI API (null-termination in particular).

Changed

  • Remove Reify/Reflect instances for Int. Only instances for sized types remain.

Fixed

  • Fix the type in Reify Int, which made it incorrect.

[0.2.0] - 2016-12-13

Added

  • New binding: getOrCreateSQLContext.

Changed

  • getOrCreate renamed to getOrCreateSparkContext.

[0.1.0.1] - 2016-06-12

Added

  • More bindings to more call*Method JNI functions.

Changed

  • Use getOrCreate to get SparkContext.

[0.1.0] - 2016-04-25

  • Initial release