Module documentation for 0.7.4
sparkle: Apache Spark applications in Haskell
sparkle [spär′kəl]: a library for writing resilient analytics applications in Haskell that scale to thousands of nodes, using Spark and the rest of the Apache ecosystem under the hood. See this blog post for the details.
This is an early tech preview, not production ready.
The tl;dr, using the hello app as an example on your local machine:
```
$ stack build hello
$ stack exec -- sparkle package sparkle-example-hello
$ stack exec -- spark-submit --master 'local' --packages com.amazonaws:aws-java-sdk:1.11.253,org.apache.hadoop:hadoop-aws:2.7.2,com.google.guava:guava:23.0 sparkle-example-hello.jar
```
How to use
To run a Spark application the process is as follows:
- create an application in the `apps/` folder, in-repo or as a submodule;
- add your app to `stack.yaml`;
- build the app;
- package your app into a deployable JAR container;
- submit it to a local or cluster deployment of Spark.
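To make the steps above concrete, here is a minimal sketch of what such an app's `Main` module might look like, modeled loosely on the `hello` example. Everything beyond `newSparkConf` and `confSet` (which appear later in this README) is an assumption about the `Control.Distributed.Spark` API of this era; treat the apps in the `apps/` folder as the authoritative reference.

```haskell
-- A hedged sketch, not verbatim from the repo: getOrCreateSparkContext,
-- parallelize and count are assumed combinators and may differ in name or
-- signature in your version of sparkle.
{-# LANGUAGE OverloadedStrings #-}
module Main where

import Control.Distributed.Spark

main :: IO ()
main = do
  conf <- newSparkConf "Hello sparkle!"   -- names the Spark application
  sc   <- getOrCreateSparkContext conf    -- connects to the Spark master
  rdd  <- parallelize sc [1 .. 10 :: Int] -- distributes a small dataset
  n    <- count rdd                       -- runs a Spark job
  print n
```

Packaging this executable with `sparkle package` and submitting the resulting JAR with `spark-submit` follows the steps listed above.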
If you run into issues, read the Troubleshooting section below first.
Requirements:

- the Stack build tool (version 1.2 or above);
- either the Nix package manager,
- or OpenJDK, Gradle and Spark (version 1.6) installed from your distro.
```
$ stack build
```
You can optionally get Stack to download Spark and Gradle in a local sandbox (using Nix) for good build results reproducibility. This is the recommended way to build sparkle. Alternatively, you’ll need these installed through your OS distribution’s package manager for the next steps (and you’ll need to tell Stack how to find the JVM header files and shared libraries).
To use Nix, set the following in your `~/.stack/config.yaml` (or pass `--nix` to all Stack commands; see the Stack manual for details):

```yaml
nix:
  enable: true
```
sparkle is not directly supported on non-Linux operating systems (e.g. Mac OS X or Windows). But you can use Docker to run sparkle natively inside a container on those platforms. First,
```
$ stack docker pull
```

Then, just add `--docker` as an argument to all Stack commands, e.g.

```
$ stack --docker build
```
To package your app as a JAR directly consumable by Spark:
```
$ stack exec -- sparkle package <app-executable-name>
```
Finally, to run your application, for example locally:
```
$ stack exec -- spark-submit --master 'local' <app-executable-name>.jar
```
`<app-executable-name>` is any executable name as given in the `.cabal` file for your app. See the apps in the `apps/` folder for examples.
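For instance, an app's `.cabal` file might contain an executable stanza like the following (the name `sparkle-example-hello` and the dependency list are shown purely for illustration; check the example apps for the real stanzas):

```
executable sparkle-example-hello
  main-is:            Main.hs
  build-depends:      base, sparkle
  default-language:   Haskell2010
```

With such a stanza, `stack exec -- sparkle package sparkle-example-hello` would produce a `sparkle-example-hello.jar` ready for `spark-submit`.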
How it works
sparkle is a tool for creating self-contained Spark applications in Haskell. Spark applications are typically distributed as JAR files, so that’s what sparkle creates. We embed Haskell native object code as compiled by GHC in these JAR files, along with any shared library required by this object code to run. Spark dynamically loads this object code into its address space at runtime and interacts with it via the Java Native Interface (JNI).
jvm library or header files not found
You’ll need to tell Stack where to find your local JVM installation.
Something like the following in your
~/.stack/config.yaml should do
the trick, but check that the paths match up what’s on your system:
```yaml
extra-include-dirs: [/usr/lib/jvm/java-7-openjdk-amd64/include]
extra-lib-dirs: [/usr/lib/jvm/java-7-openjdk-amd64/jre/lib/amd64/server]
```
This error should not occur when building with `--nix`: since Stack won't use your globally installed JDK, it will have no trouble finding its own locally installed one.
Can’t build sparkle on OS X
OS X is not a supported platform for now. There are several issues to make sparkle work on OS X, tracked in this ticket.
Gradle <= 2.12 incompatible with JDK 9
If you’re using JDK 9, note that you’ll need to either downgrade to JDK 8 or update your Gradle version, since Gradle versions up to and including 2.12 are not compatible with JDK 9.
Anonymous classes in inline-java quasiquotes fail to deserialize
When using inline-java, it is recommended to use the Kryo serializer, which is currently not the default in Spark but is faster anyway. If you don't use the Kryo serializer, objects of anonymous classes, which arise e.g. when using Java 8 function literals,

```haskell
foo :: RDD Int -> IO (RDD Bool)
foo rdd = [java| $rdd.map((Integer x) -> x.equals(0)) |]
```
won’t be deserialized properly in multi-node setups. To avoid this
problem, switch to the Kryo serializer by setting the following
configuration properties in your `SparkConf`:

```haskell
do conf <- newSparkConf "some spark app"
   confSet conf "spark.serializer"
                "org.apache.spark.serializer.KryoSerializer"
   confSet conf "spark.kryo.registrator"
                "io.tweag.sparkle.kryo.InlineJavaRegistrator"
```
See #104 for more details.
java.lang.UnsatisfiedLinkError: /tmp/sparkle-app…: failed to map segment from shared object
Sparkle unzips the Haskell binary program in a temporary location on
the filesystem and then loads it from there. For loading to succeed, the
temporary location must not be mounted with the `noexec` option.
Alternatively, the temporary location can be changed with

```
spark-submit --driver-java-options="-Djava.io.tmpdir=..." \
    --conf "spark.executor.extraJavaOptions=-Djava.io.tmpdir=..."
```
java.io.IOException: No FileSystem for scheme: s3n
Spark 2.2 requires explicitly specifying extra JAR files to `spark-submit` in order to work with AWS. To work around this, add an additional `packages` argument when submitting the job:
```
spark-submit --packages com.amazonaws:aws-java-sdk:1.7.4,org.apache.hadoop:hadoop-aws:2.7.2,com.google.guava:guava:12.0
```
Copyright (c) 2015-2016 EURL Tweag.
All rights reserved.
sparkle is free software, and may be redistributed under the terms specified in the LICENSE file.
sparkle is maintained by Tweag I/O.
Have questions? Need help? Tweet at @tweagio.
All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog.
[0.7.4] - 2018-02-28
- Fixed dynamic linking of sparkle when library dependencies don't set `$ORIGIN`. PR #139
- Linker flags `-z origin` are no longer necessary.
- Build with
[0.7.3] - 2018-02-06
- Use inline-java for PairRDD bindings under the hood.
- Updated sparkle to build with distributed-closure-0.4.0.
[0.7.2] - 2017-12-25
[0.7.1] - 2017-12-13
- Use StaticPointers for `PairRDD` as a workaround for GHC bug #14204 occurring when mapping over a `PairRDD` (see issue #119).
[0.7] - 2017-12-09
[0.6] - 2017-07-16
- Support shipping anonymous objects that appear in inline-java quasiquotes. You’ll need to configure your app to use the Kryo serializer for this to work. See FAQ in README. This fixes #104.
- Functions such as `sample` now use the choice library to describe the semantics of boolean arguments in their types.
- Use inline-java for RDD bindings under the hood.
[0.5] - 2017-02-21
- Bind to expm1
- Add bindings to dayofmonth, current_timestamp and current_date.
- Add support for the dataframe condition expressions
- Add bindings to withColumnRenamed, columns, printSchema, Column.expr.
- Bind DataFrame distinct.
- Add bindings for log and log1p for Columns.
- Add binding to Column.cast.
- Add bindings getList and array for columns.
- Add bindings: schema for rows, Metadata type, javaRDD, range, Row getters and constructors, StructType constructors, createDataFrame, more DataType bindings.
- Prevent Haskell exceptions from escaping apply.
- Update sparkle to work with latest jni which uses ForeignPtr for java references.
- Move StructType and friends to modules StructField, DataType and Metadata.
- Rename createRow, rowGet, rowSize, joinPairRDD to have the same names as the java methods.
- Support for reading/writing Parquet files.
- More complete
- Intero support.
- Support Template Haskell splices and `ANN` annotations that use sparkle code.
- More reliable initialization of embedded shared library.
- Cleanup temporary files properly.
[0.3] - 2016-12-27
- Dockerfile to build sparkle.
- Compatibility with singletons-2.2.
- Add the identity
- Change JNI bindings to use the new `JNI.String` type instead of `ByteString`. This new type guarantees the invariants required by the JNI API (null-termination in particular).
Int. Only instances for sized types remain.
- Fix type in `Reify Int` making it incorrect.
[0.2.0] - 2016-12-13
- New binding:
[0.1.0.1] - 2016-06-12
- More bindings to more
[0.1.0] - 2016-04-25
- Initial release