accelerate-llvm-ptx

Accelerate backend for NVIDIA GPUs

Version on this page: 1.1.0.1
LTS Haskell 11.22: 1.1.0.1
Stackage Nightly 2018-03-12: 1.1.0.1
Latest on Hackage: 1.3.0.0


BSD-3-Clause licensed by Trevor L. McDonell
Maintained by Trevor L. McDonell
This version can be pinned in stack with: accelerate-llvm-ptx-1.1.0.1@sha256:bc8fe44c5f3496dae692fafefe23fd09bd11581d35da2886d687f72135987b42,7401

Module documentation for 1.1.0.1

An LLVM backend for the Accelerate Array Language


This package compiles Accelerate code to LLVM IR, and executes that code on multicore CPUs as well as NVIDIA GPUs. This avoids the need to go through nvcc or clang. For details on Accelerate, refer to the main repository.
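To give a flavour of what this enables, here is a small sketch of a dot product written in Accelerate and executed on the GPU via `run` from `Data.Array.Accelerate.LLVM.PTX`. The functions used (`fold`, `zipWith`, `use`, `fromList`) are standard Accelerate API; treat the snippet as illustrative rather than canonical:

```haskell
import Data.Array.Accelerate            as A
import Data.Array.Accelerate.LLVM.PTX   ( run )

-- Dot product, expressed as an Accelerate computation. The PTX backend
-- compiles this to LLVM IR, and from there to GPU code, at runtime.
dotp :: Vector Float -> Vector Float -> Acc (Scalar Float)
dotp xs ys = A.fold (+) 0 ( A.zipWith (*) (A.use xs) (A.use ys) )

main :: IO ()
main =
  let xs = fromList (Z :. 5) [1..5]        -- the vector [1,2,3,4,5]
      ys = fromList (Z :. 5) (repeat 1)    -- the vector [1,1,1,1,1]
  in  print (run (dotp xs ys))             -- evaluated on the GPU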

We love all kinds of contributions, so feel free to open issues for missing features as well as report (or fix!) bugs on the issue tracker.

Dependencies

Haskell dependencies are available from Hackage, but there are several external library dependencies that you will need to install as well:

  • LLVM
  • libFFI (if using the accelerate-llvm-native backend for multicore CPUs)
  • CUDA (if using the accelerate-llvm-ptx backend for NVIDIA GPUs)

Docker

A docker container is provided with this package preinstalled (via stack) at /opt/accelerate-llvm. Note that if you wish to use the accelerate-llvm-ptx GPU backend, you will need to install the NVIDIA docker plugin; see that page for more information.

$ docker run -it tmcdonell/accelerate-llvm
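For GPU work, the container additionally needs access to the NVIDIA driver. Assuming the NVIDIA docker plugin's command-line wrapper is installed, that looks like:

```shell
$ nvidia-docker run -it tmcdonell/accelerate-llvm
```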

Installing LLVM

When installing LLVM, make sure that it includes the libLLVM shared library. If you want to use the GPU-targeting accelerate-llvm-ptx backend, make sure LLVM is installed (or built) with the ‘nvptx’ target.
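Both requirements can be checked with llvm-config (flag names as of LLVM 4.0's llvm-config):

```shell
$ llvm-config --shared-mode      # should report: shared
$ llvm-config --targets-built    # the list should include: NVPTX
```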

Homebrew

Example using Homebrew on macOS:

$ brew install llvm-hs/homebrew-llvm/llvm-4.0

Debian/Ubuntu

For Debian/Ubuntu based Linux distributions, the LLVM.org website provides binary distribution packages. Check apt.llvm.org for instructions for adding the correct package database for your OS version, and then:

$ apt-get install llvm-4.0-dev

Building from source

If your OS does not have an appropriate LLVM distribution available, you can also build from source. Detailed build instructions are available on the LLVM.org website. Note that you will need at least CMake 3.4.3 and a recent C++ compiler: at least Clang 3.1, GCC 4.8, or Visual Studio 2015 (Update 3).

  1. Download and unpack the LLVM-4.0 source code. We’ll refer to the path that the source tree was unpacked to as LLVM_SRC. Only the main LLVM source tree is required, but you can optionally add other components such as the Clang compiler or Polly loop optimiser. See the LLVM releases page for the complete list.

  2. Create a temporary build directory and cd into it, for example:

    $ mkdir /tmp/build
    $ cd /tmp/build
    
  3. Execute the following to configure the build. Here INSTALL_PREFIX is where LLVM is to be installed, for example /usr/local or $HOME/opt/llvm:

    $ cmake $LLVM_SRC -DCMAKE_INSTALL_PREFIX=$INSTALL_PREFIX -DCMAKE_BUILD_TYPE=Release -DLLVM_ENABLE_ASSERTIONS=ON -DLLVM_BUILD_LLVM_DYLIB=ON -DLLVM_LINK_LLVM_DYLIB=ON
    

    See options and variables for a list of additional build parameters you can specify.

  4. Build and install:

    $ cmake --build .
    $ cmake --build . --target install
    
  5. For macOS only, some additional steps are useful to work around issues related to System Integrity Protection:

    cd $INSTALL_PREFIX/lib
    ln -s libLLVM.dylib libLLVM-4.0.dylib
    install_name_tool -id $PWD/libLTO.dylib libLTO.dylib
    install_name_tool -id $PWD/libLLVM.dylib libLLVM.dylib
    install_name_tool -change '@rpath/libLLVM.dylib' $PWD/libLLVM.dylib libLTO.dylib
    

Installing Accelerate-LLVM

Once the dependencies are installed, we are ready to install accelerate-llvm.

For example, installation using stack just requires you to point it to the appropriate configuration file:

$ ln -s stack-8.0.yaml stack.yaml
$ stack setup
$ stack install

Note that the version of llvm-hs used must match the installed version of LLVM, which is currently 4.0.
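As a quick sanity check before building, confirm that the LLVM on your PATH is the version llvm-hs expects:

```shell
$ llvm-config --version    # expect 4.0.x for this release
```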

libNVVM

The accelerate-llvm-ptx backend can optionally be compiled to generate GPU code using the libNVVM library, rather than LLVM’s inbuilt NVPTX code generator. libNVVM is a closed-source library distributed as part of the NVIDIA CUDA toolkit, and is what the nvcc compiler itself uses internally when compiling CUDA C code.

Using libNVVM may improve GPU performance compared to the code generator built in to LLVM. One difficulty, however, is that libNVVM is itself based on LLVM and typically lags it by several releases, so you must build accelerate-llvm against a “compatible” version of LLVM, which depends on the version of the CUDA toolkit you have installed. The following table shows some combinations:

             LLVM-3.3   LLVM-3.4   LLVM-3.5   LLVM-3.8   LLVM-3.9   LLVM-4.0
  CUDA-7.0
  CUDA-7.5
  CUDA-8.0

Where ⭕ = Works, and ❌ = Does not work.

Note that the above restrictions on CUDA and LLVM version exist only if you want to use the NVVM component. Otherwise, you should be free to use any combination of CUDA and LLVM.

Also note that accelerate-llvm-ptx itself currently requires at least LLVM-3.5.

Using stack, either edit stack.yaml and add the following section:

flags:
  accelerate-llvm-ptx:
    nvvm: true

Or install using the following option on the command line:

$ stack install accelerate-llvm-ptx --flag accelerate-llvm-ptx:nvvm

If installing via cabal:

$ cabal install accelerate-llvm-ptx -fnvvm

Changes

Change Log

Notable changes to the project will be documented in this file.

The format is based on Keep a Changelog and the project adheres to the Haskell Package Versioning Policy (PVP).

1.1.0.1 - 2018-01-08

Fixed

  • add support for building with CUDA-9.x

1.1.0.0 - 2017-09-21

Added

  • support for GHC-8.2
  • caching of compilation results (accelerate-llvm#17)
  • support for ahead-of-time compilation (runQ and runQAsync)

Changed

  • generalise run1* to polyvariadic runN*

Fixed

  • Fixed synchronisation bug in multidimensional reduction

1.0.0.1 - 2017-05-25

Fixed

  • #386 (partial fix)

1.0.0.0 - 2017-03-31

  • initial release