mpi-hs
MPI bindings for Haskell
https://github.com/eschnett/mpi-hs#readme
Version on this page: 0.7.2.0
LTS Haskell 22.37: 0.7.2.0
Stackage Nightly 2024-10-11: 0.7.3.0
Latest on Hackage: 0.7.3.0
- GitHub: Source code repository
- Hackage: Haskell package and documentation
- Stackage: Stackage snapshots
- Azure Pipelines: Build Status
Overview
MPI (the Message Passing Interface) is a widely used standard for distributed-memory programming on HPC (High Performance Computing) systems. MPI allows exchanging data (messages) between programs running in parallel. There are several high-quality open-source MPI implementations (e.g. MPICH, MVAPICH, OpenMPI) as well as a variety of closed-source implementations. These libraries can typically make use of high-bandwidth, low-latency communication hardware such as InfiniBand.
This library, mpi-hs, provides Haskell bindings for MPI. It is based on ideas taken from haskell-mpi, Boost.MPI for C++, and MPI for Python.
mpi-hs provides two API levels: a low-level API gives rather direct access to the actual MPI API, apart from certain "reasonable" mappings from C to Haskell (e.g. output arguments that C returns via pointers become regular return values in Haskell). A high-level API simplifies exchanging arbitrary values that can be serialized.
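For example, where the C function MPI_Comm_rank stores the rank through a pointer argument, the corresponding low-level Haskell binding returns it directly (the Haskell signature below is the one used in the example that follows):

-- C:       int MPI_Comm_rank(MPI_Comm comm, int *rank);
-- Haskell: the rank is an ordinary return value
commRank :: Comm -> IO Rank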
Example
This is typical MPI C code:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char** argv) {
  MPI_Init(&argc, &argv);
  int rank, size;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);
  printf("This is process %d of %d\n", rank, size);
  int msg = rank;
  /* Broadcast the root's value to all processes */
  MPI_Bcast(&msg, 1, MPI_INT, 0, MPI_COMM_WORLD);
  printf("Process %d says hi\n", msg);
  MPI_Finalize();
  return 0;
}
The Haskell equivalent looks like this:
{-# LANGUAGE TypeApplications #-}

import qualified Control.Distributed.MPI as MPI
import Foreign
import Foreign.C.Types

main :: IO ()
main = do
  MPI.init
  rank <- MPI.commRank MPI.commWorld
  size <- MPI.commSize MPI.commWorld
  putStrLn $ "This is process " ++ show rank ++ " of " ++ show size
  let msg = MPI.fromRank rank
  -- Allocate a buffer holding a single CInt and store the message in it
  buf <- mallocForeignPtr @CInt
  withForeignPtr buf $ \ptr -> poke ptr msg
  MPI.bcast (buf, 1::Int) MPI.rootRank MPI.commWorld
  msg' <- withForeignPtr buf peek
  putStrLn $ "Process " ++ show msg' ++ " says hi"
  MPI.finalize
The high-level API simplifies exchanging data; there is no need to allocate a buffer or to use poke and peek:
{-# LANGUAGE TypeApplications #-}

import qualified Control.Distributed.MPI as MPI
import qualified Control.Distributed.MPI.Storable as MPI

main :: IO ()
main = MPI.mainMPI $ do
  rank <- MPI.commRank MPI.commWorld
  size <- MPI.commSize MPI.commWorld
  putStrLn $ "This is process " ++ show rank ++ " of " ++ show size
  let msg = MPI.fromRank rank :: Int
  msg' <- MPI.bcast (Just msg) MPI.rootRank MPI.commWorld
  putStrLn $ "Process " ++ show msg' ++ " says hi"
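Like the C program, these Haskell programs must be launched through mpiexec. Assuming the executable was built by stack under the name example1 (the name used for the bundled example; see "Running the example" below), it can be run on three processes like this:

mpiexec -n 3 stack exec example1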
Installing
mpi-hs requires an external MPI library to be available on the system. How to install such a library is beyond the scope of these instructions.

In some cases, the MPI header files, libraries, and executables will be installed in /usr/include, /usr/lib, and /usr/bin, respectively. In this case, no further configuration is necessary, and mpi-hs will build out of the box with stack build.
For convenience, this package offers Cabal flags to handle several common cases where the MPI library is not installed in a standard location:
- OpenMPI on Debian Linux (package libopenmpi-dev): use --flag mpi-hs:openmpi-debian
- OpenMPI on Ubuntu Linux (package libopenmpi-dev): use --flag mpi-hs:openmpi-ubuntu
- OpenMPI on macOS with MacPorts (package openmpi): use --flag mpi-hs:openmpi-macports
- MPICH on Debian Linux (package libmpich-dev): use --flag mpi-hs:mpich-debian
- MPICH on Ubuntu Linux (package libmpich-dev): use --flag mpi-hs:mpich-ubuntu
- MPICH on macOS with MacPorts (package mpich): use --flag mpi-hs:mpich-macports
For example, using the MPICH MPI implementation installed via MacPorts on macOS, you build mpi-hs like this:

stack build --flag mpi-hs:mpich-macports
In the general case, if your MPI library/operating system combination is not supported by a Cabal flag, you need to describe the location of your MPI library in stack.yaml. This might look like:
extra-include-dirs:
- /usr/lib/openmpi/include
extra-lib-dirs:
- /usr/lib/openmpi/lib
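If you are not sure which directories your MPI installation uses, the MPI compiler wrapper can usually tell you: MPICH's mpicc accepts -show and Open MPI's accepts --showme, both of which print the full compile and link command lines, including the include and library paths:

mpicc -show     # MPICH
mpicc --showme  # Open MPI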
To remove the default system paths from the search paths, disable the system-mpi flag, which is enabled by default: --flag mpi-hs:-system-mpi.
Testing the MPI installation with a C program
To test your MPI installation independently of using Haskell, copy the example MPI C code into a file mpi-example.c, and run these commands:
mpicc -c mpi-example.c
mpicc -o mpi-example mpi-example.o
mpiexec -n 3 ./mpi-example
All three commands must complete without error, and the last command must output something like
This is process 0 of 3
This is process 1 of 3
This is process 2 of 3
where the lines will be output in a random order. (The output might even be jumbled, i.e. the characters in these three lines might be mixed up.)
If these commands do not work, then this needs to be corrected before mpi-hs can work. If additional compiler options or libraries are needed, then these need to be added to the stack.yaml configuration file (for include and library paths; see extra-include-dirs and extra-lib-dirs there) or the package.yaml configuration file (for additional libraries; see extra-libraries there).
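For instance, a minimal sketch of such a package.yaml entry, assuming the MPI library is installed as libmpi.so and thus linked as mpi:

extra-libraries:
- mpi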
Examples and Tests
Running the example
To run the example provided in the src directory:
stack build
mpiexec stack exec version
mpiexec -n 3 stack exec example1
mpiexec -n 3 stack exec example2
Running the tests
There are two test cases provided in tests. The first (mpi-test) tests the low-level API, the second (mpi-test-storable) tests the high-level API:
stack build --test --no-run-tests
mpiexec -n 3 stack exec -- $(stack path --dist-dir)/build/mpi-test/mpi-test
mpiexec -n 3 stack exec -- $(stack path --dist-dir)/build/mpi-test-storable/mpi-test-storable
Related packages
There are three companion packages that provide the same high-level API via different serialization packages. These are separate packages to reduce the number of dependencies of mpi-hs:
- mpi-hs-binary, based on Data.Binary in the binary package
- mpi-hs-cereal, based on Data.Serialize in the cereal package
- mpi-hs-store, based on Data.Store in the store package
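Since these packages provide the same high-level API, switching serialization backends should amount to changing an import. A minimal sketch, assuming mpi-hs-binary exposes a module named Control.Distributed.MPI.Binary (check that package's documentation for the actual module name):

import qualified Control.Distributed.MPI as MPI
import qualified Control.Distributed.MPI.Binary as MPI  -- instead of Control.Distributed.MPI.Storable

main :: IO ()
main = MPI.mainMPI $ do
  rank <- MPI.commRank MPI.commWorld
  -- Any type with a Data.Binary instance can be broadcast
  let msg = MPI.fromRank rank :: Int
  msg' <- MPI.bcast (Just msg) MPI.rootRank MPI.commWorld
  putStrLn $ "Process " ++ show msg' ++ " says hi"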