Installation

Before installing the Haskell bindings you need to install the BLAS and LAPACK packages. Please note that, in addition to the reference implementation in FORTRAN 77, there are alternative optimized implementations like OpenBLAS, ATLAS, and Intel MKL.

Debian, Ubuntu

sudo apt-get install libblas-dev liblapack-dev

MacOS

You may install pkgconfig and LAPACK via MacPorts:

sudo port install pkgconfig lapack

However, the pkg-config files for LAPACK seem to be installed in a non-standard location. You must make them visible to pkg-config by running:

export PKG_CONFIG_PATH=/opt/local/lib/lapack/pkgconfig:$PKG_CONFIG_PATH

You may set this search path permanently by adding the above command line to your $HOME/.profile file.

Alternatively, a solution that works for all users of your machine is to create symbolic links:

sudo ln -s /opt/local/lib/lapack/pkgconfig/blas.pc /opt/local/lib/pkgconfig/blas.pc
sudo ln -s /opt/local/lib/lapack/pkgconfig/lapack.pc /opt/local/lib/pkgconfig/lapack.pc

Introduction

Here is a small example for constructing and formatting matrices:

Prelude> import qualified Numeric.LAPACK.Matrix as Matrix
Prelude Matrix> import Numeric.LAPACK.Format as Fmt ((##))
Prelude Matrix Fmt> let a = Matrix.fromList (Matrix.shapeInt 3) (Matrix.shapeInt 4) [(0::Float)..]
Prelude Matrix Fmt> a ## "%.4f"
 0.0000 1.0000  2.0000  3.0000
 4.0000 5.0000  6.0000  7.0000
 8.0000 9.0000 10.0000 11.0000
Prelude Matrix Fmt> import qualified Numeric.LAPACK.Matrix.Shape as MatrixShape
Prelude Matrix Fmt MatrixShape> import qualified Numeric.LAPACK.Matrix.Triangular as Triangular
Prelude Matrix Fmt MatrixShape Triangular> let u = Triangular.upperFromList MatrixShape.RowMajor (Matrix.shapeInt 4) [(0::Float)..]
Prelude Matrix Fmt MatrixShape Triangular> (u, Triangular.transpose u) ## "%.4f"
 0.0000 1.0000 2.0000 3.0000
        4.0000 5.0000 6.0000
               7.0000 8.0000
                      9.0000

 0.0000
 1.0000 4.0000
 2.0000 5.0000 7.0000
 3.0000 6.0000 8.0000 9.0000

You may find a more complex introductory example at: http://code.henning-thielemann.de/bob2019/main.pdf

Formatting

We do not try to do fancy formatting for the Show instance. The Show instances of matrices emit plain valid Haskell code in one line, where all numbers are printed in full precision. This kind of formatting works best when matrices are part of larger Haskell data structures. For human-friendly formatting in GHCi you need to add something like ## "%.4f" after a matrix or vector expression. It means: print all numbers in fixed-point representation using four digits for the fractional part. You can use the formatting placeholders provided by printf. The matrices also have Hyper instances for easy use in HyperHaskell.
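
For instance, with the matrix a and the imports from the beginning of the Introduction in scope, this minimal sketch contrasts the two kinds of output (the Show output itself is not reproduced here; the format strings are ordinary printf placeholders):

Prelude Matrix Fmt> print a        -- one line of valid Haskell code, full precision
Prelude Matrix Fmt> a ## "%.2f"    -- fixed point with two fractional digits
Prelude Matrix Fmt> a ## "%10.3e"  -- scientific notation, padded to ten characters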

Formatting is based on the Output type class that currently supports output as Text boxes for GHCi and HTML for HyperHaskell.

Matrix vs. Vector

Vectors are Storable.Arrays from the comfort-array package. An array can have a fancy shape, like a shape defined by an enumeration type, the shape of two appended arrays, a shape that is compatible with a Haskell container type, or a rectangular or triangular shape.

All operations check dynamically whether corresponding shapes match structurally. E.g. a|||b === c|||d composes a matrix from four quadrants a, b, c, d. It is not enough that a|||b and c|||d have the same width; the widths of a and c, as well as those of b and d, must match. The type variables for shapes show which dimensions must be compatible. We recommend using type variables for the shapes as long as possible, because this increases type safety even if you eventually use only ShapeInt. If you use statically sized shapes you get static size checks.
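
Here is a minimal GHCi sketch of such a quadrant composition, with the quadrants named as in the text. It reuses Matrix.fromList and Matrix.shapeInt from the Introduction and assumes that the block operators (|||) and (===) can be imported from Numeric.LAPACK.Matrix:

Prelude Matrix Fmt> import Numeric.LAPACK.Matrix ((|||), (===))
Prelude Matrix Fmt> let a = Matrix.fromList (Matrix.shapeInt 2) (Matrix.shapeInt 2) [0..3::Float]
Prelude Matrix Fmt> let b = Matrix.fromList (Matrix.shapeInt 2) (Matrix.shapeInt 3) [0..5::Float]
Prelude Matrix Fmt> let c = Matrix.fromList (Matrix.shapeInt 1) (Matrix.shapeInt 2) [0..1::Float]
Prelude Matrix Fmt> let d = Matrix.fromList (Matrix.shapeInt 1) (Matrix.shapeInt 3) [0..2::Float]
Prelude Matrix Fmt> (a|||b === c|||d) ## "%.1f"   -- a 3x5 matrix; the widths of a and c match, as do those of b and d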

A matrix can have any of these shapes as height and as width. E.g. it is no problem to define a matrix that maps a triangular shaped array to a rectangular shaped one, and there are actually practical applications for such matrices. A matrix can be treated as a vector, but there are limitations. E.g. if you scale a Hermitian matrix by a complex factor it will in general no longer be Hermitian. Another problem: two equally sized rectangular matrices may differ in element order (row major vs. column major), so you cannot simply add them by adding the flattened arrays element-wise. Thus, if you want to perform vector operations on a matrix, the package requires you to “unpack” the matrix to a vector using Matrix.Array.toVector. This conversion is almost a no-op and preserves most of the shape information. The reverse operation is Matrix.Array.fromVector.
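
The following is a hedged sketch of such an unpack-operate-repack step, scaling all elements of a matrix a by 2. It assumes that the Matrix.Array alias above refers to a module like Numeric.LAPACK.Matrix.Array and that Numeric.LAPACK.Vector provides a scale function; treat both names as assumptions:

Prelude Matrix Fmt> import qualified Numeric.LAPACK.Matrix.Array as ArrMatrix
Prelude Matrix Fmt> import qualified Numeric.LAPACK.Vector as Vector
Prelude Matrix Fmt> -- unpack to a vector, scale element-wise, pack the result back into a matrix
Prelude Matrix Fmt> ArrMatrix.fromVector (Vector.scale 2 (ArrMatrix.toVector a)) ## "%.1f"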

There are more matrix types that are not based on a single array. E.g. we provide a symbolic inverse, a scaling matrix, and a permutation matrix. We also support arrays that represent the factors of a matrix factorization; you obtain these from LU and QR decompositions. You can extract the individual matrix factors, but you can also multiply the factors with other matrices or use the decompositions for solving simultaneous linear equations.
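
The intended workflow looks roughly like this sketch; the names LU, LU.fromMatrix, LU.solve and LU.extractL are purely illustrative placeholders, not the package's actual API, so consult the decomposition modules for the real names:

-- Illustrative placeholders only, not the real module or function names:
-- lu = LU.fromMatrix m      -- factorize a square matrix m once
-- x  = LU.solve lu b        -- reuse the stored factors to solve m * x = b
-- l  = LU.extractL lu       -- or extract an explicit factor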

Type tags for content constraints

Full matrices have additional type tags to distinguish between four cases of the size relation between the height and the width of a full matrix. In a matrix of type Matrix.Full vert horiz height width a, the type variables mean:

vert  horiz
Small Small - Square matrix   height==width
Big   Small - Tall matrix     size height >= size width
Small Big   - Wide matrix     size height <= size width
Big   Big   - General matrix  height and width arbitrary

The relations are defined using two type tags in order to support matrix transposition without hassle. Using Small Small for square matrices and Big Big for general matrices appears to be arbitrary, but is chosen such that altering Small to Big generalizes the size relation.
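
The effect can be sketched by the shape of a transposition function; this signature is paraphrased from the scheme above, not copied verbatim from the package:

-- Paraphrased sketch: transposing swaps the two tags together with height and width,
-- so a Tall matrix (Big Small) becomes a Wide one (Small Big), while Square and
-- General matrices keep their kind.
-- transpose :: Full vert horiz height width a -> Full horiz vert width height a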

Likewise, we use the Triangular matrix type to represent diagonal and symmetric matrices as well. For Matrix.Triangular lo diag up size a we have these cases:

lo    up
Empty Empty - Diagonal matrix
Empty Full  - Upper triangular matrix
Full  Empty - Lower triangular matrix
Full  Full  - Symmetric matrix

The diag type tag can be NonUnit or Unit. Unit can be used for matrices that always have a unit diagonal by construction. The property of a unit diagonal is preserved by some operations and enables some optimizations by LAPACK. E.g. solving with a unit triangular matrix does not require division and thus cannot fail due to division by zero. NonUnit is a bit of a misnomer: a NonUnit matrix can still have a unit diagonal, but in general it does not, and no optimizations take place.
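
Read as types, the matrix u from the Introduction and its transpose instantiate the table like this (a hedged reading; the diag variable is left open here):

--   u                      :: Triangular Empty diag Full size a   -- upper triangular
--   Triangular.transpose u :: Triangular Full  diag Empty size a  -- lower triangular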

Infix operators

The package provides fancy infix operators like #*| and \*#. They symbolize both the operands and the operation. E.g. in #*| the hash means Matrix, the star means Multiplication, and the bar means Column Vector.

Possible operations are:

  • a_*_b - a multiplied by b

  • a_/_b - a multiplied by inverse b

  • a_\_b - inverse a multiplied by b

Possible operands are:

  • # - a matrix that is generalized through a type class

  • ## - a full matrix

  • \ - a diagonal matrix represented by a Vector

  • - - a row vector

  • | - a column vector

  • . - a scalar

For multiplication of equally shaped matrices we also provide instances of Semigroup.<>.

Precedence of the operators is chosen analogously to plain * and /. Associativity is chosen such that the same operator can be applied multiple times without parentheses. Sometimes this means that you have to mix left- and right-associative operators, and thus you may still need parentheses.
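
Here is a small GHCi sketch of two of these operators, using the imports from the Introduction. It assumes that #*| and \*# can be imported from Numeric.LAPACK.Matrix and that Numeric.LAPACK.Vector provides a fromList that takes a shape and a list, analogous to Matrix.fromList; treat these names as assumptions:

Prelude Matrix Fmt> import Numeric.LAPACK.Matrix ((#*|), (\*#))
Prelude Matrix Fmt> import qualified Numeric.LAPACK.Vector as Vector
Prelude Matrix Fmt> let m = Matrix.fromList (Matrix.shapeInt 3) (Matrix.shapeInt 4) [(0::Float)..]
Prelude Matrix Fmt> let x = Vector.fromList (Matrix.shapeInt 4) [1,1,1,1::Float]
Prelude Matrix Fmt> let d = Vector.fromList (Matrix.shapeInt 3) [1,2,3::Float]
Prelude Matrix Fmt> (m #*| x) ## "%.1f"   -- matrix times column vector: the row sums of m
Prelude Matrix Fmt> (d \*# m) ## "%.1f"   -- diagonal matrix, given as a vector, scales the rows of m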

Changes

Change log for the lapack package

0.3.2

  • Orthogonal: project, affineKernelFromSpan, affineSpanFromKernel, leastSquaresConstraint, gaussMarkovLinearModel

  • Symmetric.fromHermitian, Hermitian.fromSymmetric

  • instance Monoid Matrix, especially mempty for matrices with static shapes.

  • Extent.Dimensions: turn from type family to data family

  • Start using doctest-extract for simple tests

0.3.1

  • Matrix.Symmetric: You can now import many functions for symmetric matrices from this module. This is more natural than importing them from Triangular.

0.3

  • Matrix data family

  • Matrix: ZeroInt -> ShapeInt, zeroInt -> shapeInt

  • Hermitian, BandedHermitian: covariance -> gramian

  • Square.eigensystem: Return left eigenvectors as rows of the last matrix. This is the adjoint of the definition in lapack-0.2, but it is consistent with the other eigenvalue and singular value decompositions.