The `store` package provides efficient binary serialization. There are a couple
of features that particularly distinguish it from most prior Haskell
serialization libraries:
* Its primary goal is speed. By default, direct machine representations are
  used for things like numeric values (`Int`, `Double`, `Word32`, etc.) and
  buffers (`Text`, `ByteString`, `Vector`, etc.). This means that much of
  serialization uses the equivalent of `memcpy`.

  We have plans for supporting architecture-independent serialization - see
  #36 and #31. This plan makes little-endian encoding the default, so that
  the most common endianness has no overhead.

* Instead of implementing lazy serialization / deserialization involving
  multiple input / output buffers, `peek` and `poke` always work with a
  single buffer. This buffer is allocated by asking the value for its size
  before encoding. This simplifies the encoding logic and allows for highly
  optimized tight loops.

* `store` can optimize size computations by knowing when some types always
  use the same number of bytes. This allows us to compute the byte size of a
  `Vector Int32` by just doing `length v * 4`.
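The size-then-allocate-then-poke pattern described above can be sketched in plain Haskell using `base`'s `Foreign` machinery. This is an illustration of the design, not store's actual `Peek`/`Poke` internals; `byteSize` and `roundTrip` are hypothetical names.

```haskell
import Foreign.Marshal.Alloc (allocaBytes)
import Foreign.Storable (peekByteOff, pokeByteOff, sizeOf)
import Data.Int (Int32)

-- For a fixed-size element type, the total byte size is known up front:
-- a length header (one Int) plus n * 4 bytes of payload.
byteSize :: [Int32] -> Int
byteSize xs = sizeOf (undefined :: Int) + length xs * 4

-- Encode by asking for the size first, allocating a single buffer, and
-- poking directly into it -- no intermediate chunks. Then decode from
-- the same buffer. Note the unaligned byte offsets, which is why store
-- needs efficient unaligned memory access (see the limitations section).
roundTrip :: [Int32] -> IO [Int32]
roundTrip xs = allocaBytes (byteSize xs) $ \buf -> do
  let header = sizeOf (undefined :: Int)
  pokeByteOff buf 0 (length xs)
  mapM_ (\(i, x) -> pokeByteOff buf (header + i * 4) x)
        (zip [0 ..] xs)
  n <- peekByteOff buf 0 :: IO Int
  mapM (\i -> peekByteOff buf (header + i * 4)) [0 .. n - 1]
```

Because the buffer size is computed before any bytes are written, the inner loop needs no bounds growth or buffer switching.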
It also features:

* Optimized serialization instances for many types from `base`, `vector`,
  `bytestring`, `text`, `containers`, `time`, `template-haskell`, and more.
* TH- and GHC Generics-based generation of `Store` instances for datatypes.
* TH generation of test cases.
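To illustrate how Generics-based instance generation works, here is a minimal sketch using a hypothetical `ByteCount` class rather than store's real `Store` class: an empty instance declaration is filled in by walking the datatype's generic representation.

```haskell
{-# LANGUAGE DefaultSignatures, DeriveGeneric, FlexibleContexts, TypeOperators #-}
import GHC.Generics

-- Hypothetical stand-in for the size-computation part of a Store class.
class ByteCount a where
  byteCount :: a -> Int
  default byteCount :: (Generic a, GByteCount (Rep a)) => a -> Int
  byteCount = gbyteCount . from

-- One instance per generic representation constructor.
class GByteCount f where
  gbyteCount :: f p -> Int

instance GByteCount U1 where
  gbyteCount _ = 0                       -- nullary constructor: no payload

instance (GByteCount f, GByteCount g) => GByteCount (f :*: g) where
  gbyteCount (x :*: y) = gbyteCount x + gbyteCount y

instance (GByteCount f, GByteCount g) => GByteCount (f :+: g) where
  gbyteCount (L1 x) = 1 + gbyteCount x   -- one byte for the constructor tag
  gbyteCount (R1 x) = 1 + gbyteCount x

instance GByteCount f => GByteCount (M1 i c f) where
  gbyteCount (M1 x) = gbyteCount x       -- skip metadata

instance ByteCount a => GByteCount (K1 i a) where
  gbyteCount (K1 x) = byteCount x        -- defer to the field's instance

instance ByteCount Int    where byteCount _ = 8
instance ByteCount Double where byteCount _ = 8

data Point = Point Int Double deriving Generic
instance ByteCount Point  -- body generated via Generics
```

The TH path produces equivalent instances at compile time, avoiding any reliance on the optimizer to eliminate the generic representation.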
## Architecture limitations
Store does not currently work at all on architectures which lack efficient
unaligned memory access (for example, older ARM processors). This is not a
fundamental limitation, but we do not currently require ARM or PowerPC support.
See #37 and
#47.
* Fixes a bug that could result in segfaults when reading corrupted data.
## 0.2.1.0

Release notes:

* Adds an experimental `Data.Store.Version` and deprecates
  `Data.Store.TypeHash`. The new functionality is similar to `TypeHash`,
  but there are far fewer false positives of hashes changing.

Other enhancements:

* Now exports types related to generics.
## 0.2.0.0

Release notes:

* Core functionality split into the `store-core` package.

Breaking changes:

* `combineSize'` renamed to `combineSizeWith`.
* Streaming support now prefixes each `Message` with a magic number, intended
  to detect misalignment of data frames. This is worth the overhead, because
  otherwise serialization errors could be more catastrophic - interpreting
  some bytes as a length tag and attempting to consume that many bytes from
  the source.
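The failure mode this guards against can be sketched with a toy wire format. The magic constant, one-byte length field, and `frame`/`unframe` names below are all hypothetical; store's actual `Message` encoding differs in detail.

```haskell
import Data.Word (Word8)

-- Arbitrary 4-byte marker for this sketch.
magic :: [Word8]
magic = [0x53, 0x54, 0x4f, 0x52]

-- Prefix each frame with the magic number and a one-byte length
-- (payloads limited to 255 bytes in this toy format).
frame :: [Word8] -> [Word8]
frame payload = magic ++ [fromIntegral (length payload)] ++ payload

-- Refuse to read a length tag unless the magic bytes line up first, so a
-- misaligned stream fails fast instead of treating arbitrary bytes as a
-- length and consuming that many bytes of garbage.
unframe :: [Word8] -> Maybe ([Word8], [Word8])
unframe bytes = case splitAt 4 bytes of
  (m, n : rest)
    | m == magic ->
        let len = fromIntegral n
            (payload, rest') = splitAt len rest
        in if length payload == len
             then Just (payload, rest')
             else Nothing
  _ -> Nothing
```

A stream that starts mid-frame fails the magic check immediately, rather than misreading a payload byte as a length tag.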