Fast binary serialization
Version on this page: 0.5.0.1
LTS Haskell 20.23: 0.7.16
Stackage Nightly 2023-05-30: 0.7.16
Latest on Hackage: 0.7.16

Module documentation for 0.5.0.1
The ‘store’ package provides efficient binary serialization. A couple of features particularly distinguish it from most prior Haskell serialization libraries:
Its primary goal is speed. By default, direct machine representations are used for things like numeric values (Word32, etc) and buffers (Vector, etc). This means that much of serialization uses the equivalent of memcpy.
We have plans for supporting architecture-independent serialization - see #36 and #31. This plan makes little-endian the default, so that the most common endianness has no overhead.
Instead of implementing lazy serialization / deserialization involving multiple input / output buffers, peek and poke always work with a single buffer. This buffer is allocated by asking the value for its size before encoding. This simplifies the encoding logic and allows for highly optimized tight loops.
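As a rough illustration of the single-buffer strategy - using only base, not the actual store API - the idea is to compute the total size up front, allocate once, and then poke each value at an increasing offset:

```haskell
-- Minimal sketch (base only, NOT the store library's implementation):
-- compute the size before encoding, allocate a single buffer, then
-- poke values at fixed offsets and read them back.
import Foreign.Marshal.Alloc (allocaBytes)
import Foreign.Storable (pokeByteOff, peekByteOff)
import Data.Word (Word32)

main :: IO ()
main = do
  let xs = [1, 2, 3] :: [Word32]
      sz = 4 * length xs          -- size known up front: 4 bytes per Word32
  allocaBytes sz $ \buf -> do
    -- encode: one poke per element, into the single preallocated buffer
    mapM_ (\(i, x) -> pokeByteOff buf (4 * i) x) (zip [0 ..] xs)
    -- decode: peek the same offsets back out
    ys <- mapM (\i -> peekByteOff buf (4 * i)) [0 .. length xs - 1]
    print (ys :: [Word32])
```

Because the buffer size is computed before any writing happens, the inner loop needs no bounds growth or buffer chaining, which is what enables the tight loops described above.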
store can optimize size computations by knowing when some types always use the same number of bytes. This allows us to compute the byte size of a Vector Int32 by just doing length v * 4.
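The constant-size optimization can be sketched with a toy version of the size type (hypothetical mini implementation; the real library's Size type also has ConstSize and VarSize constructors, but the 8-byte length prefix below is just an assumption for illustration):

```haskell
-- Toy sketch of constant-size vs variable-size byte-size computation.
-- When the element size is constant, the container's size is just
-- arithmetic on the length - no traversal of the elements is needed.
import Data.Int (Int32)

data Size a = ConstSize Int | VarSize (a -> Int)

sizeInt32 :: Size Int32
sizeInt32 = ConstSize 4           -- an Int32 always takes 4 bytes

listSize :: Size a -> Size [a]
listSize (ConstSize n) = VarSize (\xs -> 8 + n * length xs)   -- assumed 8-byte length prefix
listSize (VarSize f)   = VarSize (\xs -> 8 + sum (map f xs))  -- must visit every element

getSize :: Size a -> a -> Int
getSize (ConstSize n) _ = n
getSize (VarSize f)   x = f x

main :: IO ()
main = print (getSize (listSize sizeInt32) ([1, 2, 3] :: [Int32]))
```

The ConstSize branch is why a Vector Int32 can be sized as length v * 4 without touching the elements.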
It also features:
- Optimized serialization instances for many types from base, vector, bytestring, text, containers, time, template-haskell, and more.
- TH and GHC Generics based generation of Store instances for datatypes.
- TH generation of test cases.
- Utilities for streaming encoding / decoding of Store encoded messages, via the store-streaming package.
- Initial release announcement
- Benchmarks of the prototype
- New ‘weigh’ allocation benchmark package, created particularly to aid optimizing store.
- Updates to test-suite enabling store to build with newer dependencies.
Data.Store.Streaming moved to a separate package, store-streaming.
Buildable with GHC 8.2
Fix to haddock formatting of Data.Store.TH code example
- Fixed compilation on GHC 7.8
- Less aggressive inlining, resulting in faster compilation / simplifier not running out of ticks
- Fixed testsuite
Breaking change in the encoding of Map / Set / IntMap / IntSet, to use ascending key order. Attempting to decode data written by prior versions of store (and vice versa) will almost always fail with a decent error message. If you’re unlucky enough to have a collision in the data with a random Word32 magic number, then the error may not be so clear, or in extremely rare cases, decoding may succeed and yield incorrect results. See #97 and #101.
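The ascending key order comes naturally from containers: toAscList yields a canonical entry order regardless of insertion order, which is what makes the encoding deterministic. A small sketch (this is the ordering idea only, not the actual store encoder):

```haskell
-- Sketch: serializing Map entries in ascending key order gives a
-- canonical encoding - two Maps with equal contents produce the same
-- entry sequence no matter how they were built.
import qualified Data.Map as Map

main :: IO ()
main = do
  let m1 = Map.fromList [(3, 'c'), (1, 'a'), (2, 'b')]
      m2 = Map.fromList [(1, 'a'), (2, 'b'), (3, 'c')]
  print (Map.toAscList m1)
  -- Same contents, different insertion order, identical entry sequence:
  print (Map.toAscList m1 == Map.toAscList m2)
```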
Performance improvement of the ‘Peek’ monad, by introducing more strictness. This required a change to the internal API.
API and behavior of ‘Data.Store.Version’ changed. Previously, it would check the version tag after decoding the contents. It now also stores a magic Word32 tag at the beginning, so that it fails more gracefully when decoding input that lacks encoded version info.
Deprecated in favor of 0.4.1
Fix to derivation of primitive vectors, only relevant when built with primitive-0.6.2.0 or later
Removes INLINE pragmas on the generic default methods. This dramatically improves compilation time on recent GHC versions. See #91.
instance Contravariant Size
Uses store-core-0.3.*, which has support for alignment sensitive architectures.
Adds support for streaming decode from a file descriptor; not supported on Windows. As part of this addition, the API for “Data.Store.Streaming” has changed.
- Fixes a bug that could result in attempting to malloc a negative number of bytes when reading corrupted data.
- Fixes a bug that could result in segfaults when reading corrupted data.
- Adds experimental Data.Store.Version and deprecates Data.Store.TypeHash. The new functionality is similar to TypeHash, but there are far fewer false positives of hashes changing.
- Now exports types related to generics
- Core functionality split into the store-core package.
Streaming support now prefixes each Message with a magic number, intended to detect misalignment of data frames. This is worth the overhead, because otherwise serialization errors could be more catastrophic - interpreting some bytes as a length tag and attempting to consume many bytes from the source.
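The failure mode being guarded against can be sketched as a frame-header check: validate a fixed magic tag before trusting the length prefix. The magic value and Frame type below are hypothetical, for illustration only:

```haskell
-- Sketch of magic-number framing: reject a frame whose header doesn't
-- carry the expected tag, instead of blindly consuming `frameLen` bytes.
import Data.Word (Word32)

-- Hypothetical magic tag (not the value the store library uses).
magic :: Word32
magic = 0x53544F52

data Frame = Frame { frameMagic :: Word32, frameLen :: Int }

checkFrame :: Frame -> Either String Int
checkFrame (Frame m n)
  | m /= magic = Left "misaligned frame: bad magic number"
  | n < 0      = Left "corrupt frame: negative length"
  | otherwise  = Right n          -- safe to read n payload bytes

main :: IO ()
main = do
  print (checkFrame (Frame magic 42))
  print (checkFrame (Frame 0 42))
```

Without the tag, a desynchronized reader would interpret arbitrary payload bytes as a length and try to consume that many bytes from the source - exactly the catastrophic case described above.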
weigh-based allocations benchmark.
Streaming support now has checks for over/undershooting buffer
- First public release