Streaming data processing library.
conduit is a solution to the streaming data problem, allowing for production, transformation, and consumption of streams of data in constant memory. It is an alternative to lazy I/O that guarantees deterministic resource handling.
For more information about conduit in general, and how this package in particular fits into the ecosystem, see the conduit homepage.
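As a minimal sketch of the streaming model described above (using combinator names from the current `Conduit` module, such as `yieldMany`, `mapC`, and the `.|` fusion operator): even though the producer below is an infinite stream, the pipeline runs in constant memory because downstream demand drives it, and it terminates once `takeC` closes the stream.

```haskell
import Conduit

-- Sum the squares of the first five odd numbers.
-- Production (yieldMany), transformation (filterC, mapC, takeC),
-- and consumption (sumC) are fused with the .| operator.
main :: IO ()
main = do
  result <- runConduit $
       yieldMany [1 :: Int ..]  -- produce an infinite stream
    .| filterC odd             -- keep only odd elements
    .| mapC (^ 2)              -- transform each element
    .| takeC 5                 -- demand only five results
    .| sumC                    -- consume the stream
  print result                 -- 1 + 9 + 25 + 49 + 81 = 165
```

Because each stage only does work when the stage after it demands input, no intermediate list is ever materialized.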
ChangeLog for conduit
- Fix GHC 9.2 build #473
- Library and tests compile and run with GHC 9.0.1 #455
- More eagerly emit groups in
- Use lower-case imports (better for cross-compilation) #408
- Improve fusion framework rewrite rules
- Test suite compatibility with GHC 8.4.1 #358
- Drop monad-control and exceptions in favor of unliftio
- Drop mmorph dependency
- Deprecate old type synonyms and operators
- Drop finalizers from the library entirely
  - Much simpler
  - Fewer guarantees about prompt finalization
- No more
- Replace the
- Add the Data.Conduit.Combinators modules, stolen from
- Ensure downstream and inner sink receive same inputs in
the reskinning idea:
- Expose yieldM for ConduitM #270
- Fix test suite compilation on older GHCs
- In zipConduitApp, left bias not respected mixing monadic and non-monadic conduits #263
- Fix benchmark by adding a type signature
- Doc updates
- Canonicalise Monad instances #237
- mapAccum and mapAccumM should be strict in their state #218
- Some documentation improvements
fuse as synonyms for
1.2.2 Lots more stream fusion.
1.2 Two performance optimizations added. (1) A stream fusion framework. This is a non-breaking change. (2) Codensity transform applied to the ConduitM datatype. This only affects users importing the .Internal module. Both changes are thoroughly described in the following two blog posts: Speeding up conduit, and conduit stream fusion.
1.1 Refactoring into conduit and conduit-extra packages. Core functionality is now in conduit, whereas most common helper modules (including Text, Binary, Zlib, etc.) are in conduit-extra. To upgrade to this version, only import-list and cabal-file changes should be necessary.
1.0 Simplified the user-facing interface back to the Source, Sink, and Conduit types, with Producer and Consumer for generic code. Error messages have been simplified, and optional leftovers and upstream terminators have been removed from the external API. Some long-deprecated functions were finally removed.
0.5 The internals of the package are now separated to the .Internal module, leaving only the higher-level interface in the advertised API. Internally, switched to a Leftover constructor and slightly tweaked the finalization semantics.
0.4 Inspired by the design of the pipes package: we now have a single unified type underlying Conduit. This type is named Pipe. There are type synonyms provided for the other three types. Additionally, BufferedSource is no longer provided. Instead, the connect-and-resume operator, $$+, can be used for the same purpose.
0.3 ResourceT has been greatly simplified, specialized for IO, and moved into a separate package. Instead of hard-coding ResourceT into the conduit datatypes, they can now live around any monad. The Conduit datatype has been enhanced to better allow generation of streaming output. The SourceResult, SinkResult, and ConduitResult datatypes have been removed entirely.
0.2 Instead of storing state in mutable variables, we now use CPS. A Source returns the next Source, and likewise for Conduits. Not only does this take better advantage of GHC's optimizations (about a 20% speedup), but it allows some operations to have a reduction in algorithmic complexity from exponential to linear. This also allowed us to remove the Prepared set of types. Also, the State functions (e.g., sinkState) use better constructors for return types, avoiding the need for a dummy state on completion.
BufferedSource is now an abstract type, and has a much more efficient internal representation. The result was a 41% speedup on microbenchmarks (note: do not expect speedups anywhere near that in real usage). In general, we are moving towards BufferedSource being a specific tool used internally as needed, but using Source for all external APIs.
0.0 Initial release.