streamly

Beautiful Streaming, Concurrent and Reactive Composition

https://github.com/composewell/streamly

Version on this page: 0.4.1
LTS Haskell 22.14: 0.10.1
Stackage Nightly 2024-03-28: 0.10.1
Latest on Hackage: 0.10.1

BSD-3-Clause licensed by Harendra Kumar
Maintained by [email protected]
This version can be pinned in stack with: streamly-0.4.1@sha256:74346377de440731ece0956799dbf804fa614479d8ce25dfeca3039cf33c1cb7,16159

Module documentation for 0.4.1

  • Streamly
    • Streamly.Prelude
    • Streamly.Time
    • Streamly.Tutorial

Streamly

Streaming Concurrently

Streamly, short for streaming concurrently, provides monadic streams, with a simple API almost identical to standard lists, and built-in support for concurrency. Using stream-style combinators, streams can be generated, merged, chained, mapped, zipped, and consumed concurrently, providing a generalized high-level programming framework that unifies streaming and concurrency. Controlled concurrency allows even infinite streams to be evaluated concurrently. Concurrency is auto-scaled based on feedback from the stream consumer. The programmer does not have to be aware of threads, locking or synchronization to write scalable concurrent programs.

The basic streaming functionality of streamly is equivalent to that provided by streaming libraries like vector, streaming, pipes, and conduit. In addition to providing streaming functionality, streamly subsumes the functionality of list transformer libraries like pipes or list-t, and also the logic programming library logict. On the concurrency side, it subsumes the functionality of the async package. Because it supports streaming with concurrency we can write FRP applications similar in concept to Yampa or reflex.

Why use streamly?

  • Simplicity: A simple, list-like streaming API; if you know how to use lists then you know how to use streamly. This library is built with simplicity and ease of use as a design goal.
  • Concurrency: Simple, powerful, and scalable concurrency. Concurrency is built-in and non-intrusive; concurrent programs are written exactly the same way as non-concurrent ones.
  • Generality: Unifies functionality provided by several disparate packages (streaming, concurrency, list transformer, logic programming, reactive programming) in a concise API.
  • Performance: Streamly is designed for high performance. It employs stream fusion optimizations for the best possible performance. Serial performance is equivalent to the venerable vector library in most cases and even better in some. Concurrent performance is unbeatable. See streaming-benchmarks for a comparison of popular streaming libraries on micro-benchmarks.

For more details on the streaming library ecosystem and where streamly fits in, please see streaming libraries. Also, see the Comparison with Existing Packages section in the streamly tutorial.

For more information on streamly, see:

  • Streamly.Tutorial module in the haddock documentation for a detailed introduction
  • examples directory in the package for some simple practical examples

Streaming Pipelines

Unlike pipes or conduit and like vector and streaming, streamly composes stream data instead of stream processors (functions). A stream is just like a list and is explicitly passed around to functions that process the stream. Therefore, no special operator is needed to join stages in a streaming pipeline, just the standard function application ($) or reverse function application (&) operator is enough. Combinators are provided in Streamly.Prelude to transform or fold streams.

The following snippet provides a simple stream composition example that reads numbers from stdin, prints the squares of even numbers and exits if an even number greater than 9 is entered.

import Streamly
import qualified Streamly.Prelude as S
import Data.Function ((&))

main = runStream $
       S.repeatM getLine
     & fmap read
     & S.filter even
     & S.takeWhile (<= 9)
     & fmap (\x -> x * x)
     & S.mapM print

Concurrent Stream Generation

Monadic construction and generation functions such as consM, unfoldrM, replicateM, repeatM, iterateM and fromFoldableM work concurrently when used with an appropriate stream type combinator (e.g. asyncly, aheadly or parallely).

The following code finishes in 3 seconds (6 seconds when serial):

> let p n = threadDelay (n * 1000000) >> return n
> S.toList $ aheadly $ p 3 |: p 2 |: p 1 |: S.nil
[3,2,1]

> S.toList $ parallely $ p 3 |: p 2 |: p 1 |: S.nil
[1,2,3]

The following finishes in 10 seconds (100 seconds when serial):

runStream $ asyncly $ S.replicateM 10 $ p 10
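
Similarly, fromFoldableM builds a stream from a container of monadic actions and runs them concurrently when used with a concurrent stream type. A minimal GHCi sketch reusing p from above; with parallely the results should arrive in completion order:

> S.toList $ parallely $ S.fromFoldableM [p 3, p 2, p 1]
[1,2,3]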

Concurrent Streaming Pipelines

Use |& or |$ to apply stream processing functions concurrently. The following example prints a “hello” every second; if you use & instead of |& you will see that the delay doubles to 2 seconds because of serial application.

main = runStream $
      S.repeatM (threadDelay 1000000 >> return "hello")
   |& S.mapM (\x -> threadDelay 1000000 >> putStrLn x)
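
The same pipeline can also be written with |$, which is the flipped form of |&; the following is just a sketch restating the example above:

main = runStream $
      S.mapM (\x -> threadDelay 1000000 >> putStrLn x)
   |$ S.repeatM (threadDelay 1000000 >> return "hello")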

Mapping Concurrently

We can use the mapM or sequence functions to run monadic actions on a stream concurrently.

> let p n = threadDelay (n * 1000000) >> return n
> runStream $ aheadly $ S.mapM (\x -> p 1 >> print x) (serially $ S.repeatM (p 1))
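
S.sequence can be used in the same way to run a stream of actions concurrently. A minimal GHCi sketch reusing p from above; with aheadly all three actions should run concurrently while the results retain their original order:

> S.toList $ aheadly $ S.sequence $ S.fromFoldable [p 3, p 2, p 1]
[3,2,1]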

Serial and Concurrent Merging

Semigroup and Monoid instances can be used to fold streams serially or concurrently. In the following example we compose ten actions in the stream, each with a delay of 1 to 10 seconds, respectively. Since all the actions are concurrent we see one output printed every second:

import Streamly
import qualified Streamly.Prelude as S
import Control.Concurrent (threadDelay)

main = S.toList $ parallely $ foldMap delay [1..10]
 where delay n = S.yieldM $ threadDelay (n * 1000000) >> print n

Streams can be combined in many ways. We provide some examples below; see the tutorial for more ways. We use the following delay function in the examples to demonstrate the concurrency aspects:

import Streamly
import qualified Streamly.Prelude as S
import Control.Concurrent

delay n = S.yieldM $ do
    threadDelay (n * 1000000)
    tid <- myThreadId
    putStrLn (show tid ++ ": Delay " ++ show n)

Serial

main = runStream $ delay 3 <> delay 2 <> delay 1
ThreadId 36: Delay 3
ThreadId 36: Delay 2
ThreadId 36: Delay 1

Parallel

main = runStream . parallely $ delay 3 <> delay 2 <> delay 1
ThreadId 42: Delay 1
ThreadId 41: Delay 2
ThreadId 40: Delay 3

Nested Loops (aka List Transformer)

The monad instance composes like a list monad.

import Streamly
import qualified Streamly.Prelude as S

loops = do
    x <- S.fromFoldable [1,2]
    y <- S.fromFoldable [3,4]
    S.yieldM $ putStrLn $ show (x, y)

main = runStream loops
(1,3)
(1,4)
(2,3)
(2,4)

Concurrent Nested Loops

To run the above code with lookahead style concurrency, i.e. each iteration in the loop can run concurrently but the results are presented in the same order as serial execution:

main = runStream $ aheadly $ loops

To run it with depth first concurrency yielding results asynchronously in the same order as they become available (deep async composition):

main = runStream $ asyncly $ loops

To run it with breadth first concurrency, yielding results asynchronously (wide async composition):

main = runStream $ wAsyncly $ loops

The above streams provide lazy/demand-driven concurrency which is automatically scaled as per demand and is controlled/bounded so that it can be used on infinite streams. The following combinator provides strict, unbounded concurrency irrespective of demand:

main = runStream $ parallely $ loops

To run it serially but interleaving the outer and inner loop iterations (breadth first serial):

main = runStream $ wSerially $ loops

Magical Concurrency

Streams can perform semigroup (<>) and monadic bind (>>=) operations concurrently using combinators like asyncly and parallely. For example, to concurrently generate squares of a stream of numbers and then concurrently sum the square roots of all combinations of two streams:

import Streamly
import qualified Streamly.Prelude as S

main = do
    s <- S.sum $ asyncly $ do
        -- Each square is performed concurrently, (<>) is concurrent
        x2 <- foldMap (\x -> return $ x * x) [1..100]
        y2 <- foldMap (\y -> return $ y * y) [1..100]
        -- Each addition is performed concurrently, monadic bind is concurrent
        return $ sqrt (x2 + y2)
    print s

Of course, the actions running in parallel could be arbitrary IO actions. For example, to concurrently list the contents of a directory tree recursively:

import Path.IO (listDir, getCurrentDir)
import Streamly
import qualified Streamly.Prelude as S

main = runStream $ aheadly $ getCurrentDir >>= readdir
   where readdir d = do
            (dirs, files) <- S.yieldM $ listDir d
            S.yieldM $ mapM_ putStrLn $ map show files
            -- read the subdirs concurrently, (<>) is concurrent
            foldMap readdir dirs

In the above examples we do not think in terms of threads, locking or synchronization; rather, we think in terms of what can run in parallel, and the rest is taken care of automatically. When using aheadly the programmer does not have to worry about how many threads are to be created; they are adjusted automatically based on the demand of the consumer.

The concurrency facilities provided by streamly can be compared with OpenMP and Cilk, but with a more declarative style of expression.

Reactive Programming (FRP)

By virtue of integrating concurrency and streaming, streamly is also a foundation for first-class reactive programming. See AcidRain.hs for a console based FRP game example and CirclingSquare.hs for an SDL based animation example.
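
For a rough flavor of the idea, the following illustrative sketch (not taken from the package examples) concurrently merges two infinite event streams, a periodic tick and user input, and handles both in a single serial consumer:

import Streamly
import qualified Streamly.Prelude as S
import Control.Concurrent (threadDelay)

-- merge a 1-second tick stream with lines typed on stdin; parallely
-- evaluates both sources concurrently while the consumer prints events
-- serially as they arrive
main = runStream $ S.mapM putStrLn $ parallely $
       S.repeatM (threadDelay 1000000 >> return "tick")
    <> S.repeatM (fmap ("input: " ++) getLine)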

Performance

Streamly has best-in-class performance. Even though it generalizes streaming to concurrent composition, that does not mean it sacrifices non-concurrent performance. See streaming-benchmarks for a detailed performance comparison with regular streaming libraries and an explanation of the benchmarks. The following graphs show a summary: the first one measures how four pipeline stages in series perform, the second one measures the performance of individual stream operations; in both cases the stream processes a million elements:

[Benchmark charts: Composing Pipeline Stages; All Operations at a Glance]

Contributing

The code is available under the BSD-3 license on GitHub. Join the Gitter chat channel for discussions. You can find some of the todo items on the GitHub wiki. Please ask on the Gitter channel or contact the maintainer directly for more details on each item. All contributions are welcome!

This library was originally inspired by the transient package authored by Alberto G. Corona.

Changes

0.4.1

Bug Fixes

  • foldxM was not fully strict, fixed.

0.4.0

Breaking changes

  • Signatures of zipWithM and zipAsyncWithM have changed
  • Some functions in prelude now require an additional Monad constraint on the underlying type of the stream.

Deprecations

  • once has been deprecated and renamed to yieldM

Enhancements

  • Add concurrency control primitives maxThreads and maxBuffer.
  • When a stream with bounded concurrency is used with take, its concurrency is now limited by the number of elements demanded by take.
  • Significant performance improvements utilizing stream fusion optimizations.
  • Add yield to construct a singleton stream from a pure value
  • Add repeat to generate an infinite stream by repeating a pure value
  • Add fromList and fromListM to generate streams from lists, faster than fromFoldable and fromFoldableM
  • Add map as a synonym of fmap
  • Add scanlM', the monadic version of scanl'
  • Add takeWhileM and dropWhileM
  • Add filterM

0.3.0

Breaking changes

  • Some prelude functions, to which concurrency capability has been added, will now require a MonadAsync constraint.

Bug Fixes

  • Fixed a race due to which, in a rare case, we might block indefinitely on an MVar due to a lost wakeup.
  • Fixed an issue in adaptive concurrency. The issue caused us to stop creating more worker threads in some cases due to a race. This bug would not cause any functional issue but may reduce concurrency in some cases.

Enhancements

  • Added a concurrent lookahead stream type Ahead
  • Added fromFoldableM API that creates a stream from a container of monadic actions
  • Monadic stream generation functions consM, |:, unfoldrM, replicateM, repeatM, iterateM and fromFoldableM can now generate streams concurrently when used with concurrent stream types.
  • Monad transformation functions mapM and sequence can now map actions concurrently when used at appropriate stream types.
  • Added concurrent function application operators to run stages of a stream processing function application pipeline concurrently.
  • Added mapMaybe and mapMaybeM.

0.2.1

Bug Fixes

  • Fixed a bug that caused some transformation ops to return incorrect results when used with concurrent streams. The affected ops are take, filter, takeWhile, drop, dropWhile, and reverse.

0.2.0

Breaking changes

  • Changed the semantics of the Semigroup instance for InterleavedT, AsyncT and ParallelT. The new semantics are as follows:

    • For InterleavedT, <> operation interleaves two streams
    • For AsyncT, <> now concurrently merges two streams in a left biased manner using demand based concurrency.
    • For ParallelT, the <> operation now concurrently merges the two streams in a fairly parallel manner.

    To adapt to the new changes, replace <> with serial wherever it is used for stream types other than StreamT.

  • Remove the Alternative instance. To adapt to this change replace any usage of <|> with parallel and empty with nil.

  • Stream type now defaults to the SerialT type unless explicitly specified using a type combinator or a monomorphic type. This change reduces puzzling type errors for beginners. It includes the following two changes:

    • Change the type of all stream elimination functions to use SerialT instead of a polymorphic type. This makes sure that the stream type is always fixed at all exits.
    • Change the type combinators (e.g. parallely) to only fix the argument stream type and the output stream type remains polymorphic.

    Stream types may have to be changed or type combinators may have to be added or removed to adapt to this change.

  • Change the type of foldrM to make it consistent with foldrM in base.

  • async is renamed to mkAsync and async is now a new API with a different meaning.

  • ZipAsync is renamed to ZipAsyncM and ZipAsync is now ZipAsyncM specialized to the IO Monad.

  • Remove the MonadError instance as it was not working correctly for parallel compositions. Use MonadThrow instead for error propagation.

  • Remove Num/Fractional/Floating instances as they are not very useful. Use fmap and liftA2 instead.

Deprecations

  • Deprecate and rename the following symbols:
    • Streaming to IsStream
    • runStreaming to runStream
    • StreamT to SerialT
    • InterleavedT to WSerialT
    • ZipStream to ZipSerialM
    • ZipAsync to ZipAsyncM
    • interleaving to wSerially
    • zipping to zipSerially
    • zippingAsync to zipAsyncly
    • <=> to wSerial
    • <| to async
    • each to fromFoldable
    • scan to scanx
    • foldl to foldx
    • foldlM to foldxM
  • Deprecate the following symbols for future removal:
    • runStreamT
    • runInterleavedT
    • runAsyncT
    • runParallelT
    • runZipStream
    • runZipAsync

Enhancements

  • Add the following functions:
    • consM and |: operator to construct streams from monadic actions
    • once to create a singleton stream from a monadic action
    • repeatM to construct a stream by repeating a monadic action
    • scanl' strict left scan
    • foldl' strict left fold
    • foldlM' strict left fold with a monadic fold function
    • serial run two streams serially one after the other
    • async run two streams asynchronously
    • parallel run two streams in parallel (replaces <|>)
    • WAsyncT stream type for BFS version of AsyncT composition
  • Add simpler stream types that are specialized to the IO monad
  • Put a bound (1500) on the output buffer used for asynchronous tasks
  • Put a limit (1500) on the number of threads used for Async and WAsync types

0.1.2

Enhancements

  • Add iterate, iterateM stream operations

Bug Fixes

  • Fixed a bug that caused unexpected behavior when pure was used to inject values in Applicative composition of ZipStream and ZipAsync types.

0.1.1

Enhancements

  • Make cons right associative and provide an operator form .: for it
  • Add null, tail, reverse, replicateM, scan stream operations
  • Improve performance of some stream operations (foldl, dropWhile)

Bug Fixes

  • Fix the product operation. Earlier, it always returned 0 due to a bug
  • Fix the last operation, which returned Nothing for singleton streams

0.1.0

  • Initial release