Provides conduits to upload data to S3 using the Multipart API. https://github.com/Axman6/amazonka-s3-streaming#readme
|Version on this page:|0.2.0.3|
|LTS Haskell 9.21:|0.2.0.3|
|Stackage Nightly 2017-12-08:|0.2.0.3|
|Latest on Hackage:||220.127.116.11|
Provides a conduit-based streaming interface and a concurrent interface for uploading data to S3 using the Multipart API. Also provides methods to upload files or bytestrings of known size in parallel.
The documentation can be found on Hackage.
Changelog - amazonka-s3-streaming
- Make all library-generated messages use the Debug log level instead of Info.
- Relax the mmorph upper bound to < 1.2.
- Fixed a bug in the printf format strings that could lead to a crash (thanks @JakeOShannessy for reporting).
- Fixed a potential bug with very large uploads where the chunksize might be too small for the limit of 10,000 chunks per upload (#6).
- Change the API to allow the user to specify a chunk size for streaming when they know more about the data than the library does.
- Allow the user to specify how many concurrent threads to use for concurrentUpload, as well as the chunk size (#4).
- Better specify cabal dependency ranges.
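The chunk-size fix above (#6) can be illustrated with a small sketch. This is not the library's actual implementation; `chunkSizeFor` and `maxParts` are hypothetical names used only to show the idea: S3 multipart uploads are limited to 10,000 parts, so for very large uploads the chunk size must be grown beyond the requested minimum or the upload would run out of parts.

```haskell
-- Illustrative sketch only, not code from amazonka-s3-streaming.
-- S3 allows at most 10,000 parts per multipart upload, so the chunk
-- size must be at least ceil(totalSize / 10000) bytes.

maxParts :: Integer
maxParts = 10000

-- Hypothetical helper: honour the requested chunk size, but grow it
-- when needed so the whole upload fits within the part limit.
chunkSizeFor :: Integer  -- requested chunk size in bytes
             -> Integer  -- total upload size in bytes
             -> Integer
chunkSizeFor requested totalSize =
  max requested ((totalSize + maxParts - 1) `div` maxParts)  -- ceiling division
```

For example, a 100 GiB upload with a 6 MiB requested chunk would need roughly 17,067 parts, exceeding the limit; the helper instead raises the chunk size to about 10.24 MiB so the upload fits in 10,000 parts, while smaller uploads keep the requested chunk size unchanged.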