csv-conduit

A flexible, fast, conduit-based CSV parser library for Haskell.

http://github.com/ozataman/csv-conduit

BSD-3-Clause licensed by Ozgun Ataman
Maintained by Ozgun Ataman

CSV Files and Haskell

CSV files are the de facto standard for data transfer in many settings, particularly when dealing with enterprise applications or disparate database systems.

While there are a number of csv libraries in Haskell, at the time of this project’s start, there wasn’t one that provided all of the following:

  • Full flexibility in quote characters, separators, input/output
  • Constant space operation
  • Robust parsing and error resiliency
  • Battle-tested reliability in real-world datasets
  • Fast operation
  • Convenient interface that supports a variety of use cases
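To make the first point concrete, here is a toy (and hypothetical) field splitter parameterized on both the separator and the quote character. This is only a sketch of the flexibility idea; it is not csv-conduit's actual parser, which is streaming and far more robust.

```haskell
-- A toy, non-streaming field splitter parameterized on separator and
-- quote character. splitFields is a hypothetical helper for illustration,
-- not part of the csv-conduit API.
splitFields :: Char -> Char -> String -> [String]
splitFields sep quote = go False []
  where
    go _ acc [] = [reverse acc]
    go inQ acc (c:cs)
      | c == quote          = go (not inQ) acc cs         -- toggle quoted state
      | c == sep && not inQ = reverse acc : go False [] cs -- field boundary
      | otherwise           = go inQ (c : acc) cs

main :: IO ()
main = do
  print (splitFields ',' '"' "a,\"b,c\",d")  -- ["a","b,c","d"]
  print (splitFields ';' '\'' "x;'y;z';w")   -- ["x","y;z","w"]
```

The same string splits differently under different settings, which is the behavior csv-conduit exposes through its CSVSettings type.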

Over time, other capable CSV packages such as cassava have emerged. The major benefits of this library remain:

  • Direct participation in the conduit ecosystem, which is now quite large, and all the benefits that come with it.
  • Flexibility in CSV format definition.
  • Resiliency to errors in the input data.

This package

csv-conduit is a conduit-based CSV parsing library that is easy to use, flexible, and fast. It leverages the conduit infrastructure to provide constant-space operation, which is critical in many real-world use cases.

For example, you can use http-conduit to download a CSV file from the internet and feed its source into intoCSV to convert the stream into Row values, processing the data as it arrives rather than downloading the entire file to disk first.

Author & Contributors

  • Ozgun Ataman (@ozataman)
  • Daniel Bergey (@bergey)
  • BJTerry (@BJTerry)
  • Mike Craig (@mkscrg)
  • Daniel Corson (@dancor)
  • Dmitry Dzhus (@dzhus)
  • Niklas Hambüchen (@nh2)
  • Facundo Domínguez (@facundominguez)

Introduction

  • The CSV typeclass implements the key operations.
  • CSV is parameterized on both a stream type and a target CSV row type.
  • There are two basic row types and they implement exactly the same operations, so you can choose the right one for the job at hand:
    • type MapRow t = Map t t
    • type Row t = [t]
  • You basically use the Conduits defined in this library to do the parsing from a CSV stream and rendering back into a CSV stream.
  • Use the full flexibility and modularity of conduits for sources and sinks.
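To make the two row types concrete, a header Row can be paired with a data Row to produce a MapRow. The sketch below reproduces the library's type synonyms locally and uses only Data.Map from containers; toMapRow is a hypothetical helper shown for illustration, not a csv-conduit function.

```haskell
import qualified Data.Map as Map
import Data.Map (Map)

-- The library's two row synonyms, reproduced here for illustration:
type Row t    = [t]
type MapRow t = Map t t

-- Pair a header Row with a data Row to get a MapRow, which is
-- conceptually what header-aware parsing does for each record.
toMapRow :: Ord t => Row t -> Row t -> MapRow t
toMapRow header fields = Map.fromList (zip header fields)

main :: IO ()
main = do
  let header = ["name", "age"] :: Row String
      record = ["Alice", "30"]
  print (Map.lookup "age" (toMapRow header record))  -- Just "30"
```

Row keeps fields positional; MapRow trades a little overhead for access by column name.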

Speed

While fast operation is of concern, I have so far cared more about correct operation and a flexible API. Please let me know if you notice any performance regressions or optimization opportunities.

Usage Examples

Example #1: Basics Using Convenience API

{-# LANGUAGE OverloadedStrings #-}

import Control.Monad.Trans.Resource (runResourceT)
import Data.Conduit
import Data.Conduit.Binary
import Data.Conduit.List as CL
import Data.CSV.Conduit
import Data.Text (Text)

-- Just reverse the columns
myProcessor :: Monad m => ConduitT (Row Text) (Row Text) m ()
myProcessor = CL.map reverse

test :: IO ()
test = runResourceT $
  transformCSV defCSVSettings
               (sourceFile "input.csv")
               myProcessor
               (sinkFile "output.csv")

Example #2: Basics Using Conduit API

{-# LANGUAGE OverloadedStrings #-}

import Control.Monad.Trans.Resource (runResourceT)
import Data.Conduit
import Data.Conduit.Binary
import Data.CSV.Conduit
import Data.Text (Text)

myProcessor :: Monad m => ConduitT (Row Text) (Row Text) m ()
myProcessor = awaitForever yield

-- Let's simply stream from a file, parse the CSV, reserialize it
-- and push back into another file.
test :: IO ()
test = runResourceT $ runConduit $
  sourceFile "test/BigFile.csv" .|
  intoCSV defCSVSettings .|
  myProcessor .|
  fromCSV defCSVSettings .|
  sinkFile "test/BigFileOut.csv"

Changes

0.7.3.0

  • Add ordered versions of named records for consistent, controllable header column ordering. PR 44
  • Add support for GHC 9.0.1

0.7.2.0

  • Remove some dependency upper bounds for forward compatibility.

0.7.1.0

  • Add MonadFail instance for Parser. PR 38

0.7.0.0

  • BREAKING: Switch from partial Monoid instance on Parser to total Semigroup instance.
  • Compatibility with GHC 8.4.x/base-4.11.1.0

0.6.8.1

  • Fix documentation mistake in FromNamedRecord/ToNamedRecord examples.

0.6.8

  • Haddock improvements
  • Fix inlining and specialization rules around formatDecimal
  • Updates to permit newest conduit/resourcet packages

0.6.7

  • Fix build for GHC 8.0.1