hasql
Fast PostgreSQL driver with a flexible mapping API
https://github.com/nikita-volkov/hasql
| LTS Haskell 24.38: | 1.9.3.1 |
| Stackage Nightly 2026-04-26: | 1.10.3 |
| Latest on Hackage: | 1.10.3 |
Hasql
A PostgreSQL driver for Haskell that prioritizes:
- Performance
- Type safety
- Flexibility
Status
Hasql is production-ready and actively maintained, and its API is moderately stable. It is used by many companies, most notably by the PostgREST project.
Ecosystem
Hasql is not just a single library; it is a granular ecosystem of composable libraries, each isolated to perform its own task and kept simple.
- “hasql” - the root of the ecosystem, which provides the essential abstraction over the PostgreSQL client functionality and the mapping of values. Everything else revolves around this library.
- “hasql-transaction” - an STM-inspired composable abstraction over database transactions, providing automated conflict resolution.
- “hasql-pool” - a Hasql-specialized abstraction over the connection pool.
- “hasql-postgresql-types” - integration with the “postgresql-types” library, which is a collection of Haskell types precisely modeling PostgreSQL types without data loss or compromise.
- “hasql-dynamic-statements” - a toolkit for generating statements based on the parameters.
- “hasql-th” - Template Haskell utilities, providing compile-time syntax checking and easy statement declaration.
- “hasql-cursor-query” - a declarative abstraction over cursors.
- “hasql-cursor-transaction” - a lower-level abstraction over cursors, which, however, allows fetching from multiple cursors simultaneously. Generally, though, “hasql-cursor-query” is the recommended alternative.
- “hasql-migration” - a port of postgresql-simple-migration for use with Hasql.
- “hasql-listen-notify” / “hasql-notifications” - support for PostgreSQL asynchronous notifications.
- “hasql-optparse-applicative” - “optparse-applicative” parsers for Hasql.
- “hasql-implicits” - implicit definitions, such as default codecs for standard types.
- “hasql-interpolate” - a QuasiQuoter that supports interpolating Haskell expressions into Hasql queries.
Want to list your package or correct something here? Make a PR.
Why make it an ecosystem?
- Focus. Each library in isolation provides a simple API focused on a specific task or a few related tasks.
- Flexibility. Users pick and choose the features they need, thus precisely matching the level of abstraction required for their task.
- Much more stable and descriptive semantic versioning. E.g., a change in the API of the “hasql-transaction” library won’t affect any of the other libraries, and it gives users more precise information about which part of their application they need to update to conform.
- Interchangeability and competition of the ecosystem components. E.g., not everyone will agree with the restrictive design decisions made in the “hasql-transaction” library. However, those decisions are not imposed on the user; instead of having endless debates about how to abstract over transactions, another extension library can simply be released, providing a different interpretation of what the abstraction over transactions should be.
- Horizontal scalability of the ecosystem. Instead of posting feature requests or pull requests, users are encouraged to release their own small extension libraries, becoming the copyright owners and taking on the maintenance responsibilities themselves. Compare this model to the classical one, where a core team is responsible for everything: one is scalable, the other is not.
Tutorials
Videos
There are several videos on Hasql, made as part of a nice intro-level series of live Haskell+Bazel coding by the “Ants Are Everywhere” YouTube channel.
Articles
Short Example
The following is a complete application that performs some arithmetic in Postgres using Hasql.
{-# LANGUAGE OverloadedStrings, QuasiQuotes #-}

import Data.Functor.Contravariant
import Data.Int
import Hasql.Session (Session)
import Prelude
import qualified Hasql.Connection as Connection
import qualified Hasql.Connection.Settings as Settings
import qualified Hasql.Decoders as Decoders
import qualified Hasql.Encoders as Encoders
import qualified Hasql.Session as Session
import qualified Hasql.Statement as Statement

main :: IO ()
main = do
  Right connection <- Connection.acquire connectionSettings
  result <- Connection.use connection (sumAndDivModSession 3 8 3)
  print result
  where
    connectionSettings =
      mconcat
        [ Settings.hostAndPort "localhost" 5432,
          Settings.user "postgres",
          Settings.password "postgres",
          Settings.dbname "postgres"
          -- Prepared statements are enabled by default.
          -- To disable them (e.g., for pgbouncer compatibility):
          -- Settings.disablePreparedStatements
        ]

-- * Sessions
--
-- Session abstracts over the execution of operations on a database connection.
-- It is composable and has a Monad instance.
-------------------------

sumAndDivModSession :: Int64 -> Int64 -> Int64 -> Session (Int64, Int64)
sumAndDivModSession a b c = do
  -- Get the sum of a and b
  sumOfAAndB <- Session.statement (a, b) sumStatement
  -- Divide the sum by c and get the modulo as well
  Session.statement (sumOfAAndB, c) divModStatement

-- * Statements
--
-- Statement is a definition of an individual SQL statement,
-- accompanied by a specification of how to encode its parameters and
-- decode its result.
-------------------------

-- | A statement with two integer parameters and an integer result.
sumStatement :: Statement.Statement (Int64, Int64) Int64
sumStatement = Statement.preparable sql encoder decoder
  where
    -- The SQL of the statement, with $1, $2, ... placeholders for parameters.
    sql =
      "select $1 + $2"
    -- Specification of how to encode the parameters of the statement,
    -- where the association with placeholders is achieved by order.
    encoder =
      mconcat
        [ -- Encoder of the first parameter as a non-nullable int8.
          -- It extracts the first element of the tuple using the contravariant
          -- functor instance.
          fst >$< Encoders.param (Encoders.nonNullable Encoders.int8),
          -- Encoder of the second parameter,
          -- which extracts the second element of the tuple.
          snd >$< Encoders.param (Encoders.nonNullable Encoders.int8)
        ]
    -- Specification of how to decode the result of the statement.
    -- States that we expect a single row with a single non-nullable int8 column.
    decoder =
      Decoders.singleRow
        (Decoders.column (Decoders.nonNullable Decoders.int8))

divModStatement :: Statement.Statement (Int64, Int64) (Int64, Int64)
divModStatement = Statement.preparable sql encoder decoder
  where
    sql =
      "select $1 / $2, $1 % $2"
    encoder =
      mconcat
        [ fst >$< Encoders.param (Encoders.nonNullable Encoders.int8),
          snd >$< Encoders.param (Encoders.nonNullable Encoders.int8)
        ]
    -- Decoder that expects a single row with two non-nullable int8 columns,
    -- returning the result as a tuple.
    -- Uses the applicative functor instance to combine two column decoders.
    decoder =
      Decoders.singleRow
        ( (,)
            <$> Decoders.column (Decoders.nonNullable Decoders.int8)
            <*> Decoders.column (Decoders.nonNullable Decoders.int8)
        )
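The `fst >$< ...` pattern in the encoders above is plain contravariant composition from base, not anything hasql-specific. A minimal, database-free sketch of the same idea, using `Op` from base (the `Enc`, `int`, and `pairEnc` names are illustrative, not part of hasql):

```haskell
import Data.Functor.Contravariant

-- A toy "encoder": a value-consuming function producing a String,
-- wrapped in Op so it gets the Contravariant and Monoid structure
-- analogous to hasql's parameter encoders.
type Enc a = Op String a

int :: Enc Int
int = Op show

comma :: Enc a
comma = Op (const ",")

-- Contramapping with fst/snd targets tuple components by position,
-- and (<>) runs the encoders in order and concatenates their output,
-- just like mconcat over hasql parameter encoders.
pairEnc :: Enc (Int, Int)
pairEnc = (fst >$< int) <> comma <> (snd >$< int)

main :: IO ()
main = putStrLn (getOp pairEnc (3, 8)) -- prints "3,8"
```

Because each component encoder only describes how to reach its part of the input, the same `int` encoder can be reused for any parameter position.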
For the general use case, it is advisable to declare statements using the “hasql-th” library, which validates the statements at compile time and generates the codecs automatically. The above two statements could thus be implemented the following way:
import qualified Hasql.TH as TH -- from "hasql-th"

sumStatement :: Statement.Statement (Int64, Int64) Int64
sumStatement =
  [TH.singletonStatement|
    select ($1 :: int8 + $2 :: int8) :: int8
    |]

divModStatement :: Statement.Statement (Int64, Int64) (Int64, Int64)
divModStatement =
  [TH.singletonStatement|
    select
      (($1 :: int8) / ($2 :: int8)) :: int8,
      (($1 :: int8) % ($2 :: int8)) :: int8
    |]
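The `(,) <$> column <*> column` decoder shape used for divModStatement is ordinary Applicative composition over columns consumed left to right. A toy, database-free sketch of that mechanism (the `Row`, `column`, and `divModRow` names are illustrative, not hasql's API):

```haskell
import Text.Read (readMaybe)

-- A toy row decoder: consumes textual columns left to right,
-- failing on a missing or unparsable column.
newtype Row a = Row {runRow :: [String] -> Maybe (a, [String])}

instance Functor Row where
  fmap f (Row g) = Row (fmap (\(a, rest) -> (f a, rest)) . g)

instance Applicative Row where
  pure a = Row (\cols -> Just (a, cols))
  Row f <*> Row g = Row $ \cols -> do
    (h, rest) <- f cols
    (a, rest') <- g rest
    Just (h a, rest')

-- Analogue of a single-column decoder: consume exactly one column.
column :: Read a => Row a
column = Row $ \cols -> case cols of
  c : rest -> (\a -> (a, rest)) <$> readMaybe c
  [] -> Nothing

-- Same shape as the divMod decoder: combine two columns into a tuple.
divModRow :: Row (Int, Int)
divModRow = (,) <$> column <*> column

main :: IO ()
main = print (fst <$> runRow divModRow ["3", "2"]) -- prints Just (3,2)
```

Each `column` advances past the column it consumed, which is why Applicative alone suffices: the order of columns is fixed by the statement, so no monadic branching on earlier values is needed (this is also why the Monad instance for the Row decoder was dropped in hasql 1.10).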
Discussions
Join GitHub Discussions to ask questions, provide feedback, suggest and vote on features, and help shape the future of Hasql.
Support Policy
This policy is intended to balance stability for users with the ability to evolve the library.
Each major release of Hasql is supported for at least one year from the date of its first release. During this period, fixes are backported to the latest minor version of that major release.
After the support period ends, the release may continue to work but is no longer guaranteed to receive fixes.
You’re welcome to post requests to change the policy, or to open issues if you believe something is not being addressed.
Changes
1.10
This was a major revision.
New Features
- OID by name resolution.
  Encoders and decoders now support resolving PostgreSQL type OIDs by their names at runtime. This enables working with custom types (enums, composite types, domains) without hardcoding OID values. The system includes an OID cache to optimize repeated lookups and automatically queries pg_type and related system catalogs when needed. This change affects array, composite, and value encoders/decoders throughout the codec system.
Decoder compatibility checks.
Previously decoders were silently accepting values of different types, if binary decoding did not fail. Now decoders check if the actual type of the column matches the expected type of the decoder and report
UnexpectedColumnTypeStatementErrorerror if they do not match. They also match the amount of columns in the result with the amount of columns expected by the decoder and report an error if they do not match. -
No resets on errors.
Previously when an async exception was raised during the execution of a session, the connection would get reestablished to recover from any possible half-finished states. That led to a loss of the connection-local state on the server side. Now the connection recovers without resetting.
- Redesigned connection configuration API.
  The connection settings API has been completely redesigned to be more composable and user-friendly. Settings are now represented as a monoid, allowing easy combination of multiple configuration options. The API now supports both URI and key-value connection string formats, with individual setters for common parameters like host, port, user, password, etc.
- Custom codec API.
  Added Hasql.Encoders.custom and Hasql.Decoders.custom functions, providing a low-level API for defining custom value encoders and decoders. These functions offer fine-grained control over OID resolution, allowing you to:
  - specify static OIDs when known at compile time;
  - automatically resolve OIDs at runtime by type name;
  - declare dependencies on other types needed for serialization/deserialization (e.g., field types in composite types);
  - implement custom binary encoding/decoding logic with access to resolved OIDs.
  This is particularly useful for advanced use cases like custom composite types with field validation or specialized binary formats.
Breaking changes
- Text instead of ByteString for textual data.
  The public API now uses Text instead of ByteString for SQL statements and error messages.
- Custom type mappings (enums and composite types) now require specifying names for the types being mapped.
  - This automatically identifies the types with the DB and enables deep compatibility checks.
- Decoder checks are now more strict and report UnexpectedColumnTypeStatementError when the actual type of a column does not match the expected type of the decoder. Previously, such mismatches were silently ignored and could lead to either autocasts or runtime errors in later stages.
  - E.g., an int4 column decoded with an int8 decoder will now report UnexpectedColumnTypeStatementError instead of silently accepting the value.
- Session now has exclusive access to the connection for its entire duration. Previously, it released and reacquired the lock on the connection between statements.
  - If you need the old behaviour, you can use ReaderT Connection (ExceptT SessionError IO).
- Dropped the MonadReader Connection instance for Session.
- Dropped the Monad and MonadFail instances for the Row decoder. Applicative is enough for all practical purposes.
- Errors model completely overhauled.
  - ConnectionError restructured and moved from the Hasql.Connection module to Hasql.Errors.
  - SessionError restructured and moved from the Hasql.Session module to Hasql.Errors.
- The usePreparedStatements setting was dropped. Use disablePreparedStatements instead.
- Hasql.Session.sql was renamed to Hasql.Session.script to better reflect its purpose.
Connection configuration API overhaul to improve UX.
Hasql.Connection.acquirenow takes a singleSettingsvalue instead of a list ofSettingvalues.- The
Hasql.Connection.Settingmodule has been replaced withHasql.Connection.Settings. - Settings are now constructed using flat monoid composition instead of hierarchical lists requiring multiple imports.
- Removed
Hasql.Connection.Setting.Connectionand related submodules.
- Custom value decoder signature changed.
  The Hasql.Decoders.custom function signature has been extended to support more explicit control over type resolution. It now requires:
  - an optional static OIDs parameter (previously implicit);
  - a list of additional type dependencies needed for decoding.
  The decoder function now also receives an OID lookup function as its first parameter.
  This change enables more robust custom type handling but requires updating existing custom decoder implementations.
- Exception instances on error types removed. The error types here were never thrown as exceptions. Wrap them in your own exception type if you need to throw them.
1.9
- Revised the settings construction exposing a tree of modules
- Added a global prepared statements setting
Why the changes?
To introduce the new global prepared statements setting and to make the settings API ready for extension without backward compatibility breakage.
Instructions on upgrading the 1.8 code
When an explicit connection string is used
Replace
Hasql.Connection.acquire connectionString
with
Hasql.Connection.acquire
  [ Hasql.Connection.Setting.connection (Hasql.Connection.Setting.Connection.string connectionString)
  ]
When a parametric connection string is used
Replace
Hasql.Connection.acquire (Hasql.Connection.settings host port user password dbname)
with
Hasql.Connection.acquire
  [ Hasql.Connection.Setting.connection
      ( Hasql.Connection.Setting.Connection.params
          [ Hasql.Connection.Setting.Connection.Param.host host,
            Hasql.Connection.Setting.Connection.Param.port port,
            Hasql.Connection.Setting.Connection.Param.user user,
            Hasql.Connection.Setting.Connection.Param.password password,
            Hasql.Connection.Setting.Connection.Param.dbname dbname
          ]
      )
  ]
1.8.1
- If an exception is thrown by user code from inside a Session, the connection status is checked to verify it is out of a transaction; unless it is, the connection gets reset.
1.8
- Move to “iproute” from “network-ip” for the “inet” datatype (#163).
1.7
- Decidable instance on Encoders.Params removed. It was useless and limited the design.
- QueryError type renamed to SessionError.
- PipelineError constructor added to the SessionError type.
1.6.3.1
- Moved to “postgresql-libpq-0.10”
1.6.3
- Added the unknownEnum encoder
1.6.2
- Added composite encoder
- Added oid and name encoders
1.6.1
- Added jsonLazyBytes and jsonbLazyBytes
1.6
- Added position to ServerError (breaking change).
- Disabled failure on empty query.
1.5
- Added column number to RowError (breaking change).
- Added MonadReader Connection instance for Session.