Katip is a structured logging framework for Haskell.
Kâtip (pronounced kah-tip) is the Turkish word for scribe.
Structured: Logs are structured, meaning they can be individually tagged with key value data (JSON Objects). This helps you add critical details to log messages before you need them so that when you do, they are available. Katip exposes a typeclass for log payloads so that you can use rich, domain-specific Haskell types to add context that will be automatically merged in with existing log context.
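For example, a domain type can be carried as structured context by giving it `ToJSON`, `ToObject`, and `LogItem` instances (a sketch; the `UserSession` type and its fields are hypothetical):

```haskell
{-# LANGUAGE OverloadedStrings #-}
import Data.Aeson (ToJSON (..), object, (.=))
import Katip

-- Hypothetical domain type we want merged into the log context.
data UserSession = UserSession
  { sessionUser :: String
  , sessionIp   :: String
  }

instance ToJSON UserSession where
  toJSON s = object ["user" .= sessionUser s, "ip" .= sessionIp s]

-- ToObject's default method reuses the ToJSON instance above.
instance ToObject UserSession

-- Decide which keys are emitted at each verbosity level.
instance LogItem UserSession where
  payloadKeys V3 _ = AllKeys
  payloadKeys _  _ = SomeKeysSelected ["user"]
```

With these instances, `katipAddContext` on a `UserSession` value merges the selected keys into every message logged in that scope.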
Easy Integration: Katip was designed to be easily integrated into existing monads. By using typeclasses for logging facilities, individual subsystems and even libraries can easily add their own namespacing and context without any knowledge of their logging environment.
Practical Use: Katip comes with a set of convenience facilities built-in, so it can be used without much headache even in small projects.
- A Handle backend for logging to files in simple settings.
- An AnyLogPayload key-value type that makes it easy to log structured columns on the fly without having to define new data types.
- A monadic interface where the logging namespace can be obtained from the monad context.
- Multiple variants of the fundamental logging functions for optionally including fields and line-number information.
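A minimal end-to-end setup using the Handle backend might look like this (a sketch; the application name, environment, and messages are illustrative):

```haskell
{-# LANGUAGE OverloadedStrings #-}
import Control.Exception (bracket)
import Katip
import System.IO (stdout)

main :: IO ()
main = do
  -- A scribe that writes human-readable output to stdout,
  -- recording InfoS and above at verbosity V2.
  handleScribe <- mkHandleScribe ColorIfTerminal stdout (permitItem InfoS) V2
  let makeLogEnv =
        registerScribe "stdout" handleScribe defaultScribeSettings
          =<< initLogEnv "MyApp" "production"
  -- closeScribes flushes and finalizes all scribes on the way out.
  bracket makeLogEnv closeScribes $ \le ->
    runKatipContextT le (mempty :: LogContexts) "main" $ do
      logFM InfoS "Application started"
      katipAddNamespace "db" $
        logFM WarningS "Connection pool running low"
```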
Extensible: Can be easily extended (even at runtime) to output to multiple backends at once (known as scribes). See katip-elasticsearch as an example. Backends for other forms of storage are trivial to write, including both hosted database systems and SaaS logging providers.
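A scribe is just a record of a push action, a finalizer, and a permit predicate, so a new backend is only a few lines (a sketch against katip's `Scribe` type; the stderr target and JSON formatting choice are arbitrary):

```haskell
import Katip
import System.IO (hFlush, hPutStrLn, stderr)

-- A toy scribe that writes one JSON value per item to stderr.
stderrScribe :: Verbosity -> Scribe
stderrScribe verb = Scribe
  { -- Called once per log item; itemJson renders it at the given verbosity.
    liPush = \item -> hPutStrLn stderr (show (itemJson verb item))
    -- Called when the scribe is shut down.
  , scribeFinalizer = hFlush stderr
    -- Accept everything DebugS and above.
  , scribePermitItem = permitItem DebugS
  }
```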
Debug-Friendly: Critical details for monitoring production systems such as host, PID, thread id, module and line location are automatically captured. User-specified attributes such as environment (e.g. Production, Test, Dev) and system name are also captured.
Configurable: Logging can be adjusted on a per-scribe basis, both for verbosity and for severity.
Verbosity dictates how much of the log structure should actually get logged. In code you can capture highly detailed metadata and decide how much of that gets emitted to each backend.
Severity, a.k.a. "log level", is specified with each message, and individual scribes can decide whether or not to record that severity. Scribes can even be swapped out and replaced at runtime, letting you switch on verbose debug logging on a live system if you want.
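Per-scribe severity and verbosity are chosen when each scribe is constructed and registered, e.g. a chatty file log alongside a quiet console log (a sketch; the file name and thresholds are arbitrary):

```haskell
{-# LANGUAGE OverloadedStrings #-}
import Katip
import System.IO (stdout)

mkEnv :: IO LogEnv
mkEnv = do
  -- Everything from DebugS up, with full payload detail (V3), to a file.
  fileScribe <- mkFileScribe "app.log" (permitItem DebugS) V3
  -- Only WarningS and up, with minimal payload detail (V0), to the console.
  consoleScribe <- mkHandleScribe ColorIfTerminal stdout (permitItem WarningS) V0
  initLogEnv "MyApp" "production"
    >>= registerScribe "file" fileScribe defaultScribeSettings
    >>= registerScribe "console" consoleScribe defaultScribeSettings
```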
Battle-Tested: Katip has been integrated into several production systems since 2015 and has logged hundreds of millions of messages to files and ElasticSearch.
Be sure to look in the examples directory for some examples of how to integrate Katip into your own stack.
- Loosen deps on aeson.
- Fix build on Windows.
- Add some missing test files.
- Fix some example code that wasn't building
- Make FromJSON instance for Severity case insensitive.
- Add support for aeson 1.0.x
- Add Katip.Format.Time module and use much more efficient time formatting code in the Handle scribe.
- Switch from a `ToJSON` superclass requirement to `ToObject`. `ToObject` provides a default instance for types with a `ToJSON` instance. This gets us to the same place as before without having to add a broader instance for something that's only going to show up in logs as an Object.
- Add a simple MVar lock for file handle scribes to avoid interleaved log lines from concurrent inputs.
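The idea behind the lock can be sketched with a plain `MVar` guarding a shared `Handle` (illustrative only, not katip's actual code):

```haskell
import Control.Concurrent.MVar (MVar, newMVar, withMVar)
import System.IO (Handle, IOMode (WriteMode), hPutStrLn, withFile)

-- Hold the lock for the duration of each write so that lines from
-- concurrent threads cannot interleave mid-line.
lockedLogLine :: MVar () -> Handle -> String -> IO ()
lockedLogLine lock h msg = withMVar lock $ \() -> hPutStrLn h msg

newLogLock :: IO (MVar ())
newLogLock = newMVar ()
```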
- Add GHC implicit callstack support, add logLoc.
- Drop lens in favor of type-compatible, lighter microlens.
- Fixed nested objects not rendering in Handle scribe.
- LogContexts Monoid instance is now right-biased rather than left-biased. This better fits the use case: `ctx1 <> ctx2` will prefer keys in `ctx2` if there are conflicts. This makes the most sense because functions that add context `mappend` on the right side.
- LogContexts internally uses a `Seq` instead of a list for better complexity when adding context.
- Improved documentation.
- Set upper bounds for a few dependencies.
- Add ExceptT instance for Katip typeclass
- Initial release