Hoogle Search
Within LTS Haskell 24.33 (ghc-9.10.3)
Note that Stackage only displays results for the latest LTS and Nightly snapshot.
sshowItemSamples :: ServerOptions -> Bool
LambdaHack Game.LambdaHack.Server
No documentation available.
debugShow :: Show a => a -> Text
LambdaHack Game.LambdaHack.Server.DebugM
No documentation available.
sshowItemSamples :: ServerOptions -> Bool
LambdaHack Game.LambdaHack.Server.ServerOptions
No documentation available.
defaultStyleViaShow :: (Show a, IsString s, Monoid s) => Style a s
algebraic-graphs Algebra.Graph.Export.Dot
Default style for exporting graphs with Show-able vertices. The vertexName field is computed using show; the other fields are set to trivial defaults.
defaultStyleViaShow = defaultStyle (fromString . show)
exportViaShow :: (IsString s, Monoid s, Ord (ToVertex g), Show (ToVertex g), ToGraph g) => g -> s
algebraic-graphs Algebra.Graph.Export.Dot
Export a graph using the defaultStyleViaShow. For example:

> putStrLn $ exportViaShow (1 + 2 * (3 + 4) :: Graph Int)
digraph
{
  "1"
  "2"
  "3"
  "4"
  "2" -> "3"
  "2" -> "4"
}
Show, plot and compare benchmark results

Generate text reports and graphical charts from the benchmark results generated by gauge or criterion and stored in a CSV file. This tool is especially useful when you have many benchmarks or if you want to compare benchmarks across multiple packages. You can generate many interesting reports, including:
- Show individual reports for all the fields measured, e.g. time taken, peak memory usage, and allocations, among many other fields measured by gauge
- Sort benchmark results on a specified criterion, e.g. you may want to see the biggest CPU hoggers or biggest memory hoggers on top
- Across two benchmark runs (e.g. before and after a change), show all the operations that resulted in a regression of more than x%, in descending order, so that you can quickly identify and fix performance problems in your application
- Across two (or more) packages providing similar functionality, show all the operations where the performance differs by more than 10%, so that you can critically analyze the packages and choose the right one
$ bench-show report results.csv
$ bench-show graph results.csv output
report "results.csv" Nothing defaultConfig
graph "results.csv" "output" defaultConfig
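Beyond defaultConfig, the Config record can be customized before passing it to report or graph. As a minimal sketch (assuming, per the package's documentation, that Config has a classifyBenchmark field of type String -> Maybe (String, String) mapping a raw benchmark name to a (group, benchmark) pair), benchmark names such as "pkgA/map" could be split into a group and a benchmark:

```haskell
import BenchShow (Config (..), defaultConfig, report)

-- Split a raw name such as "pkgA/map" into ("pkgA", "map").
-- Returning Nothing excludes a benchmark from the report.
classify :: String -> Maybe (String, String)
classify name =
  case break (== '/') name of
    (grp, '/' : bench) -> Just (grp, bench)
    _                  -> Nothing

main :: IO ()
main = report "results.csv" Nothing defaultConfig { classifyBenchmark = classify }
```

The field name and the exact shape of the classifier are taken from the description below; treat this as an illustrative sketch rather than a verified invocation.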
There are many ways to present the reports. For example, you can show the % regression from a baseline in descending order textually as follows:

(time)(Median)(Diff using min estimator)
Benchmark streamly(0)(μs)(base) streamly(1)(%)(-base)
--------- --------------------- ---------------------
zip       644.33                +23.28
map       653.36                +7.65
fold      639.96                -15.63
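A comparison like the one above could be requested with a config along these lines. This is a hedged sketch: the presentation field and the Groups / PercentDiff names are assumed from bench-show's documented Presentation and GroupStyle types, not confirmed here.

```haskell
import BenchShow (Config (..), GroupStyle (PercentDiff), Presentation (Groups), defaultConfig, report)

-- Treat each run in the CSV as a benchmark group and present the
-- later groups as % difference from the first (baseline) group.
main :: IO ()
main = report "results.csv" Nothing defaultConfig { presentation = Groups PercentDiff }
```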
The same comparison can also be shown graphically as a bar chart. See the README and the BenchShow.Tutorial module for comprehensive documentation.
BenchShow provides a DSL to quickly generate visual graphs or textual reports from a benchmarking results file (CSV) produced by gauge or criterion. Reports and graphs can be formatted and presented in many useful ways. For example, we can prepare a graphical bar chart or a column-wise textual report comparing the performance of two packages, or comparing the performance regression in a package caused by a particular change. Absolute or percentage differences between sets of benchmarks can be presented and sorted based on the difference. This allows us to easily identify the worst-affected benchmarks and fix them. The presentation is quite flexible, and a lot more interesting things can be done with it.
Generating Graphs and Reports
The input is a CSV file generated by gauge --csv=results.csv or a similar output generated by criterion. The graph or the report function is invoked on the file with an appropriate Config to control various parameters of graph or report generation. In most cases defaultConfig should just do the job and a specific config may not be required.

Fields, Groups and RunIds
In the documentation, when we say field we mean a benchmarking field, e.g. time or maxrss. When we say group we mean a group of benchmarks. An input file may have benchmark results collected from multiple runs. By default each run is designated as a single benchmark group with the group name default. Benchmark groups from different runs are distinguished using a runId, which is the index of the run in the file, starting with 0. Benchmarks can be classified into multiple groups using classifyBenchmark; benchmarks from each run can be divided into multiple groups. In a multi-run input, a benchmark group can be fully specified using the group name (either default or as classified by classifyBenchmark) and the runId.

Presentation
We can present the results in a textual format using report or as a graphical chart using graph. Each report consists of a number of benchmarks as rows; the columns can be either benchmarking fields or groups of benchmarks, depending on the Presentation setting. In a graphical chart, we present multiple clusters, each cluster representing one column from the textual report; the rows (i.e. the benchmarks) are represented as bars in the cluster.

When the columns are groups, each report presents the results for a single benchmarking field across the benchmark groups. Using GroupStyle, we can further specify how to present the results of the groups: we can either present the absolute values of the field for each group, or make the first group a baseline showing absolute values and present the difference from the baseline for the subsequent groups.

When the columns are fields, each report consists of results for a single benchmark group. Fields cannot be compared like groups because they are of different types and have different measurement units. The units in the report are automatically determined based on the minimum value in the range of values present. The ranges for fields can be overridden using fieldRanges.

Mean and Max
In a raw benchmark file (--csvraw=results.csv with gauge) we may have data for multiple iterations of each benchmark. BenchShow combines the results of all iterations depending on the field type: for example, if the field is time it takes the mean of all iterations, and if the field is maxrss it takes the maximum of all iterations.

Tutorial and Examples
See the tutorial module BenchShow.Tutorial for sample charts and a comprehensive guide to generating reports and graphs. See the test directory for many usage examples; run the tests to see the charts they generate.
bencode Data.BEncode
Render a BEncode structure to a B-coded string
c'GLFW_FOCUS_ON_SHOW :: Num a => a
bindings-GLFW Bindings.GLFW
No documentation available.

c'glfwShowWindow :: Ptr C'GLFWwindow -> IO ()
bindings-GLFW Bindings.GLFW
No documentation available.