Fast mutable collections for Haskell: more benchmarks

Today at HacPDX I did more work on the Judy bindings described yesterday. Here is judy 0.2.1 (a lunchtime release). These are bindings to the (in)famous judy arrays library, allowing Haskell keys and values to be stored in fast, mutable maps and hash structures. You can currently index via words (i.e. you have to hash keys yourself), and you can store value types, or pointers to Haskell values allocated with stable pointers.

Today, I started by filling out the API some more, and benchmarking how each API function scales. The functions I’m interested in are:

size   ::         JudyL a -> IO Int
insert :: JA a => Key -> a -> JudyL a -> IO ()
lookup :: JA a => Key      -> JudyL a -> IO (Maybe a)
delete ::         Key      -> JudyL a -> IO ()

Let’s see how the performance of these functions scales with input. Benchmarking is performed via criterion (newly released to the public!) with random input keys and values generated from the mersenne twister.
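For orientation, here is a minimal sketch of the interface in action. Since the judy binding itself may not be installed, a Data.IntMap wrapped in an IORef serves as a stand-in model with the same mutable shape (newModel, insertM and friends are hypothetical names for this sketch, not part of the binding):

```haskell
import Data.IORef
import qualified Data.IntMap as IM

-- A pure model of the mutable interface: an IntMap in an IORef.
-- This mirrors the JudyL API above, so the two can be benchmarked
-- and checked against each other.
type Model a = IORef (IM.IntMap a)

newModel :: IO (Model a)
newModel = newIORef IM.empty

insertM :: Int -> a -> Model a -> IO ()
insertM k v r = modifyIORef' r (IM.insert k v)

lookupM :: Int -> Model a -> IO (Maybe a)
lookupM k r = IM.lookup k <$> readIORef r

deleteM :: Int -> Model a -> IO ()
deleteM k r = modifyIORef' r (IM.delete k)

sizeM :: Model a -> IO Int
sizeM = fmap IM.size . readIORef

main :: IO ()
main = do
  m <- newModel
  mapM_ (\k -> insertM k (k * 2) m) [1 .. 1000]
  v <- lookupM 500 m
  n <- sizeM m
  print (v, n)  -- prints (Just 1000,1000)
```

The real binding has the same imperative flavour, with the JudyL structure living off the Haskell heap.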

Implementation methodology

My general strategy with constructing a high performance Haskell library is to:

  • Pick a candidate benchmark set.
  • Pick a good candidate data structure to make the benchmark faster.
  • Carefully implement the type, or the binding to the foreign type and its representation in Haskell.
  • Carefully abstract out the unsafeties in any low level interface via type or language features.
  • Function-by-function, check the optimized code GHC produces by eye (with ghc-core), looking in particular at inlining, specialization and unboxing.
  • Function-by-function, benchmark the absolute performance as input grows (with criterion), with reference to model structures.
  • Use QuickCheck to show correctness against a model, with HPC to confirm code coverage of the test.
  • Document extensively with haddock.
  • Collect ideas on how to generalize the interface (via type classes, more flexible types, representation-changing interfaces via type families).
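As a small illustration of the “QuickCheck against a model” step, here is a hand-rolled version of such a property (no QuickCheck dependency assumed), comparing Data.IntMap as the structure under test against a Data.Map model:

```haskell
import qualified Data.IntMap as IM
import qualified Data.Map as M

-- One operation on an associative array.
data Op = Ins Int Int | Del Int deriving Show

-- Apply an operation to the structure under test...
applyIM :: IM.IntMap Int -> Op -> IM.IntMap Int
applyIM m (Ins k v) = IM.insert k v m
applyIM m (Del k)   = IM.delete k m

-- ...and to the model.
applyM :: M.Map Int Int -> Op -> M.Map Int Int
applyM m (Ins k v) = M.insert k v m
applyM m (Del k)   = M.delete k m

-- The property: after any sequence of operations, the structure
-- under test agrees with the model.
agrees :: [Op] -> Bool
agrees ops = IM.toAscList (foldl applyIM IM.empty ops)
          == M.toAscList  (foldl applyM  M.empty  ops)

main :: IO ()
main = print (agrees [Ins 1 10, Ins 2 20, Del 1, Ins 2 30])  -- prints True
```

QuickCheck then generates the operation sequences for you, and HPC confirms the property actually exercised the code.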


The size function tells you how many keys are in the Judy array. It has O(1) complexity, as we see from empirical measurement, here with randomized arrays of Int keys and values. We get the same cost whether there are 100k, 1M or 10 million elements in the structure. size is fast and scales.


A common operation on associative arrays is lookup(). It had better be fast, and not degrade.


If we plot the mean lookup times against N, we see that lookup time grows slowly, taking twice as long when the data is 100 times bigger. (N.B. I forgot to change the titles on the above graphs.)


Delete has an almost identical profile to insert, unsurprisingly. The time to delete an element grows very slowly with N, and in absolute terms is very fast.




Very fast, scalable mutable maps and hashes for Haskell

I’m at HacPDX working on judy, a finite-map like interface to the classic judy arrays library, which provides fast and scalable mutable collection types for Haskell. While developing this library, where performance is the primary concern, I’m making heavy use of Bryan O’Sullivan’s new criterion benchmarking and statistics suite, announced recently at the Haskell Implementors Workshop.

Criterion is awesome. It is going to change how we design high performance libraries in Haskell. Read on to see why.

The Judy bindings for Haskell store Haskell keys and values in judy arrays, with resources tracked by a ForeignPtr on the Haskell heap. They provide a mutable, imperative collection type, similar to the old Data.HashTable (now slated for removal), but with an interface closer to Data.Map.
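A sketch of that resource-tracking pattern: the foreign structure’s root lives behind a ForeignPtr, so a finalizer can free it once the Haskell heap no longer references the map. Here mallocForeignPtr merely stands in for the real allocation; the actual binding would attach the judy library’s free routine as the finalizer:

```haskell
import Foreign.ForeignPtr (ForeignPtr, mallocForeignPtr, withForeignPtr)
import Foreign.Storable (peek, poke)

main :: IO ()
main = do
  -- In the real binding this would hold the JudyL root word,
  -- with the library's free function attached as a finalizer
  -- so the GC reclaims the foreign structure automatically.
  fp <- mallocForeignPtr :: IO (ForeignPtr Int)
  withForeignPtr fp $ \p -> poke p 42
  v  <- withForeignPtr fp $ \p -> peek p
  print v  -- prints 42
```

withForeignPtr keeps the pointer alive for the duration of each foreign call, which is exactly the guarantee a binding to a mutable C structure needs.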

The key benefit over previous hashtable implementations for Haskell is that the judy bindings scale very well, as we shall see. Also, unlike, say, Data.IntMap, judy arrays are mutable, making them less useful for some applications. They are not a solution for all container problems. However, if you need large scale collections, the judy binding might be appropriate.

The library is under active development, and currently approximates a mutable IntMap structure, with more work planned to add optimized hashes, type-family based representation switching, and more. It has a straightforward interface:

new    :: JA a => IO (JudyL a)
insert :: JA a => Key -> a -> JudyL a -> IO ()
lookup :: JA a => Key      -> JudyL a -> IO (Maybe a)
delete ::         Key      -> JudyL a -> IO ()

Let’s look at the performance profile.


Here we measure the cost for inserting 1k, 100k and 10M consecutive word-sized keys and values, over repeated runs. Criterion takes care of the measurement and rendering. First 1k values:


Above, we see the timings for 100 runs of inserting one thousand values into an empty judy array. The fastest times were around 0.22ms (220 microseconds) to build the table of 1000 values. The slowest was around 230 microseconds. A tight cluster.

Criterion can then compute the probability density curve, showing a good clumping of times.


Next we see that inserting one hundred thousand elements takes about 100x longer. That is, it scales linearly, as the docs suggest it should. We go from 1k elements in 0.2ms to 100k in 20ms.


Here we see a good clustering of values again. There’s very little variance. I can reliably insert 100k elements in 20ms.

Now the bigger test, 10 million elements. Again, the performance of the Judy bindings scales linearly with input size. 0.2ms for 1k, 20ms for 100k, ~2.2s for 10M, though there are two distinct performance bands at 10M elements.


The density function shows the performance clusters quite clearly. A peak around 2.2s and a broader peak around 2.4s.


Judy arrays scale very, very well. I was able to insert 100 million elements in 22s (10x slower again), using < 1G of memory. IntMap at equivalent N exhausted memory.
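The shape of one of these measurements can be sketched self-containedly. Criterion automates the repeated runs and the statistics; here Data.IntMap stands in for the judy binding, and System.CPUTime gives a single rough sample:

```haskell
import Data.List (foldl')
import System.CPUTime (getCPUTime)
import qualified Data.IntMap.Strict as IM
import Control.Monad (forM_)

-- Time one run of inserting n consecutive keys, in milliseconds.
-- Criterion does this many times and computes the distribution;
-- this is just the shape of a single sample.
timeInsert :: Int -> IO Double
timeInsert n = do
  t0 <- getCPUTime
  let m = foldl' (\acc k -> IM.insert k k acc) IM.empty [1 .. n]
  _  <- return $! IM.size m   -- force the structure before stopping the clock
  t1 <- getCPUTime
  return (fromIntegral (t1 - t0) / 1e9)  -- picoseconds -> milliseconds

main :: IO ()
main = forM_ [1000, 100000] $ \n -> do
  ms <- timeInsert n
  putStrLn (show n ++ " inserts: " ++ show ms ++ " ms")
```

With criterion, all of the looping, outlier analysis and density plotting above comes for free.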


The main problem with Data.HashTable is its reliance on the garbage collector staying out of the way. As the hash sizes grow, heap pressure becomes more of an issue, and the GC runs more often, swamping performance. However, for smaller workloads, and with decent default heap sizes, Data.HashTable outperforms the Judy bindings:


While at larger sizes Data.HashTable performance degrades, taking on average 5.6s to insert 10M elements (using +RTS -H1000M -A500M).

With default heap settings Data.HashTable has very poor performance, as GC time dominates the cost.

That is, with default heap settings Data.HashTable at N=10M is 20x slower than judy arrays, and with optimized heap settings it is still 2.5x slower than judy.

Data.IntMap and Data.Map

We can also measure the imperative lookup structures against the persistent, pure structures: Data.IntMap (a big-endian patricia tree) and Data.Map (a size-balanced tree).

Data.IntMap at N=10M, with default heap settings, takes on average 7.3s to insert 10M elements, or about 3.3x slower than judy arrays (and slower than the optimized HashTable).

Data.Map at N=10M, with default heap settings, takes on average 24.8s to insert 10M elements, or about 11x slower than judy arrays.


At small scale (under 1M elements), for simple atomic types being stored, there are a variety of container types available on Hackage which do the job well: IntMap is a good choice, as it is both flexible and fast. At scale, however, judy arrays seem to be the best thing we have at the moment, and make an excellent choice for associative arrays for large scale data. For very large N, they may be the only in-memory option.

You can get judy arrays for Haskell on Hackage now.

I’ll follow up soon with benchmarks for lookup and deletion, and how to generalize the interface.

Data.Binary: performance improvements for Haskell binary parsing

I’ve just uploaded a new version of Data.Binary to Hackage, which contains some good performance improvements for binary parser users.

Data.Binary is a popular serialization library for Haskell, which uses Get and Put monads to efficiently translate Haskell structures to and from lazy bytestrings (basically, byte streams) in a pure style. It is used by 85 other packages on Hackage, for everything from network packet parsing, trie container types, and network traffic analysis, to web server state persistence and high speed cryptography.

As a refresher, recall that Binary provides both a Get and Put monadic environment. The Put monad gives you a locally scoped “buffer filling” mode, where calls to ‘put’ implicitly append values to a buffer. A sequence of ‘puts’ is a pure computation that returns a bytestring at the end. Like so:

    runPut :: Put -> ByteString
        -- runPut takes a serializer code fragment,
        -- and uses it to fill a bytestring with bytes.

    Data.Binary> runPut (do put 2; put (Just 7); put "hello")

You can also stream these through the zlib or bzlib bindings to gain on-the-fly compression. Conversely, you can parse Haskell data from binary formats, using the ‘Get’ monad. Given a bytestring stream, parse it into some Haskell structure:

    runGet :: Get a -> ByteString -> a
       -- runGet takes a binary parser, a bytestring,
       -- and returns some 'a' parsed from the bytesting.

    Data.Binary.Get> let s = pack "\NUL\NUL\NUL\NUL\STX\SOH
    Data.Binary.Get> runGet (do a <- get
                                b <- get
                                c <- get
                                return (a :: Integer, b :: Maybe Integer, c :: String)) s
    (2,Just 7,"hello")

There are also default Get and Put parsers and serializers in the Binary type class, which provides ‘encode’ and ‘decode’ methods:

    encode :: Binary a => a -> ByteString
    decode :: Binary a => ByteString -> a
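Since binary ships with GHC, the round-trip law these defaults obey is easy to check: decode . encode is the identity, given a type annotation to pick the right parser. For example:

```haskell
import Data.Binary (encode, decode)

main :: IO ()
main = do
  let x = (2 :: Integer, Just (7 :: Integer), "hello")
  -- decode needs a result type annotation to select the parser
  print (decode (encode x) :: (Integer, Maybe Integer, String))
  -- prints (2,Just 7,"hello")
```

This is the same value the Get example above recovers by hand, one field at a time.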

Binary is used for heavy-duty binary wire protocol parsing and writing, so it needs to be very fast. It uses aggressive inlining, careful data representations, specialization, and some interesting monadic representation transformations to ensure GHC can optimize user-supplied parsers well. The changes, thanks to Simon Marlow, are two-fold, and improve the performance of reading into Haskell data structures by about 40%.

Firstly, the underlying Get monad representation is changed from:

    newtype Get a = Get { unGet :: S -> (a, S) }

to:

    newtype Get a = Get { unGet :: S -> (# a, S #) }

That is, we use an explicit stack/register-allocated unboxed tuple to return the parsed value from the parser. This allows GHC to optimize code containing repeated reads from the parser state more aggressively. (As an aside, we also looked at a continuation-based encoding of the Get monad, but there was no performance win, due to the very simple tuple return type, which GHC is already able to decompose nicely. Continuation encodings are more useful if we returned, say, an Either.)

Secondly, the monadic bind for the Get monad is now strict. Moving from an explicitly lazy:

    m >>= k   = Get (\s -> let (a, s') = unGet m s in unGet (k a) s')

to:

    m >>= k   = Get $ \s -> case unGet m s of
                             (# a, s' #) -> unGet (k a) s'

This also seems to improve the code GHC generates, coalescing repeated reads into straight-line code without intermediate tests.
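A boiled-down model of the technique, with a boxed pair and a strict case standing in for the real (# a, S #) unboxed tuple (which needs the UnboxedTuples extension), and a list of tokens standing in for the bytestring state:

```haskell
-- A miniature Get: a state function returning (result, rest).
-- The real Data.Binary returns an unboxed (# a, S #) pair; the
-- strict case in (>>=) below gives GHC the same opportunity to
-- compile sequences of reads into straight-line code.
newtype Get a = Get { unGet :: [Int] -> (a, [Int]) }

instance Functor Get where
  fmap f (Get g) = Get $ \s -> case g s of (a, s') -> (f a, s')

instance Applicative Get where
  pure a = Get $ \s -> (a, s)
  Get f <*> Get g = Get $ \s -> case f s of
    (h, s') -> case g s' of (a, s'') -> (h a, s'')

instance Monad Get where
  m >>= k = Get $ \s -> case unGet m s of
    (a, s') -> unGet (k a) s'   -- strict: the pair is forced here

-- Read one token from the input.
getWord :: Get Int
getWord = Get $ \(x:xs) -> (x, xs)

runGet' :: Get a -> [Int] -> a
runGet' g = fst . unGet g

main :: IO ()
main = print (runGet' ((,) <$> getWord <*> getWord) [1, 2, 3])  -- prints (1,2)
```

Replacing the lazy let in (>>=) with the case is precisely what stops GHC from building and later forcing a thunked pair per read.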

Overall, there is no user-visible change to Data.Binary, as only the monadic plumbing has been modified. Your code should just get faster. Parsing streams of Word64s on my laptop goes from ~100M/s to 140M/s.

Haskell for Everyone: Hackage and the Haskell Platform : Haskell Implementors Workshop 2009

“FP Super Fun Week”, aka ICFP + Haskell Symposium + CUFP + Haskell Implementors Workshop + DEFUN has come and gone.

Malcolm Wallace did an amazing job recording most of the sessions, so as a result we have the talk and discussion about Hackage and the Haskell Platform.

You can follow along with the slides here.

Haskell for Everyone: Hackage and the Haskell Platform


Improving Data Structures with Associated Types

Original .PDF

This talk describes how to view live heap structures in Haskell, including sharing and unboxing information, and then how to use insights about the representation of data to improve, for the first time, the performance and space use of uniformly polymorphic data types by changing their representation. We’ll use associated data types to guide regular optimizations of polymorphic structures.
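The core trick can be sketched with an associated data family: each element type chooses its own representation for what is, at the surface, a single polymorphic container. Pair here is a toy stand-in for the real container types in the library:

```haskell
{-# LANGUAGE TypeFamilies #-}

-- An associated data family lets each element type pick its own
-- representation: ordinary lazy fields for most types, but a flat,
-- strict, unpacked representation for known primitives.
class Adapt a where
  data Pair a
  mkPair :: a -> a -> Pair a
  fstP   :: Pair a -> a

-- Generic instance: ordinary lazy fields, uniform representation.
instance Adapt Bool where
  data Pair Bool = PB Bool Bool
  mkPair = PB
  fstP (PB a _) = a

-- Specialized instance: strict, unpacked Ints, no pointer chasing.
instance Adapt Int where
  data Pair Int = PI {-# UNPACK #-} !Int {-# UNPACK #-} !Int
  mkPair = PI
  fstP (PI a _) = a

main :: IO ()
main = print (fstP (mkPair (3 :: Int) 4))  -- prints 3
```

Client code writes against the one class interface, while the heap layout silently adapts to the element type, which is what the heap visualization in the talk makes visible.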

The library described in this talk is available on hackage, along with the visualization tool used to build it. There’s also a screencast of the visualization tool in use.

This talk was originally presented at WG2.8 in Frauenchiemsee, Germany in June 2009.

Stream Fusion for Haskell Arrays

PDF Version

Arrays have traditionally been an awkward data structure for Haskell programmers. Despite the large number of array libraries available, they have remained relatively awkward to use in comparison to the rich suite of purely functional data structures, such as finger trees or finite maps. Arrays have simply not been first-class citizens in the language.

In this talk I’ll begin with a survey of the more than a dozen array types available, including some new matrix libraries developed in the past year. I’ll then describe a new efficient, pure, and flexible array library for Haskell with a list like interface, based on work in the Data Parallel Haskell project, that employs stream fusion to dramatically reduce the cost of pure arrays. The implementation will be presented from the ground up, along with a discussion of the entire compilation process of the library, from source to assembly.
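The stream representation at the heart of this approach can be sketched in a few lines. The non-recursive step functions are what allow GHC to fuse mapS and sumS into a single loop after inlining (names here are illustrative, not the library’s API):

```haskell
{-# LANGUAGE ExistentialQuantification, BangPatterns #-}

-- The core of stream fusion: a stream is a non-recursive step
-- function plus a seed state, with the state type hidden.
data Step s a = Done | Yield a s | Skip s
data Stream a = forall s. Stream (s -> Step s a) s

-- Enumerate lo..hi as a stream.
enumS :: Int -> Int -> Stream Int
enumS lo hi = Stream next lo
  where next i | i > hi    = Done
               | otherwise = Yield i (i + 1)

-- map is not recursive: it just wraps the step function.
mapS :: (a -> b) -> Stream a -> Stream b
mapS f (Stream next s0) = Stream next' s0
  where next' s = case next s of
          Done       -> Done
          Skip s'    -> Skip s'
          Yield a s' -> Yield (f a) s'

-- The single recursive consumer; after inlining, GHC fuses
-- mapS into this loop, so no intermediate structure exists.
sumS :: Stream Int -> Int
sumS (Stream next s0) = go 0 s0
  where go !acc s = case next s of
          Done       -> acc
          Skip s'    -> go acc s'
          Yield a s' -> go (acc + a) s'

main :: IO ()
main = print (sumS (mapS (+ 1) (enumS 1 10)))  -- prints 65
```

The same scheme, applied to arrays instead of enumerations, is what removes the intermediate arrays in list-like pipelines.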

The library described in this talk is available on hackage, and is now used by a few projects, including haskell-monte-carlo, haskell-pqueue-mtl, haskell-statistics-fusion, haskell-uvector-algorithms, and Bryan O’Sullivan’s new uber benchmark suite.

This talk was originally presented at Galois on August 28th, 2008.

The Haskell Platform: Status Report: Haskell Symposium 2009

At the future of Haskell discussion, at the Haskell Symposium 2009, Duncan Coutts and I gave a status update on the Haskell Platform project: the project to build a single, shared distribution of Haskell for every platform.

