

Persistent Data Structures – now (possibly) practical

The typical data structures most programmers know and use require imperative programming: they fundamentally depend on replacing the values of fields with assignment statements, especially pointer fields.  A particular data structure represents the state of something at one particular moment in time, and that moment only.  If you want to know what the state was in the past, you need to have made a copy of the entire data structure back then, and to have kept it around until you need it.  (Alternatively, you can keep a log of changes made to the data structure and play it in reverse until you reach the previous state—and then play it back forwards to get back to where you are now.  Both of these techniques are typically used to implement undo/redo, for example.)
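To make the log technique concrete, here's a minimal sketch in Haskell (my own illustration, not from any particular implementation; the names Change, step, and undoLast are made up for this example).  Each change is recorded together with its inverse, and undo just plays the most recent inverse:

```haskell
-- A change paired with its inverse.
data Change s = Change { apply :: s -> s, invert :: s -> s }

-- Apply a change to the current state and push it onto the log.
step :: Change s -> (s, [Change s]) -> (s, [Change s])
step c (s, hist) = (apply c s, c : hist)

-- Undo the most recent change by applying its inverse.
undoLast :: (s, [Change s]) -> (s, [Change s])
undoLast (s, c : hist) = (invert c s, hist)
undoLast (s, [])       = (s, [])
```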

Or you could use a persistent data structure. A persistent data structure allows you to access previous versions at any time without having to do any copying.  All you need to do at the time is save a pointer to the data structure.  If you have a persistent data structure, your undo/redo implementation is simply a stack of pointers: after any change to the data structure, you push a pointer to the new version.
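In Haskell this is nearly a one-liner, since a structure like Data.Map is already persistent: each new version shares almost all of its nodes with the previous one.  A minimal sketch (Doc and the function names are hypothetical, just for illustration):

```haskell
import qualified Data.Map as Map

type Doc = Map.Map String Int   -- a stand-in for the real document type
type History = [Doc]            -- most recent version on top

-- Apply an edit, pushing the new version while keeping the old one.
edit :: (Doc -> Doc) -> History -> History
edit f (d : ds) = f d : d : ds
edit _ []       = []

-- Undo is just a pop back to the previous version.
undo :: History -> History
undo (_ : d : ds) = d : ds
undo h            = h
```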

This can be quite useful—but it is typically very hard to implement a persistent data structure in an imperative language, especially if you have to worry about memory management¹.  If you’re using a functional programming language—especially a language with lazy semantics like Haskell—then all your data structures are automatically persistent, and your only problem is efficiency (and of course, in a functional language, the language system takes care of memory management).  But as a hardcore C++ programmer by profession, I was, for practical purposes, locked out of the world of persistent data structures.

Now, however, with C# and C++/CLI in use (and garbage collection coming to C++ any time now …²), I can at last contemplate the use of persistent data structures in my designs.  And that’s great, because it gives me an excuse to take one of my favorite computer science books off the shelf and give it another read.

The book is Purely Functional Data Structures, by Chris Okasaki.  I find it a very well written and easy to understand introduction to the design and analysis of persistent data structures—or, equivalently, of any data structure you’d want to use in a functional language.

The book has two key themes: first, to describe the use and implementation of several persistent data structures, such as different kinds of heaps, queues, and random-access lists; and second, to describe how to create your own efficient persistent data structures.
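To give a taste of the first theme, here is one of the simplest structures the book covers, the leftist heap, sketched in Haskell (the transcription is mine, so any errors are too).  Everything is defined in terms of merge, which walks only right spines, and the rank field keeps those spines short:

```haskell
-- A leftist heap: rank, element, left child, right child.
data Heap a = E | T Int a (Heap a) (Heap a)

rank :: Heap a -> Int
rank E           = 0
rank (T r _ _ _) = r

-- Build a node, swapping children so the shorter spine stays on the right.
makeT :: a -> Heap a -> Heap a -> Heap a
makeT x l r
  | rank l >= rank r = T (rank r + 1) x l r
  | otherwise        = T (rank l + 1) x r l

merge :: Ord a => Heap a -> Heap a -> Heap a
merge h E = h
merge E h = h
merge h1@(T _ x l1 r1) h2@(T _ y l2 r2)
  | x <= y    = makeT x l1 (merge r1 h2)
  | otherwise = makeT y l2 (merge h1 r2)

insert :: Ord a => a -> Heap a -> Heap a
insert x = merge (T 1 x E E)

findMin :: Heap a -> Maybe a
findMin E           = Nothing
findMin (T _ x _ _) = Just x

deleteMin :: Ord a => Heap a -> Heap a
deleteMin E           = E
deleteMin (T _ _ l r) = merge l r
```

Note that nothing is ever overwritten: insert and deleteMin build a new heap that shares most of its nodes with the old one, so every earlier version remains valid.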

A nice feature here is the inclusion of “Hint to Practitioners” sidebars that point out which of these data structures work especially well in various contexts.

The second theme is the more demanding one—but of course, it is teaching something really valuable. First, the methods of analyzing the amortized time taken by operations on a data structure are fully explained. The two basic techniques are the “banker’s method” and the “physicist’s method”, and they are different ways of accounting for the time spent in the different operations on a data structure so that bounds on the total time can be computed. (The “credits” and “debits” used to account for time are not reflected in the code for the data structure – they are “virtual” and only used for the analysis.) Then, Okasaki adapts these methods to work for persistent data structures, and provides several fully worked examples.
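The canonical example is the banker’s queue, where a lazily built reversal of the rear list is paid off, a debit at a time, by later operations.  Here is a sketch in Haskell, whose lists are lazy by default (my transcription); notice that the credits and debits appear nowhere in the code:

```haskell
-- A banker's queue: front list with its length, rear list with its length.
-- Invariant: the rear is never longer than the front.
data Queue a = Queue Int [a] Int [a]

empty :: Queue a
empty = Queue 0 [] 0 []

-- Restore the invariant.  The (f ++ reverse r) is suspended by laziness;
-- the amortized analysis shows its cost is paid off before it is forced.
check :: Queue a -> Queue a
check q@(Queue lenF f lenR r)
  | lenR <= lenF = q
  | otherwise    = Queue (lenF + lenR) (f ++ reverse r) 0 []

-- Add to the rear.
snoc :: Queue a -> a -> Queue a
snoc (Queue lenF f lenR r) x = check (Queue lenF f (lenR + 1) (x : r))

-- Remove from the front.
uncons :: Queue a -> Maybe (a, Queue a)
uncons (Queue _ [] _ _)            = Nothing
uncons (Queue lenF (x : f) lenR r) = Just (x, check (Queue (lenF - 1) f lenR r))
```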

With the analysis of persistent data structures explained, he then goes on to describe several methods of creating efficient persistent data structures from non-persistent ones. These methods are lazy rebuilding, the use of numerical representations in building data structures, data-structural bootstrapping, and implicit recursive slowdown. These methods are all interesting, but the one which is most fun (and, for me, the easiest to understand) is the use of numerical representations. In this method a data structure is built by composing smaller structures in a form similar to the representation of a binary number—and inserting and deleting items, or merging two structures, is modeled as adding to or subtracting from a number. Different kinds of binary number representations are used as well, and the use of base 3 and base 4 numbers is mentioned.³
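The book’s binary random-access list shows the idea at its purest: consing an element onto the list is exactly the carry propagation of incrementing a binary number.  A Haskell sketch of just the cons direction (again my transcription):

```haskell
-- A complete binary tree of elements, tagged with its size.
data Tree a = Leaf a | Node Int (Tree a) (Tree a)

-- One binary digit: either empty or holding a tree of size 2^position.
data Digit a = Zero | One (Tree a)

-- The list is a sequence of digits, least significant first.
type RList a = [Digit a]

size :: Tree a -> Int
size (Leaf _)     = 1
size (Node w _ _) = w

link :: Tree a -> Tree a -> Tree a
link t1 t2 = Node (size t1 + size t2) t1 t2

-- Inserting a tree is binary increment: a One in the way becomes a carry.
consTree :: Tree a -> RList a -> RList a
consTree t []            = [One t]
consTree t (Zero : ds)   = One t : ds
consTree t (One t' : ds) = Zero : consTree (link t t') ds

cons :: a -> RList a -> RList a
cons x = consTree (Leaf x)
```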

The persistent data structures described are given with code (in an ML variant that includes explicit notation for lazy evaluation, and also in Haskell). After the book was published, the data structures it describes were “productized” into a Haskell library called Edison, originally written by Chris Okasaki but now maintained and enhanced by Robert Dockins and available at his GitHub repo.


¹A good starting point for finding papers on the subject of imperative persistent data structures is to search for papers co-authored by Robert E. Tarjan with the word “persistent” in the title.

²GC is part of the C++0x language and compilers will be required to support it, so that means we’ll get to use it by 2015, I’m sure!

³You can get the flavor of this kind of thing from this presentation (pdf) by Ralf Hinze.