Tuesday, June 26, 2012

Exploring C# LinkedLists via LRU Caches (2 of 3)

In the previous post, I showed a simple implementation of an LRU (least recently used) cache, using a Dictionary and .NET's LinkedList. It's reasonably fast and completely filled the need I had. I should have left it at that.

But after writing it, I kept thinking about how .NET's LinkedList internally treats the list as circular: the head node's previous node is the tail, and the tail's next node is the head. This detail is not exposed to consumers of LinkedList. It would have been handy for this cache, because a common operation is to reuse the oldest node (at the tail) when adding an item after the cache has reached capacity. In a circular linked list, the node previous to the oldest node is the next-oldest, and the node after the oldest is the newest. So to make the oldest node become the newest, all that's required is to change which node we consider the oldest: one simple assignment, with no removing and re-inserting of nodes, and no changes to any next/previous pointers at all.

I also wondered how little code would be needed to implement a linked list that serves only the needs of the LRU cache. So I made some changes.
First, I wrote a class that’s private to the LRU cache class, implementing a circular, doubly linked list.
private class LruNode {
    /// <summary>Key that identifies a cache entry.</summary>
    public TKey Key { get; set; }
    public LruNode Prev { get; private set; }
    public LruNode Next { get; private set; }
    
    public LruNode(TKey key) {
        Key = key;
        Prev = Next = this;
    }

    /// <summary>Unlinks this node from its current position and re-links it immediately after the given node.</summary>
    public void MoveAfter(LruNode node) {
        if (node.Next != this) { // if already in position, there's nothing to do
            this.Next.Prev = this.Prev;
            this.Prev.Next = this.Next;
            InsertAfter(node);
        }
    }

    /// <summary>Links this node into the list immediately after the given node.</summary>
    public void InsertAfter(LruNode node) {
        this.Prev = node;
        this.Next = node.Next; 
        node.Next = this.Next.Prev = this;
    }

    public void Clear() {
        Key = default(TKey);
        Prev = Next = null;
    }

    public override string ToString() {
        return "LruNode<" + Key + ">"; // concatenation tolerates a null Key
    }
}
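To make the circular behavior concrete, here is a standalone sketch. The node class is copied and specialized to string keys purely so it can run outside the cache; Node and Demo are names invented for this illustration, not part of the cache code.

```csharp
using System;

// Standalone, string-keyed copy of the node class above, for illustration only.
class Node {
    public string Key;
    public Node Prev, Next;
    public Node(string key) { Key = key; Prev = Next = this; } // a ring of one

    public void InsertAfter(Node node) {
        Prev = node;
        Next = node.Next;
        node.Next = Next.Prev = this;
    }

    public void MoveAfter(Node node) {
        if (node.Next != this) { // already in position? nothing to do
            Next.Prev = Prev;
            Prev.Next = Next;
            InsertAfter(node);
        }
    }
}

class Demo {
    static void Main() {
        var a = new Node("a");                   // first node: a ring of one
        var b = new Node("b"); b.InsertAfter(a); // b becomes a.Next (newest)
        var c = new Node("c"); c.InsertAfter(a); // now a.Next is c, then b

        // With a as the oldest, the newest node is always a.Next.
        Console.WriteLine(a.Next.Key);  // "c" (most recently inserted)
        Console.WriteLine(a.Prev.Key);  // "b" (walking Prev from oldest visits next-oldest first)

        // Moving b after a makes b the newest, with only pointer swaps.
        b.MoveAfter(a);
        Console.WriteLine(a.Next.Key);  // "b"
    }
}
```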

The LruNode class above implements all the behavior the cache needs from a linked list. Rather than create a separate linked list class, I put all of the behavior in the node class itself. The LruCache class then needs only a field for the oldest node in its linked list, and that field is its handle for the entire list.
public sealed class LruCache<TKey, TValue> : ILruCache<TKey, TValue> {
    private object lockObj = new object();
    private int capacity;
    private Dictionary<TKey, Entry> cacheMap;
    // oldestNode (least recently used) is the tail of a circular linked list of cache nodes.
    private LruNode oldestNode;

Now I just needed to change a few methods. The constructor no longer needs to initialize a linked list, since the oldestNode field starts out null.
public LruCache(int capacity) {
    if (capacity < 1) // Add() assumes at least one node can exist
        throw new ArgumentOutOfRangeException("capacity");
    this.capacity = capacity;
    this.cacheMap = new Dictionary<TKey, Entry>(capacity);
}

The Clear() method has a little more work to do to replace what LinkedList was doing for it in the previous version of the class.
public void Clear() {
    lock (lockObj) {
        this.cacheMap.Clear();
        // break all links between nodes (may help garbage collection)
        var node = this.oldestNode;
        this.oldestNode = null;
        while (node != null) {
            var temp = node;
            node = node.Prev;
            temp.Clear();
        }
    }
}
The Add() method gets the code that takes advantage of the linked list being circular. This is where most of the changes are needed.
public void Add(TKey key, TValue value) {
    lock (lockObj) {
        Entry entry;
        if (!this.cacheMap.TryGetValue(key, out entry)) {
            LruNode node;
            if (this.cacheMap.Count == capacity) {
                // cache full, so re-use the oldest node
                node = this.oldestNode;
                this.cacheMap.Remove(node.Key);
                node.Key = key;
                // oldest becomes the newest
                this.oldestNode = node.Prev;
            } else {
                // room to create and insert a new node
                node = new LruNode(key);
                if (this.oldestNode != null) {
                    // in a circular list, newest is next after oldest
                    node.InsertAfter(this.oldestNode);
                } else {
                    // set up the first node in the linked list
                    this.oldestNode = node;
                }
            }
            // map the key to the node
            entry.node = node;
            entry.value = value;
            this.cacheMap.Add(key, entry);
        } else {
            // key exists, replace value with that given
            entry.value = value;
            this.cacheMap[key] = entry;
            Touch(entry.node);
        }
    }
}
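As a rough sketch of how the reuse path behaves, here is a condensed, self-contained stand-in for the cache. MiniLru is invented for this illustration, not the post's actual class: string keys, no values, no locking, and no Get method, just enough to show the oldest node being recycled.

```csharp
using System;
using System.Collections.Generic;

// Simplified stand-in for the cache, showing only the Add() paths.
class MiniLru {
    class Node {
        public string Key;
        public Node Prev, Next;
        public Node(string key) { Key = key; Prev = Next = this; }
        public void InsertAfter(Node node) {
            Prev = node;
            Next = node.Next;
            node.Next = Next.Prev = this;
        }
    }

    readonly int capacity;
    readonly Dictionary<string, Node> map = new Dictionary<string, Node>();
    Node oldest; // tail of the circular list

    public MiniLru(int capacity) { this.capacity = capacity; }

    public string OldestKey { get { return oldest.Key; } }

    public void Add(string key) {
        Node node;
        if (map.Count == capacity) {
            node = oldest;          // cache full: reuse the oldest node
            map.Remove(node.Key);
            node.Key = key;
            oldest = node.Prev;     // oldest becomes the newest, one assignment
        } else {
            node = new Node(key);
            if (oldest != null) node.InsertAfter(oldest); // newest sits after oldest
            else oldest = node;     // first node in the list
        }
        map.Add(key, node);
    }
}

class Demo {
    static void Main() {
        var cache = new MiniLru(2);
        cache.Add("a");
        cache.Add("b");
        Console.WriteLine(cache.OldestKey); // "a"
        cache.Add("c");                     // full: evicts "a", reuses its node
        Console.WriteLine(cache.OldestKey); // "b"
    }
}
```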
Finally, a minor modification to the private Touch() method that also takes advantage of the circular linked list.
private void Touch(LruNode node) {
    if (node != this.oldestNode) {
        node.MoveAfter(this.oldestNode);
    } else {
        // node is already the oldest; because the list is circular, making it
        // the newest just means declaring the previous node to be the oldest
        this.oldestNode = node.Prev;
    }
}
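That else branch can be illustrated with a standalone three-node ring. The Node class below is a simplified, string-keyed copy written just for this illustration.

```csharp
using System;

// Minimal string-keyed node, enough to build a circular ring.
class Node {
    public string Key;
    public Node Prev, Next;
    public Node(string key) { Key = key; Prev = Next = this; }
    public void InsertAfter(Node node) {
        Prev = node;
        Next = node.Next;
        node.Next = Next.Prev = this;
    }
}

class Demo {
    static void Main() {
        var a = new Node("a");                   // oldest
        var b = new Node("b"); b.InsertAfter(a);
        var c = new Node("c"); c.InsertAfter(a); // newest
        var oldest = a;

        // Touching the oldest node: one assignment, no pointer surgery.
        oldest = oldest.Prev;
        Console.WriteLine(oldest.Key);      // "b" is now the oldest
        Console.WriteLine(oldest.Next.Key); // "a": the old oldest is now the newest
    }
}
```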
Compared to the previous implementation, this version is a little faster (around 15% on average, ranging from 0% to 25% depending on the operation and cache size) over all operations except trying to get an item that isn't in the cache. That case is unaffected by these changes, because the only code that really executes is the lock and Dictionary.TryGetValue, and that path is the same in all versions of the LRU cache presented in these three blog posts.
It was over a year before I revisited this class and made a third version, which I'll cover in the next post, along with some graphs comparing the performance of all three versions.
