Thursday, January 15, 2015

Optimizing the Damerau-Levenshtein Algorithm in C#

The previous two posts covered the Levenshtein algorithm in C#, and the TSQL implementation. In this post I’ll cover the Damerau-Levenshtein algorithm in C#, with the next post giving the TSQL version. The idea behind this distance measure is very similar to Levenshtein. If you remember, Levenshtein measures the number of substitution, insert, and delete edits required to convert one string to another. Damerau added a fourth edit type: the transposition of two adjacent characters. As an example, the Levenshtein distance between “paul” and “pual” is 2. With Damerau-Levenshtein, the distance is only 1. For applications matching strings like words or people’s names, my experience is that Damerau-Levenshtein gives better results.
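To make that concrete, here’s a quick sketch. DamLev is the extension method developed below; Levenshtein stands in for the extension method from the previous post (that name is an assumption, yours may differ):

// "paul" -> "pual" requires two substitutions under plain Levenshtein
// (a->u and u->a), but just one transposition under Damerau-Levenshtein.
Console.WriteLine("paul".Levenshtein("pual")); // 2
Console.WriteLine("paul".DamLev("pual"));      // 1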

Before getting into the algorithm, I should mention a caveat. As described in the Wikipedia article, there are two basic implementations. One is a literal implementation producing a true distance metric, which is fairly complicated. The other is the optimal string alignment, also called the restricted edit distance, which is much easier to implement. The downside is that the optimal string alignment version is not a true metric. This post implements the simpler restricted edit distance. For most purposes, it works fine. The main difference is that it only allows a substring to be edited once. Using a literal implementation of Damerau-Levenshtein, the distance between “CA” and “ABC” is 2 (CA->AC->ABC). With the restricted edit distance version, the distance is 3 (CA->A->AB->ABC). This is because once the transposition CA->AC has been applied, that substring can’t be edited a second time to insert the B between the two characters.
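In code form, using the DamLev method defined later in this post:

// Restricted edit distance (optimal string alignment): the transposed pair
// can't be edited again, so "CA" -> "ABC" comes out as 3 rather than the
// true Damerau-Levenshtein metric distance of 2.
Console.WriteLine("CA".DamLev("ABC")); // 3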

The implementation presented here is very similar to the earlier Levenshtein implementation. It contains all the same optimizations that one had, and adds the additional logic to handle transpositions. If you recall, the Levenshtein implementation had a space optimization that reduced the two dimensional m * n matrix to just two one dimensional arrays, and then improved on that further to a single array. With Damerau, you can go from the m * n matrix to three arrays without too much difficulty (a sketch of that three-array form follows the diagram discussion below). You need three arrays because you must look one column further back to detect and compute the transposition portion of the algorithm. It’s a little trickier to take it down to just two arrays, but it can be accomplished similarly to how we went from two arrays to one with Levenshtein. It’s done by judicious use of temporary variables, which has the added benefit of reducing array accesses and execution time. It can be tricky to follow, so hopefully this diagram helps show what goes on.

[Diagram: DamLev. The v0 array (yellow), the v2 array (tan), and the cell about to be computed (black) at column 2, row 4 of the distance matrix.]

We’re able to reduce memory use by modifying the arrays in place as we iterate down each column. We read ahead from the values stored when we processed the previous column, and store the values for the current column as we proceed. In the diagram, we are in the middle of computing the distance, on column 2 (the outer i loop) and row 4 (the inner j loop), with the cell about to be calculated shaded black. The yellow cells represent the contents of the v0 array. We’ve calculated the new values in the cells above, but have yet to read and use the cells from the previous column below the current point. The tan column marks the v2 array, which holds the values we need for computing the transposition cost. To avoid some extra math operations, the contents of the v2 array are offset by 1, so v2[4] contains 3, although in the diagram above that value is shown in row 3.
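Before the fully optimized code, here is a minimal sketch of the intermediate three-array form mentioned above. It computes the same restricted edit distance using three rotating rows in place of the full matrix, but omits the prefix/suffix trimming and the other optimizations (the method and variable names here are mine, for illustration only):

// Unoptimized optimal string alignment distance using three rotating rows.
// prevPrev is needed because the transposition case looks two rows back.
public static int DamLevThreeRows(string s, string t) {
    int sLen = s.Length, tLen = t.Length;
    var prevPrev = new int[tLen + 1];
    var prev = new int[tLen + 1];
    var cur = new int[tLen + 1];
    for (int k = 0; k <= tLen; k++) prev[k] = k; // distance from empty prefix of s
    for (int i = 1; i <= sLen; i++) {
        cur[0] = i; // distance from empty prefix of t
        for (int j = 1; j <= tLen; j++) {
            int cost = (s[i - 1] == t[j - 1]) ? 0 : 1;
            cur[j] = Math.Min(Math.Min(prev[j] + 1,       // deletion
                                       cur[j - 1] + 1),   // insertion
                              prev[j - 1] + cost);        // substitution
            // transposition of two adjacent characters; this needs the row
            // from two iterations back, which is why three arrays are used
            if ((i > 1) && (j > 1) && (s[i - 1] == t[j - 2]) && (s[i - 2] == t[j - 1])) {
                cur[j] = Math.Min(cur[j], prevPrev[j - 2] + 1);
            }
        }
        var temp = prevPrev; prevPrev = prev; prev = cur; cur = temp; // rotate rows
    }
    return prev[tLen];
}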

I’ve been using Damerau-Levenshtein for person name matching. I don’t use it as a primary technique, because it’s not well suited to fast searches in a database. I use it mainly as a second opinion verifier for other search techniques. I use both phonetic lookups and bigram searches. But I don’t use the scores of those techniques alone; I adjust them by also computing the Damerau-Levenshtein distance. This is not too expensive, because phonetic and bigram searches can take advantage of indexed database access, so only the smaller returned set needs the more costly edit distance computed. Sometimes strings may be close as measured by bigram similarity, or have the same phonetic code, but be pretty dissimilar. By applying Damerau-Levenshtein, these marginal matches get their scores knocked down a bit.
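As a rough sketch of that second opinion approach (the Candidate type, its fields, and the scoring adjustment are hypothetical, purely for illustration; only DamLev comes from this post):

// requires: using System.Collections.Generic;
public class Candidate {
    public string Name;   // candidate name returned by the indexed search
    public double Score;  // similarity score from the phonetic/bigram search
}

// Re-score candidates returned by an indexed phonetic or bigram search,
// using Damerau-Levenshtein as a second opinion. Candidates further than
// maxDistance are dropped; marginal matches get their scores knocked down.
public static IEnumerable<Candidate> Verify(string query,
        IEnumerable<Candidate> candidates, int maxDistance) {
    foreach (var c in candidates) {
        int d = query.DamLev(c.Name, maxDistance);
        if (d < 0) continue;   // distance exceeds maxDistance; discard
        c.Score /= (1.0 + d);  // demote by edit distance
        yield return c;
    }
}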

There are two implementations of Damerau-Levenshtein below. The first method takes two strings as parameters, and returns the edit distance between them. The second method is very similar, but has an important difference: it takes an additional parameter that lets you specify a maximum distance. Specifying a maximum distance allows two important optimizations.

The obvious one is that it allows a short circuit exit. As soon as it’s determined that the two strings have a distance greater than the maximum allowed distance, the method can return immediately, without spending further time determining the complete edit distance. We don’t need to look at all the intermediate values to test for the short circuit; we only need to examine the line of the final diagonal (a single value per column). If the distance is larger than maxDistance, a value of -1 is returned to indicate that the distance is greater than maxDistance, although the exact distance is not known.

The second optimization is less obvious, but equally important, and in some cases more so. Given a maxDistance, we don’t need to evaluate all cells of each column. We only need to evaluate the cells within a window around the two diagonals (the one that starts in the upper left corner, and the one that ends in the lower right corner). The size of the window is maxDistance cells on either side of the diagonals, reduced by the difference in lengths of the two strings. When comparing two large strings with a small maxDistance, this greatly reduces the number of cells that must be visited and computed. It essentially changes the time complexity from the product of the two string lengths, O(m * n), to O(maxDistance * min(m, n)), i.e. linear in the length of the shorter string for a fixed maxDistance. So even if the early exit short circuit isn’t triggered, there is still a great speed benefit.

These maxDistance optimizations come with a small amount of overhead. If you want the full edit distance, and don’t care about giving a max distance, it’s better to use the method below that doesn’t have the maxDistance parameter. It will be faster for that use. But if you do want to give a maxDistance, the method that takes that parameter will give you great results. For large strings, it can be many times faster.
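A quick illustration of the maxDistance behavior, using the second method below (the distances shown assume the restricted edit distance this post computes):

// Two adjacent transpositions ("is"->"si" and "io"->"oi") give a distance
// of 2, which is within the cap, so the true distance is returned.
int d = "satisfaction".DamLev("satsifactoin", 2);  // 2

// The length difference alone (12 vs. 8) exceeds maxDistance, so -1 is
// returned almost immediately, without computing the full distance.
int far = "satisfaction".DamLev("baseball", 2);    // -1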

/// <summary>
/// Computes and returns the Damerau-Levenshtein edit distance between two strings,
/// i.e. the number of insertion, deletion, substitution, and transposition edits
/// required to transform one string to the other. This value will be >= 0, where 0
/// indicates identical strings. Comparisons are case sensitive, so for example,
/// "Fred" and "fred" will have a distance of 1. This algorithm is basically the
/// Levenshtein algorithm with a modification that considers transposition of two
/// adjacent characters as a single edit.
/// http://blog.softwx.net/2015/01/optimizing-damerau-levenshtein_15.html
/// </summary>
/// <remarks>See http://en.wikipedia.org/wiki/Damerau%E2%80%93Levenshtein_distance
/// Note that this is based on Sten Hjelmqvist's "Fast, memory efficient" algorithm, described
/// at http://www.codeproject.com/Articles/13525/Fast-memory-efficient-Levenshtein-algorithm.
/// This version differs by including some optimizations, and extending it to the Damerau-
/// Levenshtein algorithm.
/// Note that this is the simpler and faster optimal string alignment (aka restricted edit) distance
/// that differs slightly from the classic Damerau-Levenshtein algorithm by imposing the restriction
/// that no substring is edited more than once. So for example, "CA" to "ABC" has an edit distance
/// of 2 by a complete application of Damerau-Levenshtein, but a distance of 3 by this method that
/// uses the optimal string alignment algorithm. See the Wikipedia article for more detail on this
/// distinction.
/// </remarks>
/// <param name="s">String being compared for distance.</param>
/// <param name="t">String being compared against other string.</param>
/// <returns>int edit distance, >= 0 representing the number of edits required
/// to transform one string to the other.</returns>
public static int DamLev(this string s, string t) {
    if (String.IsNullOrEmpty(s)) return (t ?? "").Length;
    if (String.IsNullOrEmpty(t)) return s.Length;

    // if strings of different lengths, ensure shorter string is in s. This can result in a little
    // faster speed by spending more time spinning just the inner loop during the main processing.
    if (s.Length > t.Length) {
        var temp = s; s = t; t = temp; // swap s and t
    }
    int sLen = s.Length; // this is also the minimum length of the two strings
    int tLen = t.Length;

    // suffix common to both strings can be ignored
    while ((sLen > 0) && (s[sLen - 1] == t[tLen - 1])) { sLen--; tLen--; }

    int start = 0;
    if ((s[0] == t[0]) || (sLen == 0)) { // if there's a shared prefix, or all s matches t's suffix
        // prefix common to both strings can be ignored
        while ((start < sLen) && (s[start] == t[start])) start++;
        sLen -= start; // length of the part excluding common prefix and suffix
        tLen -= start;

        // if all of shorter string matches prefix and/or suffix of longer string, then
        // edit distance is just the delete of additional characters present in longer string
        if (sLen == 0) return tLen;

        t = t.Substring(start, tLen); // faster than t[start+j] in inner loop below
    }

    var v0 = new int[tLen];
    var v2 = new int[tLen]; // stores one level further back (offset by +1 position)
    for (int j = 0; j < tLen; j++) v0[j] = j + 1;

    char sChar = s[0];
    int current = 0;
    for (int i = 0; i < sLen; i++) {
        char prevsChar = sChar;
        sChar = s[start + i];
        char tChar = t[0];
        int left = i;
        current = i + 1;
        int nextTransCost = 0;
        for (int j = 0; j < tLen; j++) {
            int above = current;
            int thisTransCost = nextTransCost;
            nextTransCost = v2[j];
            v2[j] = current = left; // cost of diagonal (substitution)
            left = v0[j]; // left now equals current cost (which will be diagonal at next iteration)
            char prevtChar = tChar;
            tChar = t[j];
            if (sChar != tChar) {
                if (left < current) current = left; // insertion
                if (above < current) current = above; // deletion
                current++;
                if ((i != 0) && (j != 0)
                    && (sChar == prevtChar)
                    && (prevsChar == tChar)) {
                    thisTransCost++;
                    if (thisTransCost < current) current = thisTransCost; // transposition
                }
            }
            v0[j] = current;
        }
    }
    return current;
}

/// <summary>
/// Computes and returns the Damerau-Levenshtein edit distance between two strings,
/// i.e. the number of insertion, deletion, substitution, and transposition edits
/// required to transform one string to the other. This value will be >= 0, where 0
/// indicates identical strings. Comparisons are case sensitive, so for example,
/// "Fred" and "fred" will have a distance of 1. This algorithm is basically the
/// Levenshtein algorithm with a modification that considers transposition of two
/// adjacent characters as a single edit.
/// http://blog.softwx.net/2015/01/optimizing-damerau-levenshtein_15.html
/// </summary>
/// <remarks>See http://en.wikipedia.org/wiki/Damerau%E2%80%93Levenshtein_distance
/// Note that this is based on Sten Hjelmqvist's "Fast, memory efficient" algorithm, described
/// at http://www.codeproject.com/Articles/13525/Fast-memory-efficient-Levenshtein-algorithm.
/// This version differs by including some optimizations, and extending it to the Damerau-
/// Levenshtein algorithm.
/// Note that this is the simpler and faster optimal string alignment (aka restricted edit) distance
/// that differs slightly from the classic Damerau-Levenshtein algorithm by imposing the restriction
/// that no substring is edited more than once. So for example, "CA" to "ABC" has an edit distance
/// of 2 by a complete application of Damerau-Levenshtein, but a distance of 3 by this method that
/// uses the optimal string alignment algorithm. See the Wikipedia article for more detail on this
/// distinction.
/// </remarks>
/// <param name="s">String being compared for distance.</param>
/// <param name="t">String being compared against other string.</param>
/// <param name="maxDistance">The maximum edit distance of interest.</param>
/// <returns>int edit distance, >= 0 representing the number of edits required
/// to transform one string to the other, or -1 if the distance is greater than the specified maxDistance.</returns>
public static int DamLev(this string s, string t, int maxDistance = int.MaxValue) {
    if (String.IsNullOrEmpty(s)) return (t ?? "").Length;
    if (String.IsNullOrEmpty(t)) return s.Length;

    // if strings of different lengths, ensure shorter string is in s. This can result in a little
    // faster speed by spending more time spinning just the inner loop during the main processing.
    if (s.Length > t.Length) {
        var temp = s; s = t; t = temp; // swap s and t
    }
    int sLen = s.Length; // this is also the minimum length of the two strings
    int tLen = t.Length;

    // suffix common to both strings can be ignored
    while ((sLen > 0) && (s[sLen - 1] == t[tLen - 1])) { sLen--; tLen--; }

    int start = 0;
    if ((s[0] == t[0]) || (sLen == 0)) { // if there's a shared prefix, or all s matches t's suffix
        // prefix common to both strings can be ignored
        while ((start < sLen) && (s[start] == t[start])) start++;
        sLen -= start; // length of the part excluding common prefix and suffix
        tLen -= start;

        // if all of shorter string matches prefix and/or suffix of longer string, then edit
        // distance is just the delete of additional characters present in longer string,
        // which still must not exceed maxDistance
        if (sLen == 0) return (tLen <= maxDistance) ? tLen : -1;

        t = t.Substring(start, tLen); // faster than t[start+j] in inner loop below
    }
    int lenDiff = tLen - sLen;
    if ((maxDistance < 0) || (maxDistance > tLen)) {
        maxDistance = tLen;
    } else if (lenDiff > maxDistance) return -1;

    var v0 = new int[tLen];
    var v2 = new int[tLen]; // stores one level further back (offset by +1 position)
    int j;
    for (j = 0; j < maxDistance; j++) v0[j] = j + 1;
    for (; j < tLen; j++) v0[j] = maxDistance + 1;

    int jStartOffset = maxDistance - (tLen - sLen);
    bool haveMax = maxDistance < tLen;
    int jStart = 0;
    int jEnd = maxDistance;
    char sChar = s[0];
    int current = 0;
    for (int i = 0; i < sLen; i++) {
        char prevsChar = sChar;
        sChar = s[start + i];
        char tChar = t[0];
        int left = i;
        current = left + 1;
        int nextTransCost = 0;
        // no need to look beyond window of lower right diagonal - maxDistance cells (lower right diag is i - lenDiff)
        // and the upper left diagonal + maxDistance cells (upper left is i)
        jStart += (i > jStartOffset) ? 1 : 0;
        jEnd += (jEnd < tLen) ? 1 : 0;
        for (j = jStart; j < jEnd; j++) {
            int above = current;
            int thisTransCost = nextTransCost;
            nextTransCost = v2[j];
            v2[j] = current = left; // cost of diagonal (substitution)
            left = v0[j]; // left now equals current cost (which will be diagonal at next iteration)
            char prevtChar = tChar;
            tChar = t[j];
            if (sChar != tChar) {
                if (left < current) current = left; // insertion
                if (above < current) current = above; // deletion
                current++;
                if ((i != 0) && (j != 0)
                    && (sChar == prevtChar)
                    && (prevsChar == tChar)) {
                    thisTransCost++;
                    if (thisTransCost < current) current = thisTransCost; // transposition
                }
            }
            v0[j] = current;
        }
        if (haveMax && (v0[i + lenDiff] > maxDistance)) return -1;
    }
    return (current <= maxDistance) ? current : -1;
}

12 comments:

  1. Hi Steve. Thanks for this useful article. Have you run any tests on this implementation? I'm trying to compare this to http://stackoverflow.com/questions/9453731/how-to-calculate-distance-similarity-measure-of-given-2-strings/9454016#9454016 -- any insight would be appreciated

  2. Yes, I ran a large set of generated test cases against the optimized algorithm, comparing the results against a standard implementation for correctness. I just tested against the version you linked to, and except for a minor difference in how null strings are handled, they gave the same results. For timing, the version in my post runs in less than half the time of the stackoverflow version you linked to (when run with a release compile and no debugger attached). If the strings share a common prefix or suffix, the speed difference is even more significant. The version I presented also uses less memory. It doesn't take a max distance parameter like the stackoverflow version, so for comparison, I passed a large max distance to the stackoverflow function. I did implement that feature in the TSQL version in my next blog post. I didn't put it in the C# version because I don't need it in my own use, and it imposes a little bit of extra overhead cost. But for completeness, I will add that to this post in the next few days. The way that new version will take advantage of the max distance should be more efficient than how it's done in the stackoverflow version.

  3. I've added a version to this post that takes a maxDistance parameter. In my tests it's 2 to 10 times faster than the version at stackoverflow that you linked to. For some inputs, like when the two strings share common prefix or suffix characters, the speed can be more than 10 times faster.

  4. Hi Steve. Thank you very much for this detailed explanation and the example code. I reimplemented it in Java with some minor language-specific changes and did some testing. Works like a charm; the results match the ones from the non-optimised Damerau-Levenshtein version and it outperforms the other algorithms. Really nice work, I appreciate you sharing this with us!

    For anyone interested in the setting and the results:
    The program calculated the distance between 82307 words (82306 calculations) from a shuffled list of my German dictionary (82307 words), with distances ranging between 1 and 27 at an average of 9.

    I tested the following 3 algorithms:

    - a normal Levenshtein distance
    c1: http://rosettacode.org/wiki/Levenshtein_distance#Java

    - a Damerau-Levenshtein implementation by Kevin L. Stern
    c2: https://github.com/KevinStern/software-and-algorithms/blob/master/src/main/java/blogspot/software_and_algorithms/stern_library/string/DamerauLevenshteinAlgorithm.java

    - this algorithm rewritten in Java, with and without maxDistance used
    c3:

    (no maxDistance for c3)
    c1: Computed in 126ms.
    c2: Computed in 377ms.
    c3: Computed in 102ms.

    (maxDistance 5 for c3, as this is the distance used in the specific case I want to use the algorithm in)
    c1: Computed in 111ms.
    c2: Computed in 389ms.
    c3: Computed in 54ms.

    Replies
    1. Phillip,
      I really appreciate the feedback. It's good to know it was easy to convert to Java, and that you found it useful. If you ever post the Java version on a blog or something, I hope you'll leave a comment with a link to it here.
      Steve

  5. What license applies to this code?

    Replies
    1. It's the MIT license. You can find the full text of it in the GitHub project I created for this code: https://github.com/softwx/SoftWx.Match

    2. Awesome, thanks for that. Have you ever checked out the https://github.com/DanHarltey/Fastenshtein project?

      I was already using Fastenshtein to determine distance and hooked in your DamLev function as a comparison. In my initial tests all of the scores were exactly equal (probably no transpositions in the data set).

      Hopefully today I'll get a better data set to compare whether the matches are actually better using more real-world data.

    3. I had not seen Fastenshtein before. Thanks for the link. I plugged my versions into his comparison test. Comparing Levenshtein to Levenshtein, the SoftWx version was faster for normal and large words, and a bit slower for small words.

      Regarding Levenshtein vs. Damerau-Levenshtein, I'm mainly working with person name matching, and in that context, I think Damerau's modification to treat transpositions as a single edit gives better results, since transpositions are pretty common human errors (Michael vs. Micheal, for example).

  6. Hi. I've been having a play, and I noticed a bug in the limited D-L algorithm: if one string is either the head or the tail of the other, it ignores the limit.
    For example,
    Console.WriteLine("\"abcdefghijklmno\".DamLev(\"abc\") => " + "abcdefghijklmno".DamLev("abc"));
    Console.WriteLine("\"abcdefghijklmno\".DamLev(\"abc\", 10) => " + "abcdefghijklmno".DamLev("abc", 10));
    Console.WriteLine("\"abcdefghijklmno\".DamLev(\"mno\") => " + "abcdefghijklmno".DamLev("mno"));
    Console.WriteLine("\"abcdefghijklmno\".DamLev(\"mno\", 10) => " + "abcdefghijklmno".DamLev("mno", 10));
    Console.WriteLine("\"abcdefghijklmno\".DamLev(\"ghi\") => " + "abcdefghijklmno".DamLev("ghi"));
    Console.WriteLine("\"abcdefghijklmno\".DamLev(\"ghi\", 10) => " + "abcdefghijklmno".DamLev("ghi", 10));
    Gives me:
    "abcdefghijklmno".DamLev("abc") => 12
    "abcdefghijklmno".DamLev("abc", 10) => 12
    "abcdefghijklmno".DamLev("mno") => 12
    "abcdefghijklmno".DamLev("mno", 10) => 12
    "abcdefghijklmno".DamLev("ghi") => 12
    "abcdefghijklmno".DamLev("ghi", 10) => -1

  7. Stepping through it, the issue is on line 219:
    if (sLen == 0) return tLen;
    This doesn't take into account that tLen may be greater than maxDistance. Changing it to
    if (sLen == 0) return tLen <= maxDistance ? tLen : -1;
    solves the problem.

    Replies
    1. Thanks for pointing that out. I'll update the code soon for this and the Levenshtein version, which likely has the same issue.
