Tuesday, December 30, 2014

Optimizing the Levenshtein Algorithm in TSQL

The previous post covered the Levenshtein algorithm in C#. This post applies most of the optimizations described there to SQL, specifically Microsoft SQL Server (TSQL). The next post covers Damerau-Levenshtein in C#, followed by Damerau-Levenshtein in TSQL.

For most people, the fastest results will come from simply using the C# version as a CLR scalar function in SQL Server. Some DBAs may not want to enable CLR on their servers, though, and in those environments you have to write plain SQL, which isn’t especially great for porting functions that do lots of computation, use arrays, and so on. It’s not hopeless though; we can get fairly good results. I should first point out that there is a difference: the C# version is strictly case-sensitive, while this SQL version treats case comparisons as configured in SQL Server (case-insensitive by default).

Micro-optimizing SQL code is very different from micro-optimizing C#. The little rearrangements that speed up C# code may have the opposite effect in SQL, the available constructs are much more limited, and the behavior is sometimes non-intuitive. I don’t optimize computational (non data related) SQL code as often as C# code, so my knowledge and bag of tricks is much sparser. In general, I try to avoid IF statements when just computing and assigning values. CASE WHEN is a faster alternative in most cases, albeit more verbose, so I try to use IF only where it’s needed for major logic branching (a small sketch comparing the two appears after the examples below).

Another thing that surprised me when I realized it is that the statement count often matters more than the operator count, particularly when assigning values to variables with SELECT. Grouping multiple such assignments in a single SELECT, separated by commas, can be noticeably faster than individual SELECT or SET statements. The only problem with this is when a variable you’re assigning is referenced by other assignments in the same statement. It appears to evaluate left to right, but the MSDN documentation specifically states that the order is not guaranteed. That means the following code, although it seems to work, is not guaranteed to always behave the same way. Notice that the loop counter @j is referenced by the assignment to @total that precedes it in the same SELECT; according to MSDN, that’s dangerous, because we don’t know whether the assignment to @j will happen before or after the assignment to @total.

SELECT @j = 1, @total = 0
WHILE (@j <= @max) SELECT @total = @total + @j, @j = @j + 1 -- @total references @j in the same statement

Instead, we should do this, which is slower because there are more statements.

SELECT @j = 1, @total = 0
WHILE (@j <= @max) BEGIN
    SELECT @total = @total + @j
    SELECT @j = @j + 1
END
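
Going back to the IF versus CASE WHEN point above, here’s a small illustrative sketch (not taken from the function below, and assuming @a, @b, and @smaller are declared int variables) showing the same assignment written both ways; the single SELECT with CASE WHEN is usually the faster form:

-- branching with IF: a procedural branch choosing between two SET statements
IF (@a < @b) SET @smaller = @a
ELSE SET @smaller = @b

-- the same assignment folded into one statement with CASE WHEN
SELECT @smaller = CASE WHEN @a < @b THEN @a ELSE @b END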

All the major optimization ideas from the C# version were implemented in this SQL version. We use a single array, and we ignore shared prefix and suffix characters. The body of the inner loop was implemented using logic closer to the standard definition of Levenshtein, because that was faster in SQL than the way I did it in the C# version. The critical bit to fiddle with is the body of the inner loop, and the speed largely boiled down to how many SELECT statements were used; that mattered more than a few extra additions or subtractions within the assignment expressions themselves.

The result of this was OK, but I had been hoping for better. I had thrown in support for a third parameter that specifies a maximum allowed distance, with a simple short circuit to exit early when that max was exceeded. Then I realized that having a max distance allows a significant additional optimization. Because the edit distances in the conceptual two dimensional m x n distance array cascade from top left to bottom right, when a max allowed distance is given you only need to calculate the cells in a band surrounding the diagonal of the matrix: the cell on the diagonal, plus max distance cells above and max distance cells below it. For example, if the max distance is 2, you only need to evaluate a window of at most 5 cells for each column. As strings get larger, this makes a big difference. With a max distance, the worst case work drops from being proportional to len1 * len2 to roughly min(len1, len2) * (2 * max + 1), i.e. it’s linear in the string length for a fixed max.

This version is faster than the other two SQL versions I’ve found (one attributed originally to Gama, and the other to Fribble) for all but the tiniest of inputs, where the difference is negligible. In cases where the two strings share prefix or suffix characters it is much faster, and it is also much faster when the strings are large and a small max edit distance is given. It also avoids some problems I’ve seen in the other versions out there. Some of them have a bug that returns the wrong result when the first string is an empty string, and most or all of them act as if trailing spaces didn’t exist (since that’s SQL’s default comparison behavior), or give odd results in that case. This version gives the correct results with empty strings, and treats trailing spaces as it does any other characters. (this was updated 1/20/2015)

-- =============================================
-- Computes and returns the Levenshtein edit distance between two strings, i.e. the
-- number of insertion, deletion, and substitution edits required to transform one
-- string to the other, or NULL if @max is exceeded. Comparisons use the case-
-- sensitivity configured in SQL Server (case-insensitive by default).
-- http://blog.softwx.net/2014/12/optimizing-levenshtein-algorithm-in-tsql.html
-- 
-- See http://en.wikipedia.org/wiki/Levenshtein_distance
-- This is based on Sten Hjelmqvist's "Fast, memory efficient" algorithm, described
-- at http://www.codeproject.com/Articles/13525/Fast-memory-efficient-Levenshtein-algorithm, 
-- with some additional optimizations.
-- @s - String being compared for distance.
-- @t - String being compared against other string.
-- @max - Maximum distance allowed, or NULL if no maximum is desired. Returns NULL if distance will exceed @max.
-- returns int edit distance, >= 0 representing the number of edits required to transform one string to the other.
-- =============================================
CREATE FUNCTION [dbo].[Levenshtein](
    @s nvarchar(4000)
  , @t nvarchar(4000)
  , @max int
)
RETURNS int
WITH SCHEMABINDING
AS
BEGIN
    DECLARE @distance int = 0 -- return variable
          , @v0 nvarchar(4000)  -- running scratchpad for storing computed distances
          , @start int = 1      -- index (1 based) of first non-matching character between the two strings
          , @i int, @j int      -- loop counters: i for s string and j for t string
          , @diag int          -- distance in cell diagonally above and left if we were using an m by n matrix
          , @left int          -- distance in cell to the left if we were using an m by n matrix
          , @sChar nchar      -- character at index i from s string
          , @thisJ int          -- temporary storage of @j to allow SELECT combining
          , @jOffset int      -- offset used to calculate starting value for j loop
          , @jEnd int          -- ending value for j loop (stopping point for processing a column)
          -- get input string lengths including any trailing spaces (which SQL Server would otherwise ignore)
          , @sLen int = datalength(@s) / datalength(left(left(@s, 1) + '.', 1))    -- length of @s (the shorter string after the swap below)
          , @tLen int = datalength(@t) / datalength(left(left(@t, 1) + '.', 1))    -- length of @t (the longer string after the swap below)
          , @lenDiff int      -- difference in length between the two strings
    -- if strings of different lengths, ensure shorter string is in s. This can result in a little
    -- faster speed by spending more time spinning just the inner loop during the main processing.
    IF (@sLen > @tLen) BEGIN
        SELECT @v0 = @s, @i = @sLen -- temporarily use v0 for swap
        SELECT @s = @t, @sLen = @tLen
        SELECT @t = @v0, @tLen = @i
    END
    SELECT @max = ISNULL(@max, @tLen)
         , @lenDiff = @tLen - @sLen
    IF @lenDiff > @max RETURN NULL

    -- suffix common to both strings can be ignored
    WHILE(@sLen > 0 AND SUBSTRING(@s, @sLen, 1) = SUBSTRING(@t, @tLen, 1))
        SELECT @sLen = @sLen - 1, @tLen = @tLen - 1

    IF (@sLen = 0) RETURN CASE WHEN @tLen <= @max THEN @tLen ELSE NULL END

    -- prefix common to both strings can be ignored
    WHILE (@start < @sLen AND SUBSTRING(@s, @start, 1) = SUBSTRING(@t, @start, 1)) 
        SELECT @start = @start + 1
    IF (@start > 1) BEGIN
        SELECT @sLen = @sLen - (@start - 1)
             , @tLen = @tLen - (@start - 1)

        -- if all of shorter string matches prefix and/or suffix of longer string, then
        -- edit distance is just the delete of additional characters present in longer string
        IF (@sLen <= 0) RETURN CASE WHEN @tLen <= @max THEN @tLen ELSE NULL END

        SELECT @s = SUBSTRING(@s, @start, @sLen)
             , @t = SUBSTRING(@t, @start, @tLen)
    END

    -- initialize v0 array of distances
    SELECT @v0 = '', @j = 1
    WHILE (@j <= @tLen) BEGIN
        SELECT @v0 = @v0 + NCHAR(CASE WHEN @j > @max THEN @max ELSE @j END)
        SELECT @j = @j + 1
    END
    
    SELECT @jOffset = @max - @lenDiff
         , @i = 1
    WHILE (@i <= @sLen) BEGIN
        SELECT @distance = @i
             , @diag = @i - 1
             , @sChar = SUBSTRING(@s, @i, 1)
             -- no need to look beyond window of upper left diagonal (@i) + @max cells
             -- and the lower right diagonal (@i - @lenDiff) - @max cells
             , @j = CASE WHEN @i <= @jOffset THEN 1 ELSE @i - @jOffset END
             , @jEnd = CASE WHEN @i + @max >= @tLen THEN @tLen ELSE @i + @max END
        WHILE (@j <= @jEnd) BEGIN
            -- at this point, @distance holds the previous value (the cell above if we were using an m by n matrix)
            SELECT @left = UNICODE(SUBSTRING(@v0, @j, 1))
                 , @thisJ = @j
            SELECT @distance = 
                CASE WHEN (@sChar = SUBSTRING(@t, @j, 1)) THEN @diag                    --match, no change
                     ELSE 1 + CASE WHEN @diag < @left AND @diag < @distance THEN @diag    --substitution
                                   WHEN @left < @distance THEN @left                    -- insertion
                                   ELSE @distance                                        -- deletion
                                END    END
            SELECT @v0 = STUFF(@v0, @thisJ, 1, NCHAR(@distance))
                 , @diag = @left
                 -- signal early exit when the value on the result diagonal already exceeds @max
                 , @j = CASE WHEN (@distance > @max) AND (@thisJ = @i + @lenDiff) THEN @jEnd + 2 ELSE @thisJ + 1 END
        END
        SELECT @i = CASE WHEN @j > @jEnd + 1 THEN @sLen + 1 ELSE @i + 1 END
    END
    RETURN CASE WHEN @distance <= @max THEN @distance ELSE NULL END
END
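
Here are a few calls that exercise the function (a usage sketch; the expected results in the comments assume the server’s default case-insensitive collation):

SELECT dbo.Levenshtein(N'johnathan', N'jonithan', NULL)  -- 2; the shared 'jo' prefix and 'than' suffix are trimmed first
SELECT dbo.Levenshtein(N'johnathan', N'jonithan', 1)     -- NULL; the distance exceeds @max
SELECT dbo.Levenshtein(N'', N'abc', NULL)                -- 3; an empty first string is handled correctly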

Monday, December 29, 2014

Optimizing the Levenshtein Algorithm in C#

This is part 1 of a multi-part series of posts covering Levenshtein and Damerau-Levenshtein edit distance algorithms using C# and SQL. Part 2 covers Levenshtein in TSQL. Part 3 covers Damerau-Levenshtein in C#. Part 4 covers Damerau-Levenshtein in TSQL.

There is a well known edit distance metric called the Levenshtein distance. It measures the number of insertion, deletion, and substitution edits required to transform one string to another string. I won’t go into its details, because that’s covered well at Wikipedia and thousands of other sites on the internet. What I will go into is how to make it run faster, because it can be somewhat expensive in time and memory.

If we literally implement the algorithm in C#, using Math.Min, we have a basic implementation. I’ll use this as the reference for comparing our optimized versions. I’m not going to give actual benchmark times, because they depend entirely on the strings being compared. Big strings, little strings, big compared to little, lots of shared characters, very dissimilar strings: all of these change the relative speeds of the different implementations. I’ll just speak generally about the differences. You should benchmark typical strings from your own application to know how it will play out for you.
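
For reference, the literal version looks something like this (a sketch of the textbook algorithm rather than code from this post; the method name is just for illustration, and it assumes a using System directive for Math.Min):

public static int LevenshteinBaseline(string s, string t) {
    // full (m + 1) x (n + 1) cost matrix, exactly as in the standard definition
    var d = new int[s.Length + 1, t.Length + 1];
    for (int i = 0; i <= s.Length; i++) d[i, 0] = i; // cost of deleting i characters
    for (int j = 0; j <= t.Length; j++) d[0, j] = j; // cost of inserting j characters
    for (int i = 1; i <= s.Length; i++) {
        for (int j = 1; j <= t.Length; j++) {
            int cost = (s[i - 1] == t[j - 1]) ? 0 : 1;
            d[i, j] = Math.Min(Math.Min(d[i - 1, j] + 1,      // deletion
                                        d[i, j - 1] + 1),     // insertion
                               d[i - 1, j - 1] + cost);       // substitution
        }
    }
    return d[s.Length, t.Length];
}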

The first optimization we can do is to change our loops to be zero based (with matching changes to the code that uses the loop variables), and to replace the Math.Min calls with comparable straight C# comparisons. This can give us about a 20% improvement in speed. It doesn’t change the memory consumed, which is still primarily the matrix of edit costs, whose size is string1.Length * string2.Length.
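
Here’s a sketch of what that looks like, with zero based loops and the three-way minimum done with plain comparisons (again just illustrative; this is not yet the final version from this post):

public static int LevenshteinZeroBased(string s, string t) {
    var d = new int[s.Length + 1, t.Length + 1];
    for (int i = 0; i <= s.Length; i++) d[i, 0] = i;
    for (int j = 0; j <= t.Length; j++) d[0, j] = j;
    for (int i = 0; i < s.Length; i++) {
        char sChar = s[i];
        for (int j = 0; j < t.Length; j++) {
            // compute cell d[i + 1, j + 1] without calling Math.Min
            int cell = d[i, j] + ((sChar == t[j]) ? 0 : 1); // substitution (free if characters match)
            int insDel = d[i, j + 1] + 1;                   // deletion
            if (insDel < cell) cell = insDel;
            insDel = d[i + 1, j] + 1;                       // insertion
            if (insDel < cell) cell = insDel;
            d[i + 1, j + 1] = cell;
        }
    }
    return d[s.Length, t.Length];
}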

Now we start doing optimizations that really matter. The first one is pretty significant. It’s an improvement described by Sten Hjelmqvist in his article “Fast, memory efficient Levenshtein algorithm”. It uses the clever observation that an entire length1 * length2 matrix is not required to compute the Levenshtein edit distance; all that’s required is two arrays representing two columns of that matrix. When comparing large strings, this delivers a significant reduction in memory. Beyond that, and even with small strings, it also delivers an improvement in speed. Using the implementation given in his article, execution time is cut in half compared to the literal implementation we started with.
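
Here’s a simplified sketch of the two array idea (oriented as rows here; it is not the exact code from his article, and the names are mine):

public static int LevenshteinTwoRows(string s, string t) {
    var v0 = new int[t.Length + 1]; // distances for the row already computed
    var v1 = new int[t.Length + 1]; // distances for the row being computed
    for (int j = 0; j <= t.Length; j++) v0[j] = j;
    for (int i = 0; i < s.Length; i++) {
        v1[0] = i + 1;
        for (int j = 0; j < t.Length; j++) {
            int cell = v0[j] + ((s[i] == t[j]) ? 0 : 1); // substitution (free if characters match)
            int insDel = v0[j + 1] + 1;                  // deletion
            if (insDel < cell) cell = insDel;
            insDel = v1[j] + 1;                          // insertion
            if (insDel < cell) cell = insDel;
            v1[j + 1] = cell;
        }
        var temp = v0; v0 = v1; v1 = temp; // swap the two rows instead of copying
    }
    return v0[t.Length];
}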

We can improve this further. One minor thing we can do is ensure that the larger string is the one whose length drives the inner loop. This reduces the number of executions of the code that’s only in the outer loop. That decision improves speed, but at a slight cost in memory, since the column arrays are sized to the length of the string processed in the inner loop; this should only matter when the strings can be of greatly different sizes. If you’re more concerned with memory than speed, you can reverse this logic and ensure the smaller string is the one whose length drives the inner loop. Another advantage of checking, and swapping when necessary, so that we know which of the two strings is the smaller, is that we can then always use that string’s length as the minimum of the two without further comparisons.

If you examine the sequence in which elements of the two arrays are read and written, it becomes evident that the idea described by Hjelmqvist can be taken even further. Rather than two arrays, we can do the job with a single array and a couple of temporary int variables. This works by reading each element we need (holding a value from the column just computed) just before it is overwritten with a value for the column being computed, all in place in the single array. This cuts the memory used in half. It also allows us to reduce our array accesses to a single read and a single write in each inner loop iteration. The result is a further speed improvement, cutting the time in half again, in addition to the memory savings.

The final optimization, depending on the nature of the strings you throw at it, can be significant. A prefix or suffix shared by the two strings does not affect the final edit distance, so if we can identify and exclude shared prefix and/or suffix characters, the cost of doing that is much cheaper than the cost of running those characters through the normal algorithm. For example, if I want the edit distance between

johnathan and jonithan

we know that the cost is zero for the shared prefix jo and the shared suffix than. So we only need to find the edit distance between hna and ni. Even if we can only shave off a couple of characters, this optimization pays off. If even more characters are shared, as in this example, the improvement in time and memory can be very significant. With the original strings, we execute the inner loop 72 times (the length of string 1 times the length of string 2). By shaving off the 6 shared characters, we only execute the inner loop 6 times.

The code below is the version of Levenshtein edit distance after making the optimizations described above. What we end up with is a pretty fast implementation that requires less than 25% of the time of the basic implementation we started with when used on 9 character or 200 character test strings having no common prefix or suffix characters. When there are common prefix and suffix characters, the speed improvement is even more noticeable, with the "johnathan" example given earlier taking only 6% of the time required by the original algorithm. In addition to the speed improvement, we also use less memory, significantly less when the strings being compared are large.

/// <summary>
/// Computes and returns the Levenshtein edit distance between two strings, i.e. the
/// number of insertion, deletion, and substitution edits required to transform one
/// string to the other. This value will be >= 0, where 0 indicates identical strings.
/// Comparisons are case sensitive, so for example, "Fred" and "fred" will have a 
/// distance of 1.
/// http://blog.softwx.net/2014/12/optimizing-levenshtein-algorithm-in-c.html
/// </summary>
/// <remarks>See http://en.wikipedia.org/wiki/Levenshtein_distance
/// This is based on Sten Hjelmqvist's "Fast, memory efficient" algorithm, described
/// at http://www.codeproject.com/Articles/13525/Fast-memory-efficient-Levenshtein-algorithm, 
/// with some additional optimizations.
/// </remarks>
/// <param name="s">String being compared for distance.</param>
/// <param name="t">String being compared against other string.</param>
/// <returns>int edit distance, >= 0 representing the number of edits required to transform one string to the other.</returns>
public static int Levenshtein(this string s, string t) {
    if (String.IsNullOrEmpty(s)) return (t ?? "").Length;
    if (String.IsNullOrEmpty(t)) return s.Length;

    // if strings of different lengths, ensure shorter string is in s. This can result in a little
    // faster speed by spending more time spinning just the inner loop during the main processing.
    if (s.Length > t.Length) {
        var temp = s; s = t; t = temp; // swap s and t
    }
    int sLen = s.Length; // this is also the minimum length of the two strings
    int tLen = t.Length;

    // suffix common to both strings can be ignored
    while ((sLen > 0) && (s[sLen - 1] == t[tLen - 1])) { sLen--; tLen--; }

    int start = 0;
    if ((s[0] == t[0]) || (sLen == 0)) { // if there's a shared prefix, or all s matches t's suffix
        // prefix common to both strings can be ignored
        while ((start < sLen) && (s[start] == t[start])) start++;
        sLen -= start; // length of the part excluding common prefix and suffix
        tLen -= start;

        // if all of shorter string matches prefix and/or suffix of longer string, then
        // edit distance is just the delete of additional characters present in longer string
        if (sLen == 0) return tLen;

        t = t.Substring(start, tLen); // faster than t[start+j] in inner loop below
    }
    var v0 = new int[tLen];
    for (int j = 0; j < tLen; j++) v0[j] = j + 1;

    int current = 0;
    for (int i = 0; i < sLen; i++) {
        char sChar = s[start + i];
        int left = current = i;
        for (int j = 0; j < tLen; j++) {
            int above = current;
            current = left; // cost on diagonal (substitution)
            left = v0[j];    
            if (sChar != t[j]) {
                current++;              // substitution
                int insDel = above + 1; // deletion
                if (insDel < current) current = insDel;
                insDel = left + 1;      // insertion
                if (insDel < current) current = insDel;
            }
            v0[j] = current;
        }
    }
    return current;
}
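
A few usage examples of the extension method above (assuming it lives in a static class, as extension methods require; expected results are shown in the comments):

int d1 = "johnathan".Levenshtein("jonithan"); // 2; the shared "jo" prefix and "than" suffix are trimmed first
int d2 = "Fred".Levenshtein("fred");          // 1; the comparison is case sensitive
int d3 = "kitten".Levenshtein("kitten");      // 0; identical strings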