Tuesday, December 30, 2014

Optimizing the Levenshtein Algorithm in TSQL

The previous post covered the Levenshtein algorithm in C#. This post applies most of the optimizations described in that post to SQL, specifically Microsoft SQL Server's TSQL. The next post covers Damerau-Levenshtein in C#, followed by Damerau-Levenshtein in TSQL.

For most people, the fastest results will come from simply using the C# version as a CLR scalar function in SQL Server. Some DBAs may not want to enable CLR on their servers, though, and in those environments you have to write in SQL, which isn't especially well suited to porting functions that do lots of computation, use arrays, etc. It's not hopeless, though; we can get fairly good results. I should first point out one difference: the C# version is strictly case-sensitive, while this SQL version treats case comparisons as configured in SQL Server (case-insensitive by default).

Micro-optimizing SQL code is very different from micro-optimizing C#. The little rearrangements that speed up C# code may have the opposite effect in SQL, the available constructs are much more limited, and the behavior is sometimes non-intuitive. I don't optimize computational (non data related) SQL code as much as C# code, so my knowledge and bag of tricks are much sparser. In general, I try to avoid IF statements when just computing and assigning values; CASE WHEN is a faster alternative in most cases, albeit more verbose (there's a small example after the code below). I try to use IF only where it's needed for major logic branching.

Another thing that surprised me when I realized it is that the statement count often matters more than the operator count. In particular, I'm referring to assigning values to variables using SELECT. Grouping multiple such assignments in a single SELECT, separated by commas, can be noticeably faster than individual SELECT or SET statements. The only problem with this is when a variable you're assigning is referenced by other assignments in the same statement. It appears to evaluate in left to right order, but the MSDN documentation specifically states that the order is not guaranteed. That means the following code, although it seems to work, is not guaranteed to always work the same way. Notice that the loop counter is referenced in the other assignment preceding it. That, according to MSDN, is dangerous, because we don't know whether the assignment to @j will happen before or after the assignment to @total.

SET @j = 1, @total = 0
WHILE (@j <= @max) SELECT @total = @total + @j, @j = @j + 1

Instead, we should do this, which is slower because there are more statements.

SELECT @j = 1, @total = 0
WHILE (@j <= @max) BEGIN
    SELECT @total = @total + @j
    SELECT @j = @j + 1
END
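
To illustrate the earlier point about IF versus CASE WHEN, here's a small hypothetical clamping assignment written both ways (@val is made up for this example; the single-statement CASE form is the style used throughout the function below):

-- branching with IF: separate statements, one of which executes
IF (@j > @max) SET @val = @max
ELSE SET @val = @j

-- the same logic as a single CASE WHEN statement, generally faster
SELECT @val = CASE WHEN @j > @max THEN @max ELSE @j END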

All the major optimization ideas from the C# version were implemented in this SQL version: we use a single array, and we skip shared prefix and suffix characters. The body of the inner loop was implemented using logic closer to the standard definition of Levenshtein, because that was faster in SQL than the way I did it in the C# version. The critical bit to fiddle with is the body of the inner loop, and the speed boiled down to how many SELECT statements were used. That mattered more than a few extra additions or subtractions within the assignment expressions themselves.
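
TSQL has no array type, so the single "array" of distances lives in an nvarchar: cells are read with UNICODE(SUBSTRING(...)) and written with STUFF(..., NCHAR(...)). Here's a minimal standalone sketch of that technique:

-- using an nvarchar as a small int array (values must stay within the valid NCHAR range)
DECLARE @arr nvarchar(10) = NCHAR(3) + NCHAR(7) + NCHAR(1)   -- "array" holding 3, 7, 1
DECLARE @cell int = UNICODE(SUBSTRING(@arr, 2, 1))           -- read element 2 -> 7
SET @arr = STUFF(@arr, 2, 1, NCHAR(42))                      -- write 42 into element 2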

The result of this was OK, but I had been hoping for better. I had thrown in support for a third parameter specifying a max allowed distance, with a simple short circuit to exit early once the max had been exceeded. Then I realized that having a max distance allows a significant additional optimization. Because of the way the edit distances in the conceptual two dimensional m x n distance matrix cascade from top left to bottom right, when you have a max allowed distance you only need to calculate the cells in a band surrounding the matrix diagonal: the cell on the diagonal, plus max distance cells above it and max distance cells below it. For example, if the max distance is 2, you only need to evaluate a 5 cell window for each column. As strings get larger, this makes a big difference. With a max distance given, the worst case time complexity drops from being proportional to len1 * len2 to being proportional to min(len1, len2), i.e. it's linear in the shorter string's length for a fixed max.

This version is faster than the other two SQL versions I've found (one attributed originally to Gama, the other to Fribble) for all except the tiniest of inputs, where the difference is negligible. In cases where the two strings share prefix or suffix characters, this version is much faster, and the same is true when the strings are large and a small max edit distance is given. It also avoids some problems I've seen in the other versions out there: some have a bug that returns the wrong result when the first string is empty, and most or all of them act as if trailing spaces didn't exist (since that's SQL's default behavior), or give odd results in that case. This version gives correct results with empty strings and treats spaces like any other character. (This was updated 1/20/2015.)
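
Concretely, the band for row @i works out to the bounds below; this is what the @j and @jEnd window computation in the function implements:

-- first column evaluated in row @i: max(1, @i - (@max - @lenDiff))
-- last column evaluated in row @i:  min(@tLen, @i + @max)
-- e.g. for equal-length strings with @max = 2, row 10 evaluates only columns 8 through 12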

-- =============================================
-- Computes and returns the Levenshtein edit distance between two strings, i.e. the
-- number of insertion, deletion, and substitution edits required to transform one
-- string to the other, or NULL if @max is exceeded. Comparisons use the case-
-- sensitivity configured in SQL Server (case-insensitive by default).
-- http://blog.softwx.net/2014/12/optimizing-levenshtein-algorithm-in-tsql.html
-- 
-- See http://en.wikipedia.org/wiki/Levenshtein_distance
-- This is based on Sten Hjelmqvist's "Fast, memory efficient" algorithm, described
-- at http://www.codeproject.com/Articles/13525/Fast-memory-efficient-Levenshtein-algorithm, 
-- with some additional optimizations.
-- @s - String being compared for distance.
-- @t - String being compared against other string.
-- @max - Maximum distance allowed, or NULL if no maximum is desired. Returns NULL if distance will exceed @max.
-- returns int edit distance, >= 0 representing the number of edits required to transform one string to the other.
-- =============================================
CREATE FUNCTION [dbo].[Levenshtein](
    @s nvarchar(4000)
  , @t nvarchar(4000)
  , @max int
)
RETURNS int
WITH SCHEMABINDING
AS
BEGIN
    DECLARE @distance int = 0 -- return variable
          , @v0 nvarchar(4000)  -- running scratchpad for storing computed distances
          , @start int = 1      -- index (1 based) of first non-matching character between the two strings
          , @i int, @j int      -- loop counters: i for s string and j for t string
          , @diag int          -- distance in cell diagonally above and left if we were using an m by n matrix
          , @left int          -- distance in cell to the left if we were using an m by n matrix
          , @sChar nchar      -- character at index i from s string
          , @thisJ int          -- temporary storage of @j to allow SELECT combining
          , @jOffset int      -- offset used to calculate starting value for j loop
          , @jEnd int          -- ending value for j loop (stopping point for processing a column)
          -- get input string lengths including any trailing spaces (which SQL Server's len() would ignore);
          -- datalength() counts bytes, so divide by the byte length of a single character
          -- (appending '.' guards against the empty string case)
          , @sLen int = datalength(@s) / datalength(left(left(@s, 1) + '.', 1))    -- length of @s (the shorter string after the swap below)
          , @tLen int = datalength(@t) / datalength(left(left(@t, 1) + '.', 1))    -- length of @t (the longer string after the swap below)
          , @lenDiff int      -- difference in length between the two strings
    -- if strings of different lengths, ensure shorter string is in s. This can result in a little
    -- faster speed by spending more time spinning just the inner loop during the main processing.
    IF (@sLen > @tLen) BEGIN
        SELECT @v0 = @s, @i = @sLen -- temporarily use v0 for swap
        SELECT @s = @t, @sLen = @tLen
        SELECT @t = @v0, @tLen = @i
    END
    SELECT @max = ISNULL(@max, @tLen)
         , @lenDiff = @tLen - @sLen
    IF @lenDiff > @max RETURN NULL

    -- suffix common to both strings can be ignored
    WHILE (@sLen > 0 AND SUBSTRING(@s, @sLen, 1) = SUBSTRING(@t, @tLen, 1))
        SELECT @sLen = @sLen - 1, @tLen = @tLen - 1

    IF (@sLen = 0) RETURN @tLen

    -- prefix common to both strings can be ignored
    WHILE (@start < @sLen AND SUBSTRING(@s, @start, 1) = SUBSTRING(@t, @start, 1)) 
        SELECT @start = @start + 1
    IF (@start > 1) BEGIN
        SELECT @sLen = @sLen - (@start - 1)
             , @tLen = @tLen - (@start - 1)

        -- if all of shorter string matches prefix and/or suffix of longer string, then
        -- edit distance is just the delete of additional characters present in longer string
        IF (@sLen <= 0) RETURN @tLen

        SELECT @s = SUBSTRING(@s, @start, @sLen)
             , @t = SUBSTRING(@t, @start, @tLen)
    END

    -- initialize v0 array of distances
    SELECT @v0 = '', @j = 1
    WHILE (@j <= @tLen) BEGIN
        SELECT @v0 = @v0 + NCHAR(CASE WHEN @j > @max THEN @max ELSE @j END)
        SELECT @j = @j + 1
    END
    
    SELECT @jOffset = @max - @lenDiff
         , @i = 1
    WHILE (@i <= @sLen) BEGIN
        SELECT @distance = @i
             , @diag = @i - 1
             , @sChar = SUBSTRING(@s, @i, 1)
             -- no need to look beyond the window bounded by the upper left diagonal (@i) + @max cells
             -- and the lower right diagonal (@i + @lenDiff) - @max cells
             , @j = CASE WHEN @i <= @jOffset THEN 1 ELSE @i - @jOffset END
             , @jEnd = CASE WHEN @i + @max >= @tLen THEN @tLen ELSE @i + @max END
        WHILE (@j <= @jEnd) BEGIN
            -- at this point, @distance holds the previous value (the cell above if we were using an m by n matrix)
            SELECT @left = UNICODE(SUBSTRING(@v0, @j, 1))
                 , @thisJ = @j
            SELECT @distance =
                CASE WHEN (@sChar = SUBSTRING(@t, @j, 1)) THEN @diag      -- match, no edit needed
                     ELSE 1 + CASE WHEN @diag < @left AND @diag < @distance
                                   THEN @diag                             -- substitution
                                   WHEN @left < @distance THEN @left      -- insertion
                                   ELSE @distance                         -- deletion
                              END
                END
            -- if the cell on the diagonal that determines the final result already exceeds @max,
            -- push @j past @jEnd + 1 to signal an early exit from both loops
            SELECT @v0 = STUFF(@v0, @thisJ, 1, NCHAR(@distance))
                 , @diag = @left
                 , @j = CASE WHEN (@distance > @max) AND (@thisJ = @i + @lenDiff) THEN @jEnd + 2 ELSE @thisJ + 1 END
        END
        -- @j > @jEnd + 1 means the inner loop signaled an early exit, so exit the outer loop too
        SELECT @i = CASE WHEN @j > @jEnd + 1 THEN @sLen + 1 ELSE @i + 1 END
    END
    RETURN CASE WHEN @distance <= @max THEN @distance ELSE NULL END
END
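
A few example calls (kitten/sitting is the classic textbook pair; expected results are shown in the comments):

SELECT dbo.Levenshtein(N'kitten', N'sitting', NULL)   -- 3 (two substitutions plus one insertion)
SELECT dbo.Levenshtein(N'flaw', N'lawn', 1)           -- NULL (actual distance is 2, which exceeds @max)
SELECT dbo.Levenshtein(N'', N'abc', NULL)             -- 3 (empty strings are handled correctly)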

5 comments:

  1. I agree entirely about the importance of 'max difference'. Most of the time, you are wanting to find similar strings, and if you aren't, you can specify a high 'max'. I like the solution you use, and I'd be intrigued to test it against a more 'relational' one. (Using a string as an array isn't entirely Kosher Relational!) What method of timing are you using? My own experience is that the WHILE loop is the slowest part of the algorithm.

    Replies
    1. Phil - I played around with a set based solution, but it was quite a bit slower than this approach. It was disappointing, because I was expecting SQL Server to give me a big performance boost for the effort, but sadly, it was the opposite. When dealing with normal data queries and functions, getting rid of procedural constructs and going straight set based is beneficial. Maybe it was the way I did it, but it was sufficiently disappointing that I abandoned that direction.
      If you do a comparison like you mentioned, post a link. I'd be interested to see that too.

    2. I tried out a relational method. You're right. The 'Arnold Fribble' version is the quickest at around 4-5 ms in my tests, with yours clocking in at 10 ms and the relational version lagging at 160 ms. I suspect that yours will do better with real data. (I use all the classic examples in my tests.) The relational cut was a first working version and there is plenty of room for optimisation, but I doubt that I'll get it to perform better than the string-array version.

    3. We may be using different versions of Fribble's. In the tests I do, this was faster than Fribble even without using the max distance, and without a shared prefix or suffix to take advantage of, except when the strings were only a few characters long. For example, to calculate the distance of "johnson" vs. "hansens" 5000 times, Fribble's took 910 ms, and this took 650 ms. For the worst case, a distance between strings of 3999 characters differing only in their first and last character (with no max distance), Fribble's took 50.5 seconds, and this took 45.5 seconds. With a max distance of 10, the time for this one goes down to 250 milliseconds. When there are shared prefix or suffix characters, or a max distance is specified, then this one is much faster in the tests I did.

    4. Another test I do is more of a real world test: a query against a table of 235,800 person name word tokens, looking for name words with an edit distance <= 1 from 'steve'. There are 19 words that meet that criterion. This algorithm takes 20.8 seconds without using max distance, and the Fribble version from the stackoverflow question takes 28.2 seconds. Using a max distance of 1, this algorithm takes 3.9 seconds versus 13.8 seconds for Fribble.
