Saturday, December 5, 2015

Benchmarking C# Code Times 2.0

A couple years ago, I blogged about a little benchmarking tool I threw together. I recently made some changes to make it easier to use, and to improve the accuracy when timing small, fast bits of code. I also set up a GitHub project and NuGet package for it. The goal for this project is to provide a small, simple, and fast tool for timing bits of code that's accurate enough for micro-benchmarking operations with very short run times, as well as for longer-running code.
Benchmarking is done through the Bench class. It has two methods for timing an Action delegate. The Bench class pre-runs the delegate code so it's not run cold, then does some preliminary timing runs to determine how many iterations to use in the final timing test. It runs the timing test on the target, then times an empty Action delegate for the same number of iterations so it can compute the run time of the target, excluding the overhead of the timing loop.
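The overhead subtraction is conceptually simple. Here's a minimal sketch of that technique (purely an illustration of the approach, not the actual Bench internals; the class and method names are made up):

    using System;
    using System.Diagnostics;

    static class TimingSketch {
        // Illustration only: time the target for N iterations, time an empty delegate
        // for the same N iterations, and subtract to remove the loop/delegate overhead.
        public static double NanosecsPerOp(Action target, int iterations) {
            Action empty = () => { };

            var sw = Stopwatch.StartNew();
            for (int i = 0; i < iterations; i++) target();
            long targetTicks = sw.ElapsedTicks;

            sw.Restart();
            for (int i = 0; i < iterations; i++) empty();
            long overheadTicks = sw.ElapsedTicks;

            double netTicks = Math.Max(0, targetTicks - overheadTicks);
            return netTicks * 1e9 / (Stopwatch.Frequency * (double)iterations);
        }
    }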
When creating the Bench class you can specify the minimum number of iterations, the minimum duration of the test, and a flag indicating whether you want the timing results written to the Console. If you don't specify them, defaults of 3 iterations, 100 milliseconds, and true for Console output are used.
    var bench1 = new Bench();
    var bench2 = new Bench(20, 1000, false);
The biggest change is that users of the Bench class no longer have to include the looping code in their Action delegate. The repetitive iterations are now handled entirely within the Bench class. If you're timing a bit of code that runs in under 50 nanoseconds per operation, you can increase the accuracy of the timing by repeating the code 2 to 10 times within the delegate so its run time is not obscured by timing overhead.
There are two Time methods for doing the timing. They both take three parameters - a string that names the test, an Action delegate, and an optional parameter that specifies the number of times the operation being tested is repeated in the Action delegate (the default value of this is 1). The difference between the two methods is one takes an Action delegate with no parameters, and the other takes an Action delegate with a single TimeControl parameter that allows some additional control of the timing. The following code shows examples of the simpler Time signature, which is the one most commonly used.
    var bench = new Bench();

    // time Sleep(10)
    bench.Time("Sleep", () => { System.Threading.Thread.Sleep(10); });
            
    // time string concatenation
    string s1 = DateTime.UtcNow.ToString();
    string s2 = DateTime.UtcNow.ToString();
    string s;
    bench.Time("String Concat", () => {
        s = s1 + s2;
    });

    // time int division
    int i0 = 0;
    bench.Time("int division by 5", () => {
        i0 /= 5; i0 /= 5; i0 /= 5; i0 /= 5; i0 /= 5;
        i0 /= 5; i0 /= 5; i0 /= 5; i0 /= 5; i0 /= 5;
    }, 10);
The code above outputs the following to the Console:
Name= Sleep
Millisecs/Op= 10.291, Ops= 10, ElapsedMillisecs= 102.91
Name= String Concat
Nanosecs/Op= 59.574, Ops= 1,257,008, ElapsedMillisecs= 74.89
Name= int division by 5
Nanosecs/Op= 5.777, Ops= 16,337,850, ElapsedMillisecs= 94.38

The second Time signature, which takes an Action delegate expecting a TimeControl parameter, allows timing of more complex operations that contain setup or other code whose time you want excluded from the timing results. The following code shows two examples of that, one using a lambda, and the other a method.
    static void Main(string[] args) {
        var bench = new Bench();

        // time Dictionary Remove
        const int DictCount = 100;

        bench.Time("Dictionary.Remove(lambda)", (tc) => {
            tc.Pause();
            var d = new System.Collections.Generic.Dictionary<int, string>(DictCount);
            for (int i = 0; i < DictCount; i++) d.Add(i, i.ToString());
            tc.Resume();
            for (int i = 0; i < DictCount; i++) d.Remove(i);
        }, DictCount);

        bench.Time("Dictionary.Remove(method)", DictionaryRemove, DictCount);
    }
    public static void DictionaryRemove(Bench.TimeControl tc) {
        const int DictCount = 100;
        tc.Pause();
        var d = new System.Collections.Generic.Dictionary<int, string>(DictCount);
        for (int i = 0; i < DictCount; i++) d.Add(i, i.ToString());
        tc.Resume();
        for (int i = 0; i < DictCount; i++) d.Remove(i);
    }
The code above outputs the following to the Console:
Name= Dictionary.Remove(lambda)
Nanosecs/Op= 25.234, Ops= 3,948,900, ElapsedMillisecs= 99.65
Name= Dictionary.Remove(method)
Nanosecs/Op= 25.269, Ops= 3,856,300, ElapsedMillisecs= 97.45

The Time method returns a TimeResult object that contains the results of the timing, along with properties for examining those results. If you want to do more than just view the results that Time writes to the Console, you can make use of the TimeResult object.
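For example, you can capture the return value and use it programmatically. The property name in the commented line below is a hypothetical placeholder, not necessarily the real TimeResult API, so check the actual members in the source:

    // capture the timing results instead of (or in addition to) the Console output
    var result = bench.Time("String Concat", () => { s = s1 + s2; });
    // e.g. record it somewhere else (property name here is hypothetical):
    // myLog.Write($"String Concat: {result.NanosecsPerOp} ns/op");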

I should mention the normal caveats for casual benchmarking. This utility uses Stopwatch, which measures clock time, not CPU time. This is an important point, because your results will vary somewhat from run to run because of the other things going on in Windows and in the .NET runtime. If you need to do serious CPU time profiling, it's better to go to the commercial profiling tools. I tend to use both. I use the profiler to identify where I need to optimize, and use casual benchmarking while optimizing a chunk of code, to quickly see the effect of my changes. You should also keep in mind the possible effects of optimizations made by the compiler and just-in-time compiler, which may optimize away some of the code you are trying to benchmark. You should also run your timing with a release build, optimizations enabled, and without the debugger attached. If you happen to run with the debugger attached, or with optimizations disabled (an indication you're running a debug build), the Time method will detect that and include alerts in the Console output to point out that fact.
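One simple way to guard against the JIT optimizing away the work you're measuring is to make sure the delegate's result is actually consumed. A small sketch of that idea using the Bench API shown above:

    // accumulate results into a variable that is read afterward, so the JIT can't
    // treat the work inside the delegate as dead code
    long sink = 0;
    int x = 12345;
    bench.Time("int division by 5", () => {
        sink += x / 5; sink += x / 5; sink += x / 5; sink += x / 5; sink += x / 5;
    }, 5);
    Console.WriteLine(sink); // using the value keeps the computation live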

The Bench class also includes methods to tell you how much memory is consumed by objects or structs you pass to it. That's described in this earlier blog post. Bench is available as a NuGet package, and the source code is hosted at GitHub.

Monday, January 19, 2015

Optimizing the Damerau-Levenshtein Algorithm in TSQL

In this final post of a 4-part series, I have a TSQL implementation of the Damerau-Levenshtein algorithm, and describe some of the testing to ensure the optimizations didn't introduce errors in the results. Previous posts covered Levenshtein in C#, Levenshtein in TSQL, and Damerau-Levenshtein in C#.

This TSQL implementation takes the Levenshtein implementation from the previous post, and adds the additional logic needed to support Damerau's transposition handling. It is implemented in the same way as the C# version in the previous post; you can check out those earlier posts for more details. Using the C# implementation in a CLR function will give the fastest results in SQL Server. But if, for some reason, you can't enable CLR user functions on your server, this TSQL implementation is a viable alternative. The version here is faster than the other versions I've seen on the internet.

While working on the code in these four posts, I did a fair amount of testing to help ensure that the optimizations did not mess up the results in subtle ways. Many of the optimizations depart from the standard algorithms with tricks and shortcuts to reduce the work performed. There’s always the chance that changes like that will muck up the results. To test, I made a basic implementation of Levenshtein and Damerau-Levenshtein. I also grabbed second implementations from the internet and compared results for all pairings of every permutation of 1 to 7 character words using a small character set (about 11,000,000 word pairs). With those “truth” functions verified, I could use them to do the same verification of the algorithms I was working on.
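A verification harness along those lines is easy to put together. The sketch below is just an illustration of the approach (not the code I actually used); it compares any two implementations over every pair of generated words and reports disagreements:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    static class EditDistanceVerifier {
        // generate all words of length 1..maxLen over a small character set
        static List<string> Words(char[] chars, int maxLen) {
            var all = new List<string>();
            var current = new List<string> { "" };
            for (int len = 1; len <= maxLen; len++) {
                current = current.SelectMany(w => chars.Select(c => w + c)).ToList();
                all.AddRange(current);
            }
            return all;
        }

        // compare an optimized implementation against a trusted "truth" implementation
        public static void Verify(Func<string, string, int> optimized,
                                  Func<string, string, int> truth,
                                  char[] chars, int maxLen) {
            var words = Words(chars, maxLen);
            foreach (var a in words)
                foreach (var b in words)
                    if (optimized(a, b) != truth(a, b))
                        Console.WriteLine($"Mismatch: \"{a}\" vs \"{b}\"");
        }
    }

For example, Verify((a, b) => a.DamLev(b), BasicDamLev, new[] { 'a', 'b', 'c' }, 5) would exercise every pairing, where BasicDamLev stands in for whatever straightforward implementation you trust.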

-- =============================================
-- Computes and returns the Damerau-Levenshtein edit distance between two strings,
-- i.e. the number of insertion, deletion, substitution, and transposition edits
-- required to transform one string to the other. This value will be >= 0, where
-- 0 indicates identical strings. Comparisons use the case-sensitivity configured
-- in SQL Server (case-insensitive by default). This algorithm is basically the
-- Levenshtein algorithm with a modification that considers transposition of two
-- adjacent characters as a single edit.
-- http://blog.softwx.net/2015/01/optimizing-damerau-levenshtein_19.html
-- See http://en.wikipedia.org/wiki/Damerau%E2%80%93Levenshtein_distance
-- Note that this uses Sten Hjelmqvist's "Fast, memory efficient" algorithm, described
-- at http://www.codeproject.com/Articles/13525/Fast-memory-efficient-Levenshtein-algorithm.
-- This version differs by including some optimizations, and extending it to the Damerau-
-- Levenshtein algorithm.
-- Note that this is the simpler and faster optimal string alignment (aka restricted edit) distance
-- that differs slightly from the full Damerau-Levenshtein algorithm by imposing the restriction
-- that no substring is edited more than once. So for example, "CA" to "ABC" has an edit distance
-- of 2 by a complete application of Damerau-Levenshtein, but a distance of 3 by this method that
-- uses the optimal string alignment algorithm. See wikipedia article for more detail on this
-- distinction.
--
-- @s - String being compared for distance.
-- @t - String being compared against other string.
-- @max - Maximum distance allowed, or NULL if no maximum is desired. Returns NULL if distance will exceed @max.
-- returns int edit distance, >= 0 representing the number of edits required to transform one string to the other.
-- =============================================
CREATE FUNCTION [dbo].[DamLev](
    @s nvarchar(4000)
  , @t nvarchar(4000)
  , @max int
) RETURNS int
WITH SCHEMABINDING
AS BEGIN
    DECLARE @distance int = 0     -- return variable
          , @v0 nvarchar(4000)    -- running scratchpad for storing computed distances
          , @v2 nvarchar(4000)    -- running scratchpad for storing previous column's computed distances
          , @start int = 1        -- index (1 based) of first non-matching character between the two string
          , @i int, @j int        -- loop counters: i for s string and j for t string
          , @diag int             -- distance in cell diagonally above and left if we were using an m by n matrix
          , @left int             -- distance in cell to the left if we were using an m by n matrix
          , @nextTransCost int    -- transposition base cost for next iteration
          , @thisTransCost int    -- transposition base cost (2 distant along diagonal) for current iteration
          , @sChar nchar          -- character at index i from s string
          , @tChar nchar          -- character at index j from t string
          , @thisJ int            -- temporary storage of @j to allow SELECT combining
          , @jOffset int          -- offset used to calculate starting value for j loop
          , @jEnd int             -- ending value for j loop (stopping point for processing a column)
          -- get input string lengths including any trailing spaces (which SQL Server would otherwise ignore)
          , @sLen int = datalength(@s) / datalength(left(left(@s, 1) + '.', 1))    -- length of smaller string
          , @tLen int = datalength(@t) / datalength(left(left(@t, 1) + '.', 1))    -- length of larger string
          , @lenDiff int          -- difference in length between the two strings

    -- if strings of different lengths, ensure shorter string is in s. This can result in a little
    -- faster speed by spending more time spinning just the inner loop during the main processing.
    IF (@sLen > @tLen) BEGIN
        SELECT @v0 = @s, @i = @sLen -- temporarily use v0 for swap
        SELECT @s = @t, @sLen = @tLen
        SELECT @t = @v0, @tLen = @i
    END
    SELECT @max = ISNULL(@max, @tLen)
         , @lenDiff = @tLen - @sLen
    IF @lenDiff > @max RETURN NULL

    -- suffix common to both strings can be ignored
    WHILE(@sLen > 0 AND SUBSTRING(@s, @sLen, 1) = SUBSTRING(@t, @tLen, 1))
        SELECT @sLen = @sLen - 1, @tLen = @tLen - 1

    IF (@sLen = 0) RETURN CASE WHEN @tLen <= @max THEN @tLen ELSE NULL END

    -- prefix common to both strings can be ignored
    WHILE (@start < @sLen AND SUBSTRING(@s, @start, 1) = SUBSTRING(@t, @start, 1))
        SELECT @start = @start + 1

    IF (@start > 1) BEGIN
        SELECT @sLen = @sLen - (@start - 1)
             , @tLen = @tLen - (@start - 1)

        -- if all of shorter string matches prefix and/or suffix of longer string, then
        -- edit distance is just the delete of additional characters present in longer string
        IF (@sLen <= 0) RETURN CASE WHEN @tLen <= @max THEN @tLen ELSE NULL END

        SELECT @s = SUBSTRING(@s, @start, @sLen)
             , @t = SUBSTRING(@t, @start, @tLen)
    END

    -- initialize v0 array of distances
    SELECT @v0 = '', @j = 1
    WHILE (@j <= @tLen) BEGIN
        SELECT @v0 = @v0 + NCHAR(CASE WHEN @j > @max THEN @max ELSE @j END)
        SELECT @j = @j + 1
    END

    SELECT @v2 = @v0 -- copy...doesn't matter what's in v2, just need to initialize its size
         , @jOffset = @max - @lenDiff
         , @i = 1
    WHILE (@i <= @sLen) BEGIN
        SELECT @distance = @i
             , @diag = @i - 1
             , @sChar = SUBSTRING(@s, @i, 1)
             -- no need to look beyond window of upper left diagonal (@i) + @max cells
             -- and the lower right diagonal (@i - @lenDiff) - @max cells
             , @j = CASE WHEN @i <= @jOffset THEN 1 ELSE @i - @jOffset END
             , @jEnd = CASE WHEN @i + @max >= @tLen THEN @tLen ELSE @i + @max END
             , @thisTransCost = 0
        WHILE (@j <= @jEnd) BEGIN
            -- at this point, @distance holds the previous value (the cell above if we were using an m by n matrix)
            SELECT @nextTransCost = UNICODE(SUBSTRING(@v2, @j, 1))
                 , @v2 = STUFF(@v2, @j, 1, NCHAR(@diag))
                 , @tChar = SUBSTRING(@t, @j, 1)
                 , @left = UNICODE(SUBSTRING(@v0, @j, 1))
                 , @thisJ = @j
            SELECT @distance = CASE WHEN @diag < @left AND @diag < @distance THEN @diag -- substitution
                                    WHEN @left < @distance THEN @left                  -- insertion
                                    ELSE @distance                                     -- deletion
                               END
            SELECT @distance = CASE WHEN (@sChar = @tChar) THEN @diag -- no change (characters match)
                                    WHEN @i <> 1 AND @j <> 1
                                         AND @tChar = SUBSTRING(@s, @i - 1, 1)
                                         AND @thisTransCost < @distance
                                         AND @sChar = SUBSTRING(@t, @j - 1, 1) THEN 1 + @thisTransCost -- transposition
                                    ELSE 1 + @distance
                               END
            SELECT @v0 = STUFF(@v0, @thisJ, 1, NCHAR(@distance))
                 , @diag = @left
                 , @thisTransCost = @nextTransCost
                 , @j = case when (@distance > @max) AND (@thisJ = @i + @lenDiff) then @jEnd + 2 else @thisJ + 1 end
        END
        SELECT @i = CASE WHEN @j > @jEnd + 1 THEN @sLen + 1 ELSE @i + 1 END
    END
    RETURN CASE WHEN @distance <= @max THEN @distance ELSE NULL END
END

Thursday, January 15, 2015

Optimizing the Damerau-Levenshtein Algorithm in C#

The previous two posts covered the Levenshtein algorithm in C#, and the TSQL implementation. In this post I'll cover the Damerau-Levenshtein algorithm in C#, with the next post giving the TSQL version. The idea for this distance measure is very similar to Levenshtein. If you remember, Levenshtein measures the number of substitution, insert, and delete edits required to convert one string to another. Damerau added the additional edit of transposing two adjacent characters. As an example, the Levenshtein distance between “paul” and “pual” is 2. With Damerau-Levenshtein, the distance is only 1. For applications matching strings like words or people's names, my experience is that Damerau-Levenshtein gives better results.
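Using the extension method shown later in this post, that example looks like this:

    int d = "paul".DamLev("pual"); // 1 with Damerau-Levenshtein (one transposition edit)
    // plain Levenshtein counts the same pair as 2 edits (two substitutions)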
Before getting into the algorithm, I should mention this caveat. As described in the Wikipedia article, there are two basic implementations. One is a literal implementation producing a true distance metric, which is fairly complicated. The other, simpler implementation is the optimal string alignment, also called the restricted edit distance, which is much easier to implement. The downside is that the optimal string alignment version is not a true metric. This post implements the simpler restricted edit distance. For most purposes, it works fine. The main difference is that it only allows a substring to be edited once. Using a literal implementation of Damerau-Levenshtein, the distance between “CA” and “ABC” is 2 (CA->AC->ABC). With the restricted edit distance version, the distance is 3 (CA->A->AB->ABC), because in this case it can't do the transpose and then edit that substring a second time by inserting the B between the two characters.
The implementation presented here is very similar to the earlier Levenshtein implementation. It contains all the same optimizations that one had, and adds the additional logic to handle transpositions. If you recall, the Levenshtein implementation had a space optimization that reduced the need for a two dimensional m * n matrix to just two one dimensional arrays, and then improved on that further to need just a single array. With Damerau, you can go from the m * n matrix to three arrays without too much difficulty. You need three arrays because you must look one level further back to detect and compute the transposition portion of the algorithm. It's a little trickier to take it down to just two arrays, but it can be accomplished similarly to how we went from two to one array with Levenshtein. It's done by judicious use of temporary variables, which has the added benefit of reducing array access and reducing execution time. It can be tricky to follow, so hopefully this diagram helps show what goes on.
[Diagram: DamLev — contents of the v0 and v2 arrays while computing column i=2, row j=4]
We’re able to reduce memory use by modifying the arrays in place as we iterate down the column. We read ahead from the values stored when we processed the previous column, and store the values for the current column as we proceed. In the diagram, we are in the middle of computing the distance, and we’re on column 2 (the outer i loop), and row 4 (the inner j loop), with the cell about to be calculated colored in black. The yellow cells represent the contents of the v0 array. We’ve calculated the new values in the cells above, but have yet to read and use the cells from the previous column below the current point. The tan column marks the v2 array that holds the values we need for computing the transposition cost. To avoid some extra math operations, the contents of the v2 array are actually offset by 1, so v2[4] contains 3, although in the diagram above, that value is shown in row 3.
I've been using Damerau-Levenshtein for person name matching. I don't use it as a primary technique, because it's not well suited for fast searches in a database. I use it mainly as a second opinion verifier for other search techniques. I use both phonetic lookups and bigram searches. But I don't use the scores of those techniques alone; I adjust them by also computing the Damerau-Levenshtein distance. This is not too expensive, because phonetic and bigram searches can take advantage of indexed database access, and then only the smaller returned set needs to have the more expensive edit distance computed. Sometimes strings may be close as measured by bigram similarity, or have the same phonetic code, but still be pretty dissimilar. By applying Damerau-Levenshtein, these marginal matches will get their scores knocked down a bit.
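As an illustration of that second-opinion pattern, here's a hedged sketch (the class, method, and inputs are hypothetical; only the DamLev call matches the code below):

    using System.Collections.Generic;

    static class NameMatching {
        // Hypothetical re-scoring step: candidates come from an indexed phonetic or
        // bigram search, so the more expensive edit distance only runs on that small set.
        public static IEnumerable<string> CloseMatches(string query,
            IEnumerable<string> candidates, int maxDistance) {
            foreach (var candidate in candidates) {
                // the DamLev overload shown below returns -1 when the distance exceeds maxDistance
                if (query.DamLev(candidate, maxDistance) >= 0)
                    yield return candidate; // keep only reasonably close matches
            }
        }
    }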
There are two implementations of Damerau-Levenshtein below. The first method takes two strings as parameters, and returns the edit distance between them. The second method is very similar, but has an important difference: it takes an additional parameter that lets you specify a maximum distance. Specifying a maximum distance allows two important optimizations.

The obvious one is that it allows a short circuit exit. As soon as it's determined that the two strings have a distance greater than the maximum allowed distance, the method can return immediately, without spending further time determining the complete edit distance. We don't need to look at all the intermediate values to test for the short circuit; we only need to examine the final diagonal (a single value per column). If the distance is larger than maxDistance, a value of -1 is returned to indicate that the distance exceeds maxDistance, but the exact distance is not known.

The second optimization is less obvious, but equally important, and in some cases more so. Given a maxDistance, we don't need to evaluate all cells of each column. We only need to evaluate the cells within a window around the two diagonals (the one that starts in the upper left corner, and the one that ends in the lower right corner). The size of the window is maxDistance cells on either side of the diagonals, reduced by the difference in lengths of the two strings. When comparing two large strings with a small maxDistance, this greatly reduces the number of cells that must be visited and computed. It essentially changes the time complexity from the product of the two string lengths to just the length of the shorter string, i.e. the time complexity becomes linear. So even if the early exit short circuit isn't triggered, there is still a great speed benefit.

These maxDistance optimizations come with a small amount of overhead. If you want the full edit distance and don't care about giving a max distance, it's better to use the method below that doesn't have the maxDistance parameter; it will be faster for that use. But if you do want to give a maxDistance, the method that takes that parameter will give you great results. For large strings, it can be many times faster.
/// <summary>
/// Computes and returns the Damerau-Levenshtein edit distance between two strings, 
/// i.e. the number of insertion, deletion, substitution, and transposition edits
/// required to transform one string to the other. This value will be >= 0, where 0
/// indicates identical strings. Comparisons are case sensitive, so for example, 
/// "Fred" and "fred" will have a distance of 1. This algorithm is basically the
/// Levenshtein algorithm with a modification that considers transposition of two
/// adjacent characters as a single edit.
/// http://blog.softwx.net/2015/01/optimizing-damerau-levenshtein_15.html
/// </summary>
/// <remarks>See http://en.wikipedia.org/wiki/Damerau%E2%80%93Levenshtein_distance
/// Note that this is based on Sten Hjelmqvist's "Fast, memory efficient" algorithm, described
/// at http://www.codeproject.com/Articles/13525/Fast-memory-efficient-Levenshtein-algorithm.
/// This version differs by including some optimizations, and extending it to the Damerau-
/// Levenshtein algorithm.
/// Note that this is the simpler and faster optimal string alignment (aka restricted edit) distance
/// that differs slightly from the classic Damerau-Levenshtein algorithm by imposing the restriction
/// that no substring is edited more than once. So for example, "CA" to "ABC" has an edit distance
/// of 2 by a complete application of Damerau-Levenshtein, but a distance of 3 by this method that
/// uses the optimal string alignment algorithm. See wikipedia article for more detail on this
/// distinction.
/// </remarks>
/// <param name="s">String being compared for distance.</param>
/// <param name="t">String being compared against other string.</param>
/// <returns>int edit distance, >= 0 representing the number of edits required
/// to transform one string to the other.</returns>
public static int DamLev(this string s, string t) {
    if (String.IsNullOrEmpty(s)) return (t ?? "").Length;
    if (String.IsNullOrEmpty(t)) return s.Length;

    // if strings of different lengths, ensure shorter string is in s. This can result in a little
    // faster speed by spending more time spinning just the inner loop during the main processing.
    if (s.Length > t.Length) {
        var temp = s; s = t; t = temp; // swap s and t
    }
    int sLen = s.Length; // this is also the minimum length of the two strings
    int tLen = t.Length;

    // suffix common to both strings can be ignored
    while ((sLen > 0) && (s[sLen - 1] == t[tLen - 1])) { sLen--; tLen--; }

    int start = 0;
    if ((s[0] == t[0]) || (sLen == 0)) { // if there's a shared prefix, or all s matches t's suffix
        // prefix common to both strings can be ignored
        while ((start < sLen) && (s[start] == t[start])) start++;
        sLen -= start; // length of the part excluding common prefix and suffix
        tLen -= start;

        // if all of shorter string matches prefix and/or suffix of longer string, then
        // edit distance is just the delete of additional characters present in longer string
        if (sLen == 0) return tLen;

        t = t.Substring(start, tLen); // faster than t[start+j] in inner loop below
    }

    var v0 = new int[tLen];
    var v2 = new int[tLen]; // stores one level further back (offset by +1 position)
    for (int j = 0; j < tLen; j++) v0[j] = j + 1;

    char sChar = s[0];
    int current = 0;
    for (int i = 0; i < sLen; i++) {
        char prevsChar = sChar;
        sChar = s[start + i];
        char tChar = t[0];
        int left = i;
        current = i + 1;
        int nextTransCost = 0;
        for (int j = 0; j < tLen; j++) {
            int above = current;
            int thisTransCost = nextTransCost;
            nextTransCost = v2[j];
            v2[j] = current = left; // cost of diagonal (substitution)
            left = v0[j];    // left now equals current cost (which will be diagonal at next iteration)
            char prevtChar = tChar;
            tChar = t[j];
            if (sChar != tChar) {
                if (left < current) current = left;   // insertion
                if (above < current) current = above; // deletion
                current++;
                if ((i != 0) && (j != 0)
                    && (sChar == prevtChar)
                    && (prevsChar == tChar)) {
                    thisTransCost++;
                    if (thisTransCost < current) current = thisTransCost; // transposition
                }
            }
            v0[j] = current;
        }
    }
    return current;
}
/// <summary>
/// Computes and returns the Damerau-Levenshtein edit distance between two strings, 
/// i.e. the number of insertion, deletion, substitution, and transposition edits
/// required to transform one string to the other. This value will be >= 0, where 0
/// indicates identical strings. Comparisons are case sensitive, so for example, 
/// "Fred" and "fred" will have a distance of 1. This algorithm is basically the
/// Levenshtein algorithm with a modification that considers transposition of two
/// adjacent characters as a single edit.
/// http://blog.softwx.net/2015/01/optimizing-damerau-levenshtein_15.html
/// </summary>
/// <remarks>See http://en.wikipedia.org/wiki/Damerau%E2%80%93Levenshtein_distance
/// Note that this is based on Sten Hjelmqvist's "Fast, memory efficient" algorithm, described
/// at http://www.codeproject.com/Articles/13525/Fast-memory-efficient-Levenshtein-algorithm.
/// This version differs by including some optimizations, and extending it to the Damerau-
/// Levenshtein algorithm.
/// Note that this is the simpler and faster optimal string alignment (aka restricted edit) distance
/// that differs slightly from the classic Damerau-Levenshtein algorithm by imposing the restriction
/// that no substring is edited more than once. So for example, "CA" to "ABC" has an edit distance
/// of 2 by a complete application of Damerau-Levenshtein, but a distance of 3 by this method that
/// uses the optimal string alignment algorithm. See wikipedia article for more detail on this
/// distinction.
/// </remarks>
/// <param name="s">String being compared for distance.</param>
/// <param name="t">String being compared against other string.</param>
/// <param name="maxDistance">The maximum edit distance of interest.</param>
/// <returns>int edit distance, >= 0 representing the number of edits required
/// to transform one string to the other, or -1 if the distance is greater than the specified maxDistance.</returns>
public static int DamLev(this string s, string t, int maxDistance = int.MaxValue) {
    if (String.IsNullOrEmpty(s)) return ((t ?? "").Length <= maxDistance) ? (t ?? "").Length : -1;
    if (String.IsNullOrEmpty(t)) return (s.Length <= maxDistance) ? s.Length : -1;

    // if strings of different lengths, ensure shorter string is in s. This can result in a little
    // faster speed by spending more time spinning just the inner loop during the main processing.
    if (s.Length > t.Length) {
        var temp = s; s = t; t = temp; // swap s and t
    }
    int sLen = s.Length; // this is also the minimum length of the two strings
    int tLen = t.Length;

    // suffix common to both strings can be ignored
    while ((sLen > 0) && (s[sLen - 1] == t[tLen - 1])) { sLen--; tLen--; }

    int start = 0;
    if ((s[0] == t[0]) || (sLen == 0)) { // if there's a shared prefix, or all s matches t's suffix
        // prefix common to both strings can be ignored
        while ((start < sLen) && (s[start] == t[start])) start++;
        sLen -= start; // length of the part excluding common prefix and suffix
        tLen -= start;

        // if all of shorter string matches prefix and/or suffix of longer string, then
        // edit distance is just the delete of additional characters present in longer string
        if (sLen == 0) return (tLen <= maxDistance) ? tLen : -1;

        t = t.Substring(start, tLen); // faster than t[start+j] in inner loop below
    }
    int lenDiff = tLen - sLen;
    if ((maxDistance < 0) || (maxDistance > tLen)) {
        maxDistance = tLen;
    } else if (lenDiff > maxDistance) return -1;

    var v0 = new int[tLen];
    var v2 = new int[tLen]; // stores one level further back (offset by +1 position)
    int j;
    for (j = 0; j < maxDistance; j++) v0[j] = j + 1;
    for (; j < tLen; j++) v0[j] = maxDistance + 1;

    int jStartOffset = maxDistance - (tLen - sLen);
    bool haveMax = maxDistance < tLen;
    int jStart = 0;
    int jEnd = maxDistance;
    char sChar = s[0];
    int current = 0;
    for (int i = 0; i < sLen; i++) {
        char prevsChar = sChar;
        sChar = s[start + i];
        char tChar = t[0];
        int left = i;
        current = left + 1;
        int nextTransCost = 0;
        // no need to look beyond window of lower right diagonal - maxDistance cells (lower right diag is i - lenDiff)
        // and the upper left diagonal + maxDistance cells (upper left is i)
        jStart += (i > jStartOffset) ? 1 : 0;
        jEnd += (jEnd < tLen) ? 1 : 0;
        for (j = jStart; j < jEnd; j++) {
            int above = current;
            int thisTransCost = nextTransCost;
            nextTransCost = v2[j];
            v2[j] = current = left; // cost of diagonal (substitution)
            left = v0[j];    // left now equals current cost (which will be diagonal at next iteration)
            char prevtChar = tChar;
            tChar = t[j];
            if (sChar != tChar) {
                if (left < current) current = left;   // insertion
                if (above < current) current = above; // deletion
                current++;
                if ((i != 0) && (j != 0)
                    && (sChar == prevtChar)
                    && (prevsChar == tChar)) {
                    thisTransCost++;
                    if (thisTransCost < current) current = thisTransCost; // transposition
                }
            }
            v0[j] = current;
        }
        if (haveMax && (v0[i + lenDiff] > maxDistance)) return -1;
    }
    return (current <= maxDistance) ? current : -1;
}
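A few quick usage examples of the two methods above (assuming the extension methods are in scope):

    int d1 = "paul".DamLev("pual");    // 1: a single transposition
    int d2 = "CA".DamLev("ABC");       // 3: restricted edit distance, as discussed earlier
    int d3 = "CA".DamLev("ABC", 2);    // -1: the actual distance (3) exceeds the maxDistance of 2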