## Don’t trust Math.Round(), and a workaround

Turns out Math.Round() can give you some nasty surprises when applied to doubles:

```csharp
double value = 0.575;
value = Math.Round(value, 2, MidpointRounding.AwayFromZero);
Console.WriteLine(value); // Prints 0.57 instead of 0.58
```
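This isn't specific to .NET; it's a property of IEEE 754 doubles, so the same surprise is easy to reproduce in Python. A quick check shows the value that is actually stored:

```python
# 0.575 cannot be represented exactly as a binary double;
# the closest double is slightly *below* the true value.
print(f"{0.575:.20f}")  # 0.57499999999999995559
print(round(0.575, 2))  # 0.57 -- round-to-nearest sees a value below the midpoint
```

Python's round() rounds ties to even rather than away from zero, but that doesn't matter here: the stored value is not an exact tie, so any round-to-nearest mode rounds it down.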

The application I’m working on does a lot of complicated calculations of construction costs. The inputs all come from a SQLite database, where they are stored as float values, and once I have them in my C# application, I do all my calculations with doubles. I know decimals are recommended when working with financial values, but if half your values are regular doubles and the other half are financial, constantly casting and converting gets frustrating, so I decided to just use doubles. I probably won’t make that choice again, but it works OK.

Where it gets a bit messy is all the way at the end, when rounding the result of some complicated calculation to dollars and cents. There are a couple of articles out on the web discussing why this is a problem and detailing the risks. The short version is that many decimal values cannot be perfectly represented in floating point, so the stored values are slightly off. There are some nice examples of this in the Python tutorial page Floating Point Arithmetic: Issues and Limitations.

In my application, this only becomes an issue when I want to round a value that has a 5 in the third decimal position. For example, rounding 10.005 to two decimals should give 10.01, but if the closest floating point representation of 10.005 were actually 10.004999999999, it would round down to 10.00 instead of up to 10.01, as expected. I knew about this problem, and implemented the following code to handle my rounding for me:

```csharp
/// <summary>
/// Applies rounding to the given value.
/// Rounding is applied twice, first 5 digits deeper than the desired number of digits,
/// to fix calculation rounding.
/// </summary>
/// <param name="value">The value.</param>
/// <param name="digits">The desired number of decimal digits.</param>
/// <returns>Rounded value.</returns>
public static double ApplyRounding(double value, int digits)
{
    // First rounding is to fix rounding errors,
    // by changing things like 0.99499999999999 to 0.995
    value = Math.Round(value, digits + 5, MidpointRounding.AwayFromZero);

    // Round value to specified number of digits
    value = Math.Round(value, digits, MidpointRounding.AwayFromZero);

    return value;
}
```
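A rough Python analogue of this two-step idea, using plain floats (the function name is mine; Python's round() rounds ties to even rather than away from zero, but the inputs here are never exact ties, so that difference doesn't change the outcome):

```python
def apply_rounding(value: float, digits: int) -> float:
    """Sketch of the double-rounding idea with plain floats."""
    # First rounding, 5 digits deeper, to absorb tiny calculation errors
    value = round(value, digits + 5)
    # Then round to the requested number of digits
    return round(value, digits)

print(apply_rounding(0.1 + 0.2, 2))  # 0.3  (the deeper pass cleans up 0.30000000000000004)
print(apply_rounding(0.575, 2))      # 0.57 (still wrong: 0.575 itself isn't representable)
```

The first call shows the case the trick was designed for; the second shows the case where it breaks down, which is exactly the bug described next.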

However, this had a major bug. It worked fine for 99% of cases, but when I finally got around to running some integration tests yesterday, I found a 1 cent error in one of my results. The problem with the code above is that it still fails when the result of the first Math.Round call cannot be represented as a double. As in my original example, 0.575 is expected to round up to 0.58, but rounds down to 0.57.
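You can watch this failure happen by inspecting the exact stored values; this Python sketch uses decimal.Decimal, which, when constructed from a float, shows the precise value the double holds:

```python
from decimal import Decimal

v = 0.575
print(Decimal(v))                    # exact stored value: 0.57499999999999995559...

fixed = round(v, 7)                  # first pass: round 5 digits deeper than 2
print(Decimal(fixed) == Decimal(v))  # True -- the nearest double to 0.575 is v itself
print(round(fixed, 2))               # 0.57 -- so the second pass still rounds down
```

The fix-up pass produces "0.575", but the only double available to hold that result is the same slightly-too-small value we started with, so nothing is actually fixed.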

So, I upgraded the ApplyRounding() method with a cast to decimal before starting the rounding process. This allows the first and second rounding calls to have perfect accuracy, and I get the expected result:

```csharp
/// <summary>
/// Applies rounding to the given value.
/// Rounding is applied twice, first 5 digits deeper than the desired number of digits,
/// to fix calculation rounding.
/// </summary>
/// <param name="value">The value.</param>
/// <param name="digits">The desired number of decimal digits.</param>
/// <returns>Rounded value.</returns>
public static double ApplyRounding(double value, int digits)
{
    // Cast to decimal before rounding:
    decimal decValue = (decimal)value;

    // First round is to fix rounding errors,
    // by changing things like 0.99499999999999 to 0.995
    decValue = Math.Round(decValue, digits + 5, MidpointRounding.AwayFromZero);

    // Round value to specified number of digits
    decValue = Math.Round(decValue, digits, MidpointRounding.AwayFromZero);

    // Cast rounded value back to double:
    return (double)decValue;
}
```
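The same fix can be sketched with Python's decimal module, with quantize and ROUND_HALF_UP playing the roles of Math.Round and MidpointRounding.AwayFromZero (the helper name is mine, not from the post's codebase):

```python
from decimal import Decimal, ROUND_HALF_UP

def apply_rounding(value: float, digits: int) -> float:
    """Sketch of the decimal-based fix: round in exact decimal arithmetic."""
    dec = Decimal(value)  # exact decimal expansion of the stored double
    # First pass, 5 digits deeper, turns 0.57499999... back into exactly 0.575
    dec = dec.quantize(Decimal(1).scaleb(-(digits + 5)), rounding=ROUND_HALF_UP)
    # Second pass rounds to the requested number of digits; the tie is now
    # exact, and ROUND_HALF_UP sends it away from zero
    dec = dec.quantize(Decimal(1).scaleb(-digits), rounding=ROUND_HALF_UP)
    return float(dec)

print(apply_rounding(0.575, 2))  # 0.58
```

Because both rounding passes happen in exact decimal arithmetic, the intermediate result 0.575 really is 0.575, and the final rounding goes the way you expect.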

Though this fixed the problem, it does feel like a bit of a workaround. Next time I’ll probably use decimals all the way through.