Monday, April 11, 2011

What is the difference between Decimal, Float and Double in C#?

When would you use each of them?

From Stack Overflow:
  • Precision is the main difference.

    Float - 7 digits (32 bit)

    Double - 15-16 digits (64 bit)

    Decimal - 28-29 significant digits (128 bit)

    Decimals have much higher precision and are usually used within financial applications that require a high degree of accuracy. Decimals are much slower (up to 20x in some tests) than a double/float.

    Decimals and Floats/Doubles cannot be compared without a cast whereas Floats and Doubles can. Decimals also allow the encoding of trailing zeros. (A short sketch of these points appears after the answers below.)

    Les : +1 for the link to performance considerations
  • float is a single precision (32 bit) floating point data type as defined by IEEE 754 (it is used mostly in graphic libraries).

    double is a double precision (64 bit) floating point data type as defined by IEEE 754 (probably the most normally used data type for real values).

    decimal is a 128-bit floating point data type, it should be used where precision is of extreme importance (monetary calculations).

    Tor Haugen : Actually, decimal is not a floating-point data type.
    Mehrdad Afshari : Technically, it *is* a floating point data type as it stores exponent and mantissa.
    Jon Skeet : +1 for Mehrdad's comment. It's a floating *decimal* point type rather than a float *binary* point type.
  • The thing to keep in mind is that both float and double are considered "approximations" of a floating point number. Some floating point numbers cannot be accurately represented by floats or doubles, and you can get weird rounding errors at the extreme precisions.

    Decimal doesn't use IEEE floating point representation, it uses a decimal representation that is 100% accurate by doing decimal based math rather than base 2 based math.

    What this means is that you can trust math to within the accuracy of decimal precision whereas you can't fully trust floats or doubles unless you are very careful.

    Mehrdad Afshari : What do you mean by 100% accurate?! Theoretically, computers can't store 100% precision of many real numbers.
    Jon Skeet : +1 to Mehrdad's comment again. How exactly is 1m/3m "100% accurate" for example?
    cgreeno : @Mystere is right - float and doubles are not 100% accurate because they use base 2, whereas decimal internally uses base 10
    Mehrdad Afshari : BtBh: I didn't dispute that. However, Decimal is not accurate either. Theoretically, computers can only store **finite representation** of things. This is not something anybody can change.
    Joachim Sauer : Decimal floating point numbers aren't any more (or any less!) accurate than binary floating point numbers. They just match our naive expectations better, because they use base-10 instead of base-2.
    Mystere Man : I said "to within the accuracy of the decimal precision"
  • float and double are floating binary point types. In other words, they represent a number like this:

    10001.10010110011
    

    The binary number and the location of the binary point are both encoded within the value.

    decimal is a floating decimal point type. In other words, they represent a number like this:

    12345.65789
    

    Again, the number and the location of the decimal point are both encoded within the value - that's what makes decimal still a floating point type instead of a fixed point type.

    The important thing to note is that humans are used to representing non-integers in a decimal form, and expect exact results in decimal representations. Not all decimal numbers are exactly representable in binary floating point - 0.1, for example - so if you use a binary floating point value you'll actually get an approximation to 0.1. You'll still get approximations when using a floating decimal point as well - the result of dividing 1 by 3 can't be exactly represented, for example. (There is a short sketch of this after the answers below.)

    As for what to use when:

    • For values which are "naturally exact decimals" it's good to use decimal. This is usually suitable for any concepts invented by humans: financial values are the most obvious example, but there are others too. Consider the score given to divers or ice skaters, for example.

    • For values which are more artefacts of nature which can't really be measured exactly anyway, float/double are more appropriate. For example, scientific data would usually be represented in this form. Here, the original values won't be "decimally accurate" to start with, so it's not important for the expected results to maintain the "decimal accuracy". Floating binary point types are much faster to work with than decimals.

    Mehrdad Afshari : +1 Nice explanation. I think it's important to clarify the myth of decimal accuracy. Personally, I've never been a fan of school oriented CS stuff, but at least schools are good at teaching these things.
    cgreeno : This is a great answer.
    ydobonmai : Great explanation..
    Prakash Kalakoti : Nice explanation
    ionut bizau : Skeet explanation.
    1. Double and float can be divided by an integer zero without an exception at either compile time or run time; the result is infinity or NaN.
    2. Decimal cannot be divided by the constant integer zero: compilation fails. Dividing by a zero value that is only known at run time compiles, but throws a DivideByZeroException (sketched below).
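
A small C# sketch of the points in the first answer above - the precision gap, the cast needed to compare decimal with double, and decimal keeping trailing zeros. The exact digits printed vary a little between runtimes, so the output comments are approximate:

    using System;

    class PrecisionDemo
    {
        static void Main()
        {
            // The same long literal stored in each type; the narrower types silently round it.
            float   f = 1.234567890123456789012345678f;  // roughly 7 significant digits survive
            double  d = 1.234567890123456789012345678;   // roughly 15-16 significant digits survive
            decimal m = 1.234567890123456789012345678m;  // all 28 significant digits survive

            Console.WriteLine(f);   // e.g. 1.234568
            Console.WriteLine(d);   // e.g. 1.23456789012346
            Console.WriteLine(m);   // 1.234567890123456789012345678

            // Mixing decimal with double needs an explicit cast.
            // Console.WriteLine(m == d);        // does not compile: no implicit conversion
            Console.WriteLine(m == (decimal)d);  // compiles; compares the converted value

            // decimal keeps trailing zeros (the scale is part of the value).
            Console.WriteLine(1.0m);    // 1.0
            Console.WriteLine(1.00m);   // 1.00
        }
    }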
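
A minimal sketch of Jon Skeet's point about representability: 0.1 has no exact binary representation, so repeated addition in double drifts, while decimal stays exact for such "naturally decimal" (e.g. monetary) values - yet decimal still cannot represent 1/3 exactly:

    using System;

    class RepresentationDemo
    {
        static void Main()
        {
            // Add ten cents, ten times.
            double  dSum = 0.0;
            decimal mSum = 0.0m;
            for (int i = 0; i < 10; i++)
            {
                dSum += 0.1;
                mSum += 0.1m;
            }
            Console.WriteLine(dSum == 1.0);    // False - dSum is e.g. 0.9999999999999999
            Console.WriteLine(mSum == 1.0m);   // True  - exactly 1.0

            // decimal is not exact for everything: 1/3 still has to be truncated.
            Console.WriteLine(1m / 3m);          // 0.3333333333333333333333333333
            Console.WriteLine((1m / 3m) * 3m);   // 0.9999999999999999999999999999
        }
    }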
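
Finally, a hedged sketch of the division-by-zero behaviour described in the last item: double quietly produces infinity, decimal throws at run time, and only division by a constant zero is rejected at compile time:

    using System;

    class DivideByZeroDemo
    {
        static void Main()
        {
            int zero = 0;

            // Floating binary point: no exception, the result is a special value.
            double d = 1.0 / zero;
            Console.WriteLine(d);               // Infinity (or the infinity symbol, depending on the runtime)

            // Decimal: compiles when the divisor is not a constant, but throws at run time.
            try
            {
                decimal m = 1.0m / zero;
                Console.WriteLine(m);
            }
            catch (DivideByZeroException ex)
            {
                Console.WriteLine(ex.Message);  // Attempted to divide by zero.
            }

            // decimal bad = 1.0m / 0;          // does not compile: division by constant zero
        }
    }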
