Difference Between Decimal, Float And Double In .NET?


Asked Mar 6, 2009 · Modified 1 month ago · Viewed 1.3m times · Score 2474

What is the difference between decimal, float and double in .NET?

When would someone use one of these?

asked Mar 6, 2009 at 11:31 by Tom · edited Jul 11, 2016 at 18:33 by PC Luddite
  • 3 interesting article zetcode.com/lang/csharp/datatypes – GibboK Commented Mar 1, 2014 at 14:20
  • 9 You cannot use decimal to interop with native code since it is a .net specific implementation, while float and double numbers can be processed by CPUs directly. – codymanix Commented Mar 6, 2021 at 10:55

19 Answers

Answer (score 2603)

float (the C# alias for System.Single) and double (the C# alias for System.Double) are floating binary point types. float is 32-bit; double is 64-bit. In other words, they represent a number like this:

10001.10010110011

The binary number and the location of the binary point are both encoded within the value.

decimal (the C# alias for System.Decimal) is a floating decimal point type. In other words, they represent a number like this:

12345.65789

Again, the number and the location of the decimal point are both encoded within the value – that's what makes decimal still a floating point type instead of a fixed point type.

The important thing to note is that humans are used to representing non-integers in a decimal form, and expect exact results in decimal representations; not all decimal numbers are exactly representable in binary floating point – 0.1, for example – so if you use a binary floating point value you'll actually get an approximation to 0.1. You'll still get approximations when using a floating decimal point as well – the result of dividing 1 by 3 can't be exactly represented, for example.
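Both kinds of approximation can be seen directly. A quick sketch (mine, not part of the original answer; printed values assume a recent .NET runtime with an English locale):

```csharp
using System;

// 0.1 cannot be represented exactly in binary floating point...
double d = 0.1;
Console.WriteLine(d.ToString("G17"));  // 0.10000000000000001 – the nearest double to 0.1

// ...and 1/3 cannot be represented exactly in decimal floating point.
decimal third = 1m / 3m;
Console.WriteLine(third);       // 0.3333333333333333333333333333
Console.WriteLine(third * 3m);  // 0.9999999999999999999999999999 – not exactly 1
```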

As for what to use when:

  • For values which are "naturally exact decimals" it's good to use decimal. This is usually suitable for any concepts invented by humans: financial values are the most obvious example, but there are others too. Consider the score given to divers or ice skaters, for example.

  • For values which are more artefacts of nature which can't really be measured exactly anyway, float/double are more appropriate. For example, scientific data would usually be represented in this form. Here, the original values won't be "decimally accurate" to start with, so it's not important for the expected results to maintain the "decimal accuracy". Floating binary point types are much faster to work with than decimals.
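To make the financial bullet concrete, here is a small sketch (mine, not from the answer) that repeatedly adds 0.1 in each representation:

```csharp
using System;

double dSum = 0;
decimal mSum = 0m;
for (int i = 0; i < 10; i++)
{
    dSum += 0.1;   // each 0.1 is already a binary approximation
    mSum += 0.1m;  // 0.1m is exact in decimal
}
Console.WriteLine(dSum == 1.0);   // False – the double sum drifted to 0.9999999999999999
Console.WriteLine(mSum == 1.0m);  // True – ten dimes make exactly one dollar
```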

– Jon Skeet · answered Mar 6, 2009 at 11:56 · edited Nov 25, 2022
  • 95 float/double usually do not represent numbers as 101.101110, normally it is represented as something like 1101010 * 2^(01010010) - an exponent – Mingwei Samuel Commented Aug 13, 2014 at 21:50
  • 98 @Hazzard: That's what the "and the location of the binary point" part of the answer means. – Jon Skeet Commented Aug 13, 2014 at 21:57
  • 147 I'm surprised it hasn't been said already, float is a C# alias keyword and isn't a .Net type. it's System.Single.. single and double are floating binary point types. – Brett Caswell Commented Feb 3, 2015 at 15:48
  • 67 @BKSpurgeon: Well, only in the same way that you can say that everything is a binary type, at which point it becomes a fairly useless definition. Decimal is a decimal type in that it's a number represented as an integer significand and a scale, such that the result is significand * 10^scale, whereas float and double are significand * 2^scale. You take a number written in decimal, and move the decimal point far enough to the right that you've got an integer to work out the significand and the scale. For float/double you'd start with a number written in binary. – Jon Skeet Commented Nov 26, 2015 at 7:20
  • 38 Another difference: float 32-bit; double 64-bit; and decimal 128-bit. – David Commented Aug 29, 2016 at 15:08
Answer (score 1288)

Precision is the main difference.

Float: 7 digits (32 bit)

Double: 15-16 digits (64 bit)

Decimal: 28-29 significant digits (128 bit)

Decimals have much higher precision and are usually used in financial applications that require a high degree of accuracy. Decimals are much slower (up to 20x in some tests) than a double or float.

A decimal cannot be mixed with a float or double in an expression without a cast, whereas floats and doubles can be mixed freely. Decimals also allow the encoding of trailing zeros.
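The trailing-zeros point deserves a quick illustration (my sketch, not part of the original answer; output shown for an English locale): a decimal carries its scale, so 1.0m and 1.00m print differently while still comparing equal.

```csharp
using System;

Console.WriteLine(1.0m);          // 1.0
Console.WriteLine(1.00m);         // 1.00 – the scale survives in the stored value
Console.WriteLine(1.0m == 1.00m); // True – equality ignores the scale
```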

float flt = 1F/3;
double dbl = 1D/3;
decimal dcm = 1M/3;
Console.WriteLine("float: {0} double: {1} decimal: {2}", flt, dbl, dcm);

Result:

float: 0.3333333 double: 0.333333333333333 decimal: 0.3333333333333333333333333333

– cgreeno · answered Mar 6, 2009 at 11:33 · edited Oct 16, 2018 by Martin Backasch
  • 74 @Thecrocodilehunter: sorry, but no. Decimal can represent all numbers that can be represented in decimal notation, but not 1/3 for example. 1.0m / 3.0m will evaluate to 0.33333333... with a large but finite number of 3s at the end. Multiplying it by 3 will not return an exact 1.0. – Erik P. Commented Nov 29, 2011 at 21:14
  • 61 @Thecrocodilehunter: I think you're confusing accuracy and precision. They are different things in this context. Precision is the number of digits available to represent a number. The more precision, the less you need to round. No data type has infinite precision. – Igby Largeman Commented Jan 6, 2012 at 17:42
  • 18 @Thecrocodilehunter: You're assuming that the value that is being measured is exactly 0.1 -- that is rarely the case in the real world! Any finite storage format will conflate an infinite number of possible values to a finite number of bit patterns. For example, float will conflate 0.1 and 0.1 + 1e-8, while decimal will conflate 0.1 and 0.1 + 1e-29. Sure, within a given range, certain values can be represented in any format with zero loss of accuracy (e.g. float can store any integer up to 1.6e7 with zero loss of accuracy) -- but that's still not infinite accuracy. – Daniel Pryden Commented Jan 10, 2012 at 1:49
  • 32 @Thecrocodilehunter: You missed my point. 0.1 is not a special value! The only thing that makes 0.1 "better" than 0.10000001 is because human beings like base 10. And even with a float value, if you initialize two values with 0.1 the same way, they will both be the same value. It's just that that value won't be exactly 0.1 -- it will be the closest value to 0.1 that can be exactly represented as a float. Sure, with binary floats, (1.0 / 10) * 10 != 1.0, but with decimal floats, (1.0 / 3) * 3 != 1.0 either. Neither is perfectly precise. – Daniel Pryden Commented Jan 10, 2012 at 18:27
  • 20 @Thecrocodilehunter: You still don't understand. I don't know how to say this any more plainly: In C, if you do double a = 0.1; double b = 0.1; then a == b will be true. It's just that a and b will both not exactly equal 0.1. In C#, if you do decimal a = 1.0m / 3.0m; decimal b = 1.0m / 3.0m; then a == b will also be true. But in that case, neither of a nor b will exactly equal 1/3 -- they will both equal 0.3333.... In both cases, some accuracy is lost due to representation. You stubbornly say that decimal has "infinite" precision, which is false. – Daniel Pryden Commented Jan 10, 2012 at 19:29
Answer (score 134)

+---------+----------------+---------+----------+---------------------------------------------------------+
| C#      | .NET Framework | Signed? | Bytes    | Possible Values                                         |
| Type    | (System) type  |         | Occupied |                                                         |
+---------+----------------+---------+----------+---------------------------------------------------------+
| sbyte   | System.SByte   | Yes     | 1        | -128 to 127                                             |
| short   | System.Int16   | Yes     | 2        | -32,768 to 32,767                                       |
| int     | System.Int32   | Yes     | 4        | -2,147,483,648 to 2,147,483,647                         |
| long    | System.Int64   | Yes     | 8        | -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807 |
| byte    | System.Byte    | No      | 1        | 0 to 255                                                |
| ushort  | System.UInt16  | No      | 2        | 0 to 65,535                                             |
| uint    | System.UInt32  | No      | 4        | 0 to 4,294,967,295                                      |
| ulong   | System.UInt64  | No      | 8        | 0 to 18,446,744,073,709,551,615                         |
| float   | System.Single  | Yes     | 4        | Approximately ±1.5e-45 to ±3.4e38                       |
|         |                |         |          | with ~6-9 significant figures                           |
| double  | System.Double  | Yes     | 8        | Approximately ±5.0e-324 to ±1.7e308                     |
|         |                |         |          | with ~15-17 significant figures                         |
| decimal | System.Decimal | Yes     | 16       | Approximately ±1.0e-28 to ±7.9e28                       |
|         |                |         |          | with 28-29 significant figures                          |
| char    | System.Char    | N/A     | 2        | Any Unicode character (16 bit)                          |
| bool    | System.Boolean | N/A     | 1 / 2    | true or false                                           |
+---------+----------------+---------+----------+---------------------------------------------------------+

See here for more information.

– user2389722 · answered Jun 7, 2013 at 12:50 · edited Dec 24, 2020 by Brady Davis
  • 12 You left out the biggest difference, which is the base used for the decimal type (decimal is stored as base 10, all other numeric types listed are base 2). – BrainSlugs83 Commented Mar 14, 2015 at 22:55
  • 2 The value ranges for the Single and Double are not depicted correctly in the above image or the source forum post. Since we can't easily superscript the text here, use the caret character: Single should be 10^-45 and 10^38, and Double should be 10^-324 and 10^308. Also, MSDN has the float with a range of -3.4x10^38 to +3.4x10^38. Search MSDN for System.Single and System.Double in case of link changes. Single: msdn.microsoft.com/en-us/library/b1e65aza.aspx Double: msdn.microsoft.com/en-us/library/678hzkk9.aspx – deegee Commented Jun 22, 2015 at 19:18
  • 3 Decimal is 128 bits ... means it occupies 16 bytes not 12 – user1477332 Commented Oct 23, 2018 at 3:29
Answer (score 107)

The Decimal structure is strictly geared to financial calculations requiring accuracy, which are relatively intolerant of rounding. Decimals are not adequate for scientific applications, however, for several reasons:

  • A certain loss of precision is acceptable in many scientific calculations because of the practical limits of the physical problem or artifact being measured. Loss of precision is not acceptable in finance.
  • Decimal is much (much) slower than float and double for most operations, primarily because floating point operations are done in binary, whereas Decimal stuff is done in base 10 (i.e. floats and doubles are handled by the FPU hardware, such as MMX/SSE, whereas decimals are calculated in software).
  • Decimal has an unacceptably smaller value range than double, despite the fact that it supports more digits of precision. Therefore, Decimal can't be used to represent many scientific values.
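The range point in the last bullet is easy to demonstrate (a sketch of mine, not from the answer): values that are routine for double overflow decimal.

```csharp
using System;

double big = 1e50;                    // comfortably inside double's ±1.7e308 range
Console.WriteLine(big * big);         // 1E+100 – still fine as a double
Console.WriteLine(decimal.MaxValue);  // 79228162514264337593543950335 (~7.9e28)

decimal m = (decimal)1e28;
try
{
    decimal overflow = m * m;         // 1e56 is far beyond decimal's range
}
catch (OverflowException e)
{
    Console.WriteLine(e.GetType().Name);  // OverflowException
}
```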
– Mark Jones · answered Apr 13, 2011 at 13:55
  • 7 If you're doing financial calculations, you absolutely have to roll your own datatypes or find a good library that matches your exact needs. Accuracy in a financial setting is defined by (human) standards bodies and they have very specific localized (both in time and geography) rules about how to do calculations. Things like correct rounding aren't captured in the simple numeric datatypes in .Net. The ability to do calculations is only a very small part of the puzzle. – James Moore Commented Apr 6, 2016 at 16:59
Answer (score 97)

I won't reiterate tons of good (and some bad) information already answered in other answers and comments, but I will answer your followup question with a tip:

When would someone use one of these?

Use decimal for counted values

Use float/double for measured values

Some examples:

  • money (do we count money or measure money?)

  • distance (do we count distance or measure distance? *)

  • scores (do we count scores or measure scores?)

We always count money and should never measure it. We usually measure distance. We often count scores.

* In some cases, what I would call nominal distance, we may indeed want to 'count' distance. For example, maybe we are dealing with country signs that show distances to cities, and we know that those distances never have more than one decimal digit (xxx.x km).

– tomosius · answered Apr 22, 2016 at 15:18
  • 4 I really like this answer, especially the question "do we count or measure money?" However, other than money, I can't think of anything that is "counted" that is not simply integer. I have seen some applications that use decimal simply because double has too few significant digits. In other words, decimal might be used because C# does not have a quadruple type en.wikipedia.org/wiki/Quadruple-precision_floating-point_format – John Henckel Commented Apr 4, 2019 at 18:55
Answer (score 59)

float has about 7 digits of precision

double has about 15 digits of precision

decimal has about 28 digits of precision

If you need better accuracy, use double instead of float. On modern CPUs both data types have almost the same performance. The only benefit of using float is that it takes up less space, which matters in practice only if you have many of them.

I found this interesting: What Every Computer Scientist Should Know About Floating-Point Arithmetic

– CharithJ · answered Aug 29, 2011 at 0:06 · edited Jul 15, 2014 by ABCD
  • 1 @RogerLipscombe: I would consider double proper in accounting applications in those cases (and basically only those cases) where no integer type larger than 32 bits was available, and the double was being used as though it were a 53-bit integer type (e.g. to hold a whole number of pennies, or a whole number of hundredths of a cent). Not much use for such things nowadays, but many languages gained the ability to use double-precision floating-point values long before they gained 64-bit (or in some cases even 32-bit!) integer math. – supercat Commented May 29, 2014 at 17:57
  • 1 Your answer implies precision is the only difference between these data types. Given binary floating point arithmetic is typically implemented in hardware FPU, performance is a significant difference. This may be inconsequential for some applications, but is critical for others. – saille Commented Jan 15, 2015 at 3:16
  • 6 @supercat double is never proper in accounting applications. Because Double can only approximate decimal values (even within the range of its own precision). This is because double stores the values in a base-2 (binary)-centric format. – BrainSlugs83 Commented Mar 14, 2015 at 22:50
  • 2 @BrainSlugs83: Use of floating-point types to hold non-whole-number quantities would be improper, but it was historically very common for languages to have floating-point types that could precisely represent larger whole-number values than their integer types could represent. Perhaps the most extreme example was Turbo-87 whose only integer types were limited to -32768 to +32767, but whose Real could IIRC represent values up to 1.8E+19 with unit precision. I would think it would be much saner for an accounting application to use Real to represent a whole number of pennies than... – supercat Commented Mar 15, 2015 at 19:45
  • 1 ...for it to try to perform multi-precision math using a bunch of 16-bit values. For most other languages the difference wasn't that extreme, but for a long time it has been very common for languages not to have any integer type that went beyond 4E9 but have a double type which had unit accuracy up to 9E15. If one needs to store whole numbers which are bigger than the largest available integer type, using double is apt to be simpler and more efficient than trying to fudge multi-precision math, especially given that while processors have instructions to perform 16x16->32 or... – supercat Commented Mar 15, 2015 at 19:47
Answer (score 49)

No one has mentioned that, with default settings, floats (System.Single) and doubles (System.Double) never use overflow checking, while decimal (System.Decimal) always does.

I mean

decimal myNumber = decimal.MaxValue;
myNumber += 1;

throws OverflowException.

But these do not:

float myNumber = float.MaxValue;
myNumber += 1;

and

double myNumber = double.MaxValue;
myNumber += 1;

– GorkemHalulu · answered Jan 2, 2015 at 13:12 · edited Apr 15, 2015
  • 2 float.MaxValue+1 == float.MaxValue, just as decimal.MaxValue+0.1D == decimal.MaxValue. Perhaps you meant something like float.MaxValue*2? – supercat Commented Jan 14, 2015 at 0:21
  • @supercar But it is not true that decimal.MaxValue + 1 == decimal.MaxValue – GorkemHalulu Commented Jan 14, 2015 at 6:12
  • @supercar decimal.MaxValue + 0.1m == decimal.MaxValue ok – GorkemHalulu Commented Jan 14, 2015 at 6:19
  • 1 The System.Decimal throws an exception just before it becomes unable to distinguish whole units, but if an application is supposed to be dealing with e.g. dollars and cents, that could be too late. – supercat Commented Jan 14, 2015 at 16:15
Answer (score 31)

Integers, as was mentioned, are whole numbers. They can't store the point something, like .7, .42, and .007. If you need to store numbers that are not whole numbers, you need a different type of variable. You can use the double type or the float type. You set these types of variables up in exactly the same way: instead of using the word int, you type double or float. Like this:

float myFloat;
double myDouble;

(float is short for "floating point", and just means a number with a point something on the end.)

The difference between the two is in the size of the numbers that they can hold. For float, you can have up to 7 digits in your number. For doubles, you can have up to 16 digits. To be more precise, here's the official size:

float: 1.5 × 10^-45 to 3.4 × 10^38
double: 5.0 × 10^-324 to 1.7 × 10^308

float is a 32-bit number, and double is a 64-bit number.

Double click your new button to get at the code. Add the following three lines to your button code:

double myDouble;
myDouble = 0.007;
MessageBox.Show(myDouble.ToString());

Halt your program and return to the coding window. Change this line:

myDouble = 0.007;

to this:

myDouble = 12345678.1234567;

Run your programme and click your double button. The message box correctly displays the number. Add another number on the end, though, and C# will again round up or down. The moral is if you want accuracy, be careful of rounding!

– daniel · answered May 22, 2012 at 12:05 · edited Feb 19, 2018 by Sae1962
  • 4 The "point something" you mentioned is generally referred to as "the fractional part" of a number. "Floating point" does not mean "a number with a point something on the end"; but instead "Floating Point" distinguishes the type of number, as opposed to a "Fixed Point" number (which can also store a fractional value); the difference is whether the precision is fixed, or floating. -- Floating point numbers give you a much bigger dynamic range of values (Min and Max), at the cost of precision, whereas a fixed point numbers give you a constant amount of precision at the cost of range. – BrainSlugs83 Commented Sep 16, 2017 at 1:09
Answer (score 29)
  1. Double and float can be divided by integer zero without an exception at both compilation and run time.
  2. Decimal cannot be divided by integer zero. Compilation will always fail if you do that.
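A sketch of mine showing both behaviors; note that the compile-time failure applies to constant decimal expressions, while dividing by a zero decimal variable throws at run time:

```csharp
using System;

double d = 1.0, dZero = 0.0;
Console.WriteLine(d / dZero);     // Infinity ("∞" on newer runtimes) – no exception
Console.WriteLine(dZero / dZero); // NaN

// decimal bad = 1m / 0m;         // would not compile: CS0020, division by constant zero
decimal m = 1m, mZero = 0m;
try
{
    Console.WriteLine(m / mZero);
}
catch (DivideByZeroException)
{
    Console.WriteLine("decimal throws at run time");
}
```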
– Display Name · answered Jul 29, 2010 at 7:21
  • 6 They sure can! They also have a couple of "magic" values such as Infinity, Negative Infinity, and NaN (not a number) which make it very useful for detecting vertical lines while computing slopes... Further, if you need to decide between calling float.TryParse, double.TryParse, and decimal.TryParse (to detect if a string is a number, for example), I recommend using double or float, as they will parse "Infinity", "-Infinity", and "NaN" properly, whereas decimal will not. – BrainSlugs83 Commented Jun 23, 2011 at 19:29
  • 2 Compilation only fails if you attempt to divide a literal decimal by zero (CS0020), and the same is true of integral literals. However if a runtime decimal value is divided by zero, you'll get an exception not a compile error. – Drew Noakes Commented Nov 18, 2016 at 0:24
Answer (score 19)
  • float: ±1.5 x 10^-45 to ±3.4 x 10^38 (~7 significant figures)
  • double: ±5.0 x 10^-324 to ±1.7 x 10^308 (15-16 significant figures)
  • decimal: ±1.0 x 10^-28 to ±7.9 x 10^28 (28-29 significant figures)
– Mukesh Kumar · answered Jan 2, 2014 at 5:01 · edited Apr 2, 2019 by Wai Ha Lee
  • 11 The difference is more than just precision. -- decimal is actually stored in decimal format (as opposed to base 2; so it won't lose or round digits due to conversion between the two numeric systems); additionally, decimal has no concept of special values such as NaN, -0, ∞, or -∞. – BrainSlugs83 Commented Sep 16, 2017 at 1:19
Answer (score 16)

This has been an interesting thread for me, as today, we've just had a nasty little bug, concerning decimal having less precision than a float.

In our C# code, we are reading numeric values from an Excel spreadsheet, converting them into a decimal, then sending this decimal back to a Service to save into a SQL Server database.

Microsoft.Office.Interop.Excel.Range cell = …
object cellValue = cell.Value2;
if (cellValue != null)
{
    decimal value = 0;
    Decimal.TryParse(cellValue.ToString(), out value);
}

Now, for almost all of our Excel values, this worked beautifully. But for some, very small Excel values, using decimal.TryParse lost the value completely. One such example is

  • cellValue = 0.00006317592

  • Decimal.TryParse(cellValue.ToString(), out value); // would return 0

The solution, bizarrely, was to convert the Excel values into a double first, and then into a decimal:

Microsoft.Office.Interop.Excel.Range cell = …
object cellValue = cell.Value2;
if (cellValue != null)
{
    double valueDouble = 0;
    double.TryParse(cellValue.ToString(), out valueDouble);
    decimal value = (decimal) valueDouble;
    …
}

Even though double has less precision than a decimal, this actually ensured small numbers would still be recognised. For some reason, double.TryParse was actually able to retrieve such small numbers, whereas decimal.TryParse would set them to zero.

Odd. Very odd.
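A likely explanation (my guess, not verified against the original spreadsheet): Value2 returns a double, a value that small stringifies in scientific notation, and decimal.TryParse's default NumberStyles reject exponents while double.TryParse's defaults accept them. A self-contained sketch:

```csharp
using System;
using System.Globalization;

// What a small double such as 0.00006317592 stringifies to:
string s = "6.317592E-05";
var inv = CultureInfo.InvariantCulture;

// double.TryParse defaults to NumberStyles.Float | AllowThousands – exponents allowed:
Console.WriteLine(double.TryParse(s, NumberStyles.Float | NumberStyles.AllowThousands, inv, out double d));  // True

// decimal.TryParse defaults to NumberStyles.Number – no exponent, so the parse fails:
Console.WriteLine(decimal.TryParse(s, NumberStyles.Number, inv, out decimal m));  // False

// Opting in to NumberStyles.Float lets decimal parse it too:
Console.WriteLine(decimal.TryParse(s, NumberStyles.Float, inv, out m));  // True – m is 0.00006317592
```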

– Mike Gledhill · answered Apr 16, 2012 at 9:23 · edited Feb 19, 2018 by Sae1962
  • 4 Out of curiosity, what was the raw value of cellValue.ToString()? Decimal.TryParse("0.00006317592", out val) seems to work... – micahtan Commented Aug 27, 2012 at 23:57
  • 12 -1 Don't get me wrong, if true, it's very interesting but this is a separate question, it's certainly not an answer to this question. – weston Commented May 22, 2013 at 14:19
  • 5 Maybe because the Excel cell was returning a double and ToString() value was "6.31759E-05" therefore the decimal.Parse() didn't like the notation. I bet if you checked the return value of Decimal.TryParse() it would have been false. – SergioL Commented Oct 15, 2014 at 20:44
  • 3 @weston Answers often complement other answers by filling in nuances they have missed. This answer highlights a difference in terms of parsing. It is very much an answer to the question! – Robino Commented May 20, 2015 at 15:52
  • 4 Er... decimal.Parse("0.00006317592") works -- you've got something else going on. -- Possibly scientific notation? – BrainSlugs83 Commented Sep 16, 2017 at 1:15
Answer (score 15)

The Decimal, Double, and Float variable types are different in the way that they store the values. Precision is the main difference where float is a single precision (32 bit) floating point data type, double is a double precision (64 bit) floating point data type and decimal is a 128-bit floating point data type.

Float - 32 bit (7 digits)

Double - 64 bit (15-16 digits)

Decimal - 128 bit (28-29 significant digits)

More about...the difference between Decimal, Float and Double

– warnerl · answered Sep 30, 2014 at 8:22 · edited Jan 13, 2015 by John Saunders
  • Someone knows why these different digits range for every type? – Andrea Leganza Commented Feb 4, 2021 at 11:06
Answer (score 9)

For applications such as games and embedded systems where memory and performance are both critical, float is usually the numeric type of choice as it is faster and half the size of a double. Integers used to be the weapon of choice, but floating point performance has overtaken integer in modern processors. Decimal is right out!

– yoyo · answered May 16, 2014 at 16:21 · edited Apr 11, 2016
  • 2 Pretty much all modern systems, even cell phones, have hardware support for double; and if your game has even simple physics, you will notice a big difference between double and float. (For example, calculating the velocity / friction in a simple Asteroids clone, doubles allow acceleration to flow much more fluidly than float. -- Seems like it shouldn't matter, but it totally does.) – BrainSlugs83 Commented Sep 16, 2017 at 1:22
  • Doubles are also double the size of floats, meaning you need to chew through twice as much data, which hurts your cache performance. As always, measure and proceed accordingly. – yoyo Commented Sep 22, 2017 at 17:53
Answer (score 9)

Float:

It is a floating binary point type variable, which means it represents a number in its binary form. Float is a single-precision, 32-bit (6-9 significant figures) data type. It is used mostly in graphics libraries, because of their very high demands for processing power, and in situations where rounding errors are not very important.

Double:

It is also a floating binary point type variable, with double precision and a 64-bit size (15-17 significant figures). Double is probably the most generally used data type for real values, except in financial applications and places where high accuracy is desired.

Decimal:

It is a floating decimal point type variable, which means it represents a number using decimal digits (0-9). It uses 128 bits (28-29 significant figures) for storing and representing data. Therefore, it has more precision than float and double. Decimals are mostly used in financial applications because of their high precision and because rounding errors are easy to avoid.

Example:

using System;

public class GFG
{
    static public void Main()
    {
        double d = 0.42e2;     // double data type
        Console.WriteLine(d);  // output: 42

        float f = 134.45E-2f;  // float data type
        Console.WriteLine(f);  // output: 1.3445

        decimal m = 1.5E6m;    // decimal data type
        Console.WriteLine(m);  // output: 1500000
    }
}

Comparison between Float, Double and Decimal on the Basis of:

No. of Bits used:

  • Float uses 32 bits to represent data.
  • Double uses 64 bits to represent data.
  • Decimal uses 128 bits to represent data.

Range of values:

  • The float value ranges from approximately ±1.5e-45 to ±3.4e38.

  • The double value ranges from approximately ±5.0e-324 to ±1.7e308.

  • The Decimal value ranges from approximately ±1.0e-28 to ±7.9e28.

Precision:

  • Float represent data with single precision.
  • Double represent data with double precision.
  • Decimal has higher precision than float and Double.

Accuracy:

  • Float is less accurate than Double and Decimal.
  • Double is more accurate than Float but less accurate than Decimal.
  • Decimal is more accurate than Float and Double.
– Abbas Aryanpour · answered Jan 5, 2023 at 7:23

Answer (score 5)

The problem with all these types is that a certain imprecision subsists, and it can occur even with small decimal numbers, as in the following example:

Dim fMean as Double = 1.18
Dim fDelta as Double = 0.08
Dim fLimit as Double = 1.1

If fMean - fDelta < fLimit Then
    bLower = True
Else
    bLower = False
End If

Question: Which value does the bLower variable contain?

Answer: On a 32-bit machine, bLower contains TRUE!

If I replace Double with Decimal, bLower contains FALSE, which is the correct answer.

With Double, the problem is that fMean - fDelta = 1.09999999999..., which is lower than 1.1.

Caution: I think the same problem can certainly exist for other numbers, because Decimal is only a floating-point type with higher precision, and precision always has a limit.

In fact, Double, Float and Decimal correspond to BINARY decimal in COBOL !

It is regrettable that the other numeric types implemented in COBOL don't exist in .NET. For those who don't know COBOL, the following numeric types exist in COBOL:

  • BINARY or COMP (like float, double or decimal)
  • PACKED-DECIMAL or COMP-3 (2 digits in 1 byte)
  • ZONED-DECIMAL (1 digit in 1 byte)

– schlebe · answered Feb 23, 2017 at 13:05

Answer (score 4)

In simple words:

  1. The Decimal, Double, and Float variable types are different in the way that they store the values.
  2. Precision is the main difference (Notice that this is not the single difference) where float is a single precision (32 bit) floating point data type, double is a double precision (64 bit) floating point data type and decimal is a 128-bit floating point data type.
  3. The summary table:

/==========================================================================================
 Type      Bits    Has up to                   Approximate Range
/==========================================================================================
 float     32      7 digits                    -3.4 × 10^38 to +3.4 × 10^38
 double    64      15-16 digits                ±5.0 × 10^-324 to ±1.7 × 10^308
 decimal   128     28-29 significant digits    ±7.9 × 10^28 / (1 to 10^28)
/==========================================================================================

You can read more here: Float, Double, and Decimal.

Share Improve this answer Follow edited Feb 10, 2018 at 9:19 answered Feb 10, 2018 at 8:47 IndustProg's user avatar IndustProgIndustProg 6471 gold badge13 silver badges36 bronze badges 2
  • 1 What does this answer add that isn't already covered in the existing answers? BTW, your "or" in the "decimal" line is incorrect: the slash in the web page that you're copying from indicates division rather than an alternative. – Mark Dickinson Commented Feb 10, 2018 at 12:15
  • 2 And I'd dispute strongly that precision is the main difference. The main difference is the base: decimal floating-point versus binary floating-point. That difference is what makes Decimal suitable for financial applications, and it's the main criterion to use when deciding between Decimal and Double. It's rare that Double precision isn't enough for scientific applications, for example (and Decimal is often unsuitable for scientific applications because of its limited range). – Mark Dickinson Commented Feb 10, 2018 at 12:28
Add a comment | 3

The main difference between these types is their precision.

  • float is a 32-bit number
  • double is a 64-bit number
  • decimal is a 128-bit number
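The sizes of the two binary types can be confirmed with Python's struct module (there is no 128-bit decimal format in struct; decimal's 16 bytes is specific to .NET):

```python
import struct

# 'f' is the IEEE 754 binary32 format, 'd' is binary64.
print(struct.calcsize('f'))  # 4 bytes  -> 32-bit float
print(struct.calcsize('d'))  # 8 bytes  -> 64-bit double
```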
Share Improve this answer Follow edited Jul 19, 2022 at 1:10 Pang's user avatar Pang 10.1k146 gold badges85 silver badges124 bronze badges answered Dec 21, 2014 at 18:50 user3776645's user avatar user3776645user3776645 4074 silver badges5 bronze badges Add a comment | 0

Use:

  • int for whole numbers
  • decimal anywhere numbers with decimals are displayed to end users
  • double anywhere else you need to support fractions

Float, double and decimal are all floating-point types which means they support fractions and can represent both very large and very small numbers. Decimal is a decimal (base-10) format, while float and double are both binary (base-2) floating points, just with different precision.

The most significant distinction is between the decimal and binary floating-points, so here is a comparison:

                           float / double                                     decimal
rounding behavior          weird and confusing, sometimes looks like a bug    intuitive and looks correct to humans
performance                fast                                               slow
exponent base              base-2 (binary)                                    base-10 (decimal)
size                       32 / 64 bit                                        128 bit
precision                  ~6-9 digits / ~15-17 digits                        28-29 digits
standardization            universal standard (IEEE 754)                      .NET-specific type
result of divide by zero   magic Infinity or NaN value                        throws an error
result of overflow         magic Infinity value                               throws an error
normalization              normalizes away trailing zeros                     keeps trailing zeros
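Two rows of the comparison, the exponent base and the normalization of trailing zeros, are easy to observe directly. A Python sketch, where decimal.Decimal mirrors the base-10 behavior of .NET's decimal:

```python
from decimal import Decimal

# Trailing zeros: decimal keeps them, binary floating point cannot.
print(Decimal("1.10"))   # 1.10
print(1.10)              # 1.1

# Base-2 storage: constructing a Decimal from a float exposes the exact
# binary value a double actually stores for "0.1".
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625
```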

(source for precision)

Perhaps the most notable difference is the rounding behavior:

Console.WriteLine("0.1 + 0.2 using decimal: " + (0.1m + 0.2m));
Console.WriteLine("0.1 + 0.2 using double: " + (0.1d + 0.2d));

results in:

0.1 + 0.2 using decimal: 0.3
0.1 + 0.2 using double: 0.30000000000000004

As you can see, the rounding behavior of doubles might seem surprising to someone without a computer-science degree. For this reason, the decimal type is preferred when numbers are presented to end users. The classic examples are monetary amounts in accounting and bank transactions, but the same applies if the numbers are amounts in a recipe or measurements on a blueprint: basically anywhere numbers are displayed in decimal format for an end user.

The above could give the impression that only doubles have problems due to rounding, but this is not the case. See this example:

Console.WriteLine("(1 / 3) * 3 using decimal: " + ((1m / 3m) * 3m));
Console.WriteLine("(1 / 3) * 3 using double: " + ((1d / 3d) * 3d));

Which result in:

(1 / 3) * 3 using decimal: 0.9999999999999999999999999999
(1 / 3) * 3 using double: 1

The fact is that any numeric type will have rounding issues. This is unavoidable because there are infinitely many real numbers, but a numeric format can only express a finite set of different numbers. It is just that we humans are used to decimal numbers, so the rounding behavior of decimals is easier to understand and seems "less wrong". We understand that 1/3 is rounded to 0.3333 (since we don't have infinitely many decimals) and that 0.3333 multiplied by 3 is then 0.9999. The rounding behavior of the base-2 double is harder to explain without going into binary arithmetic and the algorithms for converting between base-2 and base-10.

But as you probably know, computers prefer to think in binary, and therefore doubles are far more efficient. All modern processors have native support for doubles, while decimals are partially implemented in software, which is slower. Float/double are standardized across platforms and languages, while decimal is a .NET-specific type.
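The performance gap is easy to measure. A rough Python sketch (timings are illustrative only and vary by machine; decimal.Decimal stands in for .NET's software-implemented decimal):

```python
import timeit
from decimal import Decimal

# Multiply a million times with hardware doubles vs software decimals.
t_double = timeit.timeit("x * y", setup="x, y = 1.18, 0.08", number=1_000_000)
t_decimal = timeit.timeit(
    "x * y",
    setup="from decimal import Decimal; x, y = Decimal('1.18'), Decimal('0.08')",
    number=1_000_000,
)
print(f"double:  {t_double:.3f} s")
print(f"decimal: {t_decimal:.3f} s")  # typically several times slower
```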

Share Improve this answer Follow answered Nov 29, 2024 at 14:21 JacquesB's user avatar JacquesBJacquesB 42.6k13 gold badges75 silver badges88 bronze badges Add a comment | -3

To define decimal, float and double values in .NET (C#),

you must specify the values with type suffixes:

Decimal dec = 12M/6;
Double dbl = 11D/6;
float fl = 15F/6;

and check the results.

And the bytes occupied by each are:

float - 4
double - 8
decimal - 16

Share Improve this answer Follow edited Jun 20, 2020 at 9:12 Community's user avatar CommunityBot 11 silver badge answered Jan 17, 2020 at 11:26 Purnima Bhatia's user avatar Purnima BhatiaPurnima Bhatia 671 silver badge6 bronze badges 1
  • The question was asking for the difference and advantages/disadvantages of each – Reid Moffat Commented May 5, 2022 at 13:44
Add a comment |