Why is it considered bad practice to use float for representing currency?

+12
−1

This is intended to be a canonical post for a problem that is quite common, especially among beginners.


I've heard that I should avoid using floating point variables for representing currency. What is the reason for that? Representing 123 dollars and 67 cents as 123.67 seems quite natural.


1 comment thread

Long-term usefulness (1 comment)

2 answers

+5
−3

I see that Klutt has explained why integers should be used, but there is more that the programmer must keep in mind.

Consider the number of bits the integer needs. For US currency, you'd use cents (1/100 of a dollar). On many machines, integers are only 32 bits wide unless you specifically ask for more. 2³² is about 4.3 billion, which in cents is only about $43 million. Clearly that's nowhere near enough for many uses, and some other currencies have even smaller minimum increments.
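
To make the size problem concrete, here is a minimal Java sketch; the figures are made up, and note that Java's int is signed, so it wraps at about $21.5 million in cents rather than $43 million, but the point is the same:

```java
public class CentsOverflow {
    public static void main(String[] args) {
        int cents = Integer.MAX_VALUE;        // 2_147_483_647 cents, about $21.5 million
        System.out.println(cents + 100);      // wraps around to a large negative number

        long wideCents = (long) cents + 100;  // a 64-bit long has plenty of headroom
        System.out.println(wideCents);        // 2147483747
    }
}
```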

I once used "double" precision floating point for expressing wealth because the machine and compiler combination didn't support integers wider than 32 bits. I knew that in this case the double precision floating point format provided 48 bits of precision. Keeping everything in units of cents allowed amounts up to about $2.8 trillion to be represented exactly, which was good enough for my purpose. I did add routines that rounded to the nearest whole cent after calculations.

Another issue with integers is how calculations get rounded. Integer arithmetic usually truncates results instead of rounding them; put another way, it rounds towards zero rather than towards the nearest whole value. That's usually not what you want. Using a wide floating point format for the calculations, then rounding to the nearest whole amount and storing that, gets around the problem. However, this must be done carefully: you have to be very aware of both the integer and floating point formats of any machine the code might run on.
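
For illustration, a small Java sketch of the difference; the 2% figure and the amount are arbitrary:

```java
public class TruncationDemo {
    public static void main(String[] args) {
        long cents = 1234;                             // $12.34
        long truncated = cents * 2 / 100;              // 24: integer division drops the .68
        long rounded = Math.round(cents * 2 / 100.0);  // 25: compute in double, round once
        System.out.println(truncated + " vs " + rounded);
    }
}
```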

The best answer in terms of correctness and portability is to use a wide integer library with round-to-nearest capability. This could then use native hardware when available, although round-to-nearest is usually not supported in hardware integer arithmetic units.
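
In Java, BigDecimal together with an explicit rounding mode plays roughly that role; this is just a sketch, and BigDecimal is arbitrary-precision decimal rather than a pure wide-integer library:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class WideIntegerDemo {
    public static void main(String[] args) {
        BigDecimal cents = new BigDecimal("123456789012345678901");  // far beyond 64 bits
        // Divide by 3 and round to the nearest whole cent, halves up:
        BigDecimal third = cents.divide(new BigDecimal(3), 0, RoundingMode.HALF_UP);
        System.out.println(third);
    }
}
```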

Another issue is that while the amounts of wealth should be kept in integers, other values you need to apply to them will not be integers. For example, figuring the interest earned over some time requires multiplication by a fraction. This needs to be done very carefully; naive implementations can overflow integers, for example.
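
A hedged Java sketch of that pitfall; the principal and rate here are made up, and the rate is expressed in basis points so the whole calculation stays in integers:

```java
public class InterestDemo {
    public static void main(String[] args) {
        long principalCents = 1_500_000_000L;  // $15 million, in cents
        long basisPoints = 525;                // 5.25% per year

        // If these were 32-bit ints, the product would overflow; keep them as 64-bit
        // longs, and add half the divisor so the division rounds to nearest (half up).
        long interestCents = (principalCents * basisPoints + 5_000) / 10_000;
        System.out.println(interestCents);     // 78750000 cents, i.e. $787,500
    }
}
```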

So the answer is that we want to store exact amounts, which means whole units of whatever the minimum denomination of your currency is. However, it is nowhere near as simple as just "use integers for currency".


3 comment threads

$4.3B = $4,300M actually (2 comments)
Do not calculate in cents, calculate in small fractions of cents (1 comment)
Regarding rounding, also be wary of C90 (1 comment)

+6
−0

The main problem with using floating point is that the typical floating point type is binary floating point. But in binary floating point, most decimal fractions cannot be represented exactly. That is, if you store the number 9.95 in a floating point number, the number that you are actually storing is close to, but not exactly 9.95.

For example, with double precision the stored value times 100 is about 7×10⁻¹⁴ smaller than 995, so when you ask whether 9.95*100 < 995 you actually get a true result, contrary to what you would expect. It is possible to deal with that, but it is very easy to get wrong, so it is best not to attempt it in the first place.
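
This is easy to reproduce; for example, in Java (the BigDecimal(double) constructor shows the exact value the double actually holds):

```java
import java.math.BigDecimal;

public class BinaryFloatDemo {
    public static void main(String[] args) {
        System.out.println(9.95 * 100 < 995);      // true, surprisingly
        System.out.println(new BigDecimal(9.95));  // 9.949999999999999289... (the exact stored value)
    }
}
```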

Note that decimal floating point numbers do not have this issue, so if your language implementation provides them, there's nothing wrong with using them to store currency (indeed, that is exactly what those types are meant for). However, few languages provide them.
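
Java, for instance, has no fixed-width decimal floating-point primitive, but its BigDecimal is a decimal type that behaves this way when constructed from strings; a minimal sketch:

```java
import java.math.BigDecimal;

public class DecimalDemo {
    public static void main(String[] args) {
        BigDecimal price = new BigDecimal("9.95");                // exactly 9.95
        BigDecimal total = price.multiply(BigDecimal.valueOf(100));
        System.out.println(total);                                // 995.00
        System.out.println(total.compareTo(new BigDecimal("995")) < 0);  // false
    }
}
```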

Now floating point numbers are indeed able to store a certain range of integers exactly (the range depends on the type), so it is possible to store the money in cents. But then the number of cents is an integer anyway, so there's usually no reason to use a floating point type. In rare cases, the integer range a floating point type can represent exactly may be larger than the range of the largest available integer type; in that case one might decide that storing the integer value in such floats is the better option. But be aware that it is still effectively an integer value you store.

Another problem with floating point numbers, this time including decimal ones, is that when the numbers get too big, they silently lose precision in the last integer digits (that is, if you store a large sum as an amount of cents, there may be a few cents too few or too many). This is not a flaw but by design, as it matches what is needed in most cases where floating point numbers are used; for money values, however, you don't want to lose that precision. Therefore you need to be careful that your amounts never enter the range in which those last digits are lost.
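
As a concrete illustration in Java: an IEEE double loses whole-number precision above 2⁵³, which for cents is somewhere around ninety trillion dollars:

```java
public class PrecisionLossDemo {
    public static void main(String[] args) {
        double cents = 9_007_199_254_740_992.0;    // 2^53 cents, roughly $90 trillion
        System.out.println(cents + 1.0 == cents);  // true: adding one cent changes nothing
    }
}
```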

Now those range checks are in principle the same ones you also have to do with integer arithmetic to prevent integer overflows; however, if your integers overflow, the results are usually far off and therefore obviously wrong, while with a floating point "precision overflow" the errors are usually tiny and may not be noticed immediately.

Now after having considered storage, what about calculations?

Quite obviously, doing all the calculations directly on integers as multiples of cents is a bad idea. To begin with, your computer's idea of rounding will likely not agree with the financial idea of rounding: most languages round integer division towards zero, and most of the rest round towards minus infinity, whereas in financial applications you want to round to the nearest value.

However, even if your language offers rounding to the nearest value, chances are that this rounding doesn't do exactly what you want. The issue is the rounding of values exactly in the middle between two integers, such as 2.5, where there is no unique nearest number.

In computers, usually the “round to even” strategy is applied, as that provides the best numerical stability. This means that 2.5 is rounded down to 2, while 3.5 is rounded up to 4. On the other hand, in financial applications the general rule is that halves are always rounded up, that is 2.5 must be rounded up to 3.
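
In Java, for instance, the two behaviours are easy to see side by side: Math.rint uses round-half-to-even, while BigDecimal lets you ask for half-up explicitly:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class RoundingDemo {
    public static void main(String[] args) {
        System.out.println(Math.rint(2.5));  // 2.0  (half rounded to the even neighbour)
        System.out.println(Math.rint(3.5));  // 4.0
        System.out.println(new BigDecimal("2.5").setScale(0, RoundingMode.HALF_UP));  // 3
        System.out.println(new BigDecimal("3.5").setScale(0, RoundingMode.HALF_UP));  // 4
    }
}
```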

However even if your computer offers the correct rounding mode, doing the calculations on integers representing cents (or whatever the minimal unit of your currency is) is not a good idea, as errors may accumulate over several steps (double rounding).

For example, say that you've got 128 dollars invested with an interest rate of 0.6%, but in the first year you only get half of that percentage. So how much do you get in the first year? Well, 0.3% of 128 dollars is 38.4 cents, which rounds to 38 cents. But let's do the calculation differently: first, calculate the interest you'd normally get: 0.6% of 128 dollars is 76.8 cents, which rounds to 77 cents. Half of that is 38.5 cents, which rounds up to 39 cents. This is one cent more.
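
A small Java sketch of those two computation orders; Math.round happens to round halves up, matching the financial convention used in the example:

```java
public class DoubleRoundingDemo {
    public static void main(String[] args) {
        // One rounding step: 0.3% of 12800 cents = 38.4 -> 38 cents.
        long direct = Math.round(12_800 * 0.003);

        // Two rounding steps: 0.6% = 76.8 -> 77 cents, then half of that = 38.5 -> 39 cents.
        long fullYear = Math.round(12_800 * 0.006);
        long halved = Math.round(fullYear * 0.5);

        System.out.println(direct + " vs " + halved);  // 38 vs 39
    }
}
```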

To avoid this type of error, intermediate calculations should always be done with a higher precision, and only the end result be converted to integer cents with proper rounding.

If you know that in the range of possible amounts your floating point numbers have sufficient precision, you can use those for the intermediate calculations, preferably decimal floats. Another option is to use a fixed point type with enough precision; that way you don't have to worry about range-dependent precision at all. If you don't have fixed point available, the best option is to again do your calculations in integers, but using a far smaller unit than one cent (e.g. 1/1000 cent). Note that the more extra digits you have, the less the rounding errors from the intermediate arithmetic matter. The final rounding to integer cents is then completely under your control.
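
For example, redoing the interest calculation above in Java with 1/1000 cent as the working unit; the values are taken from the example, and the choice of unit is arbitrary:

```java
public class MillicentDemo {
    public static void main(String[] args) {
        long principalMillicents = 12_800_000L;                    // $128.00 in 1/1000 cents
        long interestMillicents = principalMillicents * 3 / 1000;  // 0.3% = 38_400 millicents

        // One final rounding to whole cents, half up:
        long interestCents = (interestMillicents + 500) / 1000;    // 38
        System.out.println(interestCents);
    }
}
```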


1 comment thread

I would argue that _exactly_ how the rounding is to be done needs to be specified in the terms and co... (1 comment)
