

Why is it considered bad practice to use float for representing currency?

+12
−1

This is intended to be a canonical post for a problem that is quite common, especially among beginners.


I've heard that I should avoid using floating point variables for representing currency. What is the reason for that? Representing 123 dollars and 67 cents as 123.67 seems quite natural.


Post
+6
−0

The main problem with using floating point is that the typical floating-point type is binary floating point, and in binary floating point most decimal fractions cannot be represented exactly. That is, if you store the number 9.95 in a floating-point variable, the number you actually store is close to, but not exactly, 9.95.

For example, with double precision the stored value multiplied by 100 comes out about 7×10⁻¹⁴ smaller than 995, so the comparison 9.95*100 < 995 actually evaluates to true, contrary to what you would expect. It is possible to deal with that, but it is very easy to get wrong, so it is best not to try it to begin with.
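As a quick illustration, here is a minimal sketch in Python (whose float is an IEEE 754 double; any language using binary doubles behaves the same way):

```python
x = 9.95                 # actually stored as 9.94999999999999928945...
print(x * 100)           # 994.9999999999999
print(x * 100 < 995)     # True, although mathematically 9.95 * 100 == 995
print(0.1 + 0.2 == 0.3)  # False, for the same reason
```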

Note that decimal floating-point numbers do not have this issue, so if your language implementation provides such a type, there is nothing wrong with storing currency in it (indeed, that is exactly what those types are meant for). However, few languages provide one.
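As one concrete example, Python ships a decimal type in its standard library; a minimal sketch:

```python
from decimal import Decimal

price = Decimal("9.95")    # construct from a string, not from a binary float
print(price * 100)         # 995.00
print(price * 100 == 995)  # True
print(Decimal(9.95))       # 9.94999999999999928945... -- constructing from a
                           # float drags the binary representation error along
```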

Floating-point numbers are, however, able to store a certain range of integers exactly (the range depends on the type), so it is possible to store the amount as a whole number of cents. But the number of cents is an integer anyway, so there is usually no reason to use a floating-point type for it. In rare cases the integer range a floating-point type can represent exactly is larger than the range of the largest available integer type; in that case one might decide that storing the integer value in such a float is the better option. But be aware that it is still effectively an integer value you are storing.
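A sketch of the integer-cents approach; the 2**53 bound shown at the end is specific to IEEE 754 double precision:

```python
# Store money as a whole number of cents and keep all arithmetic integral.
price_cents = 12367                  # 123 dollars and 67 cents
total_cents = price_cents * 3        # exact integer arithmetic
print(f"{total_cents // 100}.{total_cents % 100:02d}")  # 371.01

# A double represents every integer exactly only up to 2**53.
print(float(2**53) == float(2**53 + 1))   # True: 2**53 + 1 is no longer exact
```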

Another problem with floating-point numbers, this time including decimal ones, is that when the numbers get too big, they silently lose precision in the last integer digits (that is, if you store a large sum as an amount of cents, the stored value may be a few cents too low or too high). This is not a flaw but a design decision, as it matches what is needed in most cases where floating-point numbers are used; when storing money, however, you do not want to lose that precision. You therefore need to make sure your amounts never leave the range in which every cent is still represented exactly.
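A sketch of that silent loss with double precision; the exact cutoff depends on the floating-point type:

```python
big = 1e16             # 10 quadrillion cents, still exactly representable
print(big + 1 == big)  # True: adding one cent changes nothing, because adjacent
                       # doubles in this range are already 2 cents apart
```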

Those range checks are in principle the same as the ones you have to do with integer arithmetic to prevent integer overflow; however, if your integers overflow, the results are usually far off and therefore obviously wrong, while with a floating-point “precision overflow” the errors are usually tiny and may not be noticed immediately.

Having considered storage, what about calculations?

Doing all the calculations directly on integers representing multiples of cents is, quite obviously, a bad idea. To begin with, your computer's idea of rounding will likely not agree with the financial one: in most languages integer division rounds towards zero, and in most of the rest it rounds towards minus infinity, whereas in financial applications you want to round to the nearest value.

However, even if your language offers rounding to the nearest value, chances are that it does not do exactly what you want. The issue is values exactly in the middle between two integers, such as 2.5, where there is no unique nearest number.

Computers usually apply the “round half to even” strategy, as it provides the best numerical stability: 2.5 is rounded down to 2, while 3.5 is rounded up to 4. In financial applications, on the other hand, the general rule is that halves are always rounded up, so 2.5 must become 3.
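A sketch of the different rounding behaviours, using Python as a concrete example (other languages differ in which modes their built-in rounding and integer division use):

```python
import math
from decimal import Decimal, ROUND_HALF_UP

print(int(-2.5))         # -2: truncation rounds towards zero
print(math.floor(-2.5))  # -3: floor rounds towards minus infinity
print(round(2.5))        # 2: the built-in round() uses round half to even
print(round(3.5))        # 4
# Round half up, as generally wanted in financial contexts:
print(Decimal("2.5").quantize(Decimal("1"), rounding=ROUND_HALF_UP))  # 3
```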

However, even if your computer offers the correct rounding mode, doing the calculations on integers representing cents (or whatever the smallest unit of your currency is) is not a good idea, because rounding errors may accumulate over several steps (double rounding).

For example, say you have 128 dollars invested at an interest rate of 0.6%, but in the first year you only get half of that rate. How much do you get in the first year? Obviously, 0.3% of 128 dollars is 38.4 cents, which is rounded to 38 cents. But let's do the calculation differently: first compute the interest you would normally get, 0.6% of 128 dollars = 76.8 cents, which is rounded to 77 cents. Half of that is 38.5 cents, which is rounded to 39 cents. That is one cent more.
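A sketch of that double-rounding effect; the round_half_up_cents helper is only for illustration, since the built-in round() breaks ties differently:

```python
from decimal import Decimal, ROUND_HALF_UP

def round_half_up_cents(amount: Decimal) -> Decimal:
    """Round an amount in cents to whole cents, halves up."""
    return amount.quantize(Decimal("1"), rounding=ROUND_HALF_UP)

principal_cents = Decimal("12800")                  # 128 dollars

# Rounding only once, at the end:
print(round_half_up_cents(principal_cents * Decimal("0.003")))            # 38

# Rounding an intermediate result as well (double rounding):
full_interest = round_half_up_cents(principal_cents * Decimal("0.006"))   # 77
print(round_half_up_cents(full_interest / 2))        # 39, one cent too many
```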

To avoid this type of error, intermediate calculations should always be done with a higher precision, and only the end result be converted to integer cents with proper rounding.

If you know that your floating-point numbers have sufficient precision over the whole range of possible amounts, you can use them for the intermediate calculations, preferably as decimal floats. Another option is a fixed-point type with enough precision; that way you do not have to worry about range-dependent precision at all. If fixed point is not available, the best option is again to do your calculations in integers, but using a far smaller unit than one cent (e.g. 1/1000 cent). Note that the more extra digits you have, the less the rounding errors from the intermediate arithmetic matter. The final rounding to integer cents is then completely under your control.
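A sketch of the last option, carrying intermediate amounts as integer thousandths of a cent (the unit and helper name are only illustrative):

```python
MILLICENTS_PER_CENT = 1000

def to_cents_half_up(millicents: int) -> int:
    """Final rounding: thousandths of a cent -> whole cents, halves up
    (for non-negative amounts)."""
    return (millicents + MILLICENTS_PER_CENT // 2) // MILLICENTS_PER_CENT

principal = 128 * 100 * MILLICENTS_PER_CENT  # 128 dollars, in thousandths of a cent
interest = principal * 3 // 1000             # 0.3% interest, still an exact integer
print(to_cents_half_up(interest))            # 38
```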


1 comment thread

Martin Bonner wrote about 3 years ago

I would argue that exactly how the rounding is to be done needs to be specified in the terms and conditions of the investment product, and the implementation needs to follow that specification.