Why is it considered bad practice to use float for representing currency?

+12
−1

This is intended to be a canonical post for a problem that is quite common, especially among beginners.


I've heard that I should avoid using floating point variables for representing currency. What is the reason for that? Representing 123 dollars and 67 cents as 123.67 seems quite natural.
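Yet even a simple test in C shows something is off (a minimal sketch of the kind of surprise I've heard about):

```c
#include <stdio.h>

int main(void)
{
    double total = 0.0;
    for (int i = 0; i < 10; i++)
        total += 0.10;  /* add ten dimes */

    printf("%.17f\n", total);  /* prints 0.99999999999999989 */
    printf("%s\n", total == 1.0 ? "equal" : "not equal");  /* not equal */
    return 0;
}
```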


1 comment thread

Long-term usefulness (1 comment)
Answer
+5
−3

I see that Klutt has explained why integers should be used, but there is more that the programmer must keep in mind.

Consider the number of bits the integer needs. For US currency, you'd count cents (1/100 of a dollar). On many machines, integers are only 32 bits wide unless you specifically ask for more. 2^32 is about 4.3 billion, which in cents is only about $43 million (and half that if the integer is signed). Clearly that's nowhere near enough for many uses. Some other currencies have even smaller minimum increments.
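As a rough sketch (hypothetical amounts, using C99 fixed-width types), a 32-bit count of cents overflows at amounts a real system can easily encounter, while a 64-bit count does not:

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* $50 million is 5,000,000,000 cents, beyond INT32_MAX (2,147,483,647). */
    int64_t amount = 5000000000LL;

    int32_t cents32 = (int32_t)amount;  /* implementation-defined result */
    int64_t cents64 = amount;           /* fits with plenty of room */

    printf("32-bit: %ld cents\n", (long)cents32);
    printf("64-bit: %lld cents\n", (long long)cents64);
    return 0;
}
```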

I once used "double" precision floating point for expressing wealth because the machine and compiler combination didn't support integers wider than 32 bits. In that case I knew the double precision floating point format provided 48 bits of precision. Keeping everything in units of cents allowed amounts up to about $2.8 trillion while still representing them exactly. That was good enough for my purpose. I did add routines that rounded to the nearest whole cent after calculations.
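A minimal sketch of that scheme (made-up balance and rate): the amount lives in a double as a whole number of cents, and is rounded back to whole cents after each calculation:

```c
#include <stdio.h>
#include <math.h>

int main(void)
{
    double balance = 1234567.0;  /* $12,345.67 held as whole cents */

    balance *= 1.05;             /* apply 5% interest: 1296295.35 */
    balance = round(balance);    /* back to whole cents: 1296295 */

    printf("balance: %.0f cents\n", balance);
    return 0;
}
```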

Another issue with integers is how calculations get rounded. Integer arithmetic usually truncates results instead of rounding them; put another way, it rounds toward 0 instead of toward the nearest whole value. That's usually not what you want. Using a wide floating point format for the calculation, then rounding to the nearest whole amount and storing that, gets around the problem. However, this must be done carefully: you have to be very aware of both the integer and floating point formats of any machine the code might run on.
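A small demonstration of both behaviors (the 7% rate is arbitrary): plain integer division truncates, while going through double and rounding gives the nearest cent:

```c
#include <stdio.h>
#include <math.h>

int main(void)
{
    long cents = 999;

    /* Integer arithmetic truncates toward zero: 999 * 7 / 100 = 69.93 -> 69. */
    long truncated = cents * 7 / 100;

    /* Going through double and rounding gives the nearest whole cent: 70. */
    long rounded = lround((double)cents * 7.0 / 100.0);

    printf("truncated: %ld, rounded: %ld\n", truncated, rounded);
    return 0;
}
```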

The best answer in terms of correctness and portability is to use a wide integer library with round-to-nearest capability. This could then use native hardware when available, although round-to-nearest is usually not supported in hardware integer arithmetic units.
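For the round-to-nearest part, here is one hedged sketch of how such a library might do nearest-value division in plain 64-bit integer arithmetic (positive divisor assumed; halves round away from zero):

```c
#include <stdint.h>

/* Divide n by d (d > 0), rounding to the nearest whole value.
 * Halves round away from zero.  Relies on C99's truncate-toward-zero
 * division.  Note that n + d/2 can itself overflow for n near the
 * ends of the 64-bit range. */
static int64_t div_round_nearest(int64_t n, int64_t d)
{
    return (n >= 0) ? (n + d / 2) / d
                    : (n - d / 2) / d;
}
```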

Another issue is that while amounts of wealth should be kept in integers, other values you need to apply to them will not be integers. For example, figuring the interest earned over some period requires multiplication by a fraction. This needs to be done very carefully, since naive implementations can overflow the intermediate integer result.
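For instance (made-up balance and rate), the naive 32-bit product overflows even though the final result fits; widening the intermediate fixes it:

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    int32_t balance = 1000000000;  /* $10 million in cents */

    /* Naive: 1000000000 * 525 overflows int32_t before the division. */
    /* int32_t interest_bad = balance * 525 / 10000; */  /* undefined behavior */

    /* Widen the intermediate to 64 bits, then round to the nearest cent. */
    int64_t interest = ((int64_t)balance * 525 + 5000) / 10000;  /* 5.25% */

    printf("interest: %lld cents\n", (long long)interest);  /* $525,000 */
    return 0;
}
```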

So the answer is that we want to store exact amounts, which means whole units of whatever the minimum denomination of your currency is. However, it is nowhere near as simple as just "use integers for currency".


3 comment threads

$4.3B = $4,300M actually (2 comments)
Do not calculate in cents, calculate in small fractions of cents (1 comment)
Regarding rounding, also be wary of C90 (1 comment)
Regarding rounding, also be wary of C90
Lundin wrote about 3 years ago · edited about 3 years ago

In the old C90 version of the language ("ANSI C"), the rounding direction of integer division with negative operands wasn't well-defined: a C90 compiler could round either toward zero or downward (toward negative infinity); the choice was implementation-defined. That means negative numbers could get different treatment than positive ones. This big "language bug" was fixed in C99, which mandates truncation toward zero, but some old compilers may give unexpected results, particularly obscure embedded systems ones.
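A sketch of the difference; since C99 the results shown are guaranteed, while under C90 the alternative was equally legal:

```c
#include <stdio.h>

int main(void)
{
    /* C99 and later: division truncates toward zero, so -7 / 2 == -3
     * and -7 % 2 == -1.  Under C90 a compiler was also allowed to
     * round toward negative infinity: -7 / 2 == -4 with -7 % 2 == 1. */
    printf("%d %d\n", -7 / 2, -7 % 2);  /* guaranteed "-3 -1" since C99 */
    return 0;
}
```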