
Post History

Q&A Why is it considered bad practice to use float for representing currency?


posted 3y ago by Olin Lathrop‭

Answer
#1: Initial revision by Olin Lathrop‭ · 2021-09-13T13:01:46Z (over 3 years ago)
I see that Klutt has explained why integers should be used, but there is more that the programmer must keep in mind.

Consider the number of bits the integer needs.  For US currency, you'd use cents (1/100 of a dollar).  On many machines, integers are only 32 bits wide unless you specifically ask for more.  2<sup>32</sup> is about 4.3 billion, which in cents is only $43 million (half that if the integer is signed).  Clearly that's nowhere near enough for many uses.  Some other denominations have even smaller minimum increments.
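A quick sketch of that headroom limit, in Python (illustrative only; Python's own integers don't overflow, so the 32-bit bounds are checked by hand):

```python
# How much money fits in a 32-bit integer holding cents?
INT32_MAX = 2**31 - 1    # largest signed 32-bit value
UINT32_MAX = 2**32 - 1   # largest unsigned 32-bit value

balance_cents = 50_000_000 * 100   # a $50 million balance, in cents

print(balance_cents > INT32_MAX)   # True: already overflows a signed 32-bit int
print(UINT32_MAX // 100)           # 42_949_672: under $43 million of unsigned headroom
```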

I once used "double" precision floating point for expressing wealth because the machine and compiler combination didn't support more than 32 bit integers.  I knew that in this case the double precision floating point format provided 48 bits of precision.  Keeping everything in units of cents allowed up to $2.8 trillion while still being able to represent the amount exactly.  That was good enough for my purpose in that case.  I did add routines that would round to the nearest whole cent after calculations.
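A sketch of why this works: whole-cent amounts stay exact in a float as long as they fit in the significand.  Standard IEEE 754 doubles give 53 bits; the machine above gave 48.  Either way, below the limit the value round-trips exactly, and above it exactness is lost:

```python
# $2.8 trillion in cents, which fits in 48 bits (2**48 is about 2.81e14)
cents = 280_000_000_000_000
print(float(cents) == cents)       # True: exactly representable in a double

# Past the significand width of an IEEE 754 double, integers stop being exact:
big = 2**53 + 1
print(float(big) == big)           # False: 2**53 + 1 rounds to 2**53
```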

Another issue with integers is how calculations get rounded.  Integer arithmetic usually truncates results instead of rounding.  Put another way, results are rounded towards 0 instead of towards the nearest whole value.  That's usually not what you want.  Using a large floating point format for calculations, then rounding to the nearest whole amount and storing that, gets around the problem.  However, this must be done carefully.  You have to be very aware of both the integer and floating point formats of any machine the code might run on.
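The truncation pitfall can be sketched like this (the helper names are illustrative; `c_div` mimics C's truncate-toward-zero integer division):

```python
def c_div(a, b):
    """Truncating division toward zero, like C's / on integers."""
    q = abs(a) // abs(b)
    return q if (a >= 0) == (b >= 0) else -q

# Scaling $10.05 (1005 cents) by 1/10 silently drops the half cent:
print(c_div(1005, 10))             # 100, i.e. $1.00 instead of $1.01

def div_round_nearest(a, b):
    """Round-half-up division; positive operands assumed."""
    return (a + b // 2) // b

print(div_round_nearest(1005, 10)) # 101: the half cent rounds up
```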

The best answer in terms of correctness and portability is to use a wide integer library with round-to-nearest capability.  This could then use native hardware when available, although round-to-nearest is usually not supported in hardware integer arithmetic units.
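As a stand-in sketch for such a library, Python's arbitrary-precision integers together with the standard `decimal` module already give wide values with explicit round-to-nearest control:

```python
from decimal import Decimal, ROUND_HALF_UP

# A half-cent amount: truncation would give 10.00, round-half-up gives 10.01.
amount = Decimal("10.005")
rounded = amount.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
print(rounded)   # 10.01
```

In languages without built-in big integers, a library such as GMP plays the same role, with the rounding step written explicitly as in the division sketch above.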

Another issue is that while the amounts of wealth should be kept in integers, other values you need to apply to them will not be integers.  For example, figuring the interest earned over some time requires multiplication by a fraction.  This needs to be done very carefully.  Naive implementations can overflow integers, for example.
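One way to sketch this, keeping everything in integers: scale the fractional rate to an integer (parts per million here; the rate and names are illustrative), multiply first, then divide with round-to-nearest.  Note the intermediate product is what overflows in a fixed-width language, so in C you'd need a 64-bit (or wider) intermediate:

```python
RATE_PPM = 45_000   # 4.5% annual interest, expressed in parts per million

def interest_cents(principal_cents, rate_ppm):
    # Multiply before dividing, then round half up, to avoid truncation loss.
    return (principal_cents * rate_ppm + 500_000) // 1_000_000

print(interest_cents(123_456, RATE_PPM))   # 5556 cents, i.e. $55.56 on $1,234.56
```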

So the answer is that we want to store exact amounts, which means always storing whole units of whatever the minimum denomination is in the currency you are using.  However, it is nowhere near as simple as just "use integers for currency".