I see that Klutt has explained why integers should be used, but there is more that the programmer must keep in mind.

Consider the number of bits the integer needs. For US currency, you'd use cents (1/100 of a dollar). On many machines, integers are only 32 bits wide unless you specifically ask for more. 2<sup>32</sup> is about 4.3 billion, which is only about $43 million, and a *signed* 32-bit integer reaches only about $21 million. Clearly that's nowhere near enough for many uses. Some other currencies have even smaller minimum increments.

I once used "double" precision floating point for expressing wealth because the machine and compiler combination didn't support integers wider than 32 bits. I knew that in this case the double precision floating point format provided 48 bits of precision. Keeping everything in units of cents allowed up to about $2.8 trillion while still being able to represent each amount exactly. That was good enough for my purpose in that case. I did add routines that rounded to the nearest whole cent after calculations.

Another issue with integers is how calculations get rounded. Integer arithmetic usually truncates results instead of rounding. Put another way, it rounds towards 0 instead of towards the nearest whole value. That's usually not what you want. Using a large floating point format for calculations, then rounding to the nearest whole amount and storing that, gets around the problem. However, this must be done carefully: you have to be very aware of both the integer and floating point formats of any machine the code might run on. The best answer in terms of correctness and portability is to use a wide integer library with round-to-nearest capability. Such a library could use native hardware when available, although round-to-nearest is usually not supported by hardware integer arithmetic units.

Another issue is that while the amounts of wealth should be kept in integers, other values you need to apply to them will not be integers. For example, figuring the interest earned over some time requires multiplication by a fraction. This needs to be done very carefully; naive implementations can overflow the intermediate integer result, for example. A sketch of one way to handle this follows at the end of this answer.

So the answer is that we want to store exact amounts, which always means whole units of whatever the minimum denomination is in the currency you are using. However, it is nowhere near as simple as just "use integers for currency".
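
To make the interest example concrete, here is a minimal C sketch of the approach under discussion. All names here (`cents_t`, `div_round`, `interest`) are hypothetical, chosen for illustration: amounts are stored as 64-bit integers of cents, division rounds to the nearest cent instead of truncating, and the intermediate product is kept in 64 bits to postpone overflow.

```c
#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

typedef int64_t cents_t;   /* amounts stored as whole cents */

static cents_t div_round(cents_t num, cents_t den)
{
    /* C integer division truncates toward zero; bias the numerator by
       half the divisor so the result rounds to nearest instead
       (ties away from zero).  Assumes den > 0. */
    return (num >= 0) ? (num + den / 2) / den
                      : (num - den / 2) / den;
}

static cents_t interest(cents_t balance, int rate_bp, int days)
{
    /* balance * rate_bp * days / (10000 * 365), rounded to the nearest
       cent.  Rate is in basis points (1/100 of a percent).  The 64-bit
       intermediate product overflows only when balance * rate_bp * days
       exceeds about 9.2e18, far beyond realistic balances at ordinary
       rates. */
    return div_round(balance * (cents_t)rate_bp * days,
                     (cents_t)10000 * 365);
}

int main(void)
{
    cents_t balance = 123456789;                  /* $1,234,567.89 */
    cents_t earned  = interest(balance, 250, 30); /* 2.50% for 30 days */

    printf("interest earned: %" PRId64 " cents\n", earned);
    printf("truncating  7/2 = %" PRId64 "\n", (cents_t)7 / 2);  /* 3 */
    printf("round-near  7/2 = %" PRId64 "\n", div_round(7, 2)); /* 4 */
    return 0;
}
```

Note that `div_round` breaks ties away from zero; financial applications often mandate a specific tie-breaking rule (banker's rounding, for example), so check which rule applies before settling on one.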