Project address: github.com/wangxiaofei…

Why do floating-point operations lose precision

Some decimal fractions produce an infinitely repeating expansion when converted to binary, e.g. 0.1:

```
0.1 * 2 = 0.2  # 0
0.2 * 2 = 0.4  # 0
0.4 * 2 = 0.8  # 0
0.8 * 2 = 1.6  # 1
0.6 * 2 = 1.2  # 1
0.2 * 2 = 0.4  # 0  ... and the pattern repeats forever
```

But computer memory is finite, so we cannot store all of those binary digits; beyond a certain precision the remaining bits are simply discarded. Some of the resulting rounding errors cancel each other out, but others accumulate, which is why 0.1 + 0.2 !== 0.3.
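The mismatch is easy to see directly in JavaScript:

```javascript
// The doubles stored for 0.1 and 0.2 are both slightly too large,
// and the two errors accumulate instead of cancelling.
const sum = 0.1 + 0.2;

console.log(sum);         // 0.30000000000000004
console.log(sum === 0.3); // false

// The error is tiny, so an epsilon comparison still works:
console.log(Math.abs(sum - 0.3) < Number.EPSILON); // true
```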

How to fix it

Split the floating-point number

```
1.12  => { left: 1,  right: 12, length: 2 }
1.1   => { left: 1,  right: 1,  length: 1 }
-1.12 => { left: -1, right: 12, length: 2 }
```

Align the two fractional parts to the same length

```
1.1  => { left: 1, right: 10, length: 2 }   // padded with a trailing zero
1.12 => { left: 1, right: 12, length: 2 }
```
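The splitting and alignment steps can be sketched like this; `splitFloat` and `alignRight` are illustrative names, not the project's actual API, and the sketch assumes the number does not stringify in exponential notation:

```javascript
// Split on the decimal point as a string, so no new float error is introduced.
function splitFloat(num) {
  const [left, right = ''] = String(num).split('.');
  return {
    left: parseInt(left, 10),          // integer part (keeps the sign)
    right: parseInt(right || '0', 10), // fractional digits as an integer
    length: right.length,              // number of fractional digits
  };
}

// Pad the shorter fractional part so both operands share the same length.
function alignRight(a, b) {
  const length = Math.max(a.length, b.length);
  const pad = (p) => p.right * 10 ** (length - p.length);
  return [
    { ...a, right: pad(a), length },
    { ...b, right: pad(b), length },
  ];
}
```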

Operations

  1. Addition
  • Add the integer parts
  • Add the (aligned) fractional parts
  • Decide whether to carry: carry into the integer part when the fractional sum overflows its digit count, i.e. reaches 10^length
  • Reassemble the parts into a string and parseFloat it
  2. Multiplication
  • (l1 + r1) * (l2 + r2) = l1*l2 + l1*r2 + r1*l2 + r1*r2
  • Track the decimal shift of each term, e.g. r1 * r2 = 10 * 12 = 120, which stands for 0.10 * 0.12 = 0.0120, because the product of two 2-digit fractions has four decimal places
  • Add the four terms
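The addition steps above can be sketched as follows for non-negative operands; `addFloat` is an illustrative name, not the project's actual function:

```javascript
function addFloat(x, y) {
  // 1. split on the decimal point as a string (no new float error)
  const split = (n) => {
    const [l, r = ''] = String(n).split('.');
    return { left: parseInt(l, 10), right: r, length: r.length };
  };
  const a = split(x);
  const b = split(y);

  // 2. align: pad the shorter fractional string with zeros
  const length = Math.max(a.length, b.length);
  const frac = (p) => parseInt((p.right || '0').padEnd(length, '0'), 10);

  // 3. add integer parts and fractional parts separately
  let left = a.left + b.left;
  let right = frac(a) + frac(b);

  // 4. carry when the fractional sum overflows its digit count
  const base = 10 ** length;
  if (right >= base) {
    left += 1;
    right -= base;
  }

  // 5. reassemble as a string and parseFloat
  return parseFloat(`${left}.${String(right).padStart(length, '0')}`);
}
```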
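And the four-term expansion for multiplication, again for non-negative operands and with `mulFloat` as an illustrative name:

```javascript
function mulFloat(x, y) {
  const split = (n) => {
    const [l, r = ''] = String(n).split('.');
    return { left: parseInt(l, 10), right: parseInt(r || '0', 10), length: r.length };
  };
  const a = split(x);
  const b = split(y);

  // (l1 + r1) * (l2 + r2) = l1*l2 + l1*r2 + r1*l2 + r1*r2,
  // where each term is shifted back by the fractional digits it involves
  const digits = a.length + b.length;
  const term = (value, shift) => value / 10 ** shift;
  const sum =
    term(a.left * b.left, 0) +
    term(a.left * b.right, b.length) +
    term(a.right * b.left, a.length) +
    term(a.right * b.right, digits); // e.g. 10 * 12 = 120 -> 0.0120

  // the exact result has at most `digits` decimal places, so rounding
  // there removes the float error reintroduced by the divisions
  return parseFloat(sum.toFixed(digits));
}
```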

Why is there no division

Division is inherently limited: even exact decimal operands can produce a non-terminating quotient (e.g. 1 / 3), which no finite {left, right, length} representation can hold.
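A quick illustration of why:

```javascript
// Even exact decimal operands can produce a non-terminating quotient,
console.log(1 / 3);     // 0.3333333333333333
// and naive float division accumulates error of its own:
console.log(0.3 / 0.1); // 2.9999999999999996
```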

Why not just shift the floating-point numbers into integers, compute, and shift back?

Because doing so greatly shrinks the range the method can handle. For example, suppose the system gives 32 bits to the integer part and 32 bits to the fractional part: after shifting, the integer and fractional digits of an operand must fit together into the integer range, so the supported width is roughly halved — the fractional and integer parts combined can use only 32 bits.
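A sketch of that alternative — scale by 10^digits, operate on integers, scale back — and where its range runs out (`scaleAdd` is an illustrative name):

```javascript
const scaleAdd = (x, y, digits) => {
  const factor = 10 ** digits;
  // Math.round snaps away the error introduced by x * factor itself
  return (Math.round(x * factor) + Math.round(y * factor)) / factor;
};

console.log(scaleAdd(0.1, 0.2, 1)); // 0.3

// The catch: the scaled values must stay inside the exact-integer range.
// Once value * 10^digits exceeds Number.MAX_SAFE_INTEGER (2^53 - 1),
// the intermediate integers themselves lose precision, so the integer
// and fractional digits of the operands share a single integer budget.
console.log(Number.MAX_SAFE_INTEGER); // 9007199254740991
```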