0.1 + 0.2 !== 0.3

```javascript
0.1 + 0.2;
// 0.30000000000000004
```

Why is that?

0.1 + 0.2 !== 0.3 because of a loss of precision.

Can you be more specific?

Uh…

The specifics

First, the decimal values 0.1 and 0.2 are converted to binary. Both have infinitely repeating binary expansions (0.1 is 0.000110011001100…, with the block 0011 repeating forever), so neither can be stored exactly; each must be truncated to fit in a floating-point number.
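You can see this truncation directly in JavaScript: toString(2) prints the binary form of the value that was actually stored, and it is finite:

```javascript
// The stored doubles are finite binary approximations of 0.1 and 0.2.
(0.1).toString(2);
// "0.0001100110011001100110011001100110011001100110011001101"
(0.2).toString(2);
// "0.001100110011001100110011001100110011001100110011001101"
```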

Why do these numbers become infinite when expressed in binary?

Binary floating point

Floating-point numbers cannot represent all real numbers. Some values, such as 1.375, can be represented exactly, because 1.375 = 1 + 1/4 + 1/8 is a finite sum of powers of two; a decimal fraction has a finite binary expansion only when its denominator is a power of two. But neither 1.4 nor 1.1 can be represented exactly. Using the converter in link [4] to see the value actually stored behind the one displayed, 1.4 is stored as roughly 1.39999999999999991 and 1.1 as roughly 1.10000000000000009, so subtracting the two gives not 0.3 but 0.2999999999999998.
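The same stored values can be recovered in JavaScript itself with toPrecision, which prints more digits than the default formatting shows:

```javascript
1.4 - 1.1;              // 0.2999999999999998, not 0.3
(1.4).toPrecision(20);  // "1.3999999999999999112"
(1.1).toPrecision(20);  // "1.1000000000000000888"
```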

Machine epsilon for a double, i.e. the relative spacing between adjacent representable values, is 2^-52, roughly 2.22 × 10^-16, so a double guarantees only about 15 significant decimal digits, and the 16th is only partially accurate. (Single precision guarantees only about 6 significant digits, with the 7th partially accurate.) That is why 4.0 + 1e16 = 1.0000000000000004e16 comes out exact: adjacent doubles near 1e16 are 2 apart, and 1e16 + 4 happens to be representable, so the trailing 4 survives. But 5.0 + 1e16 also yields 1.0000000000000004e16, because 1e16 + 5 is not representable and the last digit cannot hold the 5; the sum is rounded, and a floating-point error appears. One order of magnitude higher, the small addend vanishes entirely: 4.0 + 1e17 = 1e17, hence 4.0 + 1e17 - 1e17 = 0.0.
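All of these cases run as described in any JavaScript engine:

```javascript
4.0 + 1e16;            // 10000000000000004 (1e16 + 4 is exactly representable)
5.0 + 1e16;            // 10000000000000004 (1e16 + 5 is not; ties round to even)
4.0 + 1e17 - 1e17;     // 0 (4 is below half the gap between doubles near 1e17)
Number.EPSILON;        // 2.220446049250313e-16, i.e. 2 ** -52
```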

A standard 64-bit double-precision float carries at most 53 significant binary digits (52 stored fraction bits plus one implicit leading bit), so the infinite binary expansion must be truncated to fit, and converting the truncated value back to decimal is what produces the visible error.
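The same 53-bit budget is what caps exact integer arithmetic at 2^53, which is a convenient way to observe the limit:

```javascript
2 ** 53;                  // 9007199254740992
2 ** 53 + 1;              // 9007199254740992 (the +1 is below one ulp and is lost)
Number.MAX_SAFE_INTEGER;  // 9007199254740991, i.e. 2 ** 53 - 1
```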

The root cause of this problem is that computers can only store 0s and 1s: a floating-point value is approximated with a finite number of binary digits, and adding more digits only brings the approximation closer. Floating-point arithmetic additionally aligns exponents by shifting significands, so errors arise both in representation and in computation.

Background knowledge

In JavaScript, the Number type is a 64-bit double-precision floating-point number (IEEE 754).

The highest bit is the sign bit, the next 11 bits are the exponent, and the remaining 52 bits are the fraction (significand).

The sign bit determines the sign, the exponent bits determine the magnitude (the range), and the fraction bits determine the precision.
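The layout can be inspected directly. The sketch below uses a DataView to read the raw bytes; doubleToBits is an illustrative helper, not a built-in:

```javascript
// Dump the sign, exponent, and fraction bits of a 64-bit double.
function doubleToBits(x) {
  const view = new DataView(new ArrayBuffer(8));
  view.setFloat64(0, x); // big-endian by default
  let bits = "";
  for (let i = 0; i < 8; i++) {
    bits += view.getUint8(i).toString(2).padStart(8, "0");
  }
  // 1 sign bit | 11 exponent bits | 52 fraction bits
  return `${bits[0]} ${bits.slice(1, 12)} ${bits.slice(12)}`;
}

doubleToBits(0.1);
// "0 01111111011 1001100110011001100110011001100110011001100110011010"
```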

conclusion

0.1 + 0.2 !== 0.3 is caused by loss of precision.

JavaScript’s Number type is a 64-bit double-precision floating-point number. Floating-point numbers cannot represent all real numbers exactly; some, like 0.1 and 0.2, can only ever be approximated, so errors appear.

At root, computers can only store 0s and 1s: a floating-point value is a finite binary approximation, and the shift-based arithmetic adds its own rounding, so error enters both the representation and the computation.
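A common way to cope in practice is to compare with a tolerance instead of strict equality. Below is a minimal sketch; nearlyEqual is an illustrative helper, and a fixed tolerance of Number.EPSILON is only appropriate for values near 1:

```javascript
// Compare floats with a tolerance instead of strict equality.
function nearlyEqual(a, b, eps = Number.EPSILON) {
  return Math.abs(a - b) < eps;
}

nearlyEqual(0.1 + 0.2, 0.3); // true
0.1 + 0.2 === 0.3;           // false
```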

reference

JavaScript Number scopes and out-of-scope operations

Explain binary floating point numbers