Front-end developers will sooner or later run into the strange result that 0.1 + 0.2 !== 0.3 during project development. By everyday logic this clearly violates the arithmetic rules we learned in school. So why does JavaScript make such a basic mistake, and what is going on underneath? This article works through the causes of the problem from first principles.

JavaScript numerical problems

Before diving into the analysis, let me first pose three basic questions for you to think about.

Question 1:

How is the number of values of the Number type given in the JavaScript specification derived, where does NaN come from, and how many NaN bit patterns are there?

```
The Number type has exactly 18437736874454810627 values... (Why this number?)
```

Question 2:

```js
Number.MAX_SAFE_INTEGER === 9007199254740991 // true -- why is this the maximum safe integer?
Number.MAX_SAFE_INTEGER + 1 === Number.MAX_SAFE_INTEGER + 2 // true -- why?
```

Question 3:

```js
0.1 + 0.2 !== 0.3 // true -- what is the reason?
```

Binary in a computer

Now for the main content. Anyone who has studied the basics of computing knows that, at the lowest level, computers exchange data in binary. So let's first understand why computers use binary, and what binary actually is.

1. Why do computers exchange data in binary?

The electronic computers we use every day are physically built on digital circuits, which can be regarded as collections of logic gates, and the theoretical basis of gate circuits is logical (Boolean) operations. When the computer's circuitry is powered, each output carries a voltage, and that voltage level is read as a binary digit: a high level is represented by 1, a low level by 0.

Put simply, the computer's basic operations are carried out by circuits, and circuits are good at distinguishing high voltage from low; as long as a circuit can tell low from high, it can represent "0" and "1".

2. What is binary?

Binary works just like our familiar decimal, except that decimal carries over at ten while binary carries over at two.

For example, 001 + 001 is 002 in decimal, but 010 in binary, because in binary the 2 must be carried into the next bit.
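You can verify this carry behaviour directly in JavaScript, which supports 0b binary literals and Number.prototype.toString(2):

```js
const sum = 0b001 + 0b001;    // 1 + 1, written as binary literals
console.log(sum.toString(2)); // "10" -- the 2 carries into the next bit
```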

So the everyday calculations we write are in decimal. When the computer processes such an operation, it converts the decimal numbers to binary, performs the addition in binary, and converts the result back to decimal for display on the screen. These conversions are internal to the machine; we never see them happen. You can probably already guess that the 0.1 + 0.2 !== 0.3 problem must be related to this conversion from decimal to binary and back again, in other words a loss of precision.

Decimal arithmetic in a computer

Having located the problem above, let's not rush ahead: first we will pin down how decimal-to-binary and binary-to-decimal conversion are carried out, and then analyze where the precision is lost.

Decimal to binary

Decimal integer to binary

Example: Convert the decimal 21 to a binary number.

Method: divide the integer by 2 repeatedly, recording each remainder, and read the remainders in reverse order.

```
21 / 2 = 10 ... remainder 1
10 / 2 = 5  ... remainder 0
 5 / 2 = 2  ... remainder 1
 2 / 2 = 1  ... remainder 0
 1 / 2 = 0  ... remainder 1
```

Binary (remainders read in reverse): 10101
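A minimal sketch of this divide-by-2 procedure in JavaScript (the built-in toString(2) is used only as a cross-check):

```js
// Convert a non-negative integer to binary by repeated division by 2,
// collecting the remainders and reading them in reverse order.
function intToBinary(n) {
  if (n === 0) return '0';
  const remainders = [];
  while (n > 0) {
    remainders.push(n % 2);   // the remainder is the next binary digit
    n = Math.floor(n / 2);
  }
  return remainders.reverse().join('');
}

console.log(intToBinary(21));  // "10101"
console.log((21).toString(2)); // "10101" -- built-in cross-check
```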

Decimal fraction to binary

Example: Convert 0.125 to binary

Method: multiply the fractional part by 2 and take the integer part, repeating until the fractional part becomes 0. If the fraction never reaches 0, round at the last digit kept: if the next digit would be 0, discard it; if it would be 1, carry 1 upward. Read the collected integer parts from first to last.

```
0.125 * 2 = 0.25 -> take 0
0.25  * 2 = 0.5  -> take 0
0.5   * 2 = 1.0  -> take 1
```

Binary: 0.001
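The same procedure as a sketch in JavaScript (note that the argument is itself already a rounded double, so for a value like 0.1 this prints the expansion of the stored approximation):

```js
// Convert a fraction in [0, 1) to binary by repeatedly multiplying by 2 and
// taking the integer part; maxBits caps non-terminating expansions.
function fractionToBinary(f, maxBits = 32) {
  let bits = '0.';
  for (let i = 0; i < maxBits && f > 0; i++) {
    f *= 2;
    const bit = Math.floor(f); // the integer part is the next binary digit
    bits += bit;
    f -= bit;
  }
  return bits;
}

console.log(fractionToBinary(0.125)); // "0.001" -- terminates
console.log(fractionToBinary(0.1));   // "0.00011001100110011..." -- 1001 repeats forever
```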

Binary to decimal

Converting binary to decimal works the same way for the integer part and the fractional part.

Example: Convert a binary number 101.101 to a decimal number

Method: multiply each binary digit by its positional weight (a power of 2) and add the products to obtain the decimal number.
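The expansion for 101.101, written out in JavaScript:

```js
// 101.101B: each bit times its positional weight (powers of 2).
const value =
  1 * 2 ** 2  + 0 * 2 ** 1  + 1 * 2 ** 0 +   // integer part: 4 + 0 + 1 = 5
  1 * 2 ** -1 + 0 * 2 ** -2 + 1 * 2 ** -3;   // fraction part: 0.5 + 0 + 0.125
console.log(value); // 5.625
```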


After converting the decimal operands to binary, the computer performs the addition in binary.

Note: at the hardware level, computers only add. So 5 − 5 is computed as 5 + (−5).

In binary arithmetic, to prevent incorrect results and overflow of the highest bit, the concepts of sign-magnitude, ones' complement, and two's complement representation were introduced. Space does not allow a full treatment here, and interested readers can look the details up themselves; a small taste is sketched below.
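As a quick taste of two's complement (a sketch, not a full treatment): in 8 bits, −5 is stored as 256 − 5 = 251, which is exactly what lets subtraction reduce to addition:

```js
// Masking with 0xff keeps the low 8 bits, exposing the 8-bit two's complement pattern.
console.log((-5 & 0xff).toString(2));  // "11111011" -- i.e. 251 = 256 - 5
console.log((5 + (-5 & 0xff)) & 0xff); // 0 -- 5 + (-5) wraps around to 0 in 8 bits
```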

Numbers in JavaScript: IEEE 754 floating point

With the basics out of the way, let's return to JavaScript. As is well known, JavaScript has only one numeric type, Number, and Number is encoded as an IEEE 754 64-bit double-precision floating-point number. So in JavaScript all numeric values are floating-point numbers. What, then, does the IEEE 754 standard look like, and what does it say about Number?

My personal mental model of the IEEE 754 standard is scientific notation: it represents different values by controlling where the (binary) point sits.

According to Wikipedia, IEEE 754 specifies four ways of representing floating-point values: single precision (32 bits), double precision (64 bits), extended single precision (at least 43 bits, rarely used) and extended double precision (at least 79 bits, usually implemented as 80 bits). In practice only single precision (32 bits) and double precision (64 bits) are commonly used.

Single precision (32 bits) representation

Double precision (64 bits) representation

The two figures above show that a value represented under the IEEE 754 standard is divided into three fields: sign, exponent, and fraction. Understanding these three fields is the key to understanding the IEEE 754 standard. So what does each field mean? Let's first see how a binary value is written under IEEE 754, and then work through the three definitions.

In the IEEE 754 international standard, any binary floating-point number V, whether 32-bit single precision or 64-bit double precision, can be written in the form below (the presentation follows Ruan Yifeng's blog):

V = (-1)^s * M * 2^E

Among them:

  1. (-1)^s is the sign bit: when s = 0, V is positive; when s = 1, V is negative.
  2. M is the significand, with 1 ≤ M < 2.
  3. E, the exponent in 2^E, is the exponent field.

For example, decimal 7 converted to binary is 111, i.e. 1.11 * 2^2, so s = 0, M = 1.11, E = 2.

And -7 converted to binary is -111, i.e. -1.11 * 2^2, so s = 1, M = 1.11, E = 2.

In other words, s corresponds to the sign field, M to the fraction field, and E to the exponent field.

In 32-bit single precision, the sign bit is the highest bit and occupies 1 bit, the next 8 bits are the exponent E, and the remaining 23 bits are the significand M.

In 64-bit double precision, the sign bit is the highest bit and occupies 1 bit, the next 11 bits are the exponent E, and the remaining 52 bits are the significand M.

Now let's discuss what the exponent E and the significand M really are. I mentioned earlier that the significand M satisfies 1 ≤ M < 2. This is easy to understand: in scientific notation, the significant digits normally start with 1, i.e. take the form 1.xxxx, where xxxx is the fractional part. To let 32-bit precision hold more significant digits, the standard's clever designers decided that the leading 1 should not occupy one of M's bits. IEEE 754 specifies that when M is stored inside the computer, its first digit is always assumed to be 1 and is therefore dropped, keeping only the xxxx part after the point. The 23 stored bits thus yield 24 significant binary digits once the implicit leading 1 is added back on reading. Likewise, the 52 stored bits of 64-bit precision are equivalent to 53 significant digits.

The exponent E is more complicated. E is stored as an unsigned integer, so it ranges from 0 to 255 in 32-bit precision (E has 8 bits) and from 0 to 2047 in 64-bit precision (11 bits). But the exponent in scientific notation can of course be negative, so how can E represent negative values? The idea is to place the midpoint of E's range at zero: stored values to the left stand for negative exponents and values to the right for positive ones. IEEE 754 therefore specifies that the true exponent is the stored E minus a fixed bias: 127 for 32-bit precision and 1023 for 64-bit precision. In 32-bit precision, the stored range 0 to 255 thus nominally covers true exponents from -127 to 128.

Example: decimal 7 converted to binary is 111, i.e. 1.11 * 2^2, so the true exponent E is 2. To store it, the bias is added back: 2 + 127 = 129, which in binary is 10000001.
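JavaScript's Number uses the 64-bit format, whose bias is 1023, and we can inspect the three fields directly. A minimal sketch using DataView (decompose is an illustrative helper, not a standard API):

```js
// Read the raw 64 bits of a double and split them into sign (1 bit),
// exponent (11 bits, biased by 1023) and fraction (52 bits).
function decompose(x) {
  const view = new DataView(new ArrayBuffer(8));
  view.setFloat64(0, x);
  const bits = view.getBigUint64(0);
  return {
    sign: Number(bits >> 63n),
    exponent: Number((bits >> 52n) & 0x7ffn) - 1023, // remove the bias
    fraction: (bits & ((1n << 52n) - 1n)).toString(2).padStart(52, '0'),
  };
}

console.log(decompose(7));
// { sign: 0, exponent: 2, fraction: '1100000...0' } -- i.e. 1.11B * 2^2
```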

At the same time, the standard divides the exponent E into three cases (using 32-bit precision for the discussion):

  1. When E is neither all 0s nor all 1s, the value is a normal floating-point number, and the true exponent is the stored E minus 127.

  2. When E is all 0s, the number is subnormal: the true exponent is fixed at 1 - 127 = -126, and the significand M no longer gets the implicit leading 1 but reverts to the 0.xxxxxx form. This is how plus and minus zero, and very small numbers close to zero, are represented.

  3. When E is all 1s: if the significand bits M are all 0, the value is plus or minus infinity, depending on the sign bit; if M is not all 0, the value is not a number (NaN).
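These special cases are visible with the decompose sketch from above (the all-1s stored exponent is 2047, which prints as 1024 once the bias is removed; the exact NaN fraction bits can vary by engine, but they are always non-zero):

```js
console.log(decompose(Infinity));  // { sign: 0, exponent: 1024, fraction: '000...0' }
console.log(decompose(-Infinity)); // { sign: 1, exponent: 1024, fraction: '000...0' }
console.log(decompose(NaN));       // exponent 1024, fraction non-zero
```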

Back to JavaScript

The discussion so far has barely mentioned JavaScript and may have seemed beside the point, but once you grasp the principles above, numbers in JavaScript become much easier to understand.

The following sections walk step by step through the questions raised at the beginning:

1. The number of Number values in the JavaScript specification: why exactly that number?

First, recall that numbers in JavaScript are 64-bit doubles, so there are 2^64 possible bit patterns. As mentioned above, when E is all 1s the pattern represents either an infinity or a NaN, so 2^53 of the patterns are not ordinary numbers. JavaScript keeps +∞ and -∞ as values and collapses all NaN patterns into the single value NaN, so the total count of Number values is 2^64 - 2^53 + 3.

We can also compute the exact count of NaN bit patterns in JavaScript: by the definition above, a pattern is NaN when E is all 1s and the significand M is not all 0. Excluding the two patterns where M is all 0 (+∞ and -∞) leaves 2^53 - 2 NaN patterns.
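The counting argument can be checked with BigInt:

```js
// 2^64 bit patterns in total; 2^53 of them have an all-1s exponent.
// Remove those, then add back +Infinity, -Infinity and the single value NaN: +3.
const nanPatterns  = 2n ** 53n - 2n;              // all-1s exponent, fraction != 0
const numberValues = 2n ** 64n - 2n ** 53n + 3n;
console.log(nanPatterns);  // 9007199254740990n
console.log(numberValues); // 18437736874454810627n -- matches the spec
```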

2. Why is the maximum safe integer in JavaScript 9007199254740991?

As mentioned above, a double carries 53 significant binary digits (including the implicit leading 1 of the 1.xxxx form). An integer that needs more than 53 bits can no longer be stored exactly: rounding takes over, the mapping between integers and their representations stops being one-to-one, errors appear, and precision is lost; such integers are no longer safe. So the maximum safe integer in JavaScript is 2^53 - 1 = 9007199254740991.
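This is easy to demonstrate:

```js
console.log(Number.MAX_SAFE_INTEGER === 2 ** 53 - 1); // true
console.log(2 ** 53 === 2 ** 53 + 1);                 // true -- 2^53 + 1 has no exact representation
console.log(Number.MAX_SAFE_INTEGER + 1 === Number.MAX_SAFE_INTEGER + 2); // true
```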

3. Why is 0.1 + 0.2 !== 0.3?

This is probably the most interesting question, and the most classic JavaScript interview question. Having learned the material above, we already understand the cause (precision loss); but how exactly is the precision lost?

First of all, 0.1 + 0.2 is a decimal addition. As mentioned above, the computer handles it by converting the decimals to binary and then performing the operation. So we need the binary forms of 0.1, 0.2 and 0.3 for comparison and verification.
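We can let JavaScript print the binary expansions of the stored values directly (these are the already-rounded doubles, not the exact decimal fractions):

```js
console.log((0.1).toString(2)); // 0.0001100110011001100110011001100110011001100110011001101
console.log((0.2).toString(2)); // 0.001100110011001100110011001100110011001100110011001101
console.log((0.3).toString(2)); // 0.010011001100110011001100110011001100110011001100110011
```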

Carrying out the multiply-by-2 conversion (or reading the output above), it is easy to see that the binary expansion of 0.1 repeats indefinitely. After rounding to fit the format, the stored values are:

```
0.1d = (-1)^0 * 1.1001100110011001100110011001100110011001100110011010B * 2^-4   ("1001" * 12, then "1010")
0.2d = (-1)^0 * 1.1001100110011001100110011001100110011001100110011010B * 2^-3   (same significand)
0.3d = (-1)^0 * 1.0011001100110011001100110011001100110011001100110011B * 2^-2   ("0011" * 13)
```

As you can see, when 0.1 and 0.2 are converted to binary, the significand is cut to 52 fraction bits (4 * 12 + 4), because 64-bit precision keeps only 52 stored significand bits. Without that constraint the expansion of 0.1 would continue, and its 53rd bit would be a 1; under the 52-bit constraint, rounding turns the last five bits from 1001|1 (the 53rd bit) into 1010.

We can now add the binary forms of 0.1 and 0.2 by hand; after the sum is rounded back to 52 fraction bits, the result is one unit in the last place above the stored value of 0.3.

Converted back to decimal, the sum is actually 0.30000000000000004, and that is why 0.1 + 0.2 !== 0.3.
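A final check in the console, together with the comparison idiom commonly recommended instead of === (a convention, not part of the derivation above):

```js
console.log(0.1 + 0.2);         // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3); // false
// Compare against a tolerance instead of demanding exact equality:
console.log(Math.abs(0.1 + 0.2 - 0.3) < Number.EPSILON); // true
```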

Wrapping up

We started from one strange-looking problem, dug into why the phenomenon occurs, and worked out the principles behind it. That persistence, seeking truth from facts and getting to the bottom of things, is presumably what defines a programmer, and it brings a richer harvest. Having read this article, you should have a much better understanding of numeric values in JavaScript.