Without further ado, let’s start with the code

    function judgeFloat(n, m) {
      const binaryN = n.toString(2);
      const binaryM = m.toString(2);
      console.log(`The binary of ${n} is ${binaryN}`);
      console.log(`The binary of ${m} is ${binaryM}`);
      const MN = m + n;
      const accuracyMN = (m * 100 + n * 100) / 100;
      const binaryMN = MN.toString(2);
      const accuracyBinaryMN = accuracyMN.toString(2);
      console.log(`The binary of ${n} + ${m} is ${binaryMN}`);
      console.log(`The binary of ${accuracyMN} is ${accuracyBinaryMN}`);
      console.log(`${n} + ${m} converted from binary back to decimal is ${to10(binaryMN)}`);
      console.log(`${accuracyMN} converted from binary back to decimal is ${to10(accuracyBinaryMN)}`);
      console.log(`${n} + ${m} is ${(to10(binaryMN) === to10(accuracyBinaryMN)) ? '' : 'not '}computed accurately in JS`);
    }

    // Convert the fractional part of a binary string (e.g. "0.000110011...") back to decimal
    function to10(n) {
      const pre = (n.split('.')[0] - 0).toString(2); // integer part (not used below)
      const arr = n.split('.')[1].split('');
      let i = 0;
      let result = 0;
      while (i < arr.length) {
        result += arr[i] * Math.pow(2, -(i + 1));
        i++;
      }
      return result;
    }

    judgeFloat(0.1, 0.2);
    judgeFloat(0.6, 0.7);

JavaScript has no built-in way to convert a binary fraction string back to decimal, so the to10 helper above does it by hand.

Let me start with a simple conclusion

All data in a computer is stored in binary, so the computer first converts the numbers to binary, performs the calculation, and then converts the result back to decimal.

It is not hard to see from the code above that when 0.1 + 0.2 is calculated, precision is lost in the binary arithmetic, so the result no longer matches what we expect after converting back to decimal.
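You can verify this straight in a console. A minimal sketch (the `nearlyEqual` helper below is just an illustrative name, not a built-in):

    // The naive comparison fails because of the binary rounding described above
    console.log(0.1 + 0.2);           // 0.30000000000000004
    console.log(0.1 + 0.2 === 0.3);   // false

    // Common workaround: compare with a tolerance instead of strict equality
    function nearlyEqual(a, b, epsilon = Number.EPSILON) {
      return Math.abs(a - b) < epsilon;
    }
    console.log(nearlyEqual(0.1 + 0.2, 0.3)); // true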

Admittedly the title is a bit clickbaity: a single function won't give you a deep understanding on its own, so read on…

Analysis of the results – more questions

The binary representations of 0.1 and 0.2 are both infinitely repeating fractions, with the repeating pattern 1100.

Binary of 0.1:

    0.0001100110011001100110011001100110011001100110011001101

Binary of 0.2:

    0.001100110011001100110011001100110011001100110011001101

Theoretically, the sum of the two binary values above should be:

    0.0100110011001100110011001100110011001100110011001100111

But the binary of 0.1 + 0.2 that JS actually computes is:

    0.0100110011001100110011001100110011001100110011001101
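All of these strings are easy to reproduce yourself; toString(2) prints the bits the engine actually ended up storing. A quick sketch:

    console.log((0.1).toString(2));        // 0.0001100110011001100110011001100110011001100110011001101
    console.log((0.2).toString(2));        // 0.001100110011001100110011001100110011001100110011001101
    console.log((0.1 + 0.2).toString(2));  // 0.0100110011001100110011001100110011001100110011001101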

As a code obsessive, I now have some new questions:

Why does JS compute only this many bits for the binary of 0.1, rather than more?

Why is the binary of (0.1 + 0.2) computed by JS different from the binary of (0.1 + 0.2) we computed by hand?

Why doesn't the binary of 0.1 plus the binary of 0.2 equal the binary of 0.3?

How JS stores binary decimals

The binary expansions of most decimal fractions repeat infinitely, so how does JavaScript store them?

As you can see in the ECMAScript® language specification, the Number type in ECMAScript follows the IEEE 754 standard and uses a fixed-length 64-bit representation.

In fact, many languages follow this standard for their numeric types, Java for example, so many languages share the same problem.

So the next time you run into a problem like this, don't rush to blame JavaScript…

If you are interested, have a look at this website: 0.30000000000000004.com/. Yes, you read that right, the site really is 0.30000000000000004.com/!

IEEE 754

The IEEE 754 standard defines a binary representation for real numbers. A number is made up of three parts:

  • Sign bit

  • Exponent bits

  • Mantissa bits

The standard defines floating-point formats of several precisions.

JavaScript uses the 64-bit double-precision floating-point encoding, so it has 1 sign bit, 11 exponent bits, and 52 mantissa bits.

Now let's understand what the sign bit, exponent bits and mantissa bits are, taking 0.1 as an example:

Its binary is: 0.0001100110011001100…

To save storage space, the computer expresses it in (binary) scientific notation, i.e.:

1.100110011001100... × 2^-4

If this is confusing, think of a decimal number:

The scientific notation for 1100 is 11 × 10^2

So:

The sign bit indicates whether the number is positive or negative: 1 means negative, 0 means positive;

The exponent bits store the exponent of the scientific notation;

The mantissa bits store the significant digits that follow the scientific-notation point.

So the binary we usually see is reconstructed from the mantissa (and exponent) that the computer actually stores.
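If you want to look at the three parts for yourself, you can read the raw 64 bits back out of a number with a DataView. This is only an illustrative sketch (the dumpFloat64 name is made up here):

    // Dump the sign / exponent / mantissa fields of a 64-bit double (sketch)
    function dumpFloat64(x) {
      const view = new DataView(new ArrayBuffer(8));
      view.setFloat64(0, x); // big-endian by default
      let bits = '';
      for (let i = 0; i < 8; i++) {
        bits += view.getUint8(i).toString(2).padStart(8, '0');
      }
      return {
        sign: bits[0],                // 1 bit
        exponent: bits.slice(1, 12),  // 11 bits, biased by 1023
        mantissa: bits.slice(12),     // 52 bits
      };
    }

    console.log(dumpFloat64(0.1));
    // { sign: '0',
    //   exponent: '01111111011',  // 1019 - 1023 = -4
    //   mantissa: '1001100110011001100110011001100110011001100110011010' }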

toString(2) in JS

Since only 52 mantissa bits can be stored, this explains the result of toString(2):

If the computer had no storage limit, the binary of 0.1 would be:

    0.00011001100110011001100110011001100110011001100110011001...

In scientific notation, the mantissa would be:

    1.1001100110011001100110011001100110011001100110011001...

However, because of this limit, the 53rd bit of the mantissa and everything after it cannot be stored. The rule applied is "discard on 0, round up on 1": if the first dropped bit is 1, add 1 to the last stored bit; if it is 0, simply drop it.

The 53rd mantissa bit of 0.1's binary scientific notation is 1, so after rounding up we get the following result:

    0.0001100110011001100110011001100110011001100110011001101

0.2 has the same problem. It is precisely because of this storage scheme that precision is lost, which is why 0.1 + 0.2 != 0.3.
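You can also see the effect of this rounding without looking at any binary: asking for more decimal digits than toString normally prints reveals more of the value that is actually stored. A quick sketch:

    console.log((0.1).toPrecision(21));       // 0.100000000000000005551
    console.log((0.2).toPrecision(21));       // 0.200000000000000011102
    console.log((0.3).toPrecision(21));       // 0.299999999999999988898
    console.log((0.1 + 0.2).toPrecision(21)); // 0.300000000000000044409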

Many other calculations suffer from the same precision problem, and we can't special-case them all, so when a program involves numerical computation it is best to let a library handle it (a hand-rolled sketch of the underlying idea follows the list below). Two recommended open-source libraries:

  • number-precision

  • mathjs
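If you only need the occasional exact addition and don't want a dependency, the usual trick such libraries build on is scaling to integers first. A minimal hand-rolled sketch (the addExact name is made up here, and it only handles plain decimal literals, not exponential notation):

    // Add two decimals by scaling them to integers first (sketch, not library code)
    function addExact(a, b) {
      const decimals = (x) => (String(x).split('.')[1] || '').length;
      const factor = Math.pow(10, Math.max(decimals(a), decimals(b)));
      return (Math.round(a * factor) + Math.round(b * factor)) / factor;
    }

    console.log(addExact(0.1, 0.2)); // 0.3
    console.log(addExact(0.6, 0.7)); // 1.3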

Now let’s look at the other two questions above.

Why does JavaScript compute only this many bits for the binary of 0.1, rather than more?

The toString analysis above answers this: only 52 mantissa bits can be stored in memory, and the bits beyond them follow the "discard on 0, round up on 1" rule.

Why is the binary of (0.1 + 0.2) computed by JavaScript different from the binary of (0.1 + 0.2) we computed by hand?

Our own hand calculation of 0.1 + 0.2 gives:

    0.0100110011001100110011001100110011001100110011001100111

This result has more than 52 mantissa bits, so the trailing bits are rounded away with the same "discard on 0, round up on 1" rule, producing the following result:

    0.0100110011001100110011001100110011001100110011001101

The largest number that JavaScript can represent

Limited by the IEEE 754 double precision 64-bit specification:

The largest exponent the exponent bits can represent is 1023 (decimal);

The largest mantissa is the one with all 52 mantissa bits set to 1.

So the largest number JavaScript can represent is:

1.111… × 2^1023, which is 1.7976931348623157e+308 in decimal.
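You can check this in a console; going past the limit overflows to Infinity, while adding a small number simply disappears below the precision available at that scale. A quick sketch:

    console.log(Number.MAX_VALUE);      // 1.7976931348623157e+308
    console.log(Number.MAX_VALUE * 2);  // Infinity
    console.log(Number.MAX_VALUE + 1);  // still 1.7976931348623157e+308 (1 is far below one unit of precision here)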

The maximum safe number

Number.MAX_SAFE_INTEGER is the maximum safe integer, 9007199254740991. Integers within this range suffer no loss of precision (decimals are a different story); the value itself is 1.111… × 2^52, i.e. 2^53 - 1.
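A quick sketch of where the safe range ends:

    console.log(Number.MAX_SAFE_INTEGER);                  // 9007199254740991
    console.log(Number.MAX_SAFE_INTEGER === 2 ** 53 - 1);  // true
    console.log(Number.isSafeInteger(9007199254740991));   // true
    console.log(Number.isSafeInteger(9007199254740992));   // false
    console.log(9007199254740992 === 9007199254740993);    // true -- distinct integers above the limit collide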

We can also use open source libraries to handle large integers:

  • node-bignum
  • node-bigint

BigInt was standardized in ES2020 and is already available in Chrome.

BigInt type

BigInt is the seventh primitive type.

A BigInt is an integer of arbitrary precision. This means you can now store and compute with integers larger than 9007199254740991, the maximum safe integer.

    const b = 1n; // append n to create a BigInt

In the past, integers beyond this limit could not be represented reliably; as the example below shows, limit + 2 gets pinned at MAX_SAFE_INTEGER + 1:

    const limit = Number.MAX_SAFE_INTEGER;
    ⇨ 9007199254740991
    limit + 1;
    ⇨ 9007199254740992
    limit + 2;
    ⇨ 9007199254740992 <--- MAX_SAFE_INTEGER + 1 exceeded
    const larger = 9007199254740991n;
    ⇨ 9007199254740991n
    const integer = BigInt(9007199254740991); // initialize with number
    ⇨ 9007199254740991n
    const same = BigInt("9007199254740991"); // initialize with "string"
    ⇨ 9007199254740991n
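With BigInt, arithmetic above the safe-integer limit stays exact. A quick sketch (note that BigInt and Number cannot be mixed in a single expression):

    const big = 9007199254740991n;
    console.log(big + 1n);   // 9007199254740992n
    console.log(big + 2n);   // 9007199254740993n -- no longer collapses to the value above
    console.log(2n ** 64n);  // 18446744073709551616n
    // big + 1;              // would throw a TypeError: BigInt and Number cannot be mixed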

typeof

    typeof 10;
    ⇨ 'number'
    typeof 10n;
    ⇨ 'bigint'