If I ask you what 0.1 + 0.2 is, you might give me a blank look: 0.1 + 0.2 = 0.3, do you even need to ask? Even a kindergartener can answer such a trivial question. But believe it or not, the same problem in a programming language may not be as simple as you'd imagine.
Don’t believe it? Let’s look at a short piece of JS first.
var numA = 0.1;
var numB = 0.2;
alert( (numA + numB) === 0.3 );
The result is false. Yes, the first time I saw this code I took it for granted that it would be true, but the actual result surprised me. Was I doing something wrong? No. Run the following code and you’ll see why the result is false.
var numA = 0.1;
var numB = 0.2;
alert(numA + numB);
It turns out that 0.1 + 0.2 = 0.30000000000000004. Isn’t that weird? In fact, almost every programming language suffers precision errors like this in floating-point arithmetic. Languages such as C++/C#/Java ship encapsulated methods to work around the problem, but JavaScript is weakly typed and was never designed with a strict decimal type for floating-point numbers, so the precision error is especially prominent. Let’s analyze why this error exists and how to fix it.
First, we have to look at this seemingly trivial 0.1 + 0.2 problem from the computer’s point of view. What a computer actually reads is binary, not decimal, so let’s first convert 0.1 and 0.2 into binary and take a look:
0.1 => 0.0001 1001 1001 1001… (infinitely repeating)
0.2 => 0.0011 0011 0011 0011… (infinitely repeating)
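You don’t have to take my word for these binary forms. toString accepts a radix argument, so JavaScript can print the binary it actually stores; note that the infinite repetition has already been rounded off to fit inside a double:
// The stored binary for 0.1 and 0.2: the repeating pattern is cut off
// (and rounded in the last bit) at the double's precision limit.
alert((0.1).toString(2)); // 0.0001100110011001100110011001100110011001100110011001101
alert((0.2).toString(2)); // 0.001100110011001100110011001100110011001100110011001101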
The fraction part (mantissa) of a double-precision floating-point number holds at most 52 bits, so each operand is rounded before the addition even happens. Adding the two stored values gives a binary string of 0.0100110011001100110011001100110011001100110011001100…, truncated by the limit on floating-point fraction bits. Converting that back to decimal yields 0.30000000000000004.
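You can see the rounding error on each operand by asking JavaScript for more significant digits than it normally prints (toPrecision is a standard Number method; the digits below are what a 64-bit double actually stores):
// Neither operand is exactly representable, so each is stored slightly
// high, and the two errors survive into the sum.
alert((0.1).toPrecision(21));       // 0.100000000000000005551
alert((0.2).toPrecision(21));       // 0.200000000000000011102
alert((0.1 + 0.2).toPrecision(21)); // 0.300000000000000044409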
So that’s what’s going on! But how do we solve it? The result I want is 0.1 + 0.2 === 0.3!
One of the simplest solutions is to state your precision requirement explicitly and let the runtime round the returned value for you, like this:
var numA = 0.1;
var numB = 0.2;
alert( parseFloat((numA + numB).toFixed(2)) === 0.3 );
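Two details of this approach are worth knowing. toFixed returns a string, not a number, which is why parseFloat is needed before the === comparison. And because toFixed rounds the stored binary value rather than the decimal you typed, it can occasionally surprise you:
// toFixed returns a string:
alert((0.1 + 0.2).toFixed(2)); // "0.30"
// 1.005 is actually stored as 1.00499999999999989..., so rounding
// to two digits goes down, not up:
alert((1.005).toFixed(2)); // "1.00", not "1.01"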
But obviously this is not a once-and-for-all fix. It would be nice to have a helper method that solves the floating-point precision problem for us. Let’s try this one:
Math.formatFloat = function(f, digit) {
    // Scale up by 10^digit so the part we care about becomes an integer,
    // drop any leftover fraction with parseInt, then scale back down.
    var m = Math.pow(10, digit);
    return parseInt(f * m, 10) / m;
};
var numA = 0.1;
var numB = 0.2;
alert(Math.formatFloat(numA + numB, 1) === 0.3);
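One caveat (my own note, not part of the method above): parseInt truncates toward zero, so when the scaled value lands just below a whole number, the result gets cut down instead of rounded. A rounding variant avoids that; Math.roundFloat below is just an illustrative name, not a built-in:
// 0.58 * 100 evaluates to 57.99999999999999, which parseInt cuts to 57:
alert(Math.formatFloat(0.58 * 100, 0)); // 57, not the expected 58
// Rounding instead of truncating repairs the error:
Math.roundFloat = function(f, digit) {
    var m = Math.pow(10, digit);
    return Math.round(f * m) / m;
};
alert(Math.roundFloat(0.58 * 100, 0)); // 58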
What does this method do? To avoid the precision error, we multiply the number by 10 to the nth power, turning it into an integer that the computer can represent exactly, and then divide it by 10 to the nth power again. This scale-to-integer trick is how many programming languages deal with decimal precision, and we can use it to handle the floating-point precision error in JS.
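Taking the idea one step further, you can scale both operands to integers before the arithmetic ever happens, so the error never enters the calculation. Here is a minimal sketch of that approach; floatAdd is a made-up helper name, and it assumes the inputs are plain decimals (not in exponential notation):
// Add two decimals in integer space: count decimal digits, scale both
// operands up to whole numbers, add, then scale back down once.
function floatAdd(a, b) {
    var aDigits = (String(a).split('.')[1] || '').length;
    var bDigits = (String(b).split('.')[1] || '').length;
    var m = Math.pow(10, Math.max(aDigits, bDigits));
    // Math.round cleans up any residue the a*m and b*m multiplications introduce.
    return (Math.round(a * m) + Math.round(b * m)) / m;
}
alert(floatAdd(0.1, 0.2) === 0.3); // true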
So the next time someone asks you what 0.1 + 0.2 equals, be careful with your answer!