The emergence of AJAX has profoundly changed how Web application clients work. It lets users concentrate on their tasks without enduring frequent, disruptive page refreshes. In theory, AJAX can greatly reduce the time users spend waiting and can save network traffic as well. In practice, however, this is not always the case: users often complain that systems built with AJAX respond more slowly.
The author has been engaged in AJAX research and development for several years and participated in the development of dorado, a relatively mature AJAX platform in China. In the author's experience, AJAX itself is rarely the root cause of this problem. More often, sluggish response is caused by unreasonable interface design and inefficient programming habits. Below we analyze several areas that deserve attention during AJAX development.
Use client-side programming and remote procedure calls appropriately.
Client-side programming is based mainly on JavaScript. JavaScript is an interpreted language, and its execution efficiency is lower than Java's. Moreover, JavaScript runs inside the tightly restricted environment of the browser. Developers should therefore have a clear understanding of which logic is suitable for execution on the client side.
How client-side programming should be used in a real application depends on the developer's experience and judgment. Many of the trade-offs here can only be learned through practice and are hard to convey in words. Given limited space, we roughly summarize the following precautions:
Avoid frequent remote procedure calls; for example, never place a remote procedure call inside a loop body.
Whenever possible, use asynchronous rather than synchronous remote procedure calls (see the sketch after this list).
Avoid placing heavyweight data operations on the client side, for example large batch data-copying operations or calculations that traverse large amounts of data.
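As a minimal sketch of the second point, the following code issues an asynchronous call through XMLHttpRequest, falling back to the ActiveX object that IE of that era required. The url parameter and onSuccess callback are illustrative assumptions, not part of any particular framework:
/* A minimal sketch: asynchronous remote procedure call via XMLHttpRequest */
function callRemote(url, onSuccess) {
    // Create the request object; older IE versions need ActiveX
    var xmlHttp = window.XMLHttpRequest ?
        new XMLHttpRequest() : new ActiveXObject("Microsoft.XMLHTTP");
    // The third argument "true" makes the call asynchronous,
    // so the browser UI stays responsive while waiting
    xmlHttp.open("POST", url, true);
    xmlHttp.onreadystatechange = function() {
        if (xmlHttp.readyState == 4 && xmlHttp.status == 200) {
            onSuccess(xmlHttp.responseText);
        }
    };
    xmlHttp.send(null);
}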
Improve the way DOM objects are manipulated.
In client-side programming, operations on DOM objects are usually what consumes the most CPU time, and for the same DOM operation, the performance difference between programming approaches can be enormous.
The following are three pieces of code that produce exactly the same result: each creates a table of 1000 rows by 10 columns in the web page. Their running speeds, however, differ dramatically.
/* Test code 1 - Time taken: 41 seconds */
var table = document.createElement("TABLE");
document.body.appendChild(table);
for (var i = 0; i < 1000; i++) {
    var row = table.insertRow(-1);
    for (var j = 0; j < 10; j++) {
        var cell = row.insertCell(-1);
        cell.innerText = "( " + i + " , " + j + " )";
    }
}
/* Test code 2 - Time taken: 7.6 seconds */
var table = document.createElement("TABLE");
document.body.appendChild(table);
var tbody = document.createElement("TBODY");
table.appendChild(tbody);
for (var i = 0; i < 1000; i++) {
    var row = document.createElement("TR");
    tbody.appendChild(row);
    for (var j = 0; j < 10; j++) {
        var cell = document.createElement("TD");
        row.appendChild(cell);
        cell.innerText = "( " + i + " , " + j + " )";
    }
}
/* Test code 3 - Time taken: 1.26 seconds */
var tbody = document.createElement("TBODY");
for (var i = 0; i < 1000; i++) {
    var row = document.createElement("TR");
    for (var j = 0; j < 10; j++) {
        var cell = document.createElement("TD");
        cell.innerText = "( " + i + " , " + j + " )";
        row.appendChild(cell);
    }
    tbody.appendChild(row);
}
var table = document.createElement("TABLE");
table.appendChild(tbody);
document.body.appendChild(table);
The difference between "Test code 1" and "Test code 2" is the API used to create the table cells. The difference between "Test code 2" and "Test code 3" lies in the order in which the objects are assembled.
We cannot fully explain why "Test code 1" and "Test code 2" differ so much in performance. What is known is that insertRow and insertCell are table-specific APIs in DHTML, while createElement and appendChild are native W3C DOM APIs; the former are presumably wrappers around the latter. However, we cannot conclude from this that the DOM's native APIs are always faster than object-specific APIs. When you need to call an API frequently, it is worth running a few basic performance tests of your own.
The performance difference between "Test code 2" and "Test code 3" comes mainly from their build order. "Test code 2" first creates the outermost <TABLE> object and then creates each <TR> and <TD> inside the loop. "Test code 3" first builds the entire table in memory, from the inside out, and only then adds it to the page. The goal is to minimize the number of times the browser recalculates the page layout: whenever we add an object to the page, the browser attempts to recalculate the layout of the controls on it. If we instead construct the whole object tree in memory first and add it to the page in one step, the browser performs only a single layout recalculation. To sum it up in one sentence: the later you execute appendChild, the better. To improve efficiency, we can sometimes even use removeChild to detach an existing control from the page, modify it, and put it back once construction is complete.
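The detach-and-reattach idea looks roughly like the following sketch; the element id "list" and the row-adding loop are illustrative assumptions:
/* A minimal sketch: detach a node, modify it off-page, then reattach it */
var list = document.getElementById("list");
var parent = list.parentNode;
// Detach first, so the additions below trigger no page re-layout
parent.removeChild(list);
for (var i = 0; i < 1000; i++) {
    var item = document.createElement("LI");
    item.innerText = "item " + i;
    list.appendChild(item);
}
// Reattach once; the browser recalculates the layout a single time
parent.appendChild(list);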
Improve the speed of string accumulation.
When using AJAX to submit information, we often need to assemble fairly large strings to complete a POST submission through XmlHttp. Although submitting such a large amount of information may seem inelegant, sometimes we have to face this need. So how fast is string accumulation in JavaScript? Let's do the following experiment first: accumulate a string of 300,000 characters.
/* Test code 1 - Time taken: 14.325 seconds */
var str = "";
for (var i = 0; i < 50000; i++) {
    str += "xxxxxx";
}
This code takes 14.325 seconds, which is far from ideal. Now we change it to the following form:
/* Test code 2 - Time taken: 0.359 seconds */
var str = "";
for (var i = 0; i < 100; i++) {
    var sub = "";
    for (var j = 0; j < 500; j++) {
        sub += "xxxxxx";
    }
    str += sub;
}
This code takes only 0.359 seconds! With the same result, all we did was assemble smaller strings first and then join them into larger ones. This greatly reduces the amount of data copied in memory during the later stages of string assembly. Knowing this principle, we can decompose the code even further. The version below takes only 0.140 seconds.
/* Test code 3 - Time taken: 0.140 seconds */
var str = "";
for (var i1 = 0; i1 < 5; i1++) {
    var str1 = "";
    for (var i2 = 0; i2 < 10; i2++) {
        var str2 = "";
        for (var i3 = 0; i3 < 10; i3++) {
            var str3 = "";
            for (var i4 = 0; i4 < 10; i4++) {
                var str4 = "";
                for (var i5 = 0; i5 < 10; i5++) {
                    str4 += "xxxxxx";
                }
                str3 += str4;
            }
            str2 += str3;
        }
        str1 += str2;
    }
    str += str1;
}
However, the above approach may still not be the best! If the information to submit is in XML format (and in most cases we can arrange for it to be), there is an even more efficient and elegant way: letting a DOM object assemble the string for us. The following code takes only 0.890 seconds to assemble a string 950,015 characters long.
/* Use DOM objects to assemble information - Time taken: 0.890 seconds */
/* browserType and BROWSER_IE are assumed to be defined elsewhere
   by the surrounding browser-detection code */
var xmlDoc;
if (browserType == BROWSER_IE) {
    xmlDoc = new ActiveXObject("Msxml.DOMDocument");
}
else {
    // Create an empty XML document with the W3C DOM API
    xmlDoc = document.implementation.createDocument("", "", null);
}
var root = xmlDoc.createElement("root");
for (var i = 0; i < 50000; i++) {
    var node = xmlDoc.createElement("data");
    if (browserType == BROWSER_IE) {
        node.text = "xxxxxx";
    }
    else {
        node.textContent = "xxxxxx";
    }
    root.appendChild(node);
}
xmlDoc.appendChild(root);
var str;
if (browserType == BROWSER_IE) {
    str = xmlDoc.xml;
}
else {
    // Serialize the DOM tree back into a string
    str = new XMLSerializer().serializeToString(xmlDoc);
}
Avoid memory leaks of DOM objects.
Memory leaks of DOM objects in IE are a problem often ignored by developers, but the consequences are serious: IE's memory usage keeps climbing, and the browser as a whole slows down noticeably. For pages that leak badly, even a few refreshes can cut the running speed in half.
The most common leak patterns include the "circular reference model", the "closure model" and the "DOM insertion order model". The first two can be avoided by explicitly dereferencing the objects involved when the page is unloaded, as sketched below. The "DOM insertion order model" has to be avoided by changing some common programming habits.
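A minimal sketch of the dereferencing approach follows; the element id "myButton" is an illustrative assumption:
/* A minimal sketch: break a circular reference when the page unloads */
var element = document.getElementById("myButton");
element.onclick = function() {
    // The closure captures "element" while "element" holds the handler:
    // a circular reference between a DOM object and a JavaScript object
    alert(element.id);
};
window.onunload = function() {
    // Dereference on unload so IE's garbage collector can reclaim both
    document.getElementById("myButton").onclick = null;
};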
More information about these leak models can be found quickly through Google, and this article will not elaborate further. However, I recommend a small tool for finding and analyzing web page memory leaks: Drip. The current version is 0.5, and the download address is http://outofhanwell.com/ieleak/index.php
Load and initialize complex pages in segments.
For interfaces that are genuinely complex and for which IFrames are inconvenient, we can load the content in segments. For example, in a multi-tab interface we can first download and initialize only the default tab, then use AJAH (asynchronous JavaScript and HTML) to load the content of the other tabs asynchronously, as sketched below. This way the interface is shown to the user as early as possible, and the loading cost of the whole complex interface is spread across the user's interactions.
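A minimal sketch of AJAH-style lazy loading, assuming hypothetical tab pane ids and a server page tabContent.jsp that returns an HTML fragment:
/* A minimal sketch: load a tab's HTML fragment only on first activation */
var loadedTabs = {};
function activateTab(tabId) {
    var pane = document.getElementById(tabId);
    if (!loadedTabs[tabId]) {
        var xmlHttp = window.XMLHttpRequest ?
            new XMLHttpRequest() : new ActiveXObject("Microsoft.XMLHTTP");
        xmlHttp.open("GET", "tabContent.jsp?tab=" + tabId, true);
        xmlHttp.onreadystatechange = function() {
            if (xmlHttp.readyState == 4 && xmlHttp.status == 200) {
                // Insert the returned HTML fragment into the tab pane
                pane.innerHTML = xmlHttp.responseText;
                loadedTabs[tabId] = true;
            }
        };
        xmlHttp.send(null);
    }
    pane.style.display = "block";
}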
Use GZIP to compress network traffic.
Beyond the code-level improvements above, we can use GZIP to effectively reduce network traffic. All common mainstream browsers now support the GZIP algorithm, and supporting it usually takes only a small amount of code. In J2EE, for example, we can use the following code in a Filter to determine whether the client browser supports GZIP, and then use java.util.zip.GZIPOutputStream to produce GZIP output as needed.
/* Code to determine whether the browser supports GZIP */
private static String getGZIPEncoding(HttpServletRequest request) {
    String acceptEncoding = request.getHeader("Accept-Encoding");
    if (acceptEncoding == null) return null;
    acceptEncoding = acceptEncoding.toLowerCase();
    if (acceptEncoding.indexOf("x-gzip") >= 0) return "x-gzip";
    if (acceptEncoding.indexOf("gzip") >= 0) return "gzip";
    return null;
}
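As a sketch of the output side (not dorado's actual Filter code), once the check above succeeds the response can be written through a GZIPOutputStream; the byte array "content" stands in for the page data assumed to have been produced earlier in the Filter:
/* A minimal sketch: writing the response through GZIPOutputStream */
/* requires: import java.util.zip.GZIPOutputStream; */
String encoding = getGZIPEncoding(request);
if (encoding != null) {
    response.setHeader("Content-Encoding", encoding);
    GZIPOutputStream gzipOut =
        new GZIPOutputStream(response.getOutputStream());
    gzipOut.write(content); // "content" is the prepared page data
    gzipOut.finish();       // flush the remaining compressed bytes
} else {
    response.getOutputStream().write(content);
}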
Generally speaking, GZIP compresses HTML and JSP output by about 80%, and the performance cost it imposes on server and client is almost negligible. Taking other factors into account, a GZIP-enabled site may save us around 50% of network traffic, so GZIP can bring a noticeable improvement to applications running in less-than-ideal network environments. With Fiddler, an HTTP monitoring tool, you can easily compare a page's traffic before and after enabling GZIP. The download address of Fiddler is http://www.fiddlertool.com/fiddler/
Performance optimization of Web applications is a very large topic. Given limited space, this article can touch on only a few of the details, and cannot fully present even the optimization methods for those. I hope it draws your attention to Web application performance, especially on the client side. Server-side programming techniques have been well known for years, and there is limited potential left to exploit there, while improvements on the client side can often yield surprising performance gains.