Today I skimmed "High Performance Web Sites" (published in Chinese as "Guide to High-Performance Website Construction").
The book also has a follow-up volume, "Even Faster Web Sites", which explores individual topics in more depth; its Chinese translation is "Advanced Guide to Building High-Performance Websites".
The Douban page linked above introduces the book, so I won't copy that here.
The book presents 14 principles for improving website performance, one chapter per principle, each with examples. Most of the principles are very practical; they are useful for site architects and especially valuable for front-end engineers.
This time I read the English original. I lack hands-on web development experience and read the book in a hurry, so there may be omissions and inaccuracies; corrections from readers are welcome.
Principle 1: Reduce the number of HTTP requests
It takes time to construct a request and wait for a response, so the fewer requests, the better. The general idea of reducing requests is to merge resources and reduce the number of files required to display a page.
1. Image Map
By setting the usemap attribute of the <img> tag and using the <map> tag, you can divide one image into multiple regions that point to different links. Compared with using a separate image for each link, this reduces the number of requests.
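A minimal sketch of the markup (the image URL, dimensions, and coordinates are invented for illustration):

```html
<!-- One image request serves three clickable regions. -->
<img src="navbar.png" alt="Navigation" usemap="#nav" width="300" height="100">
<map name="nav">
  <area shape="rect" coords="0,0,100,100"   href="/home"  alt="Home">
  <area shape="rect" coords="100,0,200,100" href="/docs"  alt="Docs">
  <area shape="rect" coords="200,0,300,100" href="/about" alt="About">
</map>
```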
2. CSS Sprites (combining images via background positioning)
This is done by setting an element's background-position style, and it is typically used for interface icons. The small toolbar buttons of the TinyMCE editor are a classic example: the many small icons are all cut, at different offsets, from a single large image. Loading all the buttons on the interface therefore requires only one request (for the large image), which reduces the number of HTTP requests.
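A sketch of how such sprites are wired up in CSS (the file name, icon size, and class names are hypothetical):

```css
/* All icons share one sprite image; each class only shifts the offset. */
.icon        { width: 16px; height: 16px; background: url(icons.png) no-repeat; }
.icon-bold   { background-position:   0    0; }
.icon-italic { background-position: -16px  0; }
.icon-link   { background-position: -32px  0; }
```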
3. Inline Image
Instead of pointing the src of <img> at an external image file, embed the image data directly, e.g. src="data:image/gif;base64,R0lGODlhDAAMAL...". This is useful in some special cases (for example, a small image used only on the current page).
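As an illustration, a data URI can be produced from raw image bytes with a short helper; the Python sketch below uses a hand-assembled 1x1 transparent GIF purely as sample data:

```python
import base64

def to_data_uri(data: bytes, mime: str = "image/gif") -> str:
    """Encode raw image bytes as a data: URI usable in an <img> src."""
    return f"data:{mime};base64,{base64.b64encode(data).decode('ascii')}"

# A 1x1 transparent GIF, the classic "tracking pixel" payload.
gif = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\x00\x00\x00"
       b"!\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01"
       b"\x00\x00\x02\x02D\x01\x00;")
uri = to_data_uri(gif)
print(uri[:22])  # data:image/gif;base64,
```

Note the trade-off: base64 inflates the payload by about a third, and inlined images cannot be cached independently of the page, which is why the technique suits only small, page-specific images.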
Principle 2: Use a CDN with multiple network lines
Give your site access points on multiple carrier lines (for China: Telecom, Unicom, Mobile) and in multiple geographic regions (north, south, west), so that all users can reach it quickly.
Principle 3: Use HTTP Cache
Add a far-future Expires header to resources that are updated infrequently (such as static images). Once cached, these resources will not be transmitted again for a long time.
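As an illustration, an Apache-style configuration might set far-future expirations by content type (this assumes mod_expires is enabled, and the lifetimes are arbitrary):

```apache
<IfModule mod_expires.c>
    ExpiresActive On
    ExpiresByType image/png  "access plus 1 year"
    ExpiresByType image/jpeg "access plus 1 year"
    ExpiresByType text/css   "access plus 1 month"
</IfModule>
```

The catch: once a resource is cached with a far-future date, changing it requires changing its URL (e.g. a version number in the file name), or clients will keep their stale copy.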
Principle 4: Use Gzip compression
Use Gzip to compress HTTP messages to reduce size and transmission time.
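A quick way to see the effect is to gzip a repetitive payload; the Python sketch below uses a synthetic HTML string, so the exact ratio is illustrative only (real HTML typically shrinks by well over half):

```python
import gzip

# A deliberately repetitive HTML fragment, standing in for a real page.
html = ("<ul>" + "".join(f"<li>item {i}</li>" for i in range(500)) + "</ul>").encode()
compressed = gzip.compress(html)
print(f"{len(html)} -> {len(compressed)} bytes")
```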
Principle 5: Place stylesheets at the top of the page
Loading stylesheets first lets page rendering begin earlier, which makes the page feel faster to users.
Principle 6: Place scripts at the bottom of the page
The reasoning mirrors Principle 5: the page content is processed and rendered first, and script logic runs afterwards, so the page feels faster to the user.
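Principles 5 and 6 together suggest a page skeleton like this sketch (file names are placeholders):

```html
<!DOCTYPE html>
<html>
<head>
  <link rel="stylesheet" href="site.css">  <!-- rendering can start early -->
</head>
<body>
  <p>Page content renders before any script runs.</p>
  <script src="site.js"></script>  <!-- loaded last, doesn't block rendering -->
</body>
</html>
```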
Principle 7: Avoid CSS expressions
CSS expressions (an IE-specific feature that embeds JavaScript inside CSS) are re-evaluated very frequently, e.g. on mouse moves and window resizes, so complex script logic, DOM searching, and selection operations inside them will drag down page performance.
Principle 8: Make JavaScript and CSS external
This may seem to contradict the merging idea of Principle 1, but it doesn't. Suppose every page pulls in a common JavaScript resource (say jQuery, or a library such as ExtJS). Judged by a single page alone, inlining (embedding the JavaScript in the HTML) loads faster than the external version (included via a <script> tag), because it makes fewer HTTP requests. But once many pages share that common resource, the inline approach transmits it repeatedly: the script is embedded in every page, so it is sent anew every time any page is opened, wasting bandwidth. Splitting the resource out into an external file and referencing it solves this.
Since JavaScript and CSS are relatively stable, we can set longer expiration dates for their corresponding resources (refer to Principle 3).
Principle 9: Reduce DNS lookups
The author's advice is:
1. Use Keep-Alive to maintain connections
If a connection is dropped, the next request must establish it again, including a DNS lookup; even when the domain-name-to-IP mapping is cached, the lookup still takes some time.
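For illustration, the headers involved look like this sketch (HTTP/1.1 keeps connections alive by default; HTTP/1.0 clients had to opt in explicitly, and the timeout values are server-dependent):

```http
GET /index.html HTTP/1.0
Host: example.com
Connection: Keep-Alive

HTTP/1.0 200 OK
Connection: Keep-Alive
Keep-Alive: timeout=5, max=100
```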
2. Use fewer domain names
Each new hostname costs a separate DNS lookup that existing cache entries cannot satisfy. So try to organize the site under one unified domain and avoid using too many subdomains.
Principle 10: Minify your JavaScript
Run your JavaScript through a minification tool; it is very effective. Just compare the two distributions of jQuery:
http://code.jquery.com/jquery-1.6.2.js — the readable development version, 230 KB
http://code.jquery.com/jquery-1.6.2.min.js — the minified version for actual deployment, 89.4 KB
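What a minifier does can be caricatured in a few lines. This toy Python sketch only strips comments and collapses whitespace; real tools (YUI Compressor, UglifyJS, and the like) are far more careful, also shorten local identifiers, and, unlike these naive regexes, will not mangle strings that happen to contain comment markers:

```python
import re

def naive_minify(js: str) -> str:
    """Toy illustration only: strip comments, collapse whitespace."""
    js = re.sub(r"/\*.*?\*/", "", js, flags=re.S)  # block comments
    js = re.sub(r"//[^\n]*", "", js)               # line comments
    js = re.sub(r"\s+", " ", js)                   # collapse whitespace
    return js.strip()

src = """
// add two numbers
function add(a, b) {
    /* trivial */
    return a + b;
}
"""
print(naive_minify(src))  # function add(a, b) { return a + b; }
```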
Principle 11: Try to avoid redirects
A redirect adds an extra HTTP round trip before the page you actually want is served: the client sends a request → the server returns a redirect response → the client requests the new URL → the server returns the content (the first two steps are the extra round trip). The extra time makes the site feel slower to respond, so don't use redirects unless necessary. Some "necessary" cases:
1. Avoid invalid URLs
After a site migration, to keep old URLs from breaking, requests to old URLs are usually redirected to the corresponding addresses on the new system.
2. URL beautification
Mapping between human-readable URLs and actual resource URLs. For Google Toolbar, for instance, users remember http://toolbar.google.com, a semantic address, but not http://www.google.com/tools/Firefox/toolbar/FT3/intl/en/index.html, the real resource address. So the former is kept, and requests to it are redirected to the latter.
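The cost of the round trip is easy to see in the raw exchange (URLs invented for illustration):

```http
GET /old-page HTTP/1.1
Host: example.com

HTTP/1.1 301 Moved Permanently
Location: http://example.com/new-page

GET /new-page HTTP/1.1
Host: example.com

HTTP/1.1 200 OK
```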
Principle 12: Remove duplicate scripts
Don't include the same script more than once on a page. For example, if scripts B and C both depend on A, pages that use both B and C may end up referencing A twice. For simple sites, check dependencies by hand and remove the duplicates; complex sites need their own dependency-management/versioning mechanism.
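The de-duplication step itself is simple; here is a Python sketch that keeps only the first occurrence of each script URL while preserving load order (the file names are made up):

```python
def dedupe_scripts(includes):
    """Drop repeated script URLs, keeping first occurrence and order."""
    seen = set()
    result = []
    for url in includes:
        if url not in seen:
            seen.add(url)
            result.append(url)
    return result

# B and C each pulled in their shared dependency A:
page = ["a.js", "b.js", "a.js", "c.js"]
print(dedupe_scripts(page))  # ['a.js', 'b.js', 'c.js']
```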
Principle 13: Handle ETags with care
ETag is the other HTTP cache-validation mechanism besides Last-Modified: it detects whether a resource has changed by comparing a version token (often derived from file attributes or a content hash). But ETags have some problems, such as:
1. Inconsistency: Different web servers (Apache, IIS, etc.) define different ETag formats
2. ETag computation is unstable (it factors in too many server-specific details), for example:
1) The same resource gets different ETags on different servers, and large web applications are usually served by more than one server. A resource the client cached from server A is still valid, but the next request may hit server B, whose different ETag marks it as invalid, so the same resource is transmitted again.
2) The resource itself is unchanged, but its ETag changes because some other factor (such as a configuration file) changed. The direct consequence is that after a system update, client caches are invalidated en masse, transmission volume spikes, and site performance drops.
The author's advice: either adapt the ETag computation to your application's characteristics, or simply drop ETags and rely on plain Last-Modified.
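The multi-server problem can be illustrated with a toy ETag function in Python; the server_salt below stands in for server-specific inputs such as inode numbers (this is an illustration, not any real server's algorithm):

```python
import hashlib

def etag(content: bytes, server_salt: bytes = b"") -> str:
    """Toy ETag: hash of the content plus server-specific state."""
    return hashlib.md5(content + server_salt).hexdigest()

body = b"<html>unchanged resource</html>"
tag_a = etag(body, server_salt=b"inode-111")  # as computed on server A
tag_b = etag(body, server_salt=b"inode-222")  # as computed on server B
# Same content, different ETag -> a spurious cache miss on failover.
print(tag_a != tag_b)  # True
```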
Principle 14: Leverage HTTP Cache with Ajax
Ajax requests are asynchronous: they don't block your current interaction, and the results appear as soon as the request completes. But asynchronous doesn't mean instantaneous, nor does it mean users will tolerate an unbounded wait, so the performance of Ajax requests deserves attention too. Many Ajax requests fetch relatively stable resources, so don't forget to apply the HTTP cache mechanisms to them as well; see Principles 3 and 13 for details.
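For illustration, a stable Ajax endpoint can return cache headers just like a static resource (the values and payload below are made up):

```http
HTTP/1.1 200 OK
Content-Type: application/json
Cache-Control: max-age=3600
Last-Modified: Mon, 04 Jul 2011 10:00:00 GMT

{"user": "demo", "theme": "dark"}
```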
Author: Yang Mengdong
Article source: Yang Mengdong's blog. Please credit the source link when reprinting.