Let me first explain why I wrote this article and why I struggled with this little issue. Turning on gzip compression for static files is very useful for improving a site's access speed, and it effectively reduces the time-taken when spiders crawl static pages; at the same time, unlike turning on compression for dynamic files, it does not cause the "200 0 64" crawl problem with Baidu's spider. So on the one hand, a fast website helps the user experience, and the Google Webmaster blog made it clear this year that site speed is one of the ranking factors; on the other hand, for Chinese sites hosted on overseas servers and optimized for Baidu, a poor time-taken means Baidu's spider crawls fewer internal pages. Guoping also mentioned this before in his blog post "How does page loading speed affect SEO?": in a fixed period of time, the total time a spider spends crawling a site is fixed, so if crawling gets faster, more pages get crawled, and vice versa.
Okay, on to the main text. In question 2 of the previous article, "Experimental results of spider crawling static pages and triggering gzip compression", I made a guess about how the gzip-compressed versions of static pages are saved on the server. After being puzzled for a long time, I found that the real reason the two hosts returned different gzip results was the IIS version, not the cache folder being set too small as I had guessed.
In fact, IIS7 changed static compression quite a bit compared with IIS6. In IIS6, static compression runs on a separate thread, so after receiving an HTTP request, the first version sent to the browser is uncompressed; IIS6 then starts compressing the file on another thread and stores the compressed copy long-term in the compressed-files cache folder. From then on, for any HTTP request for that static file, IIS6 simply pulls the compressed version from the cache folder and returns it to the browser.
In IIS7, however, compression runs on the main thread, and to save the cost of compressing everything, IIS7 does not keep long-term compressed copies for every HTTP request but only for static files that users access frequently. This is why the file was not compressed the first time I visited it, the compressed version came back when I visited again a short while later, but the uncompressed version came back when I visited after a few minutes. My understanding is that in this case IIS7 does not actually persist the compressed copy to the cache folder; it either keeps it only in server memory, or writes it to the cache folder temporarily and deletes it after a while.
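If you want to reproduce this observation yourself, a simple way (assuming you have curl available; the URL below is only a placeholder for one of your own static files) is to request the file with an Accept-Encoding header and watch whether a Content-Encoding: gzip header comes back:
curl -s -o NUL -D - -H "Accept-Encoding: gzip" http://www.example.com/css/style.css
Run it once, again within a few seconds, and once more after several minutes; on an IIS7 server with default settings the gzip header should only show up on the closely spaced requests, which matches the behavior described above. (Replace NUL with /dev/null if you run this from a non-Windows machine.)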
The way IIS7 decides which files count as frequently accessed and therefore worth compressing is controlled by two attributes in system.webServer/serverRuntime: frequentHitThreshold and frequentHitTimePeriod. If IIS receives enough requests for a static file to reach the frequentHitThreshold within a frequentHitTimePeriod window, IIS7 compresses that file just like IIS6 does and stores the compressed copy long-term in the compressed-files cache folder. And if a compressed copy of the file already exists in the cache folder when a user requests it, IIS7 skips the frequentHitThreshold logic altogether and returns the compressed version to the browser directly.
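If you have console access to the server, you can see what these two attributes are currently set to with appcmd (run it from %windir%\system32\inetsrv, or with that directory on your PATH):
appcmd.exe list config -section:system.webServer/serverRuntime
On a default IIS7 installation the values are frequentHitThreshold="2" and frequentHitTimePeriod="00:00:10", which is why a static file only gets a long-lived compressed copy once it is requested at least twice within about ten seconds.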
This design is admittedly painful, but Microsoft's official answer is that it exists to protect server performance... So if you want IIS7 to compress the way IIS6 does, there are two approaches, and both come down to changing the values of frequentHitThreshold and frequentHitTimePeriod:
The first is to add the following content to web.config, setting frequentHitThreshold to 1 and frequentHitTimePeriod to 10 minutes:
<system.webServer>
  <serverRuntime enabled="true"
                 frequentHitThreshold="1"
                 frequentHitTimePeriod="00:10:00" />
</system.webServer>
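A caveat worth checking before relying on this: on some IIS7 servers the serverRuntime section is locked at the server level, in which case setting it in a site's web.config produces a configuration error. If you run into that and you administer the server yourself, you can unlock the section with appcmd first, after which the web.config snippet above will take effect:
appcmd.exe unlock config -section:system.webServer/serverRuntime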
The second method is to open a command prompt, change to the %windir%\system32\inetsrv directory (where appcmd.exe lives), enter the following command, and press Enter:
appcmd.exe set config -section:system.webServer/serverRuntime -frequentHitThreshold:1
Microsoft's own suggestion for a less radical approach is not to lower frequentHitThreshold but to lengthen frequentHitTimePeriod, which is gentler on server performance; an example of that variant is at the end of this post. What I want to add here is that friends who have a VPS can simply set this themselves, while whether virtual-host users can change it depends on the service provider. Unfortunately, I can't change mine, so everyone, give it a try.
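For anyone who wants to try that gentler variant, it would look something like this in web.config (leaving frequentHitThreshold out keeps its default value; the ten-minute window is only an example):
<system.webServer>
  <serverRuntime enabled="true"
                 frequentHitTimePeriod="00:10:00" />
</system.webServer>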