Baidu's recent strategy update has sparked calls across the industry to "focus on content construction." Webmasters have begun discussing what makes a good website and what kind of website design best suits the spiders' appetite. I have also been studying lately how to design a website that caters to Baidu's new strategy, and I would like to share my thoughts here.
First: friendly navigation settings. When building a website, we must consider not only the homepage design but also the inner pages and how spiders will crawl them. Many websites place links only on the homepage, with no paths into the inner pages; after a spider fetches the homepage content it can only leave the site, so the homepage is the one page that gets indexed while all other pages are missed. For full indexing of the site's pages, no inner page should be more than five clicks from the homepage, and this requires friendly navigation: column-page navigation at the top of the site, "more" links from the homepage's calling sections into the inner pages, and breadcrumb navigation on inner pages (home page > column page > content page). There are also a few taboos in navigation design: do not use image navigation, do not use JS jump navigation, and stick to simple anchor-text links. Friendly navigation offers the least resistance to spiders and the best experience for users; a sketch follows.
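As an illustration, here is a minimal sketch of plain anchor-text breadcrumb navigation on an inner page; the column name, URLs, and class name are hypothetical.

<!-- Breadcrumb on an inner page: plain anchor text, no images or JS -->
<div class="breadcrumb">
  <a href="/">Home</a> &gt;
  <a href="/column/">Column Page</a> &gt;
  <span>Content Page Title</span>
</div>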
Second: friendly page structure design. To achieve visual effects for users, many product websites use large amounts of JS code, images, and Flash animation, none of which spiders can understand. A website's design must weigh visual effect against survival in the search engines. Webmasters who have used the page optimization suggestion tool launched by Baidu will find that Baidu has high requirements for code simplification: JS code should be placed at the end of the page to reduce the spider's request time, and the site's CSS style sheets should be merged to cut unnecessary code. Try not to use a frame structure, since spiders have difficulty identifying frame and Flash code. For advertising sections and other sections that do not need to pass weight, webmasters can use an iframe and the nofollow tag to avoid unnecessary loss of weight. A sketch of these points follows.
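Here is a minimal page skeleton sketching the suggestions above; all file names and URLs are hypothetical.

<head>
  <!-- CSS merged into one style sheet to reduce unnecessary code -->
  <link rel="stylesheet" type="text/css" href="/css/merged.css">
</head>
<body>
  <!-- ...page content first... -->

  <!-- Ad section isolated in an iframe; the sponsored link carries nofollow so no weight is passed -->
  <iframe src="/ads/banner.html" width="468" height="60"></iframe>
  <a href="http://ad.example.com/" rel="nofollow">Sponsored link</a>

  <!-- JS placed at the very end of the code to shorten the spider's request time -->
  <script type="text/javascript" src="/js/site.js"></script>
</body>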
Third: friendly website redirect code. For search engine spiders, the only redirect that is not regarded as cheating is the 301 redirect, which passes the weight completely to the new URL. There are other ways to redirect a URL (302 redirect, JS redirect, meta refresh), but spiders treat these methods as cheating. Since we redirect precisely to transfer weight, there is no reason to choose anything other than a 301. ASP code for a 301 redirect on a Windows host (replace the Location value with your own domain):

<%@ Language=VBScript %>
<%
Response.Status = "301 Moved Permanently"
Response.AddHeader "Location", "http://www.new-domain.com/"
%>

The PHP equivalent:

<?php
header("HTTP/1.1 301 Moved Permanently");
header("Location: http://www.new-domain.com/");
exit;
?>
Fourth: friendly static page settings. Baidu's new strategy places high demands on quality content, and webmasters are working hard to get more of the site indexed. Static page settings may help here: dynamic addresses are inconvenient for spiders to crawl, and a spider can easily fall into an endless loop or repeatedly index duplicate pages. If you want the site's pages fully indexed, convert the dynamic URLs to static ones. Many webmasters will say that dynamic pages can also be indexed, and spiders can indeed identify dynamic addresses when they crawl, but doing so makes indexing harder; when there is a good way to reduce the spiders' trouble, why not take it? For sites where duplicate dynamic content has already been indexed, webmasters can add the line Disallow: /*?* to the robots.txt file to prohibit spiders from indexing the duplicate pages. Recovery will take some time, so don't be anxious after adding the rule; wait patiently for the cached entries to drop out of the database, and the duplicates will not be indexed again. A sketch of the file follows.
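A minimal robots.txt sketch with this rule; it assumes, as the article does, that the duplicate dynamic URLs are the ones containing a question mark.

User-agent: *
# Block any URL containing "?" so spiders skip duplicate dynamic pages
Disallow: /*?*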
Webmasters pour real effort into operating their websites; if a site is punished over details like these, one can only cry without tears. So why not let website design win by surprise? No matter how Baidu updates its strategy, as long as webmasters handle the details well, focus on content construction, and improve the user experience, the website will naturally survive for a long time.
Original source of this article: http://www.hlqxc.org. First published on A5; please indicate the source when reprinting.