A website that had been indexing normally suddenly stops being indexed. For SEOs, that marks the start of a painful stretch. Unfortunately, the author's website ran into exactly this situation a while ago; fortunately, after a round of inspection and fixes, the site returned to normal. Here, drawing on the author's own experience, I will talk about the reasons a website suddenly stops being indexed and how to solve them.
The general situation: around the 15th of December, the author's small site stopped indexing the information pages that were updated daily and had always been indexed normally. Then the indexing of other pages on the site began to decline. By the 23rd, the site had stopped being indexed altogether, the snapshot stagnated, and the ranking for the site's keyword "Nanjing SEO" dropped.
Since there are many possible reasons for a website not being indexed, checking the site takes a lot of time. Of course, the check should not be blind; you need a clear direction in mind. The reasons a website is not indexed boil down to three points: 1. the spider never came; 2. the spider came, could not find the pages, and left; 3. the spider came and crawled some pages of the site, but still took nothing away. Working from these three causes, I ran the following checks:
1. Check the IIS logs. By reviewing the IIS logs, we can see clearly what the spider has been doing: whether it has visited our website, when it came, and how often it comes. If the spider never comes, the site naturally will not be indexed. (A short log-parsing sketch appears after this list.)
2. Check the entry points. If spiders do visit your website normally, the first thing to look at is your robots.txt file: did you accidentally block pages that should be indexed while editing robots, or does a page you blocked happen to be the only entrance, or the main entrance, to pages that need to be indexed? Another thing to note about the robots file is not to modify it frequently, because every modification makes the spider reconsider which pages to crawl and which to skip, and frequent changes also annoy the spider. In addition, check that the various entry points to your website's pages are working normally. (A robots.txt verification sketch also appears after this list.)
3. Check the pages. If the spider comes, your robots file has not changed much, and there have been no major changes to the site structure and page entrances, then the problem must lie with the pages themselves. For article pages, consider the quality of your articles: are they mostly copied from elsewhere? Are they original enough? You also need to check whether your articles are being scraped heavily by others (something many people are not in the habit of checking). If too many of your articles are scraped, and your site's weight is lower than that of the sites scraping you, Baidu may conclude that your site is the scraper, especially when your articles are frequently scraped by many different sites. As for other pages, check whether newly added pages have content that is too similar or titles that repeat; spiders like neither.
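For check 1, here is a minimal Python sketch that scans a W3C-format IIS log for Baiduspider requests and reports how many there were and which URLs were crawled. The log path is a hypothetical example and the available fields depend on your IIS logging configuration, so adjust both to your own server.

```python
# Minimal sketch: count Baiduspider requests in a W3C-format IIS log.
# The log path is a placeholder; field names follow the standard W3C
# extended format, but the exact set depends on your IIS configuration.

from collections import Counter

LOG_FILE = r"C:\inetpub\logs\LogFiles\W3SVC1\u_ex141215.log"  # hypothetical path

def spider_requests(path, ua_keyword="Baiduspider"):
    fields, rows = [], []
    with open(path, encoding="utf-8", errors="ignore") as f:
        for line in f:
            if line.startswith("#Fields:"):
                fields = line.split()[1:]          # field names for the data lines
                continue
            if line.startswith("#") or not line.strip():
                continue                           # skip other directives / blanks
            row = dict(zip(fields, line.split()))
            if ua_keyword.lower() in row.get("cs(User-Agent)", "").lower():
                rows.append(row)
    return rows

if __name__ == "__main__":
    rows = spider_requests(LOG_FILE)
    print(f"Baiduspider requests: {len(rows)}")
    # Which URLs the spider touched most often
    for url, n in Counter(r.get("cs-uri-stem", "?") for r in rows).most_common(10):
        print(n, url)
```

If the count is zero or falling day by day, the problem is on the "spider never came" side rather than on the pages themselves.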
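For check 2, a quick way to confirm that robots.txt is not blocking the entrances you care about is Python's built-in urllib.robotparser. The domain and URL list below are placeholders; substitute your own pages, especially the entry pages of the sections that have stopped being indexed.

```python
# Minimal sketch: verify that robots.txt still lets the spider reach
# key entry pages. The site and URL list are placeholders.

from urllib.robotparser import RobotFileParser

SITE = "http://www.example.com"   # replace with your own domain
PAGES = [                          # entrances of the sections that stopped indexing
    "/",
    "/news/",
    "/news/page-1.html",
]

rp = RobotFileParser(SITE + "/robots.txt")
rp.read()

for page in PAGES:
    allowed = rp.can_fetch("Baiduspider", SITE + page)
    print("allowed " if allowed else "BLOCKED ", page)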
Run these three checks on your website and I believe you will find the answer to why it has suddenly stopped being indexed. In fact, after the author's site was inspected, the problem turned out to have several causes: the author's information pages were being scraped by multiple sites, and fairly frequently; and during a recent site revision, several pages the author considered unnecessary were blocked without taking the entrances to other pages into account, which led to the later loss of indexing.
Here are the solutions:
1. If you check the IIS logs and find that the spider has not come, your website has probably been demoted. Check your friend links; check the status codes your server returns to see whether it is serving too many 404 or 503 responses and whether many pages are inaccessible; and do not fake traffic, which is another main cause of demotion. (A sketch for tallying status codes follows this list.)
2. If the problem is in robots.txt, it is easy to solve: just correct the file. Remember to consider the relationships between pages, and do not block page A in a way that seriously affects page B.
3. If the problem lies with the pages, then what you have to do is raise the originality of your articles. Copying too much from elsewhere will get your site treated as a content dump by Baidu, and being scraped too heavily by others can have the same effect. Keep checking, and be especially careful about automated scraping. There are now many collection tools, such as the Locomotive collector, that save webmasters a great deal of work, but if your site is the one being harvested by such tools, it is very frustrating. You can add some obstacles on the page, such as varying the p, div, and span tags. (A sketch of this idea follows this list.)
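For solution 1, the same IIS log can be used to tally response codes and spot an abnormal share of 404 or 503 responses. A minimal sketch, again with a hypothetical log path and standard W3C field names:

```python
# Minimal sketch: tally response codes in a W3C-format IIS log to spot
# excessive 404/503 responses. The log path is a placeholder.

from collections import Counter

LOG_FILE = r"C:\inetpub\logs\LogFiles\W3SVC1\u_ex141223.log"  # hypothetical path

fields, statuses = [], Counter()
with open(LOG_FILE, encoding="utf-8", errors="ignore") as f:
    for line in f:
        if line.startswith("#Fields:"):
            fields = line.split()[1:]
        elif not line.startswith("#") and line.strip():
            row = dict(zip(fields, line.split()))
            statuses[row.get("sc-status", "?")] += 1   # sc-status = HTTP code

total = sum(statuses.values()) or 1
for code, n in statuses.most_common():
    print(f"{code}: {n} ({n / total:.1%})")
```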
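For solution 3, one way to read "varying the p, div, and span tags" is to wrap article paragraphs in randomly chosen block tags, so that rule-based scrapers keyed to a fixed HTML template break more easily. This is only a sketch of that idea, assuming the article body is available as plain-text paragraphs; the class-free markup and inline style are arbitrary choices.

```python
# Minimal sketch: wrap article paragraphs in randomly chosen block tags so
# that template-based scrapers keyed to a fixed structure have a harder time.
# Assumes paragraphs are available as plain text.

import random
from html import escape

TAGS = ["p", "div", "span"]

def render_article(paragraphs):
    parts = []
    for text in paragraphs:
        tag = random.choice(TAGS)
        # span is inline, so give it a block style to keep the layout intact
        style = ' style="display:block"' if tag == "span" else ""
        parts.append(f"<{tag}{style}>{escape(text)}</{tag}>")
    return "\n".join(parts)

if __name__ == "__main__":
    print(render_article(["First paragraph of the article.",
                          "Second paragraph of the article."]))
```

This does not stop a determined scraper, but it raises the cost of harvesting your pages with an off-the-shelf rule set.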
Using the methods above, the author's website has returned to normal. As of the writing of this article, all of the articles I just updated have been indexed.
A website that suddenly stops being indexed is a headache, but the important thing is to keep your thinking clear; that matters most in SEO. Don't run into a dead end; work your way out with sound methods. Once you do, you will find that your website is better than before and that Baidu has a better impression of it. This article comes from http://www.seo1912.com; please credit the source when reprinting.
Editor in charge: Chen Long. Author: SEO1912.