Today I'd like to briefly share with my fellow webmasters my experience getting a small website indexed by Baidu. I think many of you have run into the same problem while building sites, so I want to share my approach. For many websites, why does Baidu index only the homepage and none of the inner pages? This problem is actually easy to diagnose. We just need to look at the site's IIS logs: check whether Baidu Spider's crawl records look normal, whether the spider has crawled the site's inner pages, and what records it left behind.
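If you've never dug through IIS logs by hand, here is a minimal sketch in Python of how it can be done (the file name ex100101.log is just a placeholder; use whatever files sit in your server's log directory). IIS W3C logs declare their column layout in a #Fields directive, and Baidu's crawler identifies itself with "Baiduspider" in the User-Agent field:

FIELDS_PREFIX = "#Fields: "

def baidu_records(path):
    fields = []
    with open(path, encoding="utf-8", errors="replace") as f:
        for line in f:
            if line.startswith(FIELDS_PREFIX):
                # IIS declares the column order in a #Fields directive.
                fields = line[len(FIELDS_PREFIX):].split()
            elif not line.startswith("#") and fields:
                yield dict(zip(fields, line.split()))  # one row per request

# Print every request Baidu Spider made: which URL it fetched, plus the
# three status columns (sc-status, sc-substatus, sc-win32-status).
for row in baidu_records("ex100101.log"):
    if "Baiduspider" in row.get("cs(User-Agent)", ""):
        print(row.get("cs-uri-stem"), row.get("sc-status"),
              row.get("sc-substatus"), row.get("sc-win32-status"))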
Let me take www.745sf.com, a site I recently built, as an example. When the site first went live, I posted some signature posts on forums like A5 and Laocha to attract spiders, and after a few days Baidu indexed the site. So far, so good, except that only the homepage was indexed and none of the inner pages. At first I figured it was just a new site and things would come step by step: in a few days the inner pages would get indexed naturally. Unfortunately, after waiting a month or two there was still only the homepage, with no other pages indexed, which left me quite depressed. Something had to be wrong for Baidu not to index the site normally. So I went to the server, downloaded all the logs from that period, and analyzed them. After searching for a long time I couldn't find a single normal spider crawl record: either the spider didn't touch the inner pages at all on a given day, or the records it left were all 200 0 64. After looking up some information online, I learned that this kind of record is abnormal. When Baidu Spider successfully crawls an inner page, the record it leaves should be 200 0 0; the fact that my site always showed 200 0 64 meant something was definitely wrong with my inner pages. Since I built this junk site myself, I know it inside out and could guess the causes without much effort. My first guess was that the keywords I chose were too popular: without anything unique, Baidu has no reason to index yet another junk site full of content it already has. Second, the site's content wasn't original.
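For anyone puzzled by the numbers: the "200 0 64" in a record is actually three fields, sc-status (the HTTP status), sc-substatus, and sc-win32-status. A win32 status of 0 means the response was delivered completely, while 64 is the Windows error "the specified network name is no longer available", i.e. the connection was cut before the spider got the whole page. Building on the sketch above (and the same placeholder file name), here is roughly how the records can be tallied to confirm the pattern:

from collections import Counter

counts = Counter()
for row in baidu_records("ex100101.log"):  # helper from the sketch above
    if "Baiduspider" not in row.get("cs(User-Agent)", ""):
        continue
    page = "homepage" if row.get("cs-uri-stem") == "/" else "inner page"
    key = (page, row.get("sc-status", "-"), row.get("sc-substatus", "-"),
           row.get("sc-win32-status", "-"))
    counts[key] += 1

for (page, status, sub, win32), n in sorted(counts.items()):
    # 200 0 0 = transfer completed; 200 0 64 = connection dropped mid-response.
    print(f"{page}: {status} {sub} {win32}  x{n}")

A healthy site should show 200 0 0 on both the homepage and the inner pages; a split like mine, where only the inner pages pile up 200 0 64, points at the inner pages themselves.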
Once I had identified the causes, I set out to fix them. I changed the site title and the keywords targeted on the pages to a more unique phrase (I'll change them back once Baidu indexes the site normally, haha), then found some articles and reworked them into pseudo-original content to fool Baidu. After making these corrections, I posted links everywhere to attract spiders and kept watching the crawl records. Sure enough, the spider's records were back to normal: all 200 0 0. Since I updated the content every day for Baidu Spider to crawl, a big update finally came yesterday and all the crawled pages were released. I checked the site's indexed pages today and was quite pleased. It turns out Baidu is actually pretty lovable; there's no need to curse it every day. As long as you pay attention, it won't give us a hard time: the spider leaves a record of every crawl, so all we have to do is read those records and fix problems in time. That's all it takes.
The above is just my humble opinion; criticism from all the veterans is welcome.