Baidu cloud network disk search engine — a network-disk search engine written in PHP + MySQL.

## Running environment

Before starting, you need to install:

* PHP 5.3.7+
* MySQL
* Python 2.7
* [xunsearch](http://xunsearch.com/)

## Directory structure

The project layout is roughly as follows:

```
indexer/                # indexer
spider/                 # crawler
sql/
web/                    # website
  application/
    config/             # configuration
      config.php
      database.php      # database configuration
      ...
  static/               # static resources: css|js|font
  system/
  index.php
```

## Deployment

### Create the database

Create a database named `pan` with the encoding set to `utf-8`, then import the dump from `sql/` to create the tables.

### Website deployment

Both `nginx` and `apache` are supported. __apache__ needs *mod_rewrite* enabled. __nginx__ is configured as follows:

```nginx
location / {
    index index.php;
    try_files $uri $uri/ /index.php/$uri;
}

location ~ [^/]\.php(/|$) {
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_index index.php;
    include fastcgi.conf;
    include pathinfo.conf;
}
```
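For __apache__, a commonly used CodeIgniter-style rewrite rule is sketched below (an assumption, not taken from this project — place it in an `.htaccess` file in the web root and adjust paths to your setup):

```apache
RewriteEngine On
# Route all requests for non-existent files/directories through index.php
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)$ index.php/$1 [L]
```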
### Configuration

* `config.php` — website title, description and other site information
* `database.php` — database account, password and other connection information

> The website is built on the CodeIgniter framework. If you run into problems with installation, deployment, or secondary development, please refer to the [official documentation](http://codeigniter.org.cn/user_guide/general/welcome.html).

### Start the crawler

Enter the `spider/` directory and update the database information in `spider.py`.
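As a rough illustration of the `database.php` edit, a CodeIgniter 2.x-style configuration looks like the fragment below (the host, user, and password values are placeholders, and field names may differ in your CodeIgniter version):

```php
<?php
// web/application/config/database.php — placeholder values, adjust to your setup
$db['default']['hostname'] = 'localhost';
$db['default']['username'] = 'pan_user';      // your MySQL account
$db['default']['password'] = 'your_password'; // your MySQL password
$db['default']['database'] = 'pan';           // the database created above
```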
If this is your first deployment, run the following command to seed the crawler:

```
python spider.py --seed-user
```

This fetches information about Baidu Cloud's popular sharing users, which the crawler then uses as starting points for capturing data. After that, run:

```
python spider.py
```

The crawler is now working.

### Install xunsearch

The project currently uses __xunsearch__ as its search engine (it will be replaced by `elasticsearch` later). For the installation process, see http://xunsearch.com/doc/php/guide/start.installation (the PHP SDK does not need to be installed separately — it is already bundled into the web directory).

### Index the data

At this point the crawler is capturing data and the website is built, but nothing is searchable yet. The last step is building the index. Enter the `indexer/` directory and replace `$prefix` in `indexer.php` with your web root path:

```php
require '$prefix/application/helpers/xs/lib/XS.php';
```

Also update the database account and password in `indexer.php`, then run:

```
php ./index.php
```
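For example, if the `web/` directory were deployed at `/var/www/pan/web` (a hypothetical path, substitute your own), the `require` line in `indexer.php` would become:

```php
// $prefix replaced with an example web root — use your actual deployment path
require '/var/www/pan/web/application/helpers/xs/lib/XS.php';
```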