sparkler
1.0.0
A web crawler is a bot program that fetches resources from the web in order to build applications such as search engines and knowledge bases. Sparkler builds on advances in distributed computing and information retrieval by bringing together various Apache projects such as Spark, Kafka, Lucene/Solr, Tika, and pf4j. Sparkler is an extensible, highly scalable, high-performance web crawler; it is an evolution of Apache Nutch and runs on an Apache Spark cluster.
Sparkler is being proposed to the Apache Incubator. Review the proposal document and provide your suggestions there. Will be done later, eventually!
To use Sparkler, install Docker and run the commands below:
# Step 0. Get the image
docker pull ghcr.io/uscdatascience/sparkler/sparkler:main
# Step 1. Create a volume for elastic
docker volume create elastic
# Step 2. Inject seed urls
docker run -v elastic:/elasticsearch-7.17.0/data ghcr.io/uscdatascience/sparkler/sparkler:main inject -id myid -su 'http://www.bbc.com/news'
# Step 3. Start the crawl job
docker run -v elastic:/elasticsearch-7.17.0/data ghcr.io/uscdatascience/sparkler/sparkler:main crawl -id myid -tn 100 -i 2 # id=myid, crawl top 100 URLs, run -i=2 iterations
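If you would rather inject a whole list of seed URLs in the dockerized setup, here is a minimal sketch. It assumes the image accepts the same -sf flag that sparkler.sh takes (see the steps that follow); /data/seed-urls.txt is only an illustrative path for the mounted file:
# Optional: inject from a seed file instead of a single URL (sketch)
docker run -v elastic:/elasticsearch-7.17.0/data \
           -v "$(pwd)/seed-urls.txt:/data/seed-urls.txt" \
           ghcr.io/uscdatascience/sparkler/sparkler:main \
           inject -id myid -sf /data/seed-urls.txt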
To crawl using your own list of seed URLs:
1. Follow Steps 0-1 above.
2. Create a file named seed-urls.txt using the Emacs editor as follows:
a. emacs sparkler/bin/seed-urls.txt
b. Copy and paste your URLs
c. Ctrl+x Ctrl+s to save
d. Ctrl+x Ctrl+c to quit the editor [Reference: http://mally.stanford.edu/~sr/computing/emacs.html]
* Note: You can also use the Vim or Nano editors, or run: echo -e "http://example1.com\nhttp://example2.com" >> seed-urls.txt (see the example seed file contents after these steps).
3. Inject the seed URLs using the following command (assuming you are in the sparkler/bin directory):
$ bash sparkler.sh inject -id 1 -sf seed-urls.txt
4. Start the crawl job.
To keep crawling until no new URLs remain, use -i -1.
Example: /data/sparkler/bin/sparkler.sh crawl -id 1 -i -1
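For reference, a seed file is just a plain-text list with one URL per line, matching what the echo command in the note above produces; the URLs here are placeholders:
# Example seed-urls.txt (placeholder URLs, one per line)
http://example1.com
http://example2.com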
For any questions or suggestions, feel free to reach us on our mailing list [email protected]. Alternatively, you can get help on the Slack channel: http://irds.usc.edu/sparkler/#slack