lieu
An alternative search engine.
It was created in response to an environment of apathy concerning the use of hypertext search and discovery. In Lieu, it is not the internet that is made searchable, but one's own neighbourhood. Put another way, Lieu is a neighbourhood search engine: a way for personal webrings to increase serendipitous connections.
For the complete search syntax (including how to use `site:` and `-site:`), see the search syntax and API documentation. For more tips, read the appendix.
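As a quick illustration of the filters named above, queries combine plain search terms with domain filters; the domains below are made up for the example:

```
cats dogs                  pages matching both terms
cats site:example.org      restrict results to example.org
cats -site:example.org     exclude results from example.org
```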
```
$ lieu help
Lieu: neighbourhood search engine

Commands
- precrawl  (scrapes config's general.url for a list of links: <li> elements containing an anchor <a> tag)
- crawl     (start crawler, crawls all urls in config's crawler.webring file)
- ingest    (ingest crawled data, generates database)
- search    (interactive cli for searching the database)
- host      (hosts search engine over http)

Example:
    lieu precrawl > data/webring.txt
    lieu crawl > data/crawled.txt
    lieu ingest
    lieu host
```
Lieu's crawl and precrawl commands output to standard out, to make it easy to inspect the data. You will typically want to redirect their output to the files Lieu reads from, as defined in the config file. See the typical workflow below.
1. Add the domains you want to crawl to the file defined by `config.crawler.webring`
2. Optional: if you have a webpage listing the sites you want to crawl, set the config's `url` field to that page and generate the list of domains to crawl with precrawl: `lieu precrawl > data/webring.txt`
3. Crawl the domains: `lieu crawl > data/crawled.txt`
4. Generate the database: `lieu ingest`
5. Host the search engine: `lieu host`
After ingesting the data with `lieu ingest`, you can also search the corpus from the terminal with `lieu search`.
Adjust the config's `theme` values, shown below.
The configuration file is written in TOML.
```toml
[general]
name = "Merveilles Webring"
# used by the precrawl command and linked to in /about route
url = "https://webring.xxiivv.com"
# used by the precrawl command to populate the Crawler.Webring file;
# takes simple html selectors. might be a bit wonky :)
webringSelector = "li > a[href]:first-of-type"
port = 10001

[theme]
# colors specified in hex (or valid css names) which determine the theme of the lieu instance
# NOTE: if (and only if) all three values are set, lieu uses them to generate the file html/assets/theme.css at startup.
# You can also write directly to that file instead of adding this section to your configuration file
foreground = "#ffffff"
background = "#000000"
links = "#ffffff"

[data]
# the source file should contain the crawl command's output
source = "data/crawled.txt"
# location & name of the sqlite database
database = "data/searchengine.db"
# contains words and phrases disqualifying scraped paragraphs from being presented in search results
heuristics = "data/heuristics.txt"
# aka stopwords, in the search engine biz: https://en.wikipedia.org/wiki/Stop_word
wordlist = "data/wordlist.txt"

[crawler]
# manually curated list of domains, or the output of the precrawl command
webring = "data/webring.txt"
# domains that are banned from being crawled but might originally be part of the webring
bannedDomains = "data/banned-domains.txt"
# file suffixes that are banned from being crawled
bannedSuffixes = "data/banned-suffixes.txt"
# phrases and words which won't be scraped (e.g. if contained in a link)
boringWords = "data/boring-words.txt"
# domains that won't be output as outgoing links
boringDomains = "data/boring-domains.txt"
# queries to search for finding preview text
previewQueryList = "data/preview-query-list.txt"
```
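The `webringSelector` above tells precrawl how to find outgoing links: `<li>` elements containing an anchor tag. As a rough sketch of that idea, here is a self-contained Go snippet that pulls hrefs out of list-item anchors with a regular expression. This is only an illustration with made-up page content, not Lieu's actual selector engine:

```go
package main

import (
	"fmt"
	"regexp"
)

// Matches an <li> whose first enclosed element is an anchor with an href,
// capturing the href value. A regex stand-in for "li > a[href]:first-of-type".
var liAnchor = regexp.MustCompile(`(?s)<li[^>]*>\s*<a[^>]+href="([^"]+)"`)

// extractWebring returns the href of each list-item anchor found in page.
func extractWebring(page string) []string {
	var urls []string
	for _, m := range liAnchor.FindAllStringSubmatch(page, -1) {
		urls = append(urls, m[1])
	}
	return urls
}

func main() {
	page := `<ul>
  <li><a href="https://example.org">one</a></li>
  <li><a href="https://example.net">two</a></li>
</ul>`
	for _, u := range extractWebring(page) {
		fmt.Println(u) // prints https://example.org, then https://example.net
	}
}
```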
For your own use, the following config fields should be customized:
- `name`
- `url`
- `port`
- `source`
- `webring`
- `bannedDomains`
The files defined by the following config fields can be kept as-is, unless you have specific requirements:
- `database`
- `heuristics`
- `wordlist`
- `bannedSuffixes`
- `previewQueryList`
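For context on what the `wordlist` (stopword) file is for: search engines typically drop extremely common words before indexing or matching. A minimal Go sketch of that filtering step, using a toy stopword set; this is an illustration of the concept, not Lieu's actual ingest code:

```go
package main

import (
	"fmt"
	"strings"
)

// stripStopwords lowercases the text, splits it on whitespace, and drops
// any word found in the stopword set, returning the words worth indexing.
func stripStopwords(text string, stopwords map[string]bool) []string {
	var kept []string
	for _, w := range strings.Fields(strings.ToLower(text)) {
		if !stopwords[w] {
			kept = append(kept, w)
		}
	}
	return kept
}

func main() {
	// In Lieu's case the set would be loaded from data/wordlist.txt.
	stop := map[string]bool{"the": true, "a": true, "of": true}
	fmt.Println(stripStopwords("The history of a webring", stop)) // prints [history webring]
}
```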
For a full rundown of the files and their various jobs, see the file descriptions.
Build a binary:
```shell
# this project has an experimental fulltext-search feature, so we need to include sqlite's fts engine (fts5)
go build --tags fts5

# or using go run
go run --tags fts5 .
```
Create new release binaries:
```shell
./release.sh
```
The source code is licensed under `AGPL-3.0-or-later`. The font Inter is available under the SIL Open Font License, Version 1.1, and Noto Serif under the Apache License, Version 2.0.