Anserini is a toolkit for reproducible information retrieval research. By building on Lucene, we aim to bridge the gap between academic information retrieval research and the practice of building real-world search applications. Among other goals, our effort aims to be the opposite of this.* Anserini grew out of a reproducibility study of various open-source retrieval engines in 2016 (Lin et al., ECIR 2016). See Yang et al. (SIGIR 2017) and Yang et al. (JDIQ 2018) for overviews.
❗ Anserini was upgraded from JDK 11 to JDK 21 at commit 272565 (2024/04/03), which corresponds to the release of v0.35.0.
Anserini is packaged in a self-contained fatjar, which also provides the simplest way to get started. Assuming you've already got Java installed, fetch the fatjar:
```bash
wget https://repo1.maven.org/maven2/io/anserini/anserini/0.38.0/anserini-0.38.0-fatjar.jar
```
The following command generates a SPLADE++ ED run on the MS MARCO passage corpus, with the dev queries encoded using ONNX:
```bash
java -cp anserini-0.38.0-fatjar.jar io.anserini.search.SearchCollection \
  -index msmarco-v1-passage.splade-pp-ed \
  -topics msmarco-v1-passage.dev \
  -encoder SpladePlusPlusEnsembleDistil \
  -output run.msmarco-v1-passage-dev.splade-pp-ed-onnx.txt \
  -impact -pretokenized
```
To evaluate:
```bash
java -cp anserini-0.38.0-fatjar.jar trec_eval -c -M 10 -m recip_rank \
  msmarco-passage.dev-subset run.msmarco-v1-passage-dev.splade-pp-ed-onnx.txt
```
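If you just want a quick sanity check without a learned sparse model, the same fatjar can run a plain BM25 baseline. The sketch below assumes the prebuilt index is named msmarco-v1-passage, following the registry naming used above; double-check the names against the fatjar documentation for your release:

```bash
# BM25 baseline over the prebuilt MS MARCO passage index (index name is an assumption).
java -cp anserini-0.38.0-fatjar.jar io.anserini.search.SearchCollection \
  -index msmarco-v1-passage \
  -topics msmarco-v1-passage.dev \
  -output run.msmarco-v1-passage-dev.bm25.txt \
  -bm25

# Evaluate the run with the bundled trec_eval, exactly as above.
java -cp anserini-0.38.0-fatjar.jar trec_eval -c -M 10 -m recip_rank \
  msmarco-passage.dev-subset run.msmarco-v1-passage-dev.bm25.txt
```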
See detailed instructions for the current fatjar release of Anserini (v0.38.0) to reproduce regression experiments on the MS MARCO V2.1 corpora for TREC 2024 RAG, on MS MARCO V1 Passage, and on BEIR, all directly from the fatjar!
Also, Anserini comes with a built-in webapp for interactive querying along with a REST API that can be used by other applications. Check out our documentation here.
Most Anserini features are exposed in the Pyserini Python interface. If you're more comfortable with Python, start there, although Anserini forms an important building block of Pyserini, so it remains worthwhile to learn about Anserini.
You'll need Java 21 and Maven 3.9+ to build Anserini.
Clone our repo with the `--recurse-submodules` option to make sure the `tools/` submodule also gets cloned (alternatively, use `git submodule update --init`).
Then, build using Maven:
```bash
mvn clean package
```
The `tools/` directory, which contains evaluation tools and other scripts, is actually a separate repository (anserini-tools), integrated as a Git submodule (so that it can be shared across related projects).
Build as follows (you might get warnings, but okay to ignore):
```bash
cd tools/eval && tar xvfz trec_eval.9.0.4.tar.gz && cd trec_eval.9.0.4 && make && cd ../../..
cd tools/eval/ndeval && make && cd ../../..
```
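As a quick, optional sanity check that the build produced what the rest of this guide expects, you can look for the fatjar under target/ and run the freshly built trec_eval; the exact fatjar file name depends on the version you built:

```bash
# The Maven build drops a self-contained fatjar under target/ (version number will vary).
ls target/anserini-*-fatjar.jar

# The trec_eval binary should exist and print its help message.
tools/eval/trec_eval.9.0.4/trec_eval -h
```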
With that, you should be ready to go. The onboarding path for Anserini starts here!
If you are using Windows, please use WSL2 to build Anserini. Please refer to the WSL2 Installation document to install WSL2 if you haven't already.
Note that on Windows without WSL2, tests may fail due to encoding issues, see #1466.
A simple workaround is to skip tests by adding `-Dmaven.test.skip=true` to the above `mvn` command.
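For reference, the complete build command with tests skipped is just the earlier Maven invocation plus that flag:

```bash
mvn clean package -Dmaven.test.skip=true
```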
See #1121 for additional discussions on debugging Windows build errors.
Anserini is designed to support end-to-end experiments on various standard IR test collections out of the box. Each of these end-to-end regressions starts from the raw corpus, builds the necessary index, performs retrieval runs, and generates evaluation results. See individual pages for details.
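Each regression is driven by a YAML config in src/main/resources/regression/ and can be run end to end with the run_regression.py driver (the same script used for BEIR below). The sketch uses msmarco-v1-passage purely as an illustration; substitute any regression name that matches one of the YAML configs:

```bash
# Sketch: run a single end-to-end regression (build the index, run retrieval, evaluate).
# The regression name is an assumption; use any name matching a YAML file in src/main/resources/regression/.
python src/main/python/run_regression.py --index --verify --search \
  --regression msmarco-v1-passage
```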
| | dev | DL19 | DL20 |
|---|---|---|---|
| Unsupervised Sparse | | | |
| Lucene BoW baselines | ? | ? | ? |
| Quantized BM25 | ? | ? | ? |
| WordPiece baselines (pre-tokenized) | ? | ? | ? |
| WordPiece baselines (Huggingface) | ? | ? | ? |
| WordPiece + Lucene BoW baselines | ? | ? | ? |
| doc2query | ? | | |
| doc2query-T5 | ? | ? | ? |
| Learned Sparse (uniCOIL family) | | | |
| uniCOIL noexp | ? | ? | ? |
| uniCOIL with doc2query-T5 | ? | ? | ? |
| uniCOIL with TILDE | ? | | |
| Learned Sparse (other) | | | |
| DeepImpact | ? | | |
| SPLADEv2 | ? | | |
| SPLADE++ CoCondenser-EnsembleDistil | ? | ? | ? |
| SPLADE++ CoCondenser-SelfDistil | ? | ? | ? |
| Learned Dense (HNSW indexes) | | | |
| cosDPR-distil | full:? | full:? | full:? |
| BGE-base-en-v1.5 | full:? | full:? | full:? |
| OpenAI Ada2 | full:? int8:? | full:? int8:? | full:? int8:? |
| Cohere English v3.0 | full:? int8:? | full:? int8:? | full:? int8:? |
| Learned Dense (flat indexes) | | | |
| cosDPR-distil | full:? | full:? | full:? |
| BGE-base-en-v1.5 | full:? | full:? | full:? |
| OpenAI Ada2 | full:? int8:? | full:? int8:? | full:? int8:? |
| Cohere English v3.0 | full:? int8:? | full:? int8:? | full:? int8:? |
| Learned Dense (Inverted; experimental) | | | |
| cosDPR-distil w/ "fake words" | ? | ? | ? |
| cosDPR-distil w/ "LexLSH" | ? | ? | ? |
Key:
Corpora | Size | Checksum |
---|---|---|
Quantized BM25 | 1.2 GB | 0a623e2c97ac6b7e814bf1323a97b435 |
uniCOIL (noexp) | 2.7 GB | f17ddd8c7c00ff121c3c3b147d2e17d8 |
uniCOIL (d2q-T5) | 3.4 GB | 78eef752c78c8691f7d61600ceed306f |
uniCOIL (TILDE) | 3.9 GB | 12a9c289d94e32fd63a7d39c9677d75c |
DeepImpact | 3.6 GB | 73843885b503af3c8b3ee62e5f5a9900 |
SPLADEv2 | 9.9 GB | b5d126f5d9a8e1b3ef3f5cb0ba651725 |
SPLADE++ CoCondenser-EnsembleDistil | 4.2 GB | e489133bdc54ee1e7c62a32aa582bc77 |
SPLADE++ CoCondenser-SelfDistil | 4.8 GB | cb7e264222f2bf2221dd2c9d28190be1 |
cosDPR-distil | 57 GB | e20ffbc8b5e7f760af31298aefeaebbd |
BGE-base-en-v1.5 | 59 GB | 353d2c9e72e858897ad479cca4ea0db1 |
OpenAI-ada2 | 109 GB | a4d843d522ff3a3af7edbee789a63402 |
Cohere embed-english-v3.0 | 38 GB | 06a6e38a0522850c6aa504db7b2617f5 |
| | dev | DL19 | DL20 |
|---|---|---|---|
Unsupervised Lexical, Complete Doc* | |||
Lucene BoW baselines | + | + | + |
WordPiece baselines (pre-tokenized) | + | + | + |
WordPiece baselines (Huggingface tokenizer) | + | + | + |
WordPiece + Lucene BoW baselines | + | + | + |
doc2query-T5 | + | + | + |
Unsupervised Lexical, Segmented Doc* | |||
Lucene BoW baselines | + | + | + |
WordPiece baselines (pre-tokenized) | + | + | + |
WordPiece + Lucene BoW baselines | + | + | + |
doc2query-T5 | + | + | + |
Learned Sparse Lexical | |||
uniCOIL noexp | ✓ | ✓ | ✓ |
uniCOIL with doc2query-T5 | ✓ | ✓ | ✓ |
Corpora | Size | Checksum |
---|---|---|
MS MARCO V1 doc: uniCOIL (noexp) | 11 GB | 11b226e1cacd9c8ae0a660fd14cdd710 |
MS MARCO V1 doc: uniCOIL (d2q-T5) | 19 GB | 6a00e2c0c375cb1e52c83ae5ac377ebb |
| | dev | DL21 | DL22 | DL23 |
|---|---|---|---|---|
Unsupervised Lexical, Original Corpus | ||||
baselines | + | + | + | + |
doc2query-T5 | + | + | + | + |
Unsupervised Lexical, Augmented Corpus | ||||
baselines | + | + | + | + |
doc2query-T5 | + | + | + | + |
Learned Sparse Lexical | ||||
uniCOIL noexp zero-shot | ✓ | ✓ | ✓ | ✓ |
uniCOIL with doc2query-T5 zero-shot | ✓ | ✓ | ✓ | ✓ |
SPLADE++ CoCondenser-EnsembleDistil (cached queries) | ✓ | ✓ | ✓ | ✓ |
SPLADE++ CoCondenser-EnsembleDistil (ONNX) | ✓ | ✓ | ✓ | ✓ |
SPLADE++ CoCondenser-SelfDistil (cached queries) | ✓ | ✓ | ✓ | ✓ |
SPLADE++ CoCondenser-SelfDistil (ONNX) | ✓ | ✓ | ✓ | ✓ |
Corpora | Size | Checksum |
---|---|---|
uniCOIL (noexp) | 24 GB | d9cc1ed3049746e68a2c91bf90e5212d |
uniCOIL (d2q-T5) | 41 GB | 1949a00bfd5e1f1a230a04bbc1f01539 |
SPLADE++ CoCondenser-EnsembleDistil | 66 GB | 2cdb2adc259b8fa6caf666b20ebdc0e8 |
SPLADE++ CoCondenser-SelfDistil | 76 GB | 061930dd615c7c807323ea7fc7957877 |
| | dev | DL21 | DL22 | DL23 |
|---|---|---|---|---|
Unsupervised Lexical, Complete Doc | ||||
baselines | + | + | + | + |
doc2query-T5 | + | + | + | + |
Unsupervised Lexical, Segmented Doc | ||||
baselines | + | + | + | + |
doc2query-T5 | + | + | + | + |
Learned Sparse Lexical | ||||
uniCOIL noexp zero-shot | ✓ | ✓ | ✓ | ✓ |
uniCOIL with doc2query-T5 zero-shot | ✓ | ✓ | ✓ | ✓ |
Corpora | Size | Checksum |
---|---|---|
MS MARCO V2 doc: uniCOIL (noexp) | 55 GB | 97ba262c497164de1054f357caea0c63 |
MS MARCO V2 doc: uniCOIL (d2q-T5) | 72 GB | c5639748c2cbad0152e10b0ebde3b804 |
The MS MARCO V2.1 corpora were derived from the V2 corpora for the TREC 2024 RAG Track. The experiments below use topics and qrels originally targeted at the V2 corpora that have been "projected" over to the V2.1 corpora.
| | dev | DL21 | DL22 | DL23 | RAGgy dev |
|---|---|---|---|---|---|
Unsupervised Lexical, Complete Doc | |||||
baselines | + | + | + | + | + |
Unsupervised Lexical, Segmented Doc | |||||
baselines | + | + | + | + | + |
Key: the column abbreviations (F1, F2, MF, U1, S1, BGE) map to the models in the "Key | Corpus | Checksum | MODEL" table below; F2 uses text pre-tokenized with the bert-base-uncased tokenizer and keyword queries (?).

See instructions below the table for how to reproduce results for a model on all BEIR corpora "in one go".
Corpus | F1 | F2 | MF | U1 | S1 | BGE (flat) | BGE (HNSW) |
---|---|---|---|---|---|---|---|
TREC-COVID | ? | ? | ? | ? | ? | full:? | full:? |
BioASQ | ? | ? | ? | ? | ? | full:? | full:? |
NFCorpus | ? | ? | ? | ? | ? | full:? | full:? |
NQ | ? | ? | ? | ? | ? | full:? | full:? |
HotpotQA | ? | ? | ? | ? | ? | full:? | full:? |
FiQA-2018 | ? | ? | ? | ? | ? | full:? | full:? |
Signal-1M(RT) | ? | ? | ? | ? | ? | full:? | full:? |
TREC-NEWS | ? | ? | ? | ? | ? | full:? | full:? |
Robust04 | ? | ? | ? | ? | ? | full:? | full:? |
ArguAna | ? | ? | ? | ? | ? | full:? | full:? |
Touche2020 | ? | ? | ? | ? | ? | full:? | full:? |
CQADupStack-Android | ? | ? | ? | ? | ? | full:? | full:? |
CQADupStack-English | ? | ? | ? | ? | ? | full:? | full:? |
CQADupStack-Gaming | ? | ? | ? | ? | ? | full:? | full:? |
CQADupStack-Gis | ? | ? | ? | ? | ? | full:? | full:? |
CQADupStack-Mathematica | ? | ? | ? | ? | ? | full:? | full:? |
CQADupStack-Physics | ? | ? | ? | ? | ? | full:? | full:? |
CQADupStack-Programmers | ? | ? | ? | ? | ? | full:? | full:? |
CQADupStack-Stats | ? | ? | ? | ? | ? | full:? | full:? |
CQADupStack-Tex | ? | ? | ? | ? | ? | full:? | full:? |
CQADupStack-Unix | ? | ? | ? | ? | ? | full:? | full:? |
CQADupStack-Webmasters | ? | ? | ? | ? | ? | full:? | full:? |
CQADupStack-Wordpress | ? | ? | ? | ? | ? | full:? | full:? |
Quora | ? | ? | ? | ? | ? | full:? | full:? |
DBPedia | ? | ? | ? | ? | ? | full:? | full:? |
SCIDOCS | ? | ? | ? | ? | ? | full:? | full:? |
FEVER | ? | ? | ? | ? | ? | full:? | full:? |
Climate-FEVER | ? | ? | ? | ? | ? | full:? | full:? |
SciFact | ? | ? | ? | ? | ? | full:? | full:? |
To reproduce the SPLADE++ CoCondenser-EnsembleDistil results, start by downloading the collection:
```bash
wget https://rgw.cs.uwaterloo.ca/pyserini/data/beir-v1.0.0-splade-pp-ed.tar -P collections/
tar xvf collections/beir-v1.0.0-splade-pp-ed.tar -C collections/
```
The tarball is 42 GB and has MD5 checksum `9c7de5b444a788c9e74c340bf833173b`.
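Before unpacking, you can optionally check the download against that checksum (md5sum on Linux; the macOS equivalent is md5):

```bash
# Should print 9c7de5b444a788c9e74c340bf833173b
md5sum collections/beir-v1.0.0-splade-pp-ed.tar
```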
Once you've unpacked the data, the following commands will loop over all BEIR corpora and run the regressions:
MODEL="splade-pp-ed"; CORPORA=(trec-covid bioasq nfcorpus nq hotpotqa fiqa signal1m trec-news robust04 arguana webis-touche2020 cqadupstack-android cqadupstack-english cqadupstack-gaming cqadupstack-gis cqadupstack-mathematica cqadupstack-physics cqadupstack-programmers cqadupstack-stats cqadupstack-tex cqadupstack-unix cqadupstack-webmasters cqadupstack-wordpress quora dbpedia-entity scidocs fever climate-fever scifact); for c in "${CORPORA[@]}"
do
echo "Running $c..."
python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-${c}.${MODEL}.onnx > logs/log.beir-v1.0.0-${c}-${MODEL}.onnx 2>&1
done
You can verify the results by examining the log files in `logs/`.
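There is no required way to inspect them, but a quick scan for failures is usually enough; this is just a sketch, so adjust the pattern to whatever run_regression.py prints in your version:

```bash
# List any log that mentions an error or failure; no output means all corpora completed cleanly.
grep -ilE "error|fail" logs/log.beir-v1.0.0-*-splade-pp-ed.onnx
```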
For the other models, modify the above commands as follows:
Key | Corpus | Checksum | MODEL |
---|---|---|---|
F1 | `corpus` | `faefd5281b662c72ce03d22021e4ff6b` | `flat` |
F2 | `corpus-wp` | `3cf8f3dcdcadd49362965dd4466e6ff2` | `flat-wp` |
MF | `corpus` | `faefd5281b662c72ce03d22021e4ff6b` | `multifield` |
U1 | `unicoil-noexp` | `4fd04d2af816a6637fc12922cccc8a83` | `unicoil-noexp` |
S1 | `splade-pp-ed` | `9c7de5b444a788c9e74c340bf833173b` | `splade-pp-ed` |
BGE | `bge-base-en-v1.5` | `e4e8324ba3da3b46e715297407a24f00` | `bge-base-en-v1.5-hnsw` |
The "Corpus" above should be substituted into the full file name beir-v1.0.0-${corpus}.tar
, e.g., beir-v1.0.0-bge-base-en-v1.5.tar
.
The above commands should work with some minor modifications: you'll need to tweak the `--regression` parameter to match the schema of the YAML config files in `src/main/resources/regression/`.
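As a concrete sketch for one of the other rows, here is the BGE-base-en-v1.5 variant. The corpus name, checksum, and MODEL value come from the table above; the URL prefix is assumed to match the SPLADE++ download, and the final --regression names must be confirmed against the YAML files in src/main/resources/regression/:

```bash
# Download and unpack the BGE-encoded corpora (URL prefix assumed to match the SPLADE++ download above).
wget https://rgw.cs.uwaterloo.ca/pyserini/data/beir-v1.0.0-bge-base-en-v1.5.tar -P collections/
tar xvf collections/beir-v1.0.0-bge-base-en-v1.5.tar -C collections/

# Expected MD5 (from the table above): e4e8324ba3da3b46e715297407a24f00
md5sum collections/beir-v1.0.0-bge-base-en-v1.5.tar

# Then rerun the earlier loop with MODEL swapped in, after confirming that regressions named
# beir-v1.0.0-${c}.${MODEL}.onnx (or similar) actually exist for this model.
MODEL="bge-base-en-v1.5-hnsw"
```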
The experiments described below are not associated with rigorous end-to-end regression testing and thus provide a lower standard of reproducibility. For the most part, manual copying and pasting of commands into a shell is required to reproduce our results.
If you've found Anserini to be helpful, we have a simple request for you to contribute back.
In the course of reproducing baseline results on standard test collections, please let us know if you're successful by sending us a pull request with a simple note, like what appears at the bottom of the page for Disks 4 & 5.
Reproducibility is important to us, and we'd like to know about successes as well as failures.
Since the regression documentation is auto-generated, pull requests should be sent against the raw templates. The regression documentation can then be generated using the `bin/build.sh` script.
In turn, you'll be recognized as a contributor.
Beyond that, there are always open issues we would appreciate help on!
- Anserini was upgraded to Lucene 9 at commit 272565 (8/2/2022): this upgrade created backward compatibility issues, see #1952. Anserini will automatically detect Lucene 8 indexes and disable consistent tie-breaking to avoid runtime errors. However, Lucene 9 code running on Lucene 8 indexes may give slightly different results than Lucene 8 code running on Lucene 8 indexes. Lucene 8 code will not run on Lucene 9 indexes. Pyserini has also been upgraded and similar issues apply: Lucene 9 code running on Lucene 8 indexes may give slightly different results than Lucene 8 code running on Lucene 8 indexes.
- Anserini was upgraded to Java 11 at commit 17b702d (7/11/2019) from Java 8. Maven 3.3+ is also required.
- Anserini was upgraded to Lucene 8 at commit 75e36f9 (6/12/2019); prior to that, the toolkit used Lucene 7.6. Based on preliminary experiments, query evaluation latency has been much improved in Lucene 8. As a result of this upgrade, results of all regressions have changed slightly. To reproduce old results from Lucene 7.6, use v0.5.1.

This research is supported in part by the Natural Sciences and Engineering Research Council (NSERC) of Canada. Previous support came from the U.S. National Science Foundation under IIS-1423002 and CNS-1405688. Any opinions, findings, and conclusions or recommendations expressed do not necessarily reflect the views of the sponsors.