This repo contains data from AI21 Labs' paper Generating Benchmarks for Factuality Evaluation of Language Models.
We include the following FACTOR benchmarks for evaluating the factuality of language models: wiki_factor, news_factor, and expert_factor.
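Each benchmark tests whether a model assigns higher likelihood to the factually correct completion of a prefix than to non-factual variations of it. The sketch below illustrates that scoring idea on a single row; the column names (`prefix`, `completion`, `contradiction_*`) are assumptions for illustration only, so check the released CSVs for the actual schema, and use `eval_factuality.py` for the real evaluation:

```python
# Minimal sketch of FACTOR-style scoring for one example.
# Column names below are hypothetical; inspect the released CSVs for the real schema.
import pandas as pd
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def completion_logprob(prefix: str, completion: str) -> float:
    """Sum of token log-probabilities of `completion` given `prefix`.
    (Token-boundary effects at the prefix/completion seam are ignored here.)"""
    n_prefix = tokenizer(prefix, return_tensors="pt").input_ids.shape[1]
    ids = tokenizer(prefix + completion, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)  # position i predicts token i+1
    targets = ids[0, 1:]
    # Keep only the positions whose target is a completion token.
    return log_probs[n_prefix - 1:].gather(1, targets[n_prefix - 1:, None]).sum().item()

df = pd.read_csv("./data/wiki_factor.csv")
row = df.iloc[0]
true_lp = completion_logprob(row["prefix"], row["completion"])  # hypothetical columns
false_lps = [completion_logprob(row["prefix"], row[col])
             for col in df.columns if col.startswith("contradiction")]
print("factual completion ranked first:", all(true_lp > lp for lp in false_lps))
```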
To install the required libraries in our repo, run:

```
pip install -r requirements.txt
```

If you need a PyTorch build that matches your CUDA version, install it before running the command above.
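For example, a recent PyTorch build for CUDA 11.8 can typically be installed from the official wheel index (the CUDA version here is just an assumption; see https://pytorch.org/ for the exact command matching your setup):

```
pip install torch --index-url https://download.pytorch.org/whl/cu118
```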
In the paper, we report results for the following models (replace `$MODEL_NAME` below with one of these):
* GPT-2: `gpt2`, `gpt2-medium`, `gpt2-large`, `gpt2-xl`
* GPT-Neo/GPT-J: `EleutherAI/gpt-neo-1.3B`, `EleutherAI/gpt-neo-2.7B`, `EleutherAI/gpt-j-6B`
* OPT: `facebook/opt-125m`, `facebook/opt-350m`, `facebook/opt-1.3b`, `facebook/opt-2.7b`, `facebook/opt-6.7b`, `facebook/opt-13b`, `facebook/opt-30b`, `facebook/opt-66b`
To run evaluation on models over FACTOR datasets, use the following command:

```
python eval_factuality.py \
  --data_file ./data/wiki_factor.csv \
  --output_folder $OUTPUT_DIR \
  --model_name $MODEL_NAME
```
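For instance, to evaluate GPT-2 XL on wiki_factor and write outputs to a local results directory (the directory name is arbitrary):

```
mkdir -p ./results
python eval_factuality.py \
  --data_file ./data/wiki_factor.csv \
  --output_folder ./results \
  --model_name gpt2-xl
```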
Data and license information:

* wiki_factor, expert_factor, and code: released under the MIT license.
* news_factor: the benchmark is derived from The RefinedWeb Dataset. The public extract is made available under an ODC-By 1.0 license; users should also abide by the CommonCrawl ToU: https://commoncrawl.org/terms-of-use/.

If you find our paper or code helpful, please cite our paper:
```
@article{muhlgay2023generating,
  title={Generating benchmarks for factuality evaluation of language models},
  author={Muhlgay, Dor and Ram, Ori and Magar, Inbal and Levine, Yoav and Ratner, Nir and Belinkov, Yonatan and Abend, Omri and Leyton-Brown, Kevin and Shashua, Amnon and Shoham, Yoav},
  journal={arXiv preprint arXiv:2307.06908},
  year={2023}
}
```