interpret-lm-knowledge
1.0.0
Idea: How do we interpret what a language model learns at various stages of training? Language models have recently been described as open knowledge bases. We can generate knowledge graphs by extracting relation triples from masked language models at sequential epochs or across architecture variants to examine the knowledge acquisition process.
Datasets: SQuAD, Google-RE (3 flavors)
Models: BERT, RoBERTa, DistilBERT, plus RoBERTa trained from scratch
Authors: Vinitra Swamy, Angelika Romanou, Martin Jaggi
This repository is the official implementation of the NeurIPS 2021 XAI4Debugging paper "Interpreting Language Models Through Knowledge Graph Extraction". Found this work useful? Please cite our paper.
git clone https://github.com/epfml/interpret-lm-knowledge.git
pip install git+https://github.com/huggingface/transformers
pip install textacy
cd interpret-lm-knowledge/scripts
python run_knowledge_graph_experiments.py <dataset> <model> <use_spacy>
e.g. squad Bert spacy
e.g. re-place-birth Roberta
Optional arguments:
dataset=squad - "squad", "re-place-birth", "re-date-birth", "re-place-death"
model=Roberta - "Bert", "Roberta", "DistilBert"
extractor=spacy - "spacy", "textacy", "custom"
See the run_lm_experiments notebook for examples.
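For intuition about what these experiments probe, here is a minimal sketch of relation-triple extraction from a masked LM via the Hugging Face fill-mask pipeline. The model name, cloze template, and triple layout below are illustrative placeholders, not the repository's actual extraction code.

from transformers import pipeline

# Probe a masked LM with a cloze template; each candidate completion of the mask
# yields a candidate (subject, relation, object) triple, i.e. a knowledge-graph edge.
unmasker = pipeline("fill-mask", model="bert-base-uncased")
template = "Dante was born in {}.".format(unmasker.tokenizer.mask_token)

subject, relation = "Dante", "place_of_birth"
for pred in unmasker(template)[:3]:
    print((subject, relation, pred["token_str"].strip()), round(pred["score"], 3))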
!pip install git+https://github.com/huggingface/transformers
!pip list | grep -E 'transformers|tokenizers'
!pip install textacy
Run the wikipedia_train_from_scratch_lm.ipynb notebook, then generate knowledge graphs from the trained model with:
from run_training_kg_experiments import *
run_experiments(tokenizer, model, unmasker, "Roberta3e")
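For reference, a minimal sketch of how the tokenizer, model, and unmasker arguments could be constructed, assuming standard Hugging Face RoBERTa objects and a hypothetical checkpoint directory; the notebook itself defines the actual objects passed to run_experiments.

from transformers import RobertaTokenizerFast, RobertaForMaskedLM, pipeline

checkpoint_dir = "./roberta-from-scratch"  # hypothetical path to a checkpoint saved by the notebook
tokenizer = RobertaTokenizerFast.from_pretrained(checkpoint_dir)
model = RobertaForMaskedLM.from_pretrained(checkpoint_dir)
unmasker = pipeline("fill-mask", model=model, tokenizer=tokenizer)
# these objects are then passed to run_experiments(tokenizer, model, unmasker, "Roberta3e")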
@inproceedings{swamy2021interpreting,
  author    = {Swamy, Vinitra and Romanou, Angelika and Jaggi, Martin},
  booktitle = {Advances in Neural Information Processing Systems (NeurIPS), 1st Workshop on eXplainable AI Approaches for Debugging and Diagnosis},
  title     = {Interpreting Language Models Through Knowledge Graph Extraction},
  year      = {2021}
}