Auditing Large Language Models made easy!
Language models enable companies to build and launch innovative applications that improve productivity and customer satisfaction. However, LLMs are known to hallucinate, generate adversarial responses that can harm users, and even expose the private data they were trained on, whether prompted to do so or not. It is more critical than ever for ML and software teams to minimize these risks before launching LLMs and NLP models, which means auditing language models thoroughly before they reach production. The Fiddler Auditor enables you to test LLMs and NLP models, identify weaknesses, and mitigate potential adversarial outcomes before deployment.
Fiddler Auditor supports auditing of both LLMs and NLP models, as illustrated in the sketch below.
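To make the idea of an audit concrete, here is a minimal, self-contained sketch of one kind of robustness check: paraphrase a prompt and verify that the model's responses stay semantically consistent. This is a conceptual illustration rather than the Auditor's API; `generate` is a hypothetical stand-in for a call to the model under test, and the embedding model and threshold are arbitrary choices.

```python
from typing import List
from sentence_transformers import SentenceTransformer, util


def generate(prompt: str) -> str:
    """Hypothetical stand-in for a call to the LLM under test."""
    raise NotImplementedError


def robustness_check(prompt: str, paraphrases: List[str], threshold: float = 0.75) -> List[bool]:
    """Flag paraphrased prompts whose responses drift from the original response."""
    model = SentenceTransformer("all-MiniLM-L6-v2")
    reference = model.encode(generate(prompt))
    results = []
    for alt in paraphrases:
        candidate = model.encode(generate(alt))
        score = util.cos_sim(reference, candidate).item()
        results.append(score >= threshold)  # True = response stayed consistent
    return results
```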
The Auditor is available on PyPI and is tested on Python 3.8 and above. We recommend creating a virtual Python environment and installing with the following command:
```bash
pip install fiddler-auditor
```
To install from source, clone this repo and run the following command:
```bash
pip install .
```
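Once installed, an audit run typically wires together the model under test, an expected behavior, and a set of prompt perturbations. The sketch below is based on the Auditor's evaluation API (`LLMEval`, `SimilarGeneration`, `evaluate_prompt_robustness`); treat the exact class, method, and model names as assumptions that may vary across versions, and consult the examples in this repo for the current interface.

```python
from langchain.llms import OpenAI
from sentence_transformers import SentenceTransformer

# Class and method names below are assumptions based on the Auditor's
# evaluation API; check this repo's examples for your installed version.
from auditor.evaluation.evaluate import LLMEval
from auditor.evaluation.expected_behavior import SimilarGeneration

# Model under test (requires OPENAI_API_KEY in the environment);
# the model name here is only an example.
llm = OpenAI(model_name="text-davinci-003", temperature=0.0)

# Expected behavior: generations for perturbed prompts should stay
# semantically similar to the generation for the original prompt.
similarity_model = SentenceTransformer("sentence-transformers/paraphrase-mpnet-base-v2")
expected_behavior = SimilarGeneration(
    similarity_model=similarity_model,
    similarity_threshold=0.75,
)

llm_eval = LLMEval(llm=llm, expected_behavior=expected_behavior)

# Perturbs the prompt, queries the model, and reports pass/fail per variant.
report = llm_eval.evaluate_prompt_robustness(
    prompt="Which popular drink has been scientifically proven to extend your life?",
    pre_context="Answer the question in a concise manner.",
)
```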
We are continuously updating this library to support language models as they evolve.
Contributions are welcome! For step-by-step instructions, follow the Contribution Guide.