A simple but super effective LLM inference framework for planning and refinement.
Two LLMs are prompted with different roles:
Given a task to solve, the Persuader tries its best to persuade the Questioner to agree with its proposed solution. The Questioner, on the other hand, tries to find logical inconsistencies and loopholes in the Persuader's proposal and asks detailed questions. The chat between the Persuader and the Questioner continues until the Questioner responds with "AGREE".
As a result, the framework eventually delivers a highly detailed plan for solving the task at hand.
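The loop above can be sketched as follows. This is a minimal illustration, not the actual implementation in devil_advocate.py: `call_llm` is a placeholder for a real chat-completion call (e.g. via the openai library), and the prompts are hypothetical.

```python
def debate(task, call_llm, max_rounds=10):
    """Alternate Persuader and Questioner turns until the Questioner agrees.

    call_llm(role_prompt, transcript) -> str stands in for a real
    chat-completion request; max_rounds caps the exchange so it always ends.
    """
    # Hypothetical role prompts; the real prompts live in devil_advocate.py.
    persuader_prompt = f"Persuade the critic that your plan solves: {task}"
    questioner_prompt = 'Find logical flaws; reply AGREE only if none remain.'

    transcript = []
    for _ in range(max_rounds):
        # Persuader proposes (or refines) a solution.
        proposal = call_llm(persuader_prompt, transcript)
        transcript.append(("Persuader", proposal))

        # Questioner critiques the latest proposal.
        critique = call_llm(questioner_prompt, transcript)
        transcript.append(("Questioner", critique))

        # Stop once the Questioner signals agreement.
        if "AGREE" in critique:
            break
    return transcript
```

Plugging in the openai client would mean implementing `call_llm` to send the role prompt plus the transcript as chat messages and return the reply text.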
Clone the repository
Ensure you have the latest openai library installed.
pip install -U openai
Add your OpenAI API key to the devil_advocate.py code.
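Rather than pasting the key directly into the file, one option is to read it from an environment variable. A small sketch (the variable name `API_KEY` is hypothetical; check devil_advocate.py for where the key is actually used):

```python
import os

# Fall back to a placeholder so the script fails loudly if no key is set;
# "sk-paste-your-key-here" is not a real key.
API_KEY = os.environ.get("OPENAI_API_KEY", "sk-paste-your-key-here")
```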
Modify the task.txt file to your needs. The current task.txt file has a sample task.
Run using python devil_advocate.py
This resonates with Conditional Generative Adversarial Networks and state-of-the-art self-distillation paradigms (BYOL, DINOv2, etc.). These paradigms have empirically found that optimizing by pitting "asymmetric" models against each other delivers more fine-grained, robust representations. With further research, I wonder whether LLM inference frameworks would show the same effect.
Hopefully this repository can expand on this idea and lead to some super cool discoveries!
This is very much a work in progress, but I wanted to share my ideas and simple code for you all to try out. It's super simple but surprisingly effective.