Ollama API interaction Ghidra script for LLM-assisted reverse-engineering.
What is this?
This script interacts with Ollama's API to bring Large Language Models (LLMs) into Ghidra, letting you perform various reverse-engineering tasks without leaving the tool. It supports both local and remote instances of Ollama. This script is inspired by GptHidra.
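Under the hood, queries go to Ollama's HTTP API (`POST /api/generate`). Below is a minimal sketch of such a request using Python 3's standard library; the function names are illustrative, not the script's actual internals, and Ghidra's bundled Jython would use `urllib2` instead of `urllib.request`:

```python
import json
from urllib import request

OLLAMA_URL = "http://localhost:11434/api/generate"  # default local endpoint

def build_payload(model, prompt):
    """Build the JSON body for a non-streaming /api/generate request."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False})

def ask_ollama(prompt, model="llama3.1:8b", url=OLLAMA_URL):
    """Send one prompt to a running Ollama instance and return the generated text."""
    req = request.Request(
        url,
        data=build_payload(model, prompt).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        # With "stream": False, Ollama returns a single JSON object
        # whose "response" field holds the full completion.
        return json.loads(resp.read())["response"]
```

With `"stream": False` the server returns one JSON object instead of a token stream, which keeps the client side simple at the cost of waiting for the full response.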
Supported Models
This script supports any model that Ollama supports. Ollama also recently added support for any model available on HuggingFace in GGUF format, for example:

```shell
ollama run hf.co/arcee-ai/SuperNova-Medius-GGUF
```
Prerequisites
- Ghidra
- Ollama
- Any of the Ollama models
Ollama Setup
Feel free to replace `llama3.1:8b` with any of the Ollama-compatible models:

```shell
curl -fsSL https://ollama.com/install.sh | sh
ollama run llama3.1:8b
```

Now you should be good to go; `localhost:11434` should be ready to handle requests.
Note: This script also supports remote instances; set the IP address and port during the first-run configuration.
What can it do?
- Explain the function that is currently in the decompiler window
- Suggest a name for the current function, and automatically rename it if auto-renaming is enabled
- Rewrite current function with recommended comments
- Completely rewrite the current function, trying to improve function/parameter/variable names and also add comments
- User can ask a question about a function
- Find bugs/suggest potential vulnerabilities in the current function (best treated as a checklist to make sure you've covered everything; some suggestions will be off-base since the model lacks wider context)
- Use a modified version of this LeafBlowerLeafFunctions.py Ghidra script to automate analysis of potential 'leaf' functions such as strcpy, memcpy, and strlen in binaries with stripped symbols, auto-renaming them if enabled
- Explain the single assembly instruction that is currently selected in the listing window
- Explain multiple assembly instructions that are currently selected in the listing window
- General prompt entry for asking questions (quicker than Googling; good for simple queries)
Configuration Options
The following config options are available, and can be configured on first run:
- Server IP : If using a remote instance, set this to the IP of the remote instance; otherwise enter `localhost`
- Port : If your instance is on a different port, change it here - default is `11434`
- Scheme : Select `http` or `https` depending on how your instance is configured
- Model : Select the model you wish to use for analysis; this can be changed at any point
- Project-specific prompt : Used to give additional context to the model if required
- Response Comments : Some options store responses as a comment at the top of the function; this can be enabled/disabled here
- Auto-renaming : Some options try to automatically rename functions based on the responses; this can be enabled/disabled here
Options 11 & 12 can be used to adjust these settings after the first run.
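As a sketch of how these options fit together: the scheme, server IP, and port form the API base URL, and the project-specific prompt can be prepended to each query. The helper names below are hypothetical, chosen to mirror the option names above rather than the script's actual internals:

```python
def base_url(scheme, ip, port):
    """Combine the Scheme, Server IP, and Port options into the API base URL."""
    return "%s://%s:%s" % (scheme, ip, port)

def build_prompt(project_prompt, task_prompt, code):
    """Prepend the optional project-specific context to a task prompt and code."""
    # Skip the project prompt entirely when it is empty.
    parts = [p for p in (project_prompt, task_prompt) if p]
    return "\n\n".join(parts + [code])
```

For example, the defaults (`http`, `localhost`, `11434`) yield `http://localhost:11434`, and a project prompt like "This is MIPS router firmware" would appear at the top of every query sent to the model.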
Usage
- Place the GhidrOllama.py script and the ghidrollama_utils directory in your Ghidra script directory (usually `~/ghidra_scripts`).
- Find a function/instruction you want to feed to the LLM
- Run the script from the Script Manager window
- If this is the first time running the script, complete the initial configuration (this can be changed later)
- Select how you want the function/instruction to be analysed
- Wait until the output is printed to the console (wait time varies depending on model and specifications of host)
Ways to Run
Opening the script window to run this script every time is inconvenient; it can also easily be run in the following ways:
- Keybinding: The default keybinding for this script is Q
- Toolbar: A small icon on the toolbar can be clicked to run the script
Keybinding
To enable, once you have added your script directory to the list of Ghidra script directories, locate the script in the list. Right-click it and select "Assign key binding". When prompted for a key, enter `q`.
Toolbar
To enable the toolbar icon, locate the script in the script manager, and click the tick box in the first column (the In Toolbar column). The GhidrOllama icon should appear in the toolbar.
Notes
- It is worth noting that the analysis will only be as good as the model; this script essentially just bridges the model and Ghidra
- The better the model (more parameters, slower) the more comprehensive and correct the analysis will be (most of the time!)
- If you want a quick overview of a function, `llama3.1:8b` is quick and gives a decent analysis
- Be careful with large functions and low-context models: if the function is too big for the context window, the model will respond with complete gibberish (if it responds at all)
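One crude guard against the context-window issue is to estimate the token count of the decompiled output before sending it. The sketch below uses the common rough heuristic of ~4 characters per token; both the ratio and the function itself are assumptions for illustration, not part of the script:

```python
def fits_context(text, context_tokens=8192, chars_per_token=4):
    """Rough check that decompiled output will fit a model's context window.

    Uses the common ~4 chars/token heuristic; real tokenizers vary by model,
    so treat this as a warning threshold rather than an exact limit.
    """
    return len(text) // chars_per_token <= context_tokens
```

A wrapper could refuse to send (or warn about) any function whose decompilation fails this check, rather than letting a low-context model produce gibberish.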
Function Analysis Examples
Below is an implementation of `strchr()` as seen in the Ghidra decompile window with stripped symbols: