Interact with GPT AI models as a power user.

- Supports multiple AI providers
- Rich prompt engineering support
- Flexible UI
- Search Stack Overflow from within the editor
- Invoke pre-cooked custom CLI commands from within your editor
Download from the VSCode Marketplace and follow the instructions.

OR

Steps:

1. Press `Ctrl+Shift+P` (Windows, Linux) or `Cmd+Shift+P` (macOS), and run `> Extensions: Install Extension`.
2. Install `FlexiGPT` by `ppipada`.
3. Open your settings (`Ctrl/Cmd + ,` keyboard shortcut), search for `flexigpt`, and configure the options below.

Options:
```jsonc
// flexigpt basic configuration
"flexigpt.promptFiles": "/home/me/my_prompt_files/myprompts.js",
"flexigpt.inBuiltPrompts": "gobasic.js;gosql.js",
"flexigpt.defaultProvider": "openai",

// openai provider configuration
"flexigpt.openai.apiKey": "sk-mkey",
"flexigpt.openai.timeout": "120",
"flexigpt.openai.defaultCompletionModel": "gpt-3.5-turbo",
"flexigpt.openai.defaultChatCompletionModel": "gpt-3.5-turbo",
"flexigpt.openai.defaultOrigin": "https://api.openai.com",

// anthropic provider configuration
"flexigpt.anthropic.apiKey": "sk-mkey",
"flexigpt.anthropic.timeout": "120",
"flexigpt.anthropic.defaultCompletionModel": "claude-3-haiku-20240307",
"flexigpt.anthropic.defaultChatCompletionModel": "claude-3-haiku-20240307",
"flexigpt.anthropic.defaultOrigin": "https://api.anthropic.com",

// huggingface provider configuration
"flexigpt.huggingface.apiKey": "hf-mkey",
"flexigpt.huggingface.timeout": "120",
"flexigpt.huggingface.defaultCompletionModel": "bigcode/starcoder2-15b",
"flexigpt.huggingface.defaultChatCompletionModel": "deepseek-ai/deepseek-coder-1.3b-instruct",
"flexigpt.huggingface.defaultOrigin": "https://api-inference.huggingface.co",

// googlegl provider configuration
"flexigpt.googlegl.apiKey": "gl-mkey",
"flexigpt.googlegl.timeout": "120",
"flexigpt.googlegl.defaultCompletionModel": "gemini-1.0-pro",
"flexigpt.googlegl.defaultChatCompletionModel": "gemini-1.0-pro",
"flexigpt.googlegl.defaultOrigin": "https://generativelanguage.googleapis.com",

// llamacpp provider configuration
"flexigpt.llamacpp.apiKey": "",
"flexigpt.llamacpp.timeout": "120",
"flexigpt.llamacpp.defaultOrigin": "127.0.0.1:8080",
```
The OpenAI provider requires an API key to function. You can obtain one from your OpenAI account settings here.

Supported APIs: completion and chat completion.

Supported models (all models supported by the above two APIs):

- gpt-4
- gpt-4-*
- gpt-3.5-turbo
- gpt-3.5-turbo-*

FlexiGPT uses `defaultChatCompletionModel: gpt-3.5-turbo`, unless the prompt overrides it.

For an example of how to use the Function calling feature of OpenAI, look at this prompt file here.

Configuration options:

- Default model: `gpt-3.5-turbo`. Note that `gpt-3.5-turbo` usage is accounted in OpenAI's billing. The only free model in beta as of Feb 2023 is codex (`code-davinci-002`).
- Default origin: `https://api.openai.com`.
The Anthropic provider requires an API key to function. You can obtain one from the Anthropic website here.

Supported API

Supported models: `claude-3-*`, `claude-2*`

FlexiGPT uses `defaultChatCompletionModel: claude-3-haiku-20240307`, unless the prompt overrides it.

Configuration options:

- Default model: `claude-3-haiku-20240307`.
- Default origin: `https://api.anthropic.com`.
The Huggingface provider requires an API key to function. You can obtain one from the Huggingface website here.

Supported API

Supported models: all models supported by the above API.

FlexiGPT uses `defaultChatCompletionModel: deepseek-ai/deepseek-coder-1.3b-instruct`, unless the prompt overrides it.

Configuration options:

- Default chat completion model: `deepseek-ai/deepseek-coder-1.3b-instruct`.
- Default completion model: `bigcode/starcoder2-15b`.
- Default origin: `https://api-inference.huggingface.co`.
The Googlegl provider requires an API key to function. You can obtain one from the website here.

Supported API: https://ai.google.dev/api/rest/v1/models/generateContent

Supported models:

- gemini-1.0-pro
- chat-bison-001 (legacy)
- text-bison-001 (legacy)

FlexiGPT uses `defaultChatCompletionModel: gemini-1.0-pro`, unless the prompt overrides it.

Configuration options:

- Default completion model: `gemini-1.0-pro`.
- Default chat completion model: `gemini-1.0-pro`.
- Default origin: `https://generativelanguage.googleapis.com`.
Setup a llama.cpp server as noted here.

Supported APIs:

- `<your host:port of the llama server>/completion`

Supported models: all models supported by the above APIs. Note that the model in llama.cpp needs to be given when running the server itself and cannot be given at each request level.

Configuration options:

- Default origin: `http://127.0.0.1:8080`.
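The request flow above can be sketched in plain JavaScript. This is a minimal illustration, not FlexiGPT's internal code: `buildCompletionRequest` is a hypothetical helper, and it assumes a llama.cpp server listening at the default origin.

```javascript
// Minimal sketch: build a request for a llama.cpp server's /completion
// endpoint at the configured defaultOrigin. The model cannot be chosen
// per request; it is fixed when the server is started.
const DEFAULT_ORIGIN = "http://127.0.0.1:8080";

function buildCompletionRequest(prompt, origin = DEFAULT_ORIGIN) {
  return {
    url: `${origin}/completion`,
    options: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      // Only the prompt and sampling parameters go in the body.
      body: JSON.stringify({ prompt, n_predict: 64 }),
    },
  };
}

// Usage (requires a running llama.cpp server):
// const { url, options } = buildCompletionRequest("// fibonacci in go\n");
// fetch(url, options).then((r) => r.json()).then((d) => console.log(d.content));
```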
Get code using a comment in the editor.

- Keyboard shortcut: `Ctrl + Alt + G`.
- Command palette: you should get a `FlexiGPT: Get Code` option to click/enter.

Steps to get all the below functionality (similar for all configured prompts; inbuilt or custom):

- Press `Ctrl + Alt + A`.
- Rectify and refactor selected code.
- Create a unit test for selected code.
- Complete the selection.
- Explain the selection.
- Generate documentation for the selected code.
- Find problems with the selection, fix them, and explain what was wrong.
- Optimize the selected code.
- Use `{selection}` or `{readfile}` to enhance your chat with the AI. (Same capability as available for prompt files.)

The chat activity bar can be opened in the following ways:
- `FlexiGPT: Ask` option to click/enter.
- Command palette (`Ctrl/Cmd + Shift + P`): you should get a `FlexiGPT: Ask` option to click/enter.
- Keyboard shortcut: `Ctrl + Alt + A`.
Search for Stack Overflow questions from your editor.

- Keyboard shortcut: `Ctrl + Alt + S`.
- `FlexiGPT: Stackoverflow Search` option to click/enter.
- Command palette (`Ctrl/Cmd + Shift + P`): you should get a `FlexiGPT: Stackoverflow Search` option to click/enter.
Run custom CLI commands defined via `cliCommands` in your prompt files.

- Keyboard shortcut: `Ctrl + Alt + C`.
- `FlexiGPT: Run CLI Command` option to click/enter.
- Command palette (`Ctrl/Cmd + Shift + P`): you should get a `FlexiGPT: Run CLI Command` option to click/enter.
Engineer and fine-tune prompts, save them, and use them directly within VSCode.

- Supports request parameter modifications for GPT APIs.
- Predefined system variables can be used to enhance your question:
  - `{system.selection}` or just `{selection}` to pass the selected text in the editor (code or otherwise).
  - `{system.readfile}` or just `{readfile}` to pass the open file.
  - `{system.readfile <your file path>}` to pass the file at a given path.
  - The `system.` prefix for a system variable is optional. Therefore, you can use just `{selection}` for the selected text, or `{language}` instead of `{system.language}` for the language of your file.
- Supports post-processing of responses via responseHandlers in prompts. Multiple inbuilt predefined responseHandlers are available; custom responseHandlers are also supported. An example can be found here.
- The Function calling feature of GPT3.5/4 models is also supported. An example can be found in this prompt file.
On clicking the input text box, basic prompts provided by FlexiGPT itself, any prompts defined in `flexigpt.promptFiles`, and any inbuilt prompts enabled using `flexigpt.inBuiltPrompts`, as defined in the configuration, should be loaded. (If the first click on the text box doesn't load some preconfigured prompts, try escaping the options and clicking again. VSCode may take some time to load dynamic lists from files.)

If you select a preconfigured prompt, the question template defined in the prompt command will be used after substituting the defined system/user variables. Other command options will also be taken from the definition itself.

If you type a free-floating question in the text box, the text itself will be used as the prompt directly. You can use predefined system variables to enhance your free-floating question too:
- `{selection}` to pass the selected text in the editor.
- `{readfile}` to pass the open file.

Inbuilt prompts:

- FlexiGPT basic prompts (Default: enabled)
- Go basic prompts (Default: disabled, enable in configuration)
- Go sqlx + squirrel prompts (Default: disabled, enable in configuration)
```js
module.exports = {
  namespace: "myprompts",
  commands: [
    {
      name: "Refactor",
      template: `Refactor following function.
function:
{system.selection}`,
    },
  ],
};
```
```js
module.exports = {
  namespace: "MyComplexPrompts",
  commands: [
    {
      name: "Create unit test.",
      template: `Create unit test in {user.unitTestFramework} framework for following function.
code:
{system.selection}`,
      responseHandler: {
        func: "writeFile",
        args: {
          filePath: "user.testFileName",
        },
      },
      requestparams: {
        model: "gpt-3.5-turbo",
        stop: ["##", "func Test", "package main", "func main"],
      },
    },
    {
      name: "Write godoc",
      template: `Write godoc for following functions.
code:
{system.selection}`,
      responseHandler: {
        func: "append",
        args: {
          position: "start",
        },
      },
      requestparams: {
        model: "code-davinci-002",
        stop: ["##", "func Test", "package main", "func main"],
      },
    },
  ],
  functions: [
    // You could also write your own responseHandler.
    // Note that it takes a single object as input.
    function myHandler({ system, user }) {
      console.table({ system });
      console.table({ user });
    },
  ],
  variables: [
    {
      name: "unitTestFramework",
      value: "testing",
    },
    {
      name: "testFileName",
      value: ({ baseFolder, fileName, fileExtension }) =>
        `${baseFolder}\\${fileName}_test${fileExtension}`,
    },
  ],
  cliCommands: [
    {
      name: "Go generate all",
      command: `go generate ./...`,
      description: "Run go generate in the workspace",
    },
  ],
};
```
- name: Required.
- description: Optional.
- template: Required. Can use system and user variables:
  - `{system.*variableName*}`: variableName can be one of the Predefined System Variables. You can also pass parameters to functions like readFile. E.g.: `{readfile user.testFile}` is a valid template variable, where the input to readfile is the file pointed to by the user-defined variable testFile.
  - `{user.*variableName*}`: variableName must be in the variables field of the prompt file.
- requestparams: Optional. This should be an object of type `{ [key: string]: any }`.
- responseHandler: Optional. Used to handle a response. By default, the replace function is used. The handler function can be one of the Predefined System Functions or a user-defined function.

You can set responseHandler in the following ways:
```js
responseHandler: "replace";
```

```js
responseHandler: {
  func: 'replace',
  args: {
    textToReplace: 'user.answerModified'
  }
}
```
Any of the `variables` items can be used in a command template. User-defined values must have the "user" prefix. For example, if "testFileName" is defined in variables, it can be used as "user.testFileName" in the template file or passed to a function.

Variable values can be static or dynamic. For dynamic values, you should create a getter method. When the variable getter is called, a single object with system variables (see Predefined System Variables) is passed as the first argument; any other vars can be taken as the next arguments.
```js
module.exports = {
  variables: [
    {
      // static
      name: "testingFramework",
      value: "xUnit",
    },
    {
      // dynamic
      name: "typeNameInResponse",
      value: ({ answer /* system variable */ }, myTestFile /* user defined var */) => {},
    },
  ],
  functions: [
    function extractTypeName({ code, system }) {
      /**/
    },
    function myOtherFunc() {},
  ],
  commands: [
    {
      name: "Create DTO",
      template: `Create unit test with {user.testingFramework} for following class.
class:
{system.selection}`,
      responseHandler: {
        func: "writeFile",
        args: {
          filePath: "user.typeNameInResponse" /* usage for function arg */,
        },
      },
    },
  ],
};
```
All vars are case-insensitive.
| Variable Name | Description |
| --- | --- |
| system.selection | Selected text in editor |
| system.question | OpenAI question |
| system.answer | OpenAI answer |
| system.language | Programming language of active file |
| system.baseFolder | Project base path |
| system.fileFolder | Parent folder path of active file |
| system.fileName | Name of active file |
| system.filePath | Full path of active file |
| system.fileExtension | Extension of active file |
| system.commitAndTagList | Last 25 commits and associated tags |
| system.readFile | Read the full open editor file. Optionally pass a file path as a second argument |
Note that the `system.` prefix for a system variable is optional. Therefore, you can use just `{selection}` for the selected text, or `{language}` instead of `{system.language}` for the language of your file.
Custom functions can also be added to the prompt file's `functions` list.

| Function Name | Description | params (default) |
| --- | --- | --- |
| append | Append text | textToAppend (system.answer), position ('end') |
| replace | Replace selected text | textToReplace (system.answer) |
| writeFile | Write text to file. Append if file exists. | filePath (), content (system.answer) |
Replace

Replaces text with the selection. Takes an optional parameter `textToReplace`; by default, its value equals the API answer.

Default usage:

```js
...
commands: [
  {
    name: "Refactor",
    template: `Refactor following function.
function:
{system.selection}`,
    responseHandler: "replace",
  },
],
```
Usage with params:

```js
...
commands: [
  {
    name: "Refactor",
    template: `Refactor following function.
function:
{system.selection}`,
    responseHandler: {
      func: "replace",
      args: {
        textToReplace: "user.answerModified",
      },
    },
  },
],
variables: [
  {
    name: "answerModified",
    value: ({ answer }) => `/*\n${answer}\n*/`,
  },
],
```
Append

Appends text to the selection. Takes optional parameters `textToAppend` and `position`; `position` can be `start` or `end`. By default, `textToAppend` equals the OpenAI answer and `position` is the end of the selection.

Sample usage:

```js
...
commands: [
  {
    name: "Append",
    template: `Write jsdoc for following function.
function:
{system.selection}`,
    responseHandler: {
      func: "append",
      args: {
        position: "start",
      },
    },
  },
],
```
- name: Required.
- description: Optional.
- command: Required.
| Functional areas | Features and Implementations | Status |
| --- | --- | --- |
| Flexibility to talk to any AI | Integration with multiple AI providers through APIs. | Done |
| | Support parameter selection and handle different response structures. | Done |
| Flexibility to use custom prompts | Support for prompt engineering that enables creating and modifying prompts via a standard structure. | Done |
| | Allow request parameter modification. | Done |
| | Allow adding custom response handlers to massage the response from AI. | Done |
| | Provide common predefined variables that can be used to enhance the prompts. | Done |
| | Provide extra prompt enhancements using custom variables that can be static or function getters. This should allow function definitions in the prompt structure and integrate the results into prompts. Also allow passing system vars, user vars, or static strings as inputs. | Done |
| | Provide the capability to evaluate different prompts, assign ELO ratings, and choose and save the strongest. | Long term |
| Seamless UI integration | Design a flexible UI: a chat interface integrated into the VSCode activity bar. | Done |
| | The UI must support saving, loading, and exporting of conversations. | Done |
| | Implement streaming typing in the UI, creating a feeling that the AI bot is typing itself. | Long term |
| Adhoc queries/tasks | Help the developer ask ad hoc queries to the AI, describing questions or issues using the chat interface. This can be used to debug issues, understand behaviour, get hints on things to look out for, etc. The developer should be able to attach code or files to their questions. | Done |
| | Provide a way to define pre-cooked CLI commands and fire them as needed. The interface to define CLI commands should be similar to prompts. | Done |
| | Provide a way to search queries on StackOverflow. | Done |
| | Provide a way to get results for queries from StackOverflow answers and a corresponding AI answer. | Long term |
| Code completion and intelligence | Provide a way to generate code from a code comment. | Done |
| | Provide a way to complete, refactor, edit, or optimize code via the chat interface. Should allow selecting relevant code from the editor as needed. | Done |
| | Implement a context management system integrated with the Language Server Protocol (LSP) that can be used to enrich AI interactions. | Medium term |
| | Support generating code embeddings to understand the code context and integrate them into prompts. | Medium term |
| | Develop an intelligent code completion feature that predicts the next lines of code. It should integrate context (LSP or embeddings) into autocomplete prompts and handle autocomplete responses in the UI. | Medium term |
| Code review and intelligence | Provide a way to review via the chat interface. Should allow selecting relevant code from the editor as needed. | Done |
| | Ability to fetch a Merge/Pull request from GitHub, GitLab, or other version providers, analyse it, and provide review comments. Should provide flexibility to specify review areas and associated priorities depending on the use case. | Medium term |
| | Provide automated code reviews and recommendations. Should provide subtle indicators for code improvements and handle code review API responses in the UI. | Long term |
| | Provide automated refactoring suggestions. Should handle refactoring API responses and display suggestions in the UI. | Long term |
| | Provide automated security suggestions. Should be able to identify potential vulnerabilities being added or deviations from security best practices used in code. | Long term |
| Code documentation assistance | Generate documentation for the selected code using the chat interface. Should allow selecting relevant code from the editor as needed. | Done |
| | Develop effective inline documentation assistance. It should automatically generate and update documentation based on the code and display it in the UI. | Long term |
| Code understanding and learning support | Provide a way to explain code via the chat interface. Should allow selecting relevant code from the editor as needed. | Done |
| | Develop/integrate with an integrated knowledge graph to provide detailed explanations of services, APIs, methods, algorithms, and concepts the developer is using or may want to use. | Long term |
| | Integrate graph search into prompts. | Long term |
| Testing | Provide a way to generate unit tests via the chat interface. Should allow selecting relevant code from the editor as needed. Should have the ability to insert tests in new files or the current file as needed. | Done |
| | Provide a way to generate API and associated workflow tests via the chat interface. Should allow selecting relevant code/API definitions from the editor as needed. Should have the ability to insert tests in new files or the current file as needed. | Short term |
FlexiGPT is fully open-source software licensed under the MIT license.
Contributions are welcome! Feel free to submit a pull request on GitHub.
If you have any questions or problems, please open an issue on GitHub at the issues page.