The Gemini API is free, but there are many tools that work exclusively with the OpenAI API.
This project provides a personal OpenAI-compatible endpoint for free.
Although it runs in the cloud, it does not require server maintenance. It can be easily deployed to various providers for free (with generous limits suitable for personal use).
> [!TIP]
> Running the proxy endpoint locally is also an option, though it's more appropriate for development use.
You will need a personal Google API key.
> [!IMPORTANT]
> Even if you are located outside of the supported regions, it is still possible to acquire one using a VPN.
Deploy the project to one of the providers, using the instructions below. You will need to set up an account there.
If you opt for “button-deploy”, you'll be guided through the process of forking the repository first, which is necessary for continuous integration (CI).
**Deploy with Vercel**

- Deploy from the CLI: `vercel deploy`
- Serve locally: `vercel dev`

**Deploy to Netlify**

- Deploy from the CLI: `netlify deploy`
- Serve locally: `netlify dev`
- Two API bases are provided: `/v1` (e.g. the `/v1/chat/completions` endpoint) and `/edge/v1`

**Deploy to Cloudflare**

- Manually: paste the content of `src/worker.mjs` into https://workers.cloudflare.com/playground (see the `Deploy` button there)
- Deploy from the CLI: `wrangler deploy`
- Serve locally: `wrangler dev`
**Serve locally with Node, Deno, or Bun**

See details here.
Only for Node: `npm install`.
Then: `npm run start` / `npm run start:deno` / `npm run start:bun`.
**Dev mode**

Only for Node: `npm install --include=dev`.
Then: `npm run dev` / `npm run dev:deno` / `npm run dev:bun`.
If you open your newly-deployed site in a browser, you will only see a `404 Not Found` message. This is expected, as the API is not designed for direct browser access. To use it, enter your API address and your Gemini API key into the corresponding fields in your software settings.
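To confirm the deployment responds before wiring up other software, you can query the `models` endpoint (listed as supported below). A minimal sketch, assuming Node 18+ (for built-in `fetch`), the hypothetical deployment URL used throughout this page, and a `GEMINI_API_KEY` environment variable; OpenAI-compatible clients send the key as a standard Bearer token, which is how the proxy receives your Gemini key:

```ts
// Run as an ES module; Node 18+ provides fetch globally.
const BASE = "https://my-super-proxy.vercel.app/v1"; // your deployment URL

const res = await fetch(`${BASE}/models`, {
  headers: { Authorization: `Bearer ${process.env.GEMINI_API_KEY}` },
});
console.log(res.status);       // expect 200, not 404
console.log(await res.json()); // list of available models
```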
> [!NOTE]
> Not all software tools allow overriding the OpenAI endpoint, but many do (though these settings can sometimes be deeply hidden).
Typically, you should specify the API base in this format:
`https://my-super-proxy.vercel.app/v1`
The relevant field may be labeled as "OpenAI proxy". You might need to look under "Advanced settings" or similar sections. Alternatively, it could be in some config file (check the relevant documentation for details).
For some command-line tools, you may need to set an environment variable, e.g.:

`OPENAI_BASE_URL="https://my-super-proxy.vercel.app/v1"`

...or:

`OPENAI_API_BASE="https://my-super-proxy.vercel.app/v1"`
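The same override works in code for tools built on the official OpenAI SDK. A minimal sketch using the `openai` npm package, assuming the hypothetical deployment URL from the examples above and a `GEMINI_API_KEY` environment variable:

```ts
import OpenAI from "openai";

// Point the SDK at the proxy instead of api.openai.com; the API key slot
// carries your Gemini key, not an OpenAI one.
const client = new OpenAI({
  baseURL: "https://my-super-proxy.vercel.app/v1",
  apiKey: process.env.GEMINI_API_KEY!,
});

const completion = await client.chat.completions.create({
  model: "gemini-1.5-flash-latest", // a Gemini model name, passed through as-is
  messages: [{ role: "user", content: "Hello!" }],
});
console.log(completion.choices[0].message.content);
```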
Requests use the specified model if its name starts with "gemini-", "learnlm-", or "models/". Otherwise, these defaults apply:

- `chat/completions`: `gemini-1.5-pro-latest`
- `embeddings`: `text-embedding-004`
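To illustrate the fallback, here is a sketch reusing the `client` configured in the previous example: a request naming a non-Gemini model is still served, just by the default model, since the name lacks one of the recognized prefixes.

```ts
// "gpt-4o" has no "gemini-"/"learnlm-"/"models/" prefix, so the proxy
// falls back to the default chat model, gemini-1.5-pro-latest.
const fallback = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Which model are you?" }],
});
console.log(fallback.choices[0].message.content);
```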
Vision and audio input are supported as per the OpenAI specs, implemented via `inlineData`.
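For instance, an image can be passed in the standard OpenAI content-parts format as a base64 data URL. A sketch reusing the `client` from above, with a hypothetical local file `photo.jpg`:

```ts
import { readFileSync } from "node:fs";

// Encode a local image as a base64 data URL, per the OpenAI vision spec;
// the proxy forwards it to Gemini as inlineData.
const image = readFileSync("photo.jpg").toString("base64");

const vision = await client.chat.completions.create({
  model: "gemini-1.5-flash-latest",
  messages: [{
    role: "user",
    content: [
      { type: "text", text: "Describe this picture." },
      { type: "image_url", image_url: { url: `data:image/jpeg;base64,${image}` } },
    ],
  }],
});
console.log(vision.choices[0].message.content);
```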
**Supported API endpoints and applicable parameters**

- `chat/completions` (see the sketches after this list)

  Currently, most of the parameters that are applicable to both APIs have been implemented, with the exception of function calls.
  - `messages`
    - `content`
    - `role`
      - `system` (=> `system_instruction`)
      - `user`
      - `assistant`
      - `tool` (v1beta)
    - `name`
    - `tool_calls`
  - `model`
  - `frequency_penalty`
  - `logit_bias`
  - `logprobs`
  - `top_logprobs`
  - `max_tokens`
  - `n` (`candidateCount` <8, not for streaming)
  - `presence_penalty`
  - `response_format`
  - `seed`
  - `service_tier`
  - `stop`: string|array (`stopSequences` [1,5])
  - `stream`
  - `stream_options`
    - `include_usage`
  - `temperature` (0.0..2.0 for OpenAI, but Gemini supports up to infinity)
  - `top_p`
  - `tools` (v1beta)
  - `tool_choice` (v1beta)
  - `parallel_tool_calls`
  - `user`
- `completions`
- `embeddings`
- `models`
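For illustration, a streaming `chat/completions` request exercising several of the parameters above might look like the following. A minimal sketch with the same hypothetical deployment URL and key as before; the parameter values are arbitrary:

```ts
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://my-super-proxy.vercel.app/v1", // your deployment
  apiKey: process.env.GEMINI_API_KEY!,             // your Gemini key
});

// Streaming request exercising several supported parameters.
const stream = await client.chat.completions.create({
  model: "gemini-1.5-pro-latest",
  messages: [
    { role: "system", content: "Answer tersely." }, // becomes system_instruction
    { role: "user", content: "Name three prime numbers." },
  ],
  temperature: 0.4, // within both APIs' ranges
  max_tokens: 200,
  stop: ["\n\n"],   // becomes stopSequences
  stream: true,
  stream_options: { include_usage: true },
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
}
```

And `embeddings` through the same client; the model is named explicitly here, though an unprefixed name would fall back to `text-embedding-004` per the defaults above:

```ts
const emb = await client.embeddings.create({
  model: "text-embedding-004",
  input: "Hello, world",
});
console.log(emb.data[0].embedding.length); // dimensionality of the vector
```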