Set up a free OpenAI GPT-4 API on your own
Follow these steps to get gpt4free-demo up and running:
Clone the Repository:
git clone https://github.com/username/gpt4free-demo.git
cd gpt4free-demo
Set Up Environment Variables: Copy the example environment file and set up your own variables:
cp .env.example .env
Open .env with your preferred text editor and fill in your own values for the given variables. Save and close the file when you're finished.
Start the Services: Launch the services using Docker Compose:
docker-compose up -d
If you change any environment variables in your .env file, restart your services with docker-compose down followed by docker-compose up -d.
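If you want to confirm the API came up before moving on, a small polling script like the sketch below can help. This is a hypothetical helper (the file name check-api.ts and the function waitForApi are made up, not part of the repo); it assumes Node 18+ for the built-in fetch and uses the /supports endpoint and port 13000 described in the next step.

```typescript
// check-api.ts — poll the local API until it responds (hypothetical helper).
// Assumes Node 18+ (built-in fetch) and the default port 13000 used below.

async function waitForApi(url: string, attempts = 10, delayMs = 2000): Promise<void> {
  for (let i = 1; i <= attempts; i++) {
    try {
      const res = await fetch(url);
      if (res.ok) {
        console.log(`API is up (HTTP ${res.status}) after ${i} attempt(s).`);
        return;
      }
      console.log(`Attempt ${i}: got HTTP ${res.status}, retrying...`);
    } catch {
      console.log(`Attempt ${i}: connection refused, retrying...`);
    }
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  throw new Error(`API did not become reachable at ${url}`);
}

waitForApi("http://127.0.0.1:13000/supports").catch((err) => {
  console.error(err);
  process.exit(1);
});
```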
Access the API: Once the services are running, the API will be accessible at:
http://127.0.0.1:13000/supports [GET]
http://127.0.0.1:13000/ask?prompt=***&model=***&site=*** [POST/GET]
http://127.0.0.1:13000/ask/stream?prompt=***&model=***&site=*** [POST/GET]
More usage examples can be found at xiangsx/gpt4free-ts.
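As a quick illustration of calling these endpoints from code, here is a minimal TypeScript sketch. It assumes Node 18+ (for the built-in fetch), reuses the site and model values from the Hurl example further down (vita, gpt-3.5-turbo), and simply prints the raw response text; the exact response format depends on the site and model you choose, and the file name ask-example.ts is made up.

```typescript
// ask-example.ts — call the /ask and /ask/stream endpoints shown above (illustrative sketch).

const BASE = "http://127.0.0.1:13000";

// Simple, non-streaming request: returns the whole response body as text.
async function ask(prompt: string): Promise<string> {
  const params = new URLSearchParams({ prompt, model: "gpt-3.5-turbo", site: "vita" });
  const res = await fetch(`${BASE}/ask?${params}`);
  if (!res.ok) throw new Error(`Request failed: HTTP ${res.status}`);
  return res.text();
}

// Streaming request: print the body chunk by chunk as it arrives.
async function askStream(prompt: string): Promise<void> {
  const params = new URLSearchParams({ prompt, model: "gpt-3.5-turbo", site: "vita" });
  const res = await fetch(`${BASE}/ask/stream?${params}`);
  if (!res.ok || !res.body) throw new Error(`Request failed: HTTP ${res.status}`);
  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    process.stdout.write(decoder.decode(value, { stream: true }));
  }
}

async function main() {
  console.log(await ask("Tell me a joke about Software Engineering"));
  await askStream("Tell me a joke about Software Engineering");
}

main().catch(console.error);
```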
Testing the API with Hurl
Hurl is a command-line tool for running HTTP requests. You can use it to test this project's API endpoints. Here's how to get started:
Install Hurl: Follow the instructions on the official website to install Hurl on your system.
Create a Hurl File: Create a file with a .hurl extension to define the HTTP requests you want to test. Here's an example gpt.hurl file for this project:
# List all supported models
GET http://127.0.0.1:13000/supports
# Call Vita model
GET http://127.0.0.1:13000/ask
[QueryStringParams]
site: vita
model: gpt-3.5-turbo
prompt: Tell me a joke about Software Engineering
Run the Hurl File: Use the following command to execute the gpt.hurl file:
hurl --verbose gpt.hurl
This will run the defined HTTP requests and print the responses to the terminal.
Read the Documentation: For more advanced usage, refer to the Hurl samples documentation.