The must-have resource for anyone who wants to experiment with and build on the OpenAI Vision API. This repository serves as a hub for innovative experiments, showcasing a variety of applications ranging from simple image classification to advanced zero-shot learning models. It's a space for both beginners and experts to explore the capabilities of the Vision API, share their findings, and collaborate on pushing the boundaries of visual AI.
Experimenting with the OpenAI API requires an API key. You can get one here.
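Most experiments in this list follow the same basic pattern: encode an image as a base64 data URL and send it alongside a text prompt in a chat-completions request. A minimal sketch of that request payload is shown below; the model name and payload shape are assumptions based on the public OpenAI docs, so check the current API reference before relying on them.

```python
import base64


def image_to_data_url(image_bytes: bytes, mime: str = "image/jpeg") -> str:
    """Encode raw image bytes as a base64 data URL the Vision API accepts."""
    b64 = base64.b64encode(image_bytes).decode("utf-8")
    return f"data:{mime};base64,{b64}"


def build_vision_request(prompt: str, image_bytes: bytes) -> dict:
    """Build a chat-completions payload pairing one text prompt with one image.

    The model name ("gpt-4o") is an assumption; substitute whichever
    vision-capable model you have access to.
    """
    return {
        "model": "gpt-4o",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {
                        "type": "image_url",
                        "image_url": {"url": image_to_data_url(image_bytes)},
                    },
                ],
            }
        ],
        "max_tokens": 300,
    }
```

You would then pass this payload to the OpenAI client (e.g. `client.chat.completions.create(**payload)` with the `openai` Python package) using your API key; the actual network call is omitted here.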
| experiment | complementary materials | authors |
|---|---|---|
| WebcamGPT - chat with video stream | | @SkalskiP |
| HotDogGPT - simple image classification application | | @SkalskiP |
| zero-shot image classifier with GPT-4V | | @capjamesg |
| zero-shot object detection with GroundingDINO + GPT-4V | | @capjamesg |
| GPT-4V vs. CLIP | | @capjamesg |
| GPT-4V with Set-of-Mark (SoM) | | Jianwei Yang, Hao Zhang, Feng Li, Xueyan Zou, Chunyuan Li, Jianfeng Gao |
| GPT-4V on Web | | @Jiayi-Pan |
| automated voiceover of NBA game | | @SkalskiP |
| screenshot-to-code | | @abi |
| GPT with Vision Checkup | | Roboflow team |
We would love your help in making this repository even better! Whether you want to add a new experiment or have any suggestions for improvement, feel free to open an issue or pull request.
If you are up to the task and want to add a new experiment, please take a look at our contribution guide, where you will find all the information you need.