This repository contains a community-maintained Java library for the OpenAI API, one of the simplest ways to use GPT-3/4 in your applications.
This Java library (community maintained) provides a convenient way to interact with OpenAI's API for moderation and chat completions. It wraps the necessary details so that OpenAI's models can be integrated into your Java applications with little effort. This project is not maintained by OpenAI; it is an unofficial library.
Please refer to CONTRIBUTING.md.
All OpenAI keys ("OPENAI_KEYS") must be read through the readKeys() function defined here, which lets you read several keys at once; for multithreaded tasks, using multiple keys is recommended to avoid rate limits. To use it, you need a keys.txt file in the project's root folder (feel free to edit it).
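For illustration, a minimal sketch of loading the keys this way, assuming keys.txt lives in the project root with one key per line and that readKeys() is the helper described above:
// Illustrative sketch: load every API key listed in keys.txt (one key per line, project root)
ArrayList<String> keyList = readKeys();
String openaiKey = keyList.get(0); // a single-threaded call can use any one key; concurrent calls can take the whole list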
To use the Chat Completion API, follow these steps:
Message message = Message.builder()
        .role("user")
        .content("what is the capital of Cambodia?")
        .build();

List<Message> messages = new ArrayList<>();
messages.add(message);

ChatCompletionRequest request = ChatCompletionRequest.builder()
        .model("gpt-3.5-turbo")
        .messages(messages)
        .build();

ChatCompletionResponse response = new EasyopenaiService(new DAOImpl()).chatCompletion("OPENAI_KEY", request);
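As a quick sanity check, the returned ChatCompletionResponse can simply be printed (this relies only on the object's toString(); inspect its fields for structured access):
System.out.println(response);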
Click here to jump to the code example.
To use the Moderation API, follow these steps:
ModerationAPIRequest request = ModerationAPIRequest.builder()
        .model("text-moderation-latest")
        .input("hello from the other side kill me now")
        .build();

ModerationAPIResponse res = new EasyopenaiService(new DAOImpl()).getmoderation("OPENAI_KEY", request);
Click here to jump to the code example.
The Vision API can be used as follows; feel free to add as many images to the image list as you need:
VisionApiResponse responseobj = new EasyVisionService().VisionAPI("OPENAI_KEY", new EasyVisionRequest()
        .setModel("gpt-4-vision-preview")
        .setPrompt("What are the differences between these images?")
        .setImageUrls(new ArrayList<String>() {{
            add("https://images.pexels.com/photos/268533/pexels-photo-268533.jpeg?auto=compress&cs=tinysrgb&w=1260&h=750&dpr=1");
            add("https://images.pexels.com/photos/36717/amazing-animal-beautiful-beautifull.jpg?auto=compress&cs=tinysrgb&w=1260&h=750&dpr=1");
        }})
        .setMaxtokens(1000)); // Optional - for more tokens, defaults to 300
Click here to jump to the code example.
To use the Vision API by building the request object manually, follow these steps:
EasyopenaiService obj = new EasyopenaiService(new DAOImpl());

VisionApiRequest request = new VisionApiRequest();
ImageUrl url = new ImageUrl();
url.setUrl("https://images.pexels.com/photos/18907092/pexels-photo-18907092/free-photo-of-a-photo-of-the-golden-gate-bridge-in-the-sky.jpeg");
url.setDetail("low");

Content content1 = new Content();
content1.setText("What’s in this image?");
content1.setType("text");

Content content2 = new Content();
content2.setImageUrl(url);
content2.setType("image_url");

ArrayList<Content> listofContent = new ArrayList<>();
listofContent.add(content1);
listofContent.add(content2);

MessageList messageList = new MessageList();
messageList.setRole("user");
messageList.setContent(listofContent);

ArrayList<MessageList> listofMessage = new ArrayList<>();
listofMessage.add(messageList);

request.setModel("gpt-4-vision-preview");
request.setMaxTokens(300);
request.setMessages(listofMessage);

VisionApiResponse res = obj.visionAPI(OPENAI_KEY, request);
System.out.println("Vision API Response is: " + res);
Click here to jump to the code example.
The Speech (text-to-speech) API can be used as follows; adjust as needed:
SpeechRequest request = SpeechRequest.builder()
        .model("tts-1")
        .input("Easy OpenAI is the best solution.")
        .voice("alloy")
        .build();

ResponseBody response = new EasyopenaiService(new DAOImpl()).createSpeech("OPENAI_KEY", request);
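createSpeech returns a raw ResponseBody (okhttp3), so the audio bytes need to be written out yourself. A minimal sketch, where the output file name is only an assumption:
// Illustrative sketch: persist the returned audio to a local file (file name is an assumption)
byte[] audioBytes = response.bytes(); // okhttp3.ResponseBody -> raw bytes; may throw IOException
java.nio.file.Files.write(java.nio.file.Paths.get("speech.mp3"), audioBytes);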
Click here to jump to the code example.
The Transcription API can be used as follows; adjust as needed:
EasyTranscriptionRequest request = EasyTranscriptionRequest.builder()
        .filepath("/Users/namankhurpia/Desktop/audio.mp3")
        .model("whisper-1")
        .build();

ResponseBody response = new EasyTranscriptionService().EasyTranscription("OPENAI_KEY", request);
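The transcription also comes back as a raw ResponseBody (okhttp3); a minimal sketch of reading it as plain text:
// Illustrative sketch: read the transcription body as a string; may throw IOException
String transcript = response.string();
System.out.println(transcript);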
Click here to jump to the code example.
The Transcription API can also be called directly through EasyopenaiService, building the multipart request yourself; adjust as needed:
File audioFile = new File("/Users/namankhurpia/Desktop/audio.mp3");
MultipartBody.Part filePart = MultipartBody.Part.createFormData(
        "file",
        audioFile.getName(),
        RequestBody.create(MediaType.parse("audio/*"), audioFile)
);

RequestBody model = RequestBody.create(MediaType.parse("text/plain"), "whisper-1");
RequestBody language = RequestBody.create(MediaType.parse("text/plain"), "");
RequestBody prompt = RequestBody.create(MediaType.parse("text/plain"), "");
RequestBody response_format = RequestBody.create(MediaType.parse("text/plain"), "");
RequestBody temperature = RequestBody.create(MediaType.parse("text/plain"), "");

ResponseBody response = new EasyopenaiService(new DAOImpl()).createTranscriptions("OPENAI_KEY", filePart, model, language, prompt, response_format, temperature);
Click here to jump to the code example.
The image generation API can be used as follows; adjust as needed:
ImageRequest imageRequest = ImageRequest.builder()
        .prompt("Generate a cute dog playing with a frisbee")
        .model("dall-e-2")
        .build();

ImageResponse response = new EasyopenaiService(new DAOImpl()).createImage("OPENAI_KEY", imageRequest);
Click here to jump to the code example.
To use the Chat Completion API asynchronously, follow these steps:
Initialize an EasyopenaiAsyncService object with an AsyncDAOImpl instance:
EasyopenaiAsyncService obj = new EasyopenaiAsyncService(new AsyncDAOImpl());
Create a list of ChatMessage objects to represent the conversation:
ChatMessage message = new ChatMessage();
message.setRole("user");
message.setContent("what is the capital of Cambodia?");

List<ChatMessage> messages = new ArrayList<>();
messages.add(message);
Create a ChatCompletionRequest object:
ChatCompletionRequest request = new ChatCompletionRequest();
request.setModel("gpt-3.5-turbo");
request.setMessages(messages); // old conversations can be included as well
Make the asynchronous API call:
ChatCompletionResponse res = obj.getAsyncChatCompletion(OPENAI_KEY, request);
Click here to jump to the code example.
To use the Moderation API asynchronously, follow these steps:
Create a ModerationAPIRequest object:
ModerationAPIRequest request = new ModerationAPIRequest();
request.setInput("kill me now");
request.setModel("text-moderation-latest");
Initialize an EasyopenaiAsyncService object with an AsyncDAOImpl instance:
EasyopenaiAsyncService obj = new EasyopenaiAsyncService(new AsyncDAOImpl());
Make the asynchronous API call:
ModerationAPIResponse res = obj.getASyncModeration(OPENAI_KEY, request);
Click here to jump to the code example.
For multithreaded, concurrent calls to the Chat Completion API, follow these steps:
Create a ChatCompletionRequestList object:
ChatCompletionRequestList list = new ChatCompletionRequestList(new ArrayList<ChatCompletionRequest>());
Add multiple ChatCompletionRequest objects to the list:
// Example request 1
ChatCompletionRequest requestchat = new ChatCompletionRequest();
requestchat.setModel("gpt-3.5-turbo");

ChatMessage message = new ChatMessage();
message.setRole("user");
message.setContent("what is the capital of India?");

List<ChatMessage> messages = new ArrayList<>();
messages.add(message);
requestchat.setMessages(messages);
list.add(requestchat);

// Add more requests as needed (requestchat2, requestchat3, requestchat4, etc.)
Make the concurrent API calls:
EasyopenaiConcurrentService concurrentCalls = new EasyopenaiConcurrentService();
ChatCompletionResponseList responseList = concurrentCalls.CallMultipleChatCompletionAPI(OPENAI_KEY, list);
System.out.println(responseList);
Click here to jump to the code example.
For multithreaded, concurrent calls to the Moderation API, follow these steps:
Create a ModerationRequestList object:
ModerationRequestList requestList = new ModerationRequestList(new ArrayList<ModerationAPIRequest>());
Add multiple ModerationAPIRequest objects to the list:
// Example request 1
ModerationAPIRequest request = new ModerationAPIRequest();
request.setInput("kill me now");
request.setModel("text-moderation-latest");
requestList.add(request);

// Add more requests as needed (request2, request3, request4, etc.)
Make the concurrent API calls:
EasyopenaiConcurrentService concurrentCalls = new EasyopenaiConcurrentService();
ModerationResponseList responseList = concurrentCalls.CallMultipleModerationAPI(OPENAI_KEY, requestList);
System.out.println(responseList);
Click here to jump to the code example.
This lets you make multiple chat completion calls using multiple keys. All API calls run in parallel, and the responses are collected and returned once every thread has finished.
The implementation is based on the CompletableFuture class; see the EasyopenaiConcurrentService.java file for the source code.
Usage example:
public static void CallMultipleChatCompletionAPI_multikey_Test()
{
    // this function reads multiple keys from the keys.txt file
    ArrayList<String> keyList = readKeys();

    EasyopenaiConcurrentService concurrentCalls = new EasyopenaiConcurrentService();
    ChatCompletionRequestList list = new ChatCompletionRequestList(new ArrayList<ChatCompletionRequest>());

    // First request
    ChatCompletionRequest requestchat = new ChatCompletionRequest();
    requestchat.setModel("gpt-3.5-turbo");
    Message message = new Message();
    message.setRole("user");
    message.setContent("what is the capital of India?");
    List<Message> messages = new ArrayList<>();
    messages.add(message);
    requestchat.setMessages(messages);
    list.add(requestchat);

    // Second request
    ChatCompletionRequest requestchat2 = new ChatCompletionRequest();
    requestchat2.setModel("gpt-3.5-turbo");
    Message message2 = new Message();
    message2.setRole("user");
    message2.setContent("what is the capital of Cambodia?");
    List<Message> messages2 = new ArrayList<>();
    messages2.add(message2);
    requestchat2.setMessages(messages2);
    list.add(requestchat2);

    // Third request
    ChatCompletionRequest requestchat3 = new ChatCompletionRequest();
    requestchat3.setModel("gpt-3.5-turbo");
    Message message3 = new Message();
    message3.setRole("user");
    message3.setContent("what is the capital of New Zealand?");
    List<Message> messages3 = new ArrayList<>();
    messages3.add(message3);
    requestchat3.setMessages(messages3);
    list.add(requestchat3);

    // Fourth request
    ChatCompletionRequest requestchat4 = new ChatCompletionRequest();
    requestchat4.setModel("gpt-3.5-turbo");
    Message message4 = new Message();
    message4.setRole("user");
    message4.setContent("what is the capital of Hawaii? and what is 2+2?");
    List<Message> messages4 = new ArrayList<>();
    messages4.add(message4);
    requestchat4.setMessages(messages4);
    list.add(requestchat4);

    ChatCompletionResponseList responseList = concurrentCalls.CallMultipleChatCompletionAPI(keyList, list);
    System.out.println("response is " + responseList);
}
Before using the OpenAI API wrapper, make sure the required dependency is installed. Add it with the build tool of your choice:
Maven:
<dependency>
    <groupId>io.github.namankhurpia</groupId>
    <artifactId>easyopenai</artifactId>
    <version>x.x.x</version>
</dependency>

Gradle (Groovy DSL):
implementation group: 'io.github.namankhurpia', name: 'easyopenai', version: 'x.x.x'

Gradle (Groovy DSL, short form):
implementation 'io.github.namankhurpia:easyopenai:x.x.x'

Gradle (Kotlin DSL):
implementation("io.github.namankhurpia:easyopenai:x.x.x")

sbt:
libraryDependencies += "io.github.namankhurpia" % "easyopenai" % "x.x.x"

Apache Ivy:
<dependency org="io.github.namankhurpia" name="easyopenai" rev="x.x.x"/>

Groovy Grape:
@Grapes(
    @Grab(group='io.github.namankhurpia', module='easyopenai', version='x.x.x')
)

Leiningen:
[io.github.namankhurpia/easyopenai "x.x.x"]

Apache Buildr:
'io.github.namankhurpia:easyopenai:jar:x.x.x'