A simple, lightweight and efficient API wrapper for Poe.com
```
pip install -U poe-api-wrapper
```
Or you can install the auto-proxy version of this library for Python 3.9+:
```
pip install -U 'poe-api-wrapper[proxy]'
```
Quick setup for the async client:
```python
from poe_api_wrapper import AsyncPoeApi
import asyncio

tokens = {
    'p-b': ...,
    'p-lat': ...,
}

async def main():
    client = await AsyncPoeApi(tokens=tokens).create()
    message = "Explain quantum computing in simple terms"
    async for chunk in client.send_message(bot="gpt3_5", message=message):
        print(chunk["response"], end='', flush=True)

asyncio.run(main())
```
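Because the async client is plain asyncio, you can also fan several requests out concurrently. Below is a minimal sketch (not one of the library's documented examples) that reuses the same `tokens` dict and collects full replies with `asyncio.gather`:

```python
from poe_api_wrapper import AsyncPoeApi
import asyncio

tokens = {
    'p-b': ...,
    'p-lat': ...,
}

async def ask(client: AsyncPoeApi, prompt: str) -> str:
    text = ""
    async for chunk in client.send_message(bot="gpt3_5", message=prompt):
        text = chunk["text"]  # "text" holds the full reply accumulated so far
    return text

async def main():
    client = await AsyncPoeApi(tokens=tokens).create()
    # Run two independent prompts concurrently
    answers = await asyncio.gather(
        ask(client, "Define entropy in one sentence"),
        ask(client, "Define enthalpy in one sentence"),
    )
    for answer in answers:
        print(answer, "\n")

asyncio.run(main())
```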
```python
from poe_api_wrapper import PoeExample

tokens = {
    'p-b': ...,
    'p-lat': ...,
}

PoeExample(tokens=tokens).chat_with_bot()
```
```
poe -b P-B_HERE -lat P-LAT_HERE -f FORMKEY_HERE
```
> [!TIP]
> Type `poe -h` for more information.
| Display Name | Model | Token Limit | Words | Access Type |
|---|---|---|---|---|
| Assistant | capybara | 4K | 3K | |
| Claude-3.5-Sonnet | claude_3_igloo | 4K | 3K | |
| Claude-3-Opus | claude_2_1_cedar | 4K | 3K | |
| Claude-3-Sonnet | claude_2_1_bamboo | 4K | 3K | |
| Claude-3-Haiku | claude_3_haiku | 4K | 3K | |
| Claude-3.5-Sonnet-200k | claude_3_igloo_200k | 200K | 150K | |
| Claude-3-Opus-200k | claude_3_opus_200k | 200K | 150K | |
| Claude-3-Sonnet-200k | claude_3_sonnet_200k | 200K | 150K | |
| Claude-3-Haiku-200k | claude_3_haiku_200k | 200K | 150K | |
| Claude-2 | claude_2_short | 4K | 3K | |
| Claude-2-100k | a2_2 | 100K | 75K | |
| Claude-instant | a2 | 9K | 7K | |
| Claude-instant-100k | a2_100k | 100K | 75K | |
| GPT-3.5-Turbo | chinchilla | 4K | 3K | |
| GPT-3.5-Turbo-Raw | gpt3_5 | 2K | 1.5K | |
| GPT-3.5-Turbo-Instruct | chinchilla_instruct | 2K | 1.5K | |
| ChatGPT-16k | agouti | 16K | 12K | |
| GPT-4-Classic | gpt4_classic | 2K | 1.5K | |
| GPT-4-Turbo | beaver | 4K | 3K | |
| GPT-4-Turbo-128k | vizcacha | 128K | 96K | |
| GPT-4o | gpt4_o | 4K | 3K | |
| GPT-4o-128k | gpt4_o_128k | 128K | 96K | |
| GPT-4o-Mini | gpt4_o_mini | 4K | 3K | |
| GPT-4o-Mini-128k | gpt4_o_mini_128k | 128K | 96K | |
| Google-PaLM | acouchy | 8K | 6K | |
| Code-Llama-13b | code_llama_13b_instruct | 4K | 3K | |
| Code-Llama-34b | code_llama_34b_instruct | 4K | 3K | |
| Solar-Mini | upstage_solar_0_70b_16bit | 2K | 1.5K | |
| Gemini-1.5-Flash-Search | gemini_pro_search | 4K | 3K | |
| Gemini-1.5-Pro-2M | gemini_1_5_pro_1m | 2M | 1.5M | |
> [!IMPORTANT]
> The token limits and word counts listed above are approximate and may not be entirely accurate, since poe.com's pre-prompt engineering process is private and not publicly disclosed.
>
> The table only lists bots whose display name differs from the model name; every other bot on poe.com uses the same display name as its model.
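To use any of the models above, pass its codename (second column) as the `bot` argument. A minimal sketch, assuming a `client` set up as shown later in this README:

```python
# 'claude_3_haiku' is the codename taken from the table above
for chunk in client.send_message(bot="claude_3_haiku", message="Summarize the CAP theorem in two sentences"):
    print(chunk["response"], end="", flush=True)
```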
To get your tokens:
1. Sign in at https://poe.com/
2. Press F12 to open Devtools (or right-click + Inspect)
3. Copy the values of the `p-b` and `p-lat` cookies
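The two copied values go straight into the `tokens` dict when constructing a client, for example:

```python
from poe_api_wrapper import PoeApi

tokens = {
    'p-b': 'p-b cookie here',      # value of the p-b cookie
    'p-lat': 'p-lat cookie here',  # value of the p-lat cookie
}
client = PoeApi(tokens=tokens)
```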
> [!IMPORTANT]
> By default, poe-api-wrapper fetches the formkey for you automatically. If that does not work, please pass this token manually by following the steps below.

There are two ways to obtain the formkey. First, press F12 to open Devtools (or right-click + Inspect), then:

1. Method 1: Devtools > Network > gql_POST > Headers > Poe-Formkey, then copy the value of `Poe-Formkey`
2. Method 2: Devtools > Console > type `allow pasting` > paste this script: `window.ereNdsRqhp2Rd3LEW()`, then copy the result
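Whichever method you use, the copied formkey is simply passed alongside the cookies in the `tokens` dict:

```python
tokens = {
    'p-b': 'p-b cookie here',
    'p-lat': 'p-lat cookie here',
    'formkey': 'formkey here',  # value obtained via one of the two methods above
}
```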
```
pip install -U 'poe-api-wrapper[llm]'
```
Clone this repo and navigate to the `openai` folder:
```
git clone https://github.com/snowby666/poe-api-wrapper.git
cd poe-api-wrapper/poe_api_wrapper/openai
```
Modify `secrets.json` with your own tokens
Run the FastAPI server:
```
python api.py
```
Then run the example client:
```
python example.py
```
```python
from poe_api_wrapper import PoeServer

tokens = [
    {"p-b": "XXXXXXXX", "p-lat": "XXXXXXXX"},
    {"p-b": "XXXXXXXX", "p-lat": "XXXXXXXX"},
    {"p-b": "XXXXXXXX", "p-lat": "XXXXXXXX"}
]

PoeServer(tokens=tokens)

# You can also specify address and port (default is 127.0.0.1:8000)
PoeServer(tokens=tokens, address="0.0.0.0", port="8080")
```
```python
import openai

client = openai.OpenAI(api_key="anything", base_url="http://127.0.0.1:8000/v1/", default_headers={"Authorization": "Bearer anything"})

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"}
    ]
)
print(response.choices[0].message.content)
```
```python
import openai

client = openai.OpenAI(api_key="anything", base_url="http://127.0.0.1:8000/v1/", default_headers={"Authorization": "Bearer anything"})

stream = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "this is a test request, write a short poem"}
    ],
    stream=True
)
for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="", flush=True)

# Set max_tokens
stream_2 = client.chat.completions.create(
    model="claude-instant",
    messages=[
        {"role": "user", "content": "Can you tell me about the creation of blackholes?"}
    ],
    stream=True,
    max_tokens=20,  # if max_tokens is reached, finish_reason will be 'length'
)
for chunk in stream_2:
    print(chunk.choices[0].delta.content or "", end="", flush=True)

# Include usage
stream_3 = client.chat.completions.create(
    model="claude-instant",
    messages=[
        {"role": "user", "content": "Write a 100-character meta description for my blog post about llamas"}
    ],
    stream=True,
    max_tokens=4096,
    stream_options={
        "include_usage": True  # last chunk contains prompt_tokens, completion_tokens and total_tokens
    }
)
for chunk in stream_3:
    print(chunk, end="\n\n", flush=True)
```
```python
import openai

client = openai.OpenAI(api_key="anything", base_url="http://127.0.0.1:8000/v1/", default_headers={"Authorization": "Bearer anything"})

# Legacy style (https://platform.openai.com/docs/api-reference/chat/create)
response = client.chat.completions.create(
    model="claude-3.5-sonnet",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What's in this image?"},
                {
                    "type": "image_url",
                    "image_url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg",
                }
            ],
        }
    ]
)

# New style (https://platform.openai.com/docs/guides/vision)
response = client.chat.completions.create(
    model="claude-3.5-sonnet",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What's in this image?"},
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"
                    }
                }
            ],
        }
    ]
)

# Multiple images
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "What are in these images? Is there any difference between them?",
                },
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg",
                    },
                },
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "https://imgcdn.stablediffusionweb.com/2024/4/29/0b0b8798-1965-4e3d-b0a8-d153728320d4.jpg",
                    }
                }
            ]
        }
    ]
)

# Base64 image
import base64

# Function to encode the image
def encode_image(image_path):
    with open(image_path, "rb") as image_file:
        return base64.b64encode(image_file.read()).decode('utf-8')

# Path to your image
image_path = "path_to_your_image.jpg"

# Getting the base64 string
base64_image = encode_image(image_path)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "What's in this image?"
                },
                {
                    "type": "image_url",
                    "image_url": {
                        "url": f"data:image/jpeg;base64,{base64_image}"
                    }
                }
            ]
        }
    ]
)
print(response.choices[0].message.content)
```
```python
import openai, json

client = openai.OpenAI(api_key="anything", base_url="http://127.0.0.1:8000/v1/", default_headers={"Authorization": "Bearer anything"})

TEST_MODEL = "gpt-4o-mini"

# Example dummy function hard coded to return the same weather
# In production, this could be your backend API or an external API
def get_current_temperature(location, unit="fahrenheit"):
    """Get the current weather in a given location"""
    if "tokyo" in location.lower():
        return json.dumps({"location": "Tokyo", "temperature": "10", "unit": unit})
    elif "san francisco" in location.lower():
        return json.dumps({"location": "San Francisco", "temperature": "72", "unit": unit})
    elif "paris" in location.lower():
        return json.dumps({"location": "Paris", "temperature": "22", "unit": unit})
    else:
        return json.dumps({"location": location, "temperature": "unknown"})

def get_rain_probability(location):
    """Get the probability of rain in a given location"""
    if "tokyo" in location.lower():
        return json.dumps({"location": "Tokyo", "rain_probability": "10%"})
    elif "san francisco" in location.lower():
        return json.dumps({"location": "San Francisco", "rain_probability": "20%"})
    elif "paris" in location.lower():
        return json.dumps({"location": "Paris", "rain_probability": "30%"})
    else:
        return json.dumps({"location": location, "rain_probability": "unknown"})

def run_conversation():
    # Step 1: send the conversation and available functions to the model
    messages = [
        {'role': 'user', 'content': "Hello there. What's the weather like in Tokyo?"},
        {'role': 'assistant', 'content': "Let me check the weather for you."},
        {'role': 'user', 'content': "What is the chance of rain in Paris? Can you also tell me the temperature in Tokyo and LA?"},
    ]
    tools = [
        {
            "type": "function",
            "function": {
                "name": "get_current_temperature",
                "description": "Get the current temperature for a specific location",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "location": {
                            "type": "string",
                            "description": "The city and state, e.g., San Francisco, CA"
                        },
                        "unit": {
                            "type": "string",
                            "enum": ["Celsius", "Fahrenheit"],
                            "description": "The temperature unit to use. Infer this from the user's location."
                        }
                    },
                    "required": ["location", "unit"]
                }
            }
        },
        {
            "type": "function",
            "function": {
                "name": "get_rain_probability",
                "description": "Get the probability of rain for a specific location",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "location": {
                            "type": "string",
                            "description": "The city and state, e.g., San Francisco, CA"
                        }
                    },
                    "required": ["location"]
                }
            }
        }
    ]
    response = client.chat.completions.create(
        model=TEST_MODEL,
        messages=messages,
        tools=tools,
        tool_choice={"type": "function", "function": {"name": "get_current_temperature"}},
    )
    response_message = response.choices[0].message
    print("\n", response_message, "\n")
    tool_calls = response_message.tool_calls
    # Step 2: check if the model wanted to call a function
    if tool_calls:
        # Step 3: call the function
        # Note: the JSON response may not always be valid; be sure to handle errors
        available_functions = {
            "get_current_temperature": get_current_temperature,
            "get_rain_probability": get_rain_probability
        }  # only two functions in this example, but you can have multiple
        messages.append(response_message)  # extend conversation with assistant's reply
        # Step 4: send the info for each function call and function response to the model
        for tool_call in tool_calls:
            print(tool_call, "\n")
            function_name = tool_call.function.name
            function_to_call = available_functions[function_name]
            function_args = json.loads(tool_call.function.arguments)
            function_response = function_to_call(**function_args)
            messages.append(
                {
                    "tool_call_id": tool_call.id,
                    "role": "tool",
                    "name": function_name,
                    "content": function_response,
                }
            )  # extend conversation with function response
        second_response = client.chat.completions.create(
            model=TEST_MODEL,
            messages=messages,
        )  # get a new response from the model where it can see the function response
        return second_response.choices[0].message.content

print(run_conversation())
```
```python
import openai

client = openai.OpenAI(api_key="anything", base_url="http://127.0.0.1:8000/v1/", default_headers={"Authorization": "Bearer anything"})

images_url = client.images.generate(
    model="playground-v2.5",
    prompt="A cute baby sea otter",
    n=2,  # The number of images to generate
    size="1792x1024"  # The size of the image (view models.json for available sizes)
)
print(images_url)
```
```python
import openai

client = openai.OpenAI(api_key="anything", base_url="http://127.0.0.1:8000/v1/", default_headers={"Authorization": "Bearer anything"})

images_url = client.images.edit(
    image="https://imgcdn.stablediffusionweb.com/2024/4/29/0b0b8798-1965-4e3d-b0a8-d153728320d4.jpg",
    model="sdxl",
    prompt="A cute baby sea otter wearing a raincoat",
    n=1,  # The number of images to generate
    size="1024x1024"  # The size of the image (view models.json for available sizes)
)
print(images_url)
```
```python
import openai

client = openai.OpenAI(api_key="anything", base_url="http://127.0.0.1:8000/v1/", default_headers={"Authorization": "Bearer anything"})

# List available models
models = client.models.list()
print(models)

# Retrieve a specific model
model = client.models.retrieve("gpt-3.5-turbo-instruct")
print(model)
```
```python
tokens = {
    'p-b': 'p-b cookie here',
    'p-lat': 'p-lat cookie here',
}

# Default setup
from poe_api_wrapper import PoeApi
client = PoeApi(tokens=tokens)

# Using Client with auto_proxy (default is False)
client = PoeApi(tokens=tokens, auto_proxy=True)

# Passing proxies manually
proxy_context = [
    {"https://": X1, "http://": X1},
    {"https://": X2, "http://": X2},
    ...
]
client = PoeApi(tokens=tokens, proxy=proxy_context)

# Add formkey and cloudflare cookies to pass challenges
tokens = {
    'p-b': 'p-b cookie here',
    'p-lat': 'p-lat cookie here',
    'formkey': 'formkey here',
    '__cf_bm': '__cf_bm cookie here',
    'cf_clearance': 'cf_clearance cookie here'
}
```
```python
# Get chat data of all bots (this will fetch all available threads)
print(client.get_chat_history()['data'])
>> Output:
{'chinchilla': [{'chatId': 74397929, 'chatCode': '2ith0h11zfyvsta1u3z', 'id': 'Q2hhdDo3NDM5NzkyOQ==', 'title': 'Comparison'}], 'code_llama_7b_instruct': [{'chatId': 74397392, 'chatCode': '2ithbduzsysy3g178hb', 'id': 'Q2hhdDo3NDM5NzM5Mg==', 'title': 'Decent Programmers'}], 'a2': [{'chatId': 74396838, 'chatCode': '2ith9nikybn4ksn51l8', 'id': 'Q2hhdDo3NDM5NjgzOA==', 'title': 'Reverse Engineering'}, {'chatId': 74396452, 'chatCode': '2ith79n4x0p0p8w5yue', 'id': 'Q2hhdDo3NDM5NjQ1Mg==', 'title': 'Clean Code'}], 'leocooks': [{'chatId': 74396246, 'chatCode': '2ith82wj0tjrggj46no', 'id': 'Q2hhdDo3NDM5NjI0Ng==', 'title': 'Pizza perfection'}], 'capybara': [{'chatId': 74396020, 'chatCode': '2ith5o3p8c5ajkdwd3k', 'id': 'Q2hhdDo3NDM5NjAyMA==', 'title': 'Greeting'}]}

# Get chat data of a bot (this will fetch all available threads)
print(client.get_chat_history("a2")['data'])
>> Output:
{'a2': [{'chatId': 74396838, 'chatCode': '2ith9nikybn4ksn51l8', 'id': 'Q2hhdDo3NDM5NjgzOA==', 'title': 'Reverse Engineering'}, {'chatId': 74396452, 'chatCode': '2ith79n4x0p0p8w5yue', 'id': 'Q2hhdDo3NDM5NjQ1Mg==', 'title': 'Clean Code'}]}

# Get a defined number of most recent chat threads (using the count param will ignore the interval param)
# Fetching all bots
print(client.get_chat_history(count=20)['data'])
# Fetching 1 bot
print(client.get_chat_history(bot="a2", count=20)['data'])

# You can pass the number of bots fetched for each interval to both functions (default is 50)
# Fetching 200 chat threads of all bots each interval
print(client.get_chat_history(interval=200)['data'])
# Fetching 200 chat threads of a bot each interval
print(client.get_chat_history(bot="a2", interval=200)['data'])

# Pagination Example:
# Fetch the first 20 chat threads
history = client.get_chat_history(count=20)
pages = [history['data']]
new_cursor = history['cursor']

# Set a while loop with a condition of your choice
while new_cursor is not None:
    # Fetch the next 20 chat threads with new_cursor
    new_history = client.get_chat_history(count=20, cursor=new_cursor)
    # Append the next 20 chat threads
    new_cursor = new_history['cursor']
    pages.append(new_history['data'])

# Print the pages (20 chat threads each page)
for page in range(len(pages)):
    print(f'This is page {page + 1}')
    for bot, value in pages[page].items():
        for thread in value:
            print({bot: thread})
```
```python
data = client.get_settings()
print(data)
```
bot = "a2"
message = "What is reverse engineering?"
# Create new chat thread
# Streamed example:
for chunk in client . send_message ( bot , message ):
print ( chunk [ "response" ], end = "" , flush = True )
print ( " n " )
# Non-streamed example:
for chunk in client . send_message ( bot , message ):
pass
print ( chunk [ "text" ])
# You can get chatCode and chatId of created thread to continue the conversation
chatCode = chunk [ "chatCode" ]
chatId = chunk [ "chatId" ]
# You can also retrieve msgPrice
msgPrice = chunk [ "msgPrice" ]
# Send message to an existing chat thread
# 1. Using chatCode
for chunk in client . send_message ( bot , message , chatCode = "2i58ciex72dom7im83r" ):
print ( chunk [ "response" ], end = "" , flush = True )
# 2. Using chatId
for chunk in client . send_message ( bot , message , chatId = 59726162 ):
print ( chunk [ "response" ], end = "" , flush = True )
# 3. Specify msgPrice manually (the wrapper automatically gets this, but you can also pass the param for less resources consumed)
for chunk in client . send_message ( bot , message , chatId = 59726162 , msgPrice = msgPrice ):
print ( chunk [ "response" ], end = "" , flush = True )
> [!NOTE]
> For custom bots, display names are the same as their codenames. You can simply pass the bot's display name to `client.send_message(bot, message)`.
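As a short illustration, this sketch sends to `leocooks`, the custom bot handle that appears in the chat-history output above:

```python
# For custom bots, pass the handle/display name directly
for chunk in client.send_message("leocooks", "Give me a simple pizza dough recipe"):
    print(chunk["response"], end="", flush=True)
```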
```python
# Use at your own risk, increase timeout to avoid ratelimit (default is 20)
import time, threading

thread_count = 0

def message_thread(prompt, counter):
    global thread_count
    try:
        for chunk in client.send_message("gpt3_5", prompt):
            pass
        print(prompt + "\n" + chunk["text"] + "\n" * 3)
    except Exception as e:
        pass
    finally:
        thread_count -= 1  # decrement even on failure so the wait loop can finish

prompts = [
    "Write a paragraph about the impact of social media on mental health.",
    "Write a paragraph about the history and significance of the Olympic Games.",
    "Write a paragraph about the effects of climate change on the world's oceans.",
    "Write a paragraph about the benefits and drawbacks of remote work for employees and companies.",
    "Write a paragraph about the role of technology in modern education.",
    "Write a paragraph about the history and impact of the Civil Rights Movement in America.",
    "Write a paragraph about the impact of COVID-19 on global economies.",
    "Write a paragraph about the rise and fall of the Roman Empire.",
    "Write a paragraph about the benefits and drawbacks of genetically modified organisms (GMOs).",
    "Write a paragraph about the impact of globalization on cultural identity.",
    "Write a paragraph about the history and significance of the Mona Lisa painting.",
    "Write a paragraph about the benefits and drawbacks of renewable energy sources.",
    "Write a paragraph about the impact of social media on political discourse.",
    "Write a paragraph about the history and impact of the Industrial Revolution.",
    "Write a paragraph about the benefits and drawbacks of online shopping for consumers and businesses.",
    "Write a paragraph about the impact of artificial intelligence on the job market.",
    "Write a paragraph about the history and significance of the Great Wall of China.",
    "Write a paragraph about the benefits and drawbacks of standardized testing in schools.",
    "Write a paragraph about the impact of the feminist movement on women's rights.",
    "Write a paragraph about the history and impact of the American Revolution."
]

for i in range(len(prompts)):
    t = threading.Thread(target=message_thread, args=(prompts[i], i), daemon=True)
    t.start()
    thread_count += 1
    time.sleep(1)

while thread_count:
    time.sleep(0.01)
```
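If you would rather not manage counters and sleeps by hand, a bounded thread pool achieves the same throttling. This is an alternative sketch, not the library's documented pattern; `max_workers` is an illustrative limit, and `client` and `prompts` are the objects defined above:

```python
from concurrent.futures import ThreadPoolExecutor

def ask(prompt):
    # Drain the stream and return the full reply text
    for chunk in client.send_message("gpt3_5", prompt):
        pass
    return chunk["text"]

# max_workers caps how many requests run at once, which helps avoid ratelimits
with ThreadPoolExecutor(max_workers=4) as pool:
    for prompt, answer in zip(prompts, pool.map(ask, prompts)):
        print(prompt + "\n" + answer + "\n" * 3)
```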
```python
for chunk in client.retry_message(chatCode):
    print(chunk['response'], end='', flush=True)
```
```python
# Web urls example:
file_urls = ["https://elinux.org/images/c/c5/IntroductionToReverseEngineering_Anderson.pdf",
             "https://www.kcl.ac.uk/warstudies/assets/automation-and-artificial-intelligence.pdf"]
for chunk in client.send_message(bot, "Compare 2 files and describe them in 300 words", file_path=file_urls):
    print(chunk["response"], end="", flush=True)

# Local paths example:
local_paths = ["c:\\users\\snowby666\\hello_world.py"]
for chunk in client.send_message(bot, "What is this file about?", file_path=local_paths):
    print(chunk["response"], end="", flush=True)
```
> [!NOTE]
> The file size limit differs from model to model.
```python
for chunk in client.send_message(bot, "Introduce 5 books about clean code", suggest_replies=True):
    print(chunk["response"], end="", flush=True)
print("\n")

for reply in chunk["suggestedReplies"]:
    print(reply)
```
```python
# You can use an event to trigger this function
# Example:
# Note that the keyboard library may not be compatible with MacOS, Linux, Ubuntu
import keyboard

for chunk in client.send_message(bot, message):
    print(chunk["response"], end="", flush=True)
    # Press the Q key to stop the generation
    if keyboard.is_pressed('q'):
        client.cancel_message(chunk)
        print("\nMessage is now cancelled")
        break
```
```python
# Delete 1 chat
# Using chatCode
client.delete_chat(bot, chatCode="2i58ciex72dom7im83r")
# Using chatId
client.delete_chat(bot, chatId=59726162)

# Delete n chats
# Using chatCode
client.delete_chat(bot, chatCode=["LIST_OF_CHAT_CODES"])
# Using chatId
client.delete_chat(bot, chatId=["LIST_OF_CHAT_IDS"])

# Delete all chats of a bot
client.delete_chat(bot, del_all=True)
```
```python
# 1. Using chatCode
client.chat_break(bot, chatCode="2i58ciex72dom7im83r")
# 2. Using chatId
client.chat_break(bot, chatId=59726162)
```
```python
# Purge a defined number of messages (default is 50)
# 1. Using chatCode
client.purge_conversation(bot, chatCode="2i58ciex72dom7im83r", count=10)
# 2. Using chatId
client.purge_conversation(bot, chatId=59726162, count=10)

# Purge all messages of the thread
# 1. Using chatCode
client.purge_conversation(bot, chatCode="2i58ciex72dom7im83r", del_all=True)
# 2. Using chatId
client.purge_conversation(bot, chatId=59726162, del_all=True)

# Purge all conversations
client.purge_all_conversations()
```
```python
# Get a defined number of messages (default is 50)
# Using chatCode
previous_messages = client.get_previous_messages('code_llama_34b_instruct', chatCode='2itg2a7muygs42v1u0k', count=2)
# Using chatId
previous_messages = client.get_previous_messages('code_llama_34b_instruct', chatId=74411139, count=2)

for message in previous_messages:
    print(message)
>> Output:
{'author': 'human', 'text': 'nice to meet you', 'messageId': 2861709279}
{'author': 'code_llama_34b_instruct', 'text': " Nice to meet you too! How are you doing today? Is there anything on your mind that you'd like to talk about? I'm here to listen and help", 'messageId': 2861873125}

# Get all previous messages
# Using chatCode
previous_messages = client.get_previous_messages('code_llama_34b_instruct', chatCode='2itg2a7muygs42v1u0k', get_all=True)
# Using chatId
previous_messages = client.get_previous_messages('code_llama_34b_instruct', chatId=74411139, get_all=True)

for message in previous_messages:
    print(message)
>> Output:
{'author': 'human', 'text': 'hi there', 'messageId': 2861363514}
{'author': 'code_llama_34b_instruct', 'text': " Hello! It's nice to meet you. Is there something I can help you with or would you like to chat?", 'messageId': 2861363530}
{'author': 'chat_break', 'text': '', 'messageId': 2872383991}
{'author': 'human', 'text': 'nice to meet you', 'messageId': 2861709279}
{'author': 'code_llama_34b_instruct', 'text': " Nice to meet you too! How are you doing today? Is there anything on your mind that you'd like to talk about? I'm here to listen and help", 'messageId': 2861873125}
```
> [!NOTE]
> Messages are fetched from the newest to the oldest, but they are displayed in the reverse (chronological) order.
```python
# Get a defined number of sources (default is 10)
print(client.get_available_knowledge(botName="BOT_NAME", count=2))
>> Output:
{'What is Quora?': [86698], 'Founders of Quora': [86705]}

# Get all available sources
print(client.get_available_knowledge(botName="BOT_NAME", get_all=True))
```
```python
# Web urls example:
file_urls = ["https://elinux.org/images/c/c5/IntroductionToReverseEngineering_Anderson.pdf",
             "https://www.kcl.ac.uk/warstudies/assets/automation-and-artificial-intelligence.pdf"]
source_ids = client.upload_knowledge(file_path=file_urls)
print(source_ids)
>> Output:
{'er-1-intro_to_re.pdf': [86344], 'automation-and-artificial-intelligence.pdf': [86345]}

# Local paths example:
local_paths = ["c:\\users\\snowby666\\hello_world.py"]
source_ids = client.upload_knowledge(file_path=local_paths)
print(source_ids)
>> Output:
{'hello_world.py': [86523]}

# Plain texts example:
knowledges = [
    {
        "title": "What is Quora?",
        "content": "Quora is a popular online platform that enables users to ask questions on various topics and receive answers from a diverse community. It covers a wide range of subjects, from academic and professional queries to personal experiences and opinions, fostering knowledge-sharing and meaningful discussions among its users worldwide."
    },
    {
        "title": "Founders of Quora",
        "content": "Quora was founded by two individuals, Adam D'Angelo and Charlie Cheever. Adam D'Angelo, who previously served as the Chief Technology Officer (CTO) at Facebook, and Charlie Cheever, a former Facebook employee as well, launched Quora in June 2009. They aimed to create a platform that would enable users to ask questions and receive high-quality answers from knowledgeable individuals. Since its inception, Quora has grown into a widely used question-and-answer platform with a large user base and a diverse range of topics covered."
    },
]
source_ids = client.upload_knowledge(text_knowledge=knowledges)
print(source_ids)
>> Output:
{'What is Quora?': [86368], 'Founders of Quora': [86369]}

# Hybrid example:
source_ids = client.upload_knowledge(file_path=file_urls, text_knowledge=knowledges)
print(source_ids)
>> Output:
{'What is Quora?': [86381], 'Founders of Quora': [86383], 'er-1-intro_to_re.pdf': [86395], 'automation-and-artificial-intelligence.pdf': [86396]}
```
```python
client.edit_knowledge(knowledgeSourceId=86381, title='What is Quora?', content='Quora is a question-and-answer platform where users can ask questions, provide answers, and engage in discussions on various topics.')
```
```python
bot = 'gpt-4'
print(client.get_botInfo(handle=bot))
>> Output:
{'handle': 'GPT-4', 'model': 'beaver', 'supportsFileUpload': True, 'messageTimeoutSecs': 15, 'displayMessagePointPrice': 350, 'numRemainingMessages': 20, 'viewerIsCreator': False, 'id': 'Qm90OjMwMDc='}
```
```python
print(client.get_available_creation_models())
>> Output:
{'text': ['claude_3_igloo', 'gpt4_o_mini', 'gpt4_o', 'gemini_1_5_flash', 'gemini_1_5_pro', 'claude_2_1_bamboo', 'claude_3_haiku', 'claude_2_1_cedar', 'gemini_1_5_flash_128k', 'gemini_1_5_pro_128k', 'gemini_1_5_flash_1m', 'gemini_1_5_pro_1m', 'gpt4_o_mini_128k', 'gpt4_o_128k', 'beaver', 'gemini_pro', 'chinchilla', 'vizcacha', 'claude_3_igloo_200k', 'claude_3_sonnet_200k', 'claude_3_haiku_200k', 'claude_3_opus_200k', 'mixtral8x7bchat', 'claude_2_short', 'a2_2', 'mythomaxl213b', 'a2', 'a2_100k'], 'image': ['playgroundv25', 'ideogram', 'dalle3', 'stablediffusion3', 'sd3turbo', 'stablediffusionxl'], 'video': ['pika']}
```
```python
client.create_bot(handle="BOT_NAME", prompt="PROMPT_HERE", base_model="a2")
# Using knowledge bases (you can use source_ids from uploaded knowledge bases for your custom bot)
client.create_bot(handle="BOT_NAME", prompt="PROMPT_HERE", base_model="a2", knowledgeSourceIds=source_ids, shouldCiteSources=True)

client.edit_bot(handle="BOT_NAME", prompt="PROMPT_HERE", new_handle="NEW_BOT_NAME", base_model='chinchilla')
# Adding knowledge bases
client.edit_bot(handle="BOT_NAME", prompt="PROMPT_HERE", new_handle="NEW_BOT_NAME", base_model='chinchilla', knowledgeSourceIdsToAdd=source_ids, shouldCiteSources=True)
# Removing knowledge bases
client.edit_bot(handle="BOT_NAME", prompt="PROMPT_HERE", new_handle="NEW_BOT_NAME", base_model='chinchilla', knowledgeSourceIdsToRemove=source_ids, shouldCiteSources=True)
```
> [!TIP]
> You can also use `knowledgeSourceIdsToAdd` and `knowledgeSourceIdsToRemove` at the same time.
```python
client.delete_bot(handle="BOT_NAME")
```
```python
# Get a defined number of bots (default is 25)
print(client.get_available_bots(count=10))
# Get all available bots
print(client.get_available_bots(get_all=True))

# Get the bots created by a user
handle = 'poe'
print(client.get_user_bots(user=handle))
```
```python
print(client.get_available_categories())
>> Output:
['Official', 'Popular', 'New', 'ImageGen', 'AI', 'Professional', 'Funny', 'History', 'Cooking', 'Advice', 'Mind', 'Programming', 'Travel', 'Writing', 'Games', 'Learning', 'Roleplay', 'Utilities', 'Sports', 'Music']

# Explore section example:
# Get a defined number of bots (default is 50)
print(client.explore(count=10))
# Get all available bots
print(client.explore(explore_all=True))

# Search for bots by query example:
# Get a defined number of bots (default is 50)
print(client.explore(search="Midjourney", count=30))
# Get all available bots
print(client.explore(search="Midjourney", explore_all=True))

# Search for bots by category example (default is defaultCategory):
# Get a defined number of bots (default is 50)
print(client.explore(categoryName="Popular", count=30))
# Get all available bots
print(client.explore(categoryName="AI", explore_all=True))

# Search for people example:
# Get a defined number of people (default is 50)
print(client.explore(search="Poe", entity_type='user', count=30))
# Get all available people
print(client.explore(search="Poe", entity_type='user', explore_all=True))
```
```python
# Share a defined number of messages (from the latest to the oldest)
# Using chatCode
shareCode = client.share_chat("a2", chatCode="2roap5g8nd7s28ul836", count=10)
# Using chatId
shareCode = client.share_chat("a2", chatId=204052028, count=10)

# Share all messages
# Using chatCode
shareCode = client.share_chat("a2", chatCode="2roap5g8nd7s28ul836")
# Using chatId
shareCode = client.share_chat("a2", chatId=204052028)

# Set up the 2nd Client and import messages from the shareCode
client2 = PoeApi("2nd_TOKEN_HERE")
print(client2.import_chat(bot, shareCode))
>> Output:
{'chatId': 72929127, 'chatCode': '2iw0xcem7a18wy1avd3'}
```
```python
print(client.get_citations(messageId=141597902621))
```
```python
bots = [
    {'bot': 'yayayayaeclaude', 'name': 'Yae'},
    {'bot': 'gepardL', 'name': 'gepard'},
    {'bot': 'SayukiTokihara', 'name': 'Sayuki'}
]

client.create_group(group_name='Hangout', bots=bots)
```
> [!NOTE]
> The `bot` arg is the model/display name, while the `name` arg is the name you would use to mention that bot in the group chat.
```python
# User engagement example:
while True:
    message = str(input('
```