The ChatGPT Long Term Memory package is a powerful tool designed to let your projects handle large numbers of concurrent users. It achieves this by seamlessly integrating an extensive knowledge base and adaptive memory through cutting-edge technologies such as OpenAI's GPT, llama vector indexes, and a Redis datastore. With this comprehensive set of capabilities, you can build highly scalable applications that deliver contextually relevant and engaging conversations, enhancing the overall user experience and interaction.
Scalability: The ChatGPT Long Term Memory package is designed to handle large numbers of concurrent users efficiently, making it suitable for applications with high user demand.
Extensive knowledge base: Benefit from an integrated knowledge base that lets you incorporate personalized data in the form of TXT files. This enables the system to provide contextually relevant responses and engage in meaningful conversations.
Adaptive memory: The package leverages cutting-edge technologies such as GPT, llama vector indexes, and a Redis datastore to provide an adaptive memory system. This improves performance and keeps interactions coherent, making conversations more natural and engaging.
Flexible integration with GPT models: The package allows seamless interaction with GPT models, giving you the option to chat with a GPT model using context memory. This lets you use state-of-the-art language models for more advanced language processing tasks.
Easy setup and configuration: The package offers simple installation via pip, and you can quickly set up your environment with your OpenAI API key. The configuration options are customizable, allowing you to tailor the package to your specific project requirements.
Redis datastore utilization: Integration with a Redis datastore ensures efficient data storage and retrieval, contributing to the overall scalability and responsiveness of the system.
API integration with OpenAI: The package leverages OpenAI's API to power its GPT-based features. This ensures access to the latest advances in language processing and GPT model capabilities.
Continuous learning and improvement: As a GPT-based system, the ChatGPT Long Term Memory package benefits from continuous learning and improvement, keeping up with the latest developments in language understanding and generation.
Customizable conversation flow: The package offers customizable conversation flows, with the ability to include the user's chat history and knowledge-base data. This improves contextual understanding and the relevance of responses.
Easy-to-use interface: The provided code snippets and interfaces make it easy for developers to integrate the ChatGPT Long Term Memory package into their projects, minimizing the learning curve and streamlining the development process.
The combination of these key features makes the ChatGPT Long Term Memory package a valuable addition to your projects, allowing you to build interactive, dynamic conversational applications with powerful language processing capabilities.
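To illustrate the adaptive-memory idea behind the package, here is a minimal sketch of per-user chat history storage. A plain Python dict stands in for Redis, and the names (`SimpleChatMemory`, `add_message`, `get_history`) are illustrative only, not part of the package API:

```python
from collections import defaultdict


class SimpleChatMemory:
    """Toy per-user chat history store. A dict stands in for Redis;
    the real package persists this history in a Redis database."""

    def __init__(self):
        self._store = defaultdict(list)  # user_id -> list of (role, text)

    def add_message(self, user_id, role, text):
        self._store[user_id].append((role, text))

    def get_history(self, user_id, last_n=10):
        # Return only the most recent turns, mimicking a context window.
        return self._store[user_id][-last_n:]


memory = SimpleChatMemory()
memory.add_message(1, "human", "Hello!")
memory.add_message(1, "assistant", "Hi, how can I help?")
print(memory.get_history(1))
```

Keeping history keyed by `user_id` is what lets a single service serve many concurrent users with separate conversations.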
To use the ChatGPT Long Term Memory package in your project, follow the steps below:
pip install chatgpt_long_term_memory
export OPENAI_API_KEY=sk-******
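Since the package reads the API key from the environment, it can help to fail fast with a clear message before initializing any clients. A small sketch (the helper name `require_api_key` is illustrative, not part of the package):

```python
import os


def require_api_key(var="OPENAI_API_KEY"):
    """Raise a clear error if the OpenAI API key is not set."""
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(f"{var} is not set; run: export {var}=sk-...")
    return key


# key = require_api_key()  # call once at startup
```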
Pull the Redis Docker image and run it:
docker pull redis
docker network create --subnet=172.0.0.0/16 mynet123
docker run --name redis-db -d --net mynet123 --ip 172.0.0.22 -p 6379:6379 -p 8001:8001 redis:latest
You can take advantage of the index memory by setting knowledge_base=True, incorporating your personalized data in the form of TXT files located in the directory {your_root_path}/resources/data. Make sure the resources/data directory is addressed correctly so the stored data can be accessed seamlessly.
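A quick way to prepare that layout is to create the directory and drop in a TXT file before the first run. A sketch, assuming the resources/data path described above (the file name about_me.txt is just an example):

```python
from pathlib import Path


def prepare_knowledge_base(root_path):
    """Create {root_path}/resources/data and write a sample TXT file,
    matching the directory layout the indexer expects."""
    data_dir = Path(root_path) / "resources" / "data"
    data_dir.mkdir(parents=True, exist_ok=True)
    sample = data_dir / "about_me.txt"
    sample.write_text("I am a sample knowledge-base document.\n", encoding="utf-8")
    return data_dir


# prepare_knowledge_base("/path/to/your/project")
```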
# example/usage_index_memory.py
from utils import get_project_root
from chatgpt_long_term_memory.conversation import ChatGPTClient
from chatgpt_long_term_memory.llama_index_helpers import (IndexConfig,
                                                          RetrieversConfig)
from chatgpt_long_term_memory.memory import ChatMemoryConfig

# Get the project's root path.
root_path = get_project_root()

"""
First:
Initialize the llama index config to create an index from the knowledge base and the user's chat history.
The root_path specifies the directory where the index will be stored.
The knowledge_base flag specifies whether to index the knowledge base.
The model_name specifies the name of the language model to use for indexing.
The temperature parameter controls the randomness of the output.
The context_window parameter specifies the size of the context window to use for indexing.
The num_outputs parameter specifies the maximum number of output tokens to generate.
The max_chunk_overlap parameter specifies the maximum overlap between chunks.
The chunk_size_limit parameter specifies the maximum size of a chunk.
"""
doc_indexer_config = IndexConfig(
    root_path=f"{root_path}/example",
    knowledge_base=True,
    model_name="gpt-3.5-turbo",
    temperature=0,
    context_window=4096,
    num_outputs=700,
    max_chunk_overlap=0.5,
    chunk_size_limit=600
)

"""
Second:
Initialize the retrievers config to configure the retrievers class.
The `top_k` parameter specifies the number of top-k documents to retrieve for each query.
The `max_tokens` parameter specifies the maximum number of tokens to return for each document.
"""
retrievers_config = RetrieversConfig(
    top_k=7,
    max_tokens=1000
)

"""
Then:
Initialize the chat memory config to configure the chat memory class.
The `redis_host` parameter specifies the hostname of the Redis server.
The `redis_port` parameter specifies the port of the Redis server.
"""
chat_memory_config = ChatMemoryConfig(
    redis_host="172.0.0.22",
    redis_port=6379
)

"""
Create a `ChatGPTClient` object to start the conversation.
The `doc_indexer_config` parameter specifies the configuration for the document indexer.
The `retrievers_config` parameter specifies the configuration for the retrievers.
The `chat_memory_config` parameter specifies the configuration for the chat memory.
"""
chatgpt_client = ChatGPTClient(
    doc_indexer_config=doc_indexer_config,
    retrievers_config=retrievers_config,
    chat_memory_config=chat_memory_config
)

# Start a conversation with the user.
user_id = 1
while True:
    # Get the user's input.
    user_input = input("User Input:")
    # If the user enters "q", break out of the loop.
    if user_input == "q":
        break
    # Get the response from the chatbot.
    index, response = chatgpt_client.converse(user_input, user_id=user_id)
    # Print the response to the user.
    print(response)
In this scenario, you cannot use your own database, but you can still interact with the GPT models and use context memory.
# example/usage_context_memory.py
from utils import get_project_root
from chatgpt_long_term_memory.conversation import ChatbotClient
from chatgpt_long_term_memory.llama_index_helpers import (IndexConfig,
                                                          RetrieversConfig)
from chatgpt_long_term_memory.memory import ChatMemoryConfig
from chatgpt_long_term_memory.openai_engine import OpenAIChatConfig

# Get the project's root path.
root_path = get_project_root()

"""
First:
Initialize the llama index config to create an index from the knowledge base and the user's chat history.
The root_path specifies the directory where the index will be stored.
The knowledge_base flag specifies whether to index the knowledge base.
The model_name specifies the name of the language model to use for indexing.
The temperature parameter controls the randomness of the output.
The context_window parameter specifies the size of the context window to use for indexing.
The num_outputs parameter specifies the maximum number of output tokens to generate.
The max_chunk_overlap parameter specifies the maximum overlap between chunks.
The chunk_size_limit parameter specifies the maximum size of a chunk.
"""
doc_indexer_config = IndexConfig(
    root_path=f"{root_path}/example",
    knowledge_base=True,
    model_name="gpt-3.5-turbo",
    temperature=0,
    context_window=4096,
    num_outputs=700,
    max_chunk_overlap=0.5,
    chunk_size_limit=600
)

"""
Second:
Initialize the retrievers config to configure the retrievers class.
The `top_k` parameter specifies the number of top-k documents to retrieve for each query.
The `max_tokens` parameter specifies the maximum number of tokens to return for each document.
"""
retrievers_config = RetrieversConfig(
    top_k=7,
    max_tokens=1000
)

"""
Then:
Initialize the chat memory config to configure the chat memory class.
The `redis_host` parameter specifies the hostname of the Redis server.
The `redis_port` parameter specifies the port of the Redis server.
"""
chat_memory_config = ChatMemoryConfig(
    redis_host="172.0.0.22",
    redis_port=6379
)

# Method 2: chat with GPT models and use context memory.
# In this scenario you can't use your own DB.
openai_chatbot_config = OpenAIChatConfig(
    model_name="gpt-4",
    max_tokens=1000,
    temperature=0,
    top_p=1,
    presence_penalty=0,
    frequency_penalty=0,
    # Keep in mind: if you change the prompt, account for the history and human-input placeholders.
    prompt="""Assistant is a large language model trained by OpenAI.

Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.

Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.

Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.

History: {}
Human: {}
Assistant:"""
)

# Initialize the chatbot client.
chat_app = ChatbotClient(
    doc_indexer_config=doc_indexer_config,
    retrievers_config=retrievers_config,
    chat_memory_config=chat_memory_config,
    openai_chatbot_config=openai_chatbot_config
)

# Start a conversation with the user.
user_id = 2
while True:
    # Get the user's input.
    user_input = input("User Input:")
    # If the user enters "q", break out of the loop.
    if user_input == "q":
        break
    # Get the response from the chatbot.
    index, response = chat_app.converse(user_input, user_id=user_id)
    # Print the response to the user.
    print(response)
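Note that the prompt template ends with two positional placeholders (History: {} and Human: {}). If you customize the prompt, keep both slots in that order, since the client presumably fills them with the stored chat history and the current user input via standard str.format. A minimal sketch of that assumption, using a shortened template:

```python
# Shortened stand-in for the full prompt template above.
PROMPT = """Assistant is a large language model trained by OpenAI.
History: {}
Human: {}
Assistant:"""


def render_prompt(history, user_input, template=PROMPT):
    # The two {} slots are filled in order: history first, then the input.
    return template.format(history, user_input)


print(render_prompt("Human: hi\nAssistant: hello", "What can you do?"))
```

Dropping either placeholder would cut the model off from the conversation history or from the new message, so any custom prompt should preserve both.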