The ChatGPT Long Term Memory package is a powerful tool designed to let your projects handle large numbers of concurrent users. It achieves this by seamlessly integrating an extensive knowledge base and adaptive memory through cutting-edge technologies such as OpenAI's GPT models, llama vector indexes, and a Redis datastore. With this comprehensive feature set, you can build highly scalable applications that deliver contextually relevant, engaging conversations, enhancing the overall user experience and interaction.
Scalability: the package is designed to handle large numbers of concurrent users efficiently, making it suitable for applications with high user demand.
Extensive knowledge base: benefit from the knowledge-base integration, which lets you incorporate personalized data in the form of TXT files. This enables the system to give contextually relevant responses and engage in meaningful conversations.
Adaptive memory: the package leverages cutting-edge technologies such as GPT models, llama vector indexes, and a Redis datastore to provide an adaptive memory system. This improves performance and keeps interactions coherent, making conversations more natural and engaging.
Flexible integration with GPT models: the package allows seamless interaction with GPT models, giving you the option to chat with a GPT model using context memory. This lets you apply state-of-the-art language models to more advanced language-processing tasks.
Easy setup and configuration: the package installs with pip, and you can set up your environment quickly using your OpenAI API key. The configuration options are customizable, letting you tailor the package to your project's specific requirements.
Redis datastore: integration with a Redis datastore ensures efficient data storage and retrieval, contributing to the system's overall scalability and responsiveness.
OpenAI API integration: the package uses OpenAI's API to power its GPT-based features. This ensures access to the latest advances in language processing and GPT model capabilities.
Continuous learning and improvement: as a GPT-based system, the ChatGPT Long Term Memory package benefits from continuous learning and improvement, keeping pace with the latest developments in language understanding and generation.
Customizable conversation flow: the package offers customizable conversation flows that can include the user's chat history and knowledge-base data. This improves contextual understanding and the relevance of responses.
Easy-to-use interface: the provided code snippets and interfaces make it straightforward for developers to integrate the ChatGPT Long Term Memory package into their projects, minimizing the learning curve and streamlining development.
Together, these key features make the ChatGPT Long Term Memory package a valuable addition to your projects, letting you build interactive, dynamic conversational applications with powerful language-processing capabilities.
To use the ChatGPT Long Term Memory package in your project, follow the steps below:
pip install chatgpt_long_term_memory
export OPENAI_API_KEY=sk-******
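A missing or malformed key will otherwise surface only as an opaque API error later. A quick fail-fast check (a hypothetical helper for illustration, not part of the package):

```python
import os

def require_openai_key():
    """Return the OpenAI API key from the environment, or raise a clear error.

    Illustrative only: the package itself reads the key from the
    OPENAI_API_KEY environment variable set by the export command above.
    """
    key = os.environ.get("OPENAI_API_KEY", "")
    if not key.startswith("sk-"):
        raise RuntimeError(
            "OPENAI_API_KEY is not set. Run: export OPENAI_API_KEY=sk-..."
        )
    return key
```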
Pull the Redis Docker image and run it:
docker pull redis
docker network create --subnet=172.0.0.0/16 mynet123
docker run --name redis-db -d --net mynet123 --ip 172.0.0.22 -p 6379:6379 -p 8001:8001 redis:latest
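Before pointing the package at Redis, it can help to confirm the container is actually reachable at the address used in the docker run command above. A minimal sketch using only the standard library (it checks TCP reachability only, not the Redis protocol):

```python
import socket

def redis_reachable(host="172.0.0.22", port=6379, timeout=2.0):
    """Return True if something accepts TCP connections at host:port.

    The defaults match the --ip and -p values of the docker run command
    above; this does not authenticate or speak the Redis protocol.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If this returns False, double-check the custom Docker network and the container's IP before moving on to the configuration steps below.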
You can take advantage of index memory by setting knowledge_base=True to incorporate your personalized data in the form of TXT files located in the directory {your_root_path}/resources/data. Make sure the resources/data directory is addressed correctly so the stored data can be accessed seamlessly.
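As a sketch, the expected layout can be prepared like this (the file name and contents are placeholders; only the resources/data path relative to your root is what the package expects):

```python
from pathlib import Path

def prepare_knowledge_base(root_path):
    """Create {root_path}/resources/data and seed it with one sample TXT file.

    Mirrors the directory layout expected when knowledge_base=True; returns
    the names of the TXT files now present in the knowledge base.
    """
    data_dir = Path(root_path) / "resources" / "data"
    data_dir.mkdir(parents=True, exist_ok=True)
    (data_dir / "faq.txt").write_text(
        "Our support hours are 9am-5pm UTC, Monday to Friday.\n"
    )
    return sorted(p.name for p in data_dir.glob("*.txt"))
```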
# example/usage_index_memory.py
from utils import get_project_root
from chatgpt_long_term_memory.conversation import ChatGPTClient
from chatgpt_long_term_memory.llama_index_helpers import (IndexConfig,
                                                          RetrieversConfig)
from chatgpt_long_term_memory.memory import ChatMemoryConfig

# Get the project's root path
root_path = get_project_root()

"""
First:
Initialize the llama index config to create an index from the knowledge base and the user's chat history.
The root_path specifies the directory where the index will be stored.
The knowledge_base flag specifies whether to index the knowledge base.
The model_name specifies the name of the language model to use for indexing.
The temperature parameter controls the randomness of the output.
The context_window parameter specifies the size of the context window used for indexing.
The num_outputs parameter specifies the maximum number of output tokens to generate.
The max_chunk_overlap parameter specifies the maximum overlap between chunks.
The chunk_size_limit parameter specifies the maximum size of a chunk.
"""
doc_indexer_config = IndexConfig(
    root_path=f"{root_path}/example",
    knowledge_base=True,
    model_name="gpt-3.5-turbo",
    temperature=0,
    context_window=4096,
    num_outputs=700,
    max_chunk_overlap=0.5,
    chunk_size_limit=600
)

"""
Second:
Initialize the retrievers config to configure the retrievers class.
The `top_k` parameter specifies the number of top-k documents to retrieve for each query.
The `max_tokens` parameter specifies the maximum number of tokens to return for each document.
"""
retrievers_config = RetrieversConfig(
    top_k=7,
    max_tokens=1000
)

"""
Then:
Initialize the chat memory config to configure the chat memory class.
The `redis_host` parameter specifies the hostname of the Redis server.
The `redis_port` parameter specifies the port of the Redis server.
"""
chat_memory_config = ChatMemoryConfig(
    redis_host="172.0.0.22",
    redis_port=6379
)

"""
Create a `ChatGPTClient` object to start the conversation.
The `doc_indexer_config` parameter specifies the configuration for the document indexer.
The `retrievers_config` parameter specifies the configuration for the retrievers.
The `chat_memory_config` parameter specifies the configuration for the chat memory.
"""
chatgpt_client = ChatGPTClient(
    doc_indexer_config=doc_indexer_config,
    retrievers_config=retrievers_config,
    chat_memory_config=chat_memory_config
)

# Start a conversation with the user.
user_id = 1
while True:
    # Get the user's input.
    user_input = input("User Input: ")
    # If the user enters "q", break out of the loop.
    if user_input == "q":
        break
    # Get the response from the chatbot.
    index, response = chatgpt_client.converse(user_input, user_id=user_id)
    # Print the response to the user.
    print(response)
In this scenario you cannot use your own database, but you can still interact with the GPT models and use context memory.
# example/usage_context_memory.py
from utils import get_project_root
from chatgpt_long_term_memory.conversation import ChatbotClient
from chatgpt_long_term_memory.llama_index_helpers import (IndexConfig,
                                                          RetrieversConfig)
from chatgpt_long_term_memory.memory import ChatMemoryConfig
from chatgpt_long_term_memory.openai_engine import OpenAIChatConfig

# Get the project's root path
root_path = get_project_root()

"""
First:
Initialize the llama index config to create an index from the knowledge base and the user's chat history.
The root_path specifies the directory where the index will be stored.
The knowledge_base flag specifies whether to index the knowledge base.
The model_name specifies the name of the language model to use for indexing.
The temperature parameter controls the randomness of the output.
The context_window parameter specifies the size of the context window used for indexing.
The num_outputs parameter specifies the maximum number of output tokens to generate.
The max_chunk_overlap parameter specifies the maximum overlap between chunks.
The chunk_size_limit parameter specifies the maximum size of a chunk.
"""
doc_indexer_config = IndexConfig(
    root_path=f"{root_path}/example",
    knowledge_base=True,
    model_name="gpt-3.5-turbo",
    temperature=0,
    context_window=4096,
    num_outputs=700,
    max_chunk_overlap=0.5,
    chunk_size_limit=600
)

"""
Second:
Initialize the retrievers config to configure the retrievers class.
The `top_k` parameter specifies the number of top-k documents to retrieve for each query.
The `max_tokens` parameter specifies the maximum number of tokens to return for each document.
"""
retrievers_config = RetrieversConfig(
    top_k=7,
    max_tokens=1000
)

"""
Then:
Initialize the chat memory config to configure the chat memory class.
The `redis_host` parameter specifies the hostname of the Redis server.
The `redis_port` parameter specifies the port of the Redis server.
"""
chat_memory_config = ChatMemoryConfig(
    redis_host="172.0.0.22",
    redis_port=6379
)

# Method 2: chat with GPT models using context memory; in this scenario you can't use your own db
openai_chatbot_config = OpenAIChatConfig(
    model_name="gpt-4",
    max_tokens=1000,
    temperature=0,
    top_p=1,
    presence_penalty=0,
    frequency_penalty=0,
    # Keep in mind: if you change the prompt, account for the history and human-input placeholders
    prompt="""Assistant is a large language model trained by OpenAI.
Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.
Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.
Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.
History: {}
Human: {}
Assistant:"""
)

# Initialize the chatbot client.
chat_app = ChatbotClient(
    doc_indexer_config=doc_indexer_config,
    retrievers_config=retrievers_config,
    chat_memory_config=chat_memory_config,
    openai_chatbot_config=openai_chatbot_config
)

# Start a conversation with the user.
user_id = 2
while True:
    # Get the user's input.
    user_input = input("User Input: ")
    # If the user enters "q", break out of the loop.
    if user_input == "q":
        break
    # Get the response from the chatbot.
    index, response = chat_app.converse(user_input, user_id=user_id)
    # Print the response to the user.
    print(response)
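Note that the prompt template passed to OpenAIChatConfig ends with two positional placeholders, `History: {}` and `Human: {}`. Presumably the client fills the first with the stored chat history and the second with the current user turn via `str.format`; if you customize the prompt, keep both placeholders in that order. A minimal sketch of that assumption (template shortened for brevity):

```python
# Shortened stand-in for the full prompt shown above; the two {} placeholders
# are the part that matters for formatting.
PROMPT = """Assistant is a large language model trained by OpenAI.
History: {}
Human: {}
Assistant:"""

def render_prompt(history, user_input, template=PROMPT):
    """Fill the positional placeholders: chat history first, then the
    current human turn. Order matters if you customize the template."""
    return template.format(history, user_input)
```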