A NestJS library for building efficient, scalable, and fast solutions using the OpenAI Assistant API (chatbots).
Kickstart your AI Assistant development in under 15 minutes
Introducing the NestJS library. Whether you're building a virtual assistant or an interactive chatbot for engaging user experiences, our library empowers you to leverage cutting-edge AI capabilities with minimal effort.
The library provides ready-to-use API endpoints for managing your assistant and a WebSocket server for real-time communication between the client and the assistant. Install the library, paste in the config, and it's up and running.
Video - AI Assistant in 15 min
The repository contains the library, but it also provides additional features. You can simply clone the repository and use it right away to benefit from all of them:
In this section, you will learn how to integrate the AI Assistant library into your NestJS application. The following steps will guide you through the process of setting up the library and creating simple functionalities.
- Node.js (version `^20.0.0`)
- NestJS (version `^10.0.0`)
- Nest CLI (version `^10.0.0`)
- OpenAI Node library (version `^4.51.0`)

Open or create the NestJS application where you would like to integrate the AI Assistant. To create a new NestJS application, use the following command:
```shell
nest new project-name
```
Now you have to install the packages. Go to the next step.
Make sure you are in the root directory of your project.
Install the library and the `openai` package using npm:

```shell
npm i @boldare/openai-assistant openai --save
```
The library is installed but we have to configure it. Go to the next step.
Set up your environment variables: create a `.env` file in the root directory of the project and populate it with the necessary secrets. The assistant ID is optional and serves as a unique identifier for your assistant. When the environment variable is not set, the assistant will be created automatically. You can use the assistant ID to connect to an existing assistant; the ID can be found on the OpenAI platform after an assistant has been created.

Create the `.env` file in the root directory of your project:
```shell
touch .env
```
Add the following content to the `.env` file:
```shell
# OpenAI API Key
OPENAI_API_KEY=

# Assistant ID - leave it empty if you don't have an assistant to reuse
ASSISTANT_ID=
```
Please note that the `.env` file should not be committed to the repository. Add the `.env` file to the `.gitignore` file to prevent it from being committed.
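For example, from the project root:

```shell
# Append .env to .gitignore so the secrets are never committed
echo ".env" >> .gitignore
```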
This was the first step needed to run the library. The next step is to configure the assistant.
The library provides a way to configure the assistant with the `AssistantModule.forRoot` method, which takes a configuration object as an argument. Create a new configuration file like the sample configuration file (`chat.config.ts`) and fill it with the necessary configuration.
```typescript
// chat.config.ts file
import { AssistantConfigParams } from '@boldare/openai-assistant';
import { AssistantCreateParams } from 'openai/resources/beta';

// Default OpenAI configuration
export const assistantParams: AssistantCreateParams = {
  name: 'Your assistant name',
  instructions: `You are a chatbot assistant. Speak briefly and clearly.`,
  tools: [{ type: 'file_search' }],
  model: 'gpt-4-turbo',
  temperature: 0.05,
};

// Additional configuration for our assistant
export const assistantConfig: AssistantConfigParams = {
  id: process.env['ASSISTANT_ID'],
  params: assistantParams,
  filesDir: './apps/api/src/app/knowledge',
  toolResources: {
    file_search: {
      // Provide files if you use the file_search tool
      fileNames: ['example1.txt', 'example2.txt'],
    },
  },
};
```
More details about the configuration can be found in the wiki.
From now on, you can run your application and call the assistant.
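To wire the configuration into your application, register the module in your root module. This is a minimal sketch: the `./chat.config` import path is an assumption based on the file created above, so adjust it to your project layout.

```typescript
// app.module.ts — sketch; adjust the config import path to your project
import { Module } from '@nestjs/common';
import { AssistantModule } from '@boldare/openai-assistant';
import { assistantConfig } from './chat.config';

@Module({
  // Registers the assistant endpoints and WebSocket server
  imports: [AssistantModule.forRoot(assistantConfig)],
})
export class AppModule {}
```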
Function calling allows you to extend the assistant's capabilities with custom logic. If you are not going to use function calling, you can jump to Step 5: Testing.
Create a new service that extends the `AgentBase` class, fill in the `definition` property, and implement the `output` method.

- The `output` method is the main method that will be called when the function is invoked by the assistant.
- The `definition` property is an object that describes the function and its parameters so the assistant can understand how to call it.

For more information about function calling, you can refer to the OpenAI documentation.
The instructions for creating a function can be found in the wiki, while examples can be found in the agents directory.
If you've defined a function and the `output` method, you can now call it from the assistant simply by asking it to perform the action described in the function definition.
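As an illustrative sketch only — the exact `AgentBase` API may differ, so check the wiki and the agents directory for real examples — a hypothetical function-calling service could look like this. The agent name, parameters, and return shape are all assumptions:

```typescript
// weather.agent.ts — hypothetical example; verify the AgentBase API in the wiki
import { Injectable } from '@nestjs/common';
import { AgentBase } from '@boldare/openai-assistant';

@Injectable()
export class WeatherAgent extends AgentBase {
  // Describes the function so the assistant knows when and how to call it
  definition = {
    type: 'function' as const,
    function: {
      name: 'get_weather',
      description: 'Returns the current weather for a given city.',
      parameters: {
        type: 'object',
        properties: {
          city: { type: 'string', description: 'City name' },
        },
        required: ['city'],
      },
    },
  };

  // Called when the assistant invokes the function; the return value
  // is sent back to the assistant as the tool output
  async output(params: { city: string }): Promise<string> {
    return `The weather in ${params.city} is sunny.`;
  }
}
```

With such a service registered, asking the assistant "What's the weather in Warsaw?" should trigger the function.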
Run your application so that you can test the assistant:
```shell
# use this if you are using the repository:
npm run start:dev

# if you are using your own NestJS application, please check the npm scripts in the package.json file
# the default command for NestJS is:
npm run start
```
Then you can test the assistant.
1. Call the `/assistant/threads` endpoint with an empty object in the body to create a new thread.
2. Call the `/assistant/chat` endpoint with the following body:

   ```json
   {
     "threadId": "your-thread-id",
     "content": "Hello, how are you?"
   }
   ```

3. Call the `/assistant/chat` endpoint with the same body as in step 2.

Congrats! You have successfully integrated the AI Assistant library into your NestJS application. 🎉
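The testing steps above can be exercised with `curl`, for example. The host and port (`http://localhost:3000`) are assumptions — check your application's bootstrap code, and adjust the paths if you use a global API prefix:

```shell
# 1. Create a new thread (the response should contain the thread id)
curl -X POST http://localhost:3000/assistant/threads \
  -H "Content-Type: application/json" -d '{}'

# 2. Send a message to the assistant on that thread
curl -X POST http://localhost:3000/assistant/chat \
  -H "Content-Type: application/json" \
  -d '{"threadId": "your-thread-id", "content": "Hello, how are you?"}'
```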
The complete documentation on how to run the demo with all applications and libraries from the repository can be found in the wiki.
Boldare's engineers are here to help you. If you have any questions or need help with the implementation, feel free to book a call with one of our engineers.
Learn more about how Boldare can help you with AI development.
You can also ask questions in the GitHub Discussions section.
Would you like to see new features in the library?
`@boldare/openai-assistant` and this repository are MIT licensed.