Foundational LLM Chat is a Chainlit application built using AWS CDK and Converse API that allows you to interact with Amazon Bedrock language model. It provides a user-friendly interface to chat with Amazon Bedrock LLMs, upload images or docuements, and receive multimodal responses. The application is deployed on AWS using various services like Amazon Bedrock, Amazon Elastic Container Service, Amazon Cognito, Amazon CloudFront, and more.
The architecture diagram illustrates the AWS deployment of the Foundational LLM Chat application. Users interact with the application through a web interface secured by Amazon Cognito authentication. The application is globally distributed using Amazon CloudFront's CDN. Within a specific AWS region, the application is deployed across multiple Availability Zones using Amazon ECS for containerized deployment. The backend integrates with Amazon Bedrock to leverage various language models, enabling users to engage in multimodal conversations with the AI assistant.
The application is configured through a config.json
file in the ./bin
folder. Key configuration options include:
default_system_prompt
: This field contains the default system prompt that will be used by the chatbot if not specified below in the bedrock_models
field. It defines the initial instructions and behavior of the AI assistant. You can modify this value to change the assistant's persona or initial prompt.
max_characters_parameter
: This field specifies the maximum number of characters allowed in the input text. If set to the string "None"
, there is no character limit. You can change this value to limit the input text length if desired.
max_content_size_mb_parameter
: This field sets the maximum size of the input content (e.g., images) in megabytes. If set to the string "None"
, there is no size limit. You can modify this value to restrict the maximum size of input content.
default_aws_region
: This field specifies the AWS region where the application is deployed. You can set region also for each Amazon Bedrock model field.
prefix
: This field allows you to set a prefix for resource names created by the application. You can leave it empty or provide a custom prefix if desired.
This field contains a dictionary of Bedrock models that the chatbot can use. Each model is identified by a key (e.g., "Sonnet", "Haiku") and, the key is the name used in the Chainlit Chatprofile. Each model has the following properties at minimum:
id
: The ID or ARN of the Amazon Bedrock model. You can find the available model IDs in the AWS documentation.region
: an array of regions used to access the model. One if you did not enable cross-region inference, multiple for cross region inference.Optional configuration parameters include:
inference_profile
: Settings for cross-region inference
prefix
: Region prefix (e.g., "us")region
: Primary inference regionsystem_prompt
: Custom system promptcost
: Pricing information
input_1k_price
: The cost (in USD) for 1,000 input tokens. You can find the pricing information for different models on the AWS Bedrock pricing page.
output_1k_price
: The cost (in USD) for 1,000 output tokens.vision
[optional]: true or false. If vision capabilities are enabled for the model.document
[optional]: true or false. If document capabilities are enabled](https://docs.aws.amazon.com/bedrock/latest/userguide/conversation-inference.html) for the model.tool
[optional]: true or false. If tools capabilities are enabled](https://docs.aws.amazon.com/bedrock/latest/userguide/conversation-inference.html) for the model.default
[optional]: true or false. The default selected modelYou can modify the bedrock_models
section to include additional models or update the existing ones according to your requirements.
Here's an example of how to retrieve the model ID and pricing information:
To find the model ID or ARN, refer to the AWS Bedrock Model IDs documentation. For example, the ID for the Claude 3 Sonnet model is anthropic.claude-3-sonnet-20240229-v1:0
.
To find the pricing information, refer to the AWS Bedrock Pricing documentation. For the Claude 3 Sonnet model, the input and output pricing is as follows:
After making the desired changes to the config.json
file, you can proceed with the deployment as described in the README.
Here an example of the json:
{
"default_system_prompt": "you are an assistant",
"max_characters_parameter": "None",
"max_content_size_mb_parameter": "None",
"default_aws_region": "us-west-2",
"prefix": "",
"bedrock_models": {
"Claude Sonnet 3.5 New": {
"system_prompt": "you are an assistant",
"id": "us.anthropic.claude-3-5-sonnet-20241022-v2:0",
"inference_profile": {
"prefix": "us",
"region": "us-west-2"
},
"region": ["us-east-1", "us-west-2", "us-east-2"],
"cost": {
"input_1k_price": 0.003,
"output_1k_price": 0.015
},
"default": true,
"vision": true,
"document": true,
"tool": true
},
"Meta Llama 3.2 90B Vision Instruct": {
"id": "us.meta.llama3-2-90b-instruct-v1:0",
"inference_profile": {
"prefix": "us",
"region": "us-west-2"
},
"region": ["us-east-1", "us-west-2"],
"cost": {
"input_1k_price": 0.002,
"output_1k_price": 0.002
},
"vision": true,
"document": true,
"tool": true
},
"Mistral Large 2": {
"id": "mistral.mistral-large-2407-v1:0",
"cost": {
"input_1k_price": 0.003,
"output_1k_price": 0.009
},
"vision": false,
"document": true,
"tool": true
}
}
}
The application leverages Amazon Bedrock Prompt Manager for:
Currently the application supports 2 automatic variable substitutions:
%Y-%m-%d
of the day;%Y-%m-%d %H:%M:%S UTC
you can edit the extract_and_process_prompt
function inside chainlit_image/foundational-llm-chat_app/massages_utils.py
to add more direct substitutions.
The application uses Amazon Bedrock's Converse API, providing:
All system prompts are stored and managed through Amazon Bedrock Prompt Manager, offering:
When using system prompts to configure the behavior of language models, it's crucial to consider security implications and take measures to prevent potential misuse or vulnerabilities. One significant risk is prompt injection, where malicious inputs could manipulate the system prompt in unintended ways, potentially leading to harmful or biased outputs.
A good starting point is the following guide: Mitigating jailbreaks & prompt injections.
Prompt engineering refers to the practice of carefully crafting prompts or instructions to guide language models in generating the desired outputs. Effective prompt engineering is crucial for ensuring that language models understand and respond appropriately to the given context and task.
The following course is intended to provide you with a comprehensive step-by-step understanding of how to engineer optimal prompts within Claude, using Bedrock: Prompt Engineering with Anthropic Claude v3.
This guide covers various techniques and best practices for prompt engineering through a series of lessons and exercises, organized into three levels: Beginner, Intermediate, and Advanced.
By following the principles and techniques outlined in this guide, you can enhance the performance and reliability of your language model applications, ensuring that the AI assistant generates more relevant, coherent, and context-aware responses.
We recommend deploying with AWS Cloud9. If you'd like to use Cloud9 to deploy the solution, you will need the following before proceeding:
m5.large
as Instance type.Amazon Linux 2023
as the platform.If you have decided not to use AWS Cloud9, verify that your environment satisfies the following prerequisites:
Verify that your environment satisfies the following prerequisites:
You have:
An AWS account
An access policy that allows you to create resources contained in the AWS Sample
Both console and programmatic access
NodeJS lts installed
nvm
you can run the following before proceedingnvm install --lts
NPM lts installed
nvm
you can run the following before proceedingnvm install-latest-npm
AWS CLI installed and configured to use with your AWS account
AWS CDK CLI installed
Finch installed or Docker installed
Enable Amazon Bedrock models access in the deployment region: How to enable Amazon Bedrock model access.
Enable at least one of:
Clone the repository, open the folder, install dependencies:
git clone https://github.com/aws-samples/foundational-llm-chat.git
cd foundational-llm-chat
npm install
[Optional only if you did not not done it before in the deployment region] Bootstrap the CDK environment:
cdk bootstrap
Build and deploy the stack:
cdk deploy --region YOUR_DEPLOY_REGION
where YOUR_DEPLOY_REGION is the AWS region which you would like to deploy the application. For example: us-west-2
.
If you are using Finch instead of Docker please add CDK_DOCKER=finch
at the begin of the command like in the following example:
CDK_DOCKER=finch cdk deploy --region us-west-2
This will create all the necessary resources on AWS, including the ECS cluster, Cognito user pool, CloudFront distribution, and more.
After the deployment is complete, the CloudFront distribution URL will be displayed in the terminal. Use this URL to access the foundational-llm-chat application.
After the deployment you will get something similar to this:
✅ Foundational-LLM-ChatStack
Deployment time: 465.09s
Outputs:
FoundationalLlmChatStack.CognitoUserPool = ID
FoundationalLlmChatStack.NetworkingFoundationalLlmChatChatbotStack = CLOUDFRONT_DISTRIBUTION_ADDRESS
FoundationalLlmChatStack.ecsApplicationFoundationalLlmChatServiceLoadBalancer = ECS_LOAD_BALANCER
FoundationalLlmChatStack.ecsApplicationFoundationalLlmChatServiceServiceURL = ECS_LOAD_BALANCER_ADDRESS
Stack ARN: ARN
Total time: 469.14s
The Amazon CloudFront distribution is indicated in the following line: FoundationalLlmChatStack.NetworkingFoundationalLlmChat = CLOUDFRONT_DISTRIBUTION_ADDRESS
foundational-llm-chat-user-pool
. Open this user pool and create a user also verifing the email address;To avoid incurring unnecessary costs, it's recommended to clean up and delete the resources created by this sample when you're done using them. Follow these steps to delete the stack and associated resources:
Foundational-LLM-ChatStack
.This will delete the entire stack, including the ECS cluster, Cognito user pool, CloudFront distribution, and all other associated resources.
Alternatively, you can use the AWS CDK to delete the stack from the command line:
cdk destroy --region YOUR_DEPLOY_REGION
Replace YOUR_DEPLOY_REGION
with the AWS region where you deployed the application.
Note that deleting the stack will not automatically delete the CloudWatch logs and Amazon ECS task definition created during the deployment. You may want to manually delete these resources if you no longer need them to avoid incurring additional costs.
While the current architecture provides a good starting point for deploying the Foundational LLM Chat application, there are additional considerations for a production-ready deployment:
In the current architecture, communication between the CloudFront distribution and the Application Load Balancer (ALB) is over HTTP. For a production deployment, it is strongly recommended to use HTTPS (TLS/SSL) for secure communication:
Enabling HTTPS with TLS termination at both levels (ALB and ECS tasks) ensures end-to-end encryption and enhances the security of the application.
This AWS Sample is intended for demonstration and educational purposes only. It is not designed for production use without further modifications and hardening. Before deploying this application to a production environment, it is crucial to conduct thorough testing, security assessments, and optimizations based on your specific requirements and best practices.
System prompts for claude can be obtained directly from Anthropic documentation here: System Prompts
Contributions are welcome! Please follow the usual Git workflow:
This library is licensed under the MIT-0 License. See the LICENSE file.
You should consider doing your own independent assessment before using the content in this sample for production purposes. This may include (amongst other things) testing, securing, and optimizing the content provided in this sample, based on your specific quality control practices and standards.