This is a Lex-based chatbot that calculates the calories consumed during trips to different fast food restaurants. It is surfaced as a Facebook Messenger chatbot that can be accessed from the Facebook Page, or through the Messenger app on your phone.
This bot uses AWS Lex - a service that provides the intelligence to decipher user requests and trigger intents based on the data provided in the models. The intents then invoke Lambda functions that contain business logic specific to each intent.
Currently there are many different intents that the NLU process sorts requests into. Here are the "core functions" of the bot.
There are also intents that complement the core features.
Then there are intents that form the 'personality' of the bot. These were created based on real user usage, and prevent the generic error message from being used to respond.
Within each of the intents, sample utterances are provided that construct the potential sentences that a user may provide. The value of the slot (e.g., Large Fry) gets passed to the Lambda function as a unique attribute.
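For illustration, sample utterances combine literal text with slot placeholders, along the lines of the examples below. These particular utterances are illustrative, not the exact ones configured in the bot, but the slot names (Food, Extra, Drink) are ones the bot uses.

```
How many calories in a {Food}
How many calories in a {Food} and a {Drink}
I had a {Food} with {Extra} and a {Drink}
```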
You can get the summary information from the AWS CLI by executing the following command.
aws lex-models get-bot --name FastFoodChecker --version-or-alias PROD
It is the combination of the sample utterances and slots that determines which intent the NLU models will invoke. These are maintained in Lex, and are used for training the models.
Currently, here are the custom slots that are used by the intents.
An item does not need to be specified in the slot for the NLU to place a value into it. However, if the data is sparse, it may degrade how the NLU interprets the user requests.
Usability of a chatbot requires natural interaction to occur with a user. One key concept is around how to incorporate multiple slots into a single intent. For example, a user could ask "How many calories in a Big Mac, Fries, and a Coke?" That is three different items that each need to be parsed out. Within this chatbot, the main processing has many different slots that map into intents. For example, here are the slots that map into the GetCalories intent.
There are a couple of items to note in this.
In the example request above, the NLU models would parse the data from the utterance into three different slots (Food, Extra, and Drink).
The slot order doesn't matter to the parsing, but it does drive what the next response will be (slot 1 - Which Restaurant are you at?).
There are two slots that aren't required in this intent - Ketchup and PacketsKetchup. This optional information is requested only if fries are ordered as a side item, and it is driven by the code in the Lambda function that the validation code hook invokes, as sketched below.
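As a rough sketch of how that kind of validation check might look inside the validation Lambda - the function name, the check on the Extra slot, and the prompt wording are assumptions for illustration, not the exact code in lambda.js:

```
// minimal sketch of a DialogCodeHook validation step (names and wording are illustrative)
function validateMeal(intentRequest, callback) {
    const sessionAttributes = intentRequest.sessionAttributes || {};
    const slots = intentRequest.currentIntent.slots;

    // if fries were ordered as the extra but ketchup hasn't been discussed,
    // elicit the optional Ketchup slot before proceeding
    if (slots.Extra && slots.Extra.toLowerCase().includes('fries') && !slots.Ketchup) {
        return callback({
            sessionAttributes,
            dialogAction: {
                type: 'ElicitSlot',
                intentName: intentRequest.currentIntent.name,
                slots: slots,
                slotToElicit: 'Ketchup',
                message: { contentType: 'PlainText', content: 'Do you add ketchup to your fries?' }
            }
        });
    }

    // otherwise hand control back to Lex to collect any remaining required slots
    return callback({
        sessionAttributes,
        dialogAction: { type: 'Delegate', slots: slots }
    });
}
```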
All of the logic in formulating responses to different intents is processed in a series of lambda functions. Which lambda function to invoke is managed within Lex, and set at the intent level. This enables modularity to be built within the application, keeping the functions lightweight.
There are two different spots within Lex that can invoke a Lambda function - basic validation and fulfillment - and the attribute that identifies which one triggered the call is named invocationSource. It has two potential values - DialogCodeHook and FulfillmentCodeHook. Here is where these Lambda functions are specified in the Lex bot.
The first dropdown is the Validation hook, which calls the Lambda function every time the bot is invoked; the attribute it passes is DialogCodeHook. The second dropdown is the Fulfillment hook, which is only called once the mandatory slots have been completed and the validation from the initial call is done. This allows the functions to be different, enabling better scalability in building the bot.
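As a sketch of how a function can confirm which hook invoked it - this is illustrative, not the exact entry point in the repo:

```
// sketch: a validation function only expects to run as a DialogCodeHook
exports.handler = (event, context, callback) => {
    console.log('received request: ' + JSON.stringify(event));

    if (event.invocationSource === 'DialogCodeHook') {
        // per-turn validation: hand back to Lex to keep collecting slots
        // (real validation logic would inspect event.currentIntent.slots here)
        callback(null, {
            sessionAttributes: event.sessionAttributes || {},
            dialogAction: { type: 'Delegate', slots: event.currentIntent.slots }
        });
    } else {
        // fulfillment is routed to a separate function in this design
        console.log('unexpected invocationSource: ' + event.invocationSource);
        callback(null, {});
    }
};
```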
Here is an overview of each function currently written.
lambda.js - the main function that handles the basic validation for queries, sourced only in DialogCodeHook mode.
calculate.js - this function calculates the response for the actual calories in a meal, and is sourced by a FulfillmentCodeHook.
pizza.js - handles intents around calculating calories in a pizza, including the intent - WhatPizzaTypes.
misc.js - handles simple intents like help, the introduction, and more details around a meal.
chinese.js - handles intents around Chinese food, coupling the different slots together to form a meal.
The core functionality of this bot is to be able to answer queries about how many calories are in different meals. While the slots that Lex uses are helpful in training the NLU models, they can't serve as lookup files. That's where the JSON objects stored in the /src/data/ folder come in.
Here is a sample of the format.
[
  {
    "restaurant":"Chipotle",
    "foodItems":[
      {"foodName":"Chicken Burrito", "foodType":"Burrito", "protein":"chicken", "calories":975},
      {"foodName":"Steak Burrito", "foodType":"Burrito", "protein":"steak", "calories":945},
      {"foodName":"Carnitas Burrito", "foodType":"Burrito", "protein":"carnitas", "calories":1005},
      ...
    ]
  },
  ...
]
The lambda functions refer to these objects to respond to different queries, and to calculate calorie consumption for the user.
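As a sketch of what that lookup might look like against the format above - the helper name and relative path are assumptions, not code taken from the repo:

```
// sketch: look up the calorie count for a named food item at a restaurant
const foods = require('./data/foods.json');

function lookupCalories(restaurantName, foodName) {
    // find the restaurant entry, then the matching food item within it
    const restaurant = foods.find(r => r.restaurant === restaurantName);
    if (!restaurant) {
        return 0;
    }
    const item = restaurant.foodItems.find(f =>
        f.foodName.toLowerCase() === foodName.toLowerCase());
    return item ? item.calories : 0;
}

console.log(lookupCalories('Chipotle', 'Chicken Burrito')); // 975 per the sample above
```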
Each food item may be duplicated to cover the different spellings and phrases used to retrieve it. For example:
{"foodName":"Fries", "calories":340},
{"foodName":"Fry", "calories":340},
{"foodName":"Frys", "calories":340},
{"foodName":"French Fries", "calories":340},
{"foodName":"French Fry", "calories":340},
{"foodName":"Medium Fries", "calories":340},
{"foodName":"Medium Fry", "calories":340},
There are also lookup tables for sauces, dressings, and individual item adjustments. For example:
[
{
"dressingName":"Ranch",
"calories":200,
"carbs":11,
"restaurantNames":["McDonalds"]
},
{
"dressingName":"French",
"calories":300,
"carbs":22,
"restaurantNames":["McDonalds"]
},
Given that the NLU models do not correct spelling provided by the user, it's up to the Lambda functions to handle this part of the logic.
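A minimal sketch of that kind of cleanup, assuming a simple lowercase-and-trim comparison (the helper name is hypothetical):

```
// sketch: normalize the user-provided name before comparing against the lookup data
function matchFoodItem(userEntry, foodItems) {
    const normalized = userEntry.toLowerCase().trim();
    return foodItems.find(item => item.foodName.toLowerCase() === normalized);
}

// 'french fries', 'French Fries', and ' FRENCH FRIES ' all resolve to the same
// entry because the duplicated spellings in foods.json cover the variations
const match = matchFoodItem(' FRENCH FRIES ', [
    { foodName: 'Fries', calories: 340 },
    { foodName: 'French Fries', calories: 340 }
]);
console.log(match ? match.calories : 'not found'); // 340
```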
Managing large custom slots can be difficult, particularly if the data is dynamic. The main food lookup has several hundred unique values in it, and it keeps growing based on user usage. The process for creating this slot has been automated, and the data for the custom slot is taken from the foods.json data object. This is done through the AWS CLI, which can load these directly from the command line. All of the files are contained in the [slots](https://github.com/terrenjpeterson/caloriecounter/tree/master/src/slots) directory for reference. Here are the steps used to create it.
The syntax looks like this.
# foods.json is the data object that will be passed to the lambda function
request=$(<foods.json)
# invoke the lambda function from the command line and write the output to output.json
aws lambda invoke --function-name convertFoodsObjForSlot --payload "$request" output.json
data=$(<output.json)
# invoke lex to create a new version of the FoodEntreeNames custom slot using the data from output.json
aws lex-models put-slot-type --name FoodEntreeNames --checksum <enter latest checksum here> --enumeration-values "$data" >> sysout.txt
Also, the checksum value is from the prior deployment of the custom slot. You can find the current checksum for a slot with the get-slot-type command.
# find the latest information about a custom slot
aws lex-models get-slot-type --name FoodOptions --slot-type-version '$LATEST'
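The convertFoodsObjForSlot function referenced in the script above is not shown here, but conceptually it flattens foods.json into the enumeration-value structure that put-slot-type expects. A sketch of that conversion might look like this (the actual function may differ):

```
// sketch: flatten the foods.json structure into Lex enumeration values
exports.handler = (event, context, callback) => {
    const uniqueNames = new Set();

    // the payload is the foods.json array - collect every distinct food name
    event.forEach(restaurant => {
        restaurant.foodItems.forEach(item => uniqueNames.add(item.foodName));
    });

    // put-slot-type expects entries shaped like { "value": "Chicken Burrito" }
    const enumerationValues = Array.from(uniqueNames).map(name => ({ value: name }));

    callback(null, enumerationValues);
};
```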
The key to effective long-running conversations between a user and a bot is managing the context of the conversation. For example, a dialog could go on for several minutes and invoke many intents.
Part of facilitating this is designing a flow of the conversation. Error messages should not be too abrupt, and should lead the user to an alternative query. The intents should also pass data between one another. This can be accomplished by saving the session data when completing an intent. This allows the next intent to retrieve the information and not require the user to repeat it with each request.
In the example above, the conversation begins with the user indicating which restaurant they are eating at. This gets persisted in the session by the FoodTypeOptions intent. The dialog shifts to details of the meal, but the restaurant name gets saved. Also, the initial response on the calorie count is brief, but offers a more detailed explanation if the user says 'more details'. Once again the data gets stored in the session data, and is passed back as part of the Lex framework. Here is an example of one of the objects.
{
"messageVersion": "1.0",
"invocationSource": "FulfillmentCodeHook",
"userId": "1712299768809980",
"sessionAttributes": {
"restaurantName": "Burger King",
"foodName": "Whopper",
"foodCalories": "660",
"extraName": "Onion Rings",
"extraCalories": "410",
"drinkCalories": "310",
"drinkName": "32 oz. Large Coke",
"totalCalories": "1380"
},
"bot": {
"name": "FastFoodChecker",
"alias": "PROD",
"version": "42"
},
"outputDialogMode": "Text",
"currentIntent": {
"name": "DailyIntakeAnalysis",
"slots": {},
"slotDetails": {},
"confirmationStatus": "None"
},
"inputTranscript": "Analyze my meal"
}
The lambda functions in this bot are completely stateless, so any data from prior invocations must come through the request object.
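In practice, that means a function reads whatever it needs from sessionAttributes on the incoming event and passes the attributes back in its response, roughly like the sketch below. The function name is illustrative (tied to the DailyIntakeAnalysis intent in the example above), not the actual repo code.

```
// sketch: pull prior context from the request and carry it forward in the response
function analyzeDailyIntake(intentRequest, callback) {
    const sessionAttributes = intentRequest.sessionAttributes || {};

    // these attributes were saved to the session by earlier intents
    const restaurantName = sessionAttributes.restaurantName;
    const totalCalories = Number(sessionAttributes.totalCalories || 0);

    const botResponse = 'Your meal at ' + restaurantName + ' came to ' +
        totalCalories + ' calories.';

    // returning sessionAttributes keeps the context available for the next intent
    callback({
        sessionAttributes,
        dialogAction: {
            type: 'Close',
            fulfillmentState: 'Fulfilled',
            message: { contentType: 'PlainText', content: botResponse }
        }
    });
}
```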
One of the features in the major chatbot user interfaces (Messenger, Slack, etc.) is buttons. These reduce the effort required from the user by presenting a series of options like so.
Each messaging platform has its own implementation of this pattern, and here is what Messenger uses. Lex handles the translation to get the buttons into the correct format; within Lex, the responseCard attribute needs to be provided with the specifics of the button detail.
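For reference, a response that includes buttons attaches a responseCard alongside the message. A sketch of the shape is below; the prompt wording and restaurant choices are illustrative, not the exact cards used by the bot.

```
// sketch: a Close response with a responseCard carrying three buttons
function closeWithButtons(sessionAttributes, messageText) {
    return {
        sessionAttributes,
        dialogAction: {
            type: 'Close',
            fulfillmentState: 'Fulfilled',
            message: { contentType: 'PlainText', content: messageText },
            responseCard: {
                version: 1,
                contentType: 'application/vnd.amazonaws.card.generic',
                genericAttachments: [{
                    title: 'Which restaurant are you at?',
                    subTitle: 'Pick one to get started',
                    buttons: [
                        { text: 'McDonalds', value: 'McDonalds' },
                        { text: 'Burger King', value: 'Burger King' },
                        { text: 'Chipotle', value: 'Chipotle' }
                    ]
                }]
            }
        }
    };
}
```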
Modifying Lex is done completely through the console. The Lambda functions that serve the business logic are hosted in AWS Lambda, and are deployed from an EC2 host.
The full deployment script is /src/build.sh, but a quick overview can be found in the following instructions.
# this creates the build package as a zip file containing the code and relevant data objects
zip -r foodbot.zip lambda.js data/restaurants.json data/foods.json data/drinks.json
# this CLI command copies the build package to an s3 bucket for staging
aws s3 cp foodbot.zip s3://fastfoodchatbot/binaries/
# this CLI command takes the package from the s3 bucket, and overlays the lambda function 'myCalorieCounterGreen'
aws lambda update-function-code --function-name myCalorieCounterGreen --s3-bucket fastfoodchatbot --s3-key binaries/foodbot.zip
# this CLI command invokes the lambda function with the data object read into request, and writes out a response to the testOutput data object.
aws lambda invoke --function-name myCalorieCalculatorGreen --payload "$request" testOutput.json
This process is repeated for each of the lambda functions that are called by Lex. This includes having at least one test condition for each lambda function to ensure that the deployment was done correctly.
One of the topics in bot design is giving the bot a personality. Something to consider when designing the intents is all of the possible questions a user may ask. This should include off-topic questions, such as 'what is your name', or emotional responses like 'oh-no' or 'you suck'. These are easy to code - usually just a simple request-response with no slots involved - and they do tend to make the dialogs more natural.
For an example, here is a brief response coded in the misc.js function that responds if someone asks what the bot's name is. In the models, an utterance of 'what is your name' resolves to this intent.
if (intentName === 'MyName') {
console.log("user requested bot name");
return getBotName(intentRequest, callback);
}
...
function getBotName(intentRequest, callback) {
const sessionAttributes = intentRequest.sessionAttributes || {};
var botResponse = "My name is Chuck. I'm a chatbot that helps people sort out " +
"fast food options. Talking about food all day makes me hungry!!!";
callback(close(sessionAttributes, 'Fulfilled',
{ contentType: 'PlainText', content: botResponse }));
}
As part of the initial effort, I attempted to get this chatbot published to the Slack store, which required building a website for public support of the app. It's a work in progress and is called caloriecountbot.com. It's hosted on S3, and the source is located in the /website folder.