Framework for developing chat bot applications.
Make sure you have Node.js and npm installed. As of now, this module has been tested against Node.js 0.12 within the Travis CI pipeline.
Simply run the following npm command to install:
npm install --save talkify
Require the main module, types and dependencies. The following code loads everything that you need from the module.
// Core dependency
const talkify = require('talkify');
const Bot = talkify.Bot;
// Types dependencies
const BotTypes = talkify.BotTypes;
const Message = BotTypes.Message;
const SingleLineMessage = BotTypes.SingleLineMessage;
const MultiLineMessage = BotTypes.MultiLineMessage;
// Skills dependencies
const Skill = BotTypes.Skill;
// Training dependencies
const TrainingDocument = BotTypes.TrainingDocument;
Once the dependencies have been loaded, you can initialise the bot core.
const bot = new Bot();
The Bot() constructor also accepts parameters in the form of a configuration object. Here you can pass in configuration switch values or alternate implementations for things like the ContextStore and Classifier. We'll cover that later in the Configuration Options section.
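For example, passing alternate implementations might look something like the following sketch, where myContextStore and myClassifier are placeholders for the alternate implementations covered later in this document:

// A minimal sketch: passing a configuration object to the Bot constructor.
// myContextStore and myClassifier are placeholders for the alternate
// implementations described in the Configuration Options section.
const configuredBot = new Bot({
    contextStore: myContextStore,   // an alternate ContextStore implementation
    classifier: myClassifier        // an alternate Classifier implementation
});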
Once the bot has been initialised, the first thing you should do is train it. You can train it one document at a time with the train method, or with several documents in one go using the trainAll method:
bot.trainAll([
    new TrainingDocument('how_are_you', 'how are you'),
    new TrainingDocument('how_are_you', 'how are you going'),
    new TrainingDocument('how_are_you', 'how is it going'),
    new TrainingDocument('help', 'how can you help'),
    new TrainingDocument('help', 'i need some help'),
    new TrainingDocument('help', 'how could you assist me')
], function() {});
The code above trains the bot to recognise the topic how_are_you when the text looks like 'how are you', 'how are you going' or 'how is it going', and to recognise the topic help when the text looks like 'how can you help', 'i need some help' or 'how could you assist me'. This is how you would train the bot.
The trainAll method accepts an array of TrainingDocument objects as well as a callback function. The TrainingDocument constructor accepts two parameters: topicName and trainingData. The topicName parameter is the name of the topic you want to train the trainingData for, and trainingData is the sentence that you are feeding the bot as its training data. The topicName will later on map to the actual skills the bot can respond to.
The callback for the trainAll method is a function that the bot calls when the training is complete. If you have a lot of training data, you should implement this callback properly. In this example, since there is not much training data, we've passed in an empty function.
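If you prefer to train one document at a time, a sketch along the following lines should work; note that the (topic, text, callback) signature assumed here is inferred from the chaining example shown later in this document:

// A hedged sketch of training one document at a time with train(). The
// (topic, text, callback) signature is an assumption inferred from the
// chaining example new Bot().train(topic, sentence) shown further down.
bot.train('how_are_you', 'how are things', function() {});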
Needless to say, the bot gets better with more training data. In this tutorial we are using the default classifier, which currently is the LogisticRegressionClassifier from the talkify-natural-classifier library. This classifier typically needs a bit more training data to start with, but is more accurate than the others in most conditions.
Once you have trained the bot for some topics, you need to add some skills. Skills are actions that the bot will execute when it recognises a topic, so topics and skills map 1:1.
To add a skill, you need to create it first. A skill requires three things: a name that is unique to the bot (the name is used to relate skills later on within the context), a topic that it maps to, and a function that the bot will call in order to execute the skill. This function takes four parameters, namely: context, request, response and next. The context parameter is used to store any useful contextual information from that skill. The request parameter contains information about the request, and the response parameter contains information about the response. The next parameter is a function that you call to let the bot know that you are done processing. Here's what a skill looks like:
var howAction = function(context, request, response, next) {
    response.message = new SingleLineMessage('You asked: "' + request.message.content + '". I\'m doing well. Thanks for asking.');
    next();
};

var helpAction = function(context, request, response, next) {
    response.message = new SingleLineMessage('You asked: "' + request.message.content + '". I can tell you how I\'m doing if you ask nicely.');
    next();
};
var howSkill = new Skill('how_skill', 'how_are_you', howAction);
var helpSkill = new Skill('help_skill', 'help', helpAction);
Note: The name of a skill can be undefined. However, please be aware that doing so means the bot will execute that skill whenever its confidence level is 0 for responding to a given query.
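For instance, a hedged sketch of such a fallback skill might look like the following; the 'fallback' topic name used here is purely illustrative:

// A hedged sketch of a fallback skill. With an undefined name, the bot
// executes this skill when its confidence is 0 for a given query.
// The 'fallback' topic name is purely illustrative.
var fallbackAction = function(context, request, response, next) {
    response.message = new SingleLineMessage('Sorry, I did not understand that. Try asking for help.');
    next();
};
var fallbackSkill = new Skill(undefined, 'fallback', fallbackAction);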
Once you have defined some skills, you need to add them to the bot. Add the skill to the bot like so:
bot.addSkill(howSkill);
bot.addSkill(helpSkill);
Once added, you can ask the bot to resolve something. This is where you query the bot with a sentence and it responds with a message asynchronously. The resolve function takes three parameters: contextId, text and callback. The contextId helps the bot resolve context from any previous conversation. The text is the question or piece of natural language that the bot needs to interpret and respond to. Lastly, the callback is the function that the bot will call with err and messages parameters to indicate an error (if any) and its reply messages.
var resolved = function(err, messages) {
    if(err) return console.error(err);
    return console.log(messages);
};
bot.resolve(123, 'Assistance required', resolved);
Run it as a simple Node.js file and it should print the following to the console.
[ { type: 'SingleLine',
content: 'You asked: "Assistance required". I can tell you how I'm doing if you ask nicely.' } ]
Try changing the bot.resolve call to the following and notice the change in the response.
bot.resolve(456, 'How\'s it going?', resolved);
Let's ask two things at once. Change the bot.resolve call again to:
bot.resolve(456, 'How\'s it going? Assistance required please.', resolved);
When you run your code, you should get two messages back:
[ { type: 'SingleLine',
content: 'You asked: "How's it going? Assistance required please.". I'm doing well. Thanks for asking.' },
{ type: 'SingleLine',
content: 'You asked: "How's it going? Assistance required please.". I can tell you how I'm doing if you ask nicely.' } ]
Currently the train, addSkill and resolve methods are chainable. That means you can create a Bot object and cascade the method calls like so:
new Bot().train(topic, sentence).addSkill(skill).resolve(....)
The bot core also accepts an alternate implementation for the built-in context store. Please see Context management for more details.
You can also supply your own version of the classifier to the bot. This option was primarily added to make testing easier; however, it can still be used in production if you have a better version of the built-in classifier.
The built-in classifier comes from the talkify-natural-classifier library, which provides two implementations:
LogisticRegressionClassifier
BayesClassifier
The LogisticRegressionClassifier is the default classifier. If you prefer to use the BayesClassifier from talkify-natural-classifier, you can do the following:
var BayesClassifier = require('talkify-natural-classifier').BayesClassifier;
var bot = new Bot({classifier: new BayesClassifier()});
If you prefer to use IBM Watson's Natural Language Classifier, you should use the talkify-watson-classifier library instead. Please see the guide on the GitHub repository page for more details on how to use that classifier.
If you think yours work better, give me a shout! I'd be delighted to know and possibly work towards implementing it within the core module.
To provide your own implementation of the Skill Resolution Strategy, simply pass the function definition in the configuration object as follows:
var mySkillResolutionStrategy = function() {
    this.addSkill = function (skill, options) { ... };
    this.getSkills = function () { ... };
    this.resolve = function (err, resolutionContext, callback) {
        ...
    };

    return this;
};

var bot = new Bot({
    skillResolutionStrategy: mySkillResolutionStrategy
});
The bot core will create an instance of your skill resolution strategy object on init and will use that single instance across all resolutions.
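As a rough illustration only, a strategy that maps topics to skills by topic name might look like the sketch below. The resolutionContext.topics field, the skill.topic property and the callback(err, skills) contract used here are assumptions; consult the built-in strategy in the library source for the exact contract.

// A rough, hedged sketch of a skill resolution strategy. The fields read
// from resolutionContext and skill, and the callback contract, are
// assumptions for illustration; check the library source for the real spec.
var topicNameSkillResolutionStrategy = function() {
    var skills = [];

    this.addSkill = function (skill, options) {
        skills.push(skill);
    };

    this.getSkills = function () {
        return skills;
    };

    this.resolve = function (err, resolutionContext, callback) {
        if (err) return callback(err);

        // Assumed shape: resolutionContext.topics is an array of {name, confidence}.
        var topicNames = (resolutionContext.topics || []).map(function(topic) {
            return topic.name;
        });

        // Assumed shape: each skill exposes the topic it was constructed with.
        var matched = skills.filter(function(skill) {
            return topicNames.indexOf(skill.topic) !== -1;
        });

        callback(undefined, matched);
    };

    return this;
};

var bot = new Bot({skillResolutionStrategy: topicNameSkillResolutionStrategy});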
To provide your own implementation of the Topic Resolution Strategy, simply pass the function definition in the configuration object as follows:
var myTopicResolutionStrategy = function() {
    this.collect = function (classification, classificationContext, callback) { callback(); };
    this.resolve = function (callback) { callback(undefined, [{name: "topic_name", confidence: 0.5}]); };

    return this;
};

var bot = new Bot({
    topicResolutionStrategy: myTopicResolutionStrategy
});
The bot core will create a new instance of your topic resolution strategy for every call it receives into the resolve method.
By default, the bot core uses its built-in version of the ContextStore. If you look into lib/ContextStore.js, you'll find that it is a very simple implementation where the context is stored in an in-memory map, with the contextId being the key and the context object being the value. Of course, when you come to deploy this, the built-in context store will be very limiting.
Extending the context store is very easy. Within the config, you can provide your own implementation for the ContextStore object. The following code provides a very trivial implementation that simply logs the values to the console.
var myContextStore = {
    put: function(id, context, callback) {
        console.info('put');
        console.info(id);
        console.info(context);
    },
    get: function(id, callback) {
        console.info('get');
        console.info(id);
    },
    remove: function(id, callback) {
        console.info('remove');
        console.info(id);
    }
};
var bot = new Bot({contextStore: myContextStore});
The current spec for ContextStore requires three functions to be implemented: put, get and remove. As long as these methods are provided, the bot does not care where the value for the contextStore field in config comes from.
If you were to run that code with some query resolves, you would find that the remove function never gets called. This is a work in progress, as currently there is no limit on how long a context must be remembered.
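For something slightly more useful than the logging example above, a minimal in-memory store along the following lines should do. The (err, value) callback convention used here is an assumption based on typical Node.js style; check lib/ContextStore.js for the exact contract the bot core expects.

// A hedged sketch of a minimal in-memory ContextStore that actually invokes
// its callbacks. The (err, value) callback convention is an assumption;
// check lib/ContextStore.js for the exact contract the bot core expects.
var memoryContextStore = (function() {
    var contexts = {};

    return {
        put: function(id, context, callback) {
            contexts[id] = context;
            callback(undefined, context);
        },
        get: function(id, callback) {
            callback(undefined, contexts[id]);
        },
        remove: function(id, callback) {
            delete contexts[id];
            callback(undefined);
        }
    };
})();

var bot = new Bot({contextStore: memoryContextStore});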
As mentioned before, the default classifier that the bot uses is from the talkify-natural-classifier library. You are free to write your own classifier and use it in your application. To do this, you need to extend the classifier interface defined in the talkify-classifier library.
Once you have successfully extended that implementation, you can supply your classifier to the bot like so:
var myClassifier = new MyAwesomeClassifier();
var bot = new Bot({ classifier: myClassifier });
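As a very rough sketch, the MyAwesomeClassifier used above might look something like the following. The getClassifications method name, its (input, callback) signature and the {label, value} classification shape are assumptions here; refer to the talkify-classifier documentation for the actual interface you need to implement.

// A rough, hedged sketch of what MyAwesomeClassifier could contain. The
// method name, signature and classification shape are assumptions; consult
// the talkify-classifier documentation for the actual interface.
function MyAwesomeClassifier() {}

MyAwesomeClassifier.prototype.getClassifications = function(input, callback) {
    // Naive keyword matching, purely for illustration.
    var classifications = [];
    if (/help|assist/i.test(input)) {
        classifications.push({label: 'help', value: 0.9});
    }
    callback(undefined, classifications);
};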
I'd love to see your implementation of the talkify classifier. If you have extended the interface and successfully implemented your classifier, give me a shout! I'd be delighted to hear about your experience using this library.
Since version 2.1.0, you can specify multiple classifiers for your bot. See docs on classifier for more info.
A skill resolution strategy is a component that is able to output a skill, given a resolution context. A resolution context is an object composed of a list of topics and the original sentence: the essential ingredients needed to resolve a skill.
+----------+
|  Topics  | ----+
+----------+     |      +--------------+
                 +----> |    Skill     |      +---------+
                        |  Resolution  | ---> |  Skill  |
                 +----> |   Strategy   |      +---------+
+----------+     |      +--------------+
| Sentence | ----+
+----------+
A topic resolution strategy allows you to plug in custom logic to resolve a topic, given classification data. When plugging in a custom topic resolution strategy, the bot core expects the function definition to be passed in, rather than the result of executing the function. This is because the topic resolution strategy object is constructed using new for every call to the resolve method.
The process of topic resolution works in two parts:
The first stage of the topic resolution process is the collection phase. Here, the bot core sends the classification for every classification set returned from the classifier, along with any required context. The bot core also passes in a callback function, which must be invoked to let the bot core know that the collection was successful.
+------------------+
|  Classification  | ----+
+------------------+     |      +-----------+
                         +----> |  Collect  |
+-----------+            |      +-----------+
|  Context  | -----------+
+-----------+
The second stage is the resolution phase. Here, the bot core expects a list of topics to be returned. The resolve method is called only after all collections have finished executing.
+-----------+       +----------+
|  Resolve  | ----> |  Topics  |
+-----------+       +----------+
A topic resolution strategy object must expose two methods:
The collect method is called every time a classifier returns classification(s). It is called with the signature classification, context, callback. The classification object contains the classification returned from the classifier (or set of classifiers if using quorums). The context object contains the request context. The last parameter, callback, is the function that must be invoked to let the bot core know that you have finished collecting the passed-in parameters.
The resolve method is called once, after the bot core is done calling collect on your topic resolution strategy. This is the final call from the bot core and is meant to collect the topic resolution information. The resolve method is called with a callback parameter. This is the callback function that must be called with two parameters: error and topics. The error parameter must be defined as an error object in case an error occurred when resolving the topic; in any other case, it must be undefined. The second parameter, topics, must be an array of topics resolved by the resolution strategy.
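Putting the two methods together, a strategy that keeps the single highest-confidence classification it collects might look like the sketch below. The classification.label and classification.value field names are assumptions borrowed from common classifier conventions; adjust them to whatever shape your classifier actually returns.

// A hedged sketch of a topic resolution strategy that keeps the single
// highest-confidence classification it collects. The classification.label
// and classification.value field names are assumptions.
var highestConfidenceStrategy = function() {
    var best;

    this.collect = function (classification, classificationContext, callback) {
        if (!best || classification.value > best.value) {
            best = classification;
        }
        callback();
    };

    this.resolve = function (callback) {
        if (!best) return callback(undefined, []);
        callback(undefined, [{name: best.label, confidence: best.value}]);
    };

    return this;
};

var bot = new Bot({topicResolutionStrategy: highestConfidenceStrategy});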
Please see the contributing guide for more details.