
Create bots with TypeScript

Whenever I'm doing demos or other presentations, I work very hard to keep things as real-world as possible. While there will certainly be little "cheats", such as keeping all items in a single file, or having snippets already available, to help make the demo easier to digest, there's one big lie I wind up telling in every bot presentation I do: I do all my demos in JavaScript, even though I create all my bots using TypeScript.

Why the lie?

Whenever I’m talking about how to build a chat bot using Bot Framework, I’m trying to demonstrate how you can do so using skills you already have. If you know JavaScript, and have even a cursory understanding of Node.js, you can get up and running relatively quickly with your first bot. The last thing in the world I want to do is throw another hurdle in front of attendees, like learning another programming language.

Even though I don't want to put another hurdle in front of anyone, it turns out TypeScript isn't much of a hurdle after all.

What is TypeScript

If you're not already familiar with TypeScript, it's a language headed by Anders Hejlsberg, whose claims to fame include Delphi and C#. TypeScript is designed to help overcome various shortcomings of JavaScript, and to do so in a fashion that lets you use cutting-edge features, such as async/await, which aren't yet available in every JavaScript environment.

TypeScript is designed to build on the existing syntax of JavaScript; in fact, TypeScript is a superset of JavaScript, so any valid JavaScript is also valid TypeScript. This helps make the transition to the language smoother for experienced JavaScript developers. In a lot of ways, TypeScript feels like a combination of Java (or C#) and JavaScript.

TypeScript offers many features you expect from programming languages, such as static typing, object-oriented constructs like classes and interfaces, and better module management.

One key point about TypeScript is that it transcompiles into JavaScript, and into the version of JavaScript/ECMAScript you need. So you can write in TypeScript and take advantage of the features the language offers, knowing the resulting code will run in the environment you're targeting.
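For example, a snippet like the following (a standalone illustration, not part of the bot we'll build) can be emitted as ES5 or ES6 JavaScript depending on the target you configure:

```typescript
// A typed function using a template literal. When targeting ES5, tsc
// rewrites the template literal to string concatenation and strips the
// type annotations; when targeting ES6, the emitted JavaScript looks
// nearly identical, minus the types.
function greet(name: string): string {
    return `Hello, ${name}`;
}

console.log(greet('bot')); // → Hello, bot
```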

Why use TypeScript when creating bots?

One of the biggest weaknesses of JavaScript is the inability to declare the type of a variable. This limits the amount of support an IDE, such as VS Code, can offer you. Below is the code one might have in a separate file for a dialog in a bot:

```javascript
module.exports = [
    (session) => {
        session.endConversation('Thank you for your input!');
    }
];
```

If you put that into a JavaScript file, you would notice that when hitting dot after session, VS Code wouldn't show you endConversation as an available option; or, if it did, it's because it found the name in another file in your project. The IDE has no way of knowing that session is actually of type builder.Session.

Contrast that with the following bit of TypeScript, where we are able to identify the type:

```typescript
export default [
    (session: builder.Session) => {
        session.endConversation('Thank you for your input!');
    }
];
```

When you create the TypeScript file, you’ll notice VS Code knows exactly what the session parameter is, and is able to offer you IntelliSense.

Getting started with TypeScript

Installing TypeScript

In order to start programming in TypeScript, you'll need to install it. TypeScript is available as an npm package, and you can simply add it as a development dependency to your project. Personally, because I use TypeScript extensively, I install it globally.

```bash
npm install -g typescript
```

Creating and configuring the project

If you use the Bot Framework Yeoman generator I created, you will notice there is already a template available for TypeScript. For the purposes of this post, we'll create everything from scratch, so you can see how it is all brought together.

Add packages and dependencies

Create a new folder, and add a package.json file with the following contents:

```json
{
    "name": "type-script-bot",
    "dependencies": {
        "botbuilder": "^3.4.4",
        "restify": "^4.3.0"
    },
    "devDependencies": {
        "@types/node": "^6.0.52",
        "@types/restify": "^2.0.35"
    }
}
```

The dependencies section is pretty standard for a bot, but you might not be familiar with the devDependencies. The devDependencies contain the type declarations for the various packages we'll be using. Type declarations describe the interfaces for the objects and classes in a particular package; @types/restify, for example, contains the interfaces provided by restify. This is what enables IntelliSense for those packages in the project. In the case of botbuilder, we don't need to add a separate types package, as the framework itself is written in TypeScript and ships with all of the necessary declarations. After saving the file, run the installation process like normal.

```bash
npm install
```

Configuring TypeScript compilation

As mentioned before, TypeScript will be transcompiled to JavaScript. You configure how this occurs by using a tsconfig.json file.

Add a file named tsconfig.json with the following content. The module option tells the transcompiler to emit CommonJS modules, while target specifies that the emitted JavaScript should be ES6-compliant. outDir specifies where the JavaScript files will be written, and the files section identifies which files will be transcompiled.

```json
{
    "compilerOptions": {
        "module": "commonjs",
        "target": "es6",
        "outDir": "./built"
    },
    "files": [
        "dialog.ts",
        "app.ts"
    ]
}
```

Creating the bot and dialog

Creating the dialog

Let’s start by creating a basic dialog. Add a file to your project named dialog.ts, and add the code you see below. You will notice this is standard Node.js bot code, with a couple of differences.

```typescript
// dialog.ts
import * as builder from 'botbuilder';

interface IResults {
    response: string;
}

export default [
    (session: builder.Session) => {
        builder.Prompts.text(session, 'What is your name?');
    },
    (session: builder.Session, results: IResults) => {
        session.endConversation(`Hello, ${results.response}`);
    }
];
```

From the top, you'll notice the import statement is different from what you'd traditionally write in Node.js. Rather than using require, you use the import keyword. The * means you'll be importing everything from the package, and as allows you to assign an alias. The end result is effectively the same as using const builder = require('botbuilder');, like you would have done traditionally.

Second, you'll notice the creation of the interface IResults, with a single property named response of type string. Interfaces, just as in C# or Java, give you the ability to describe the shape of an object. In TypeScript, interfaces are completely weightless: they are erased during compilation and never appear in the emitted JavaScript. They exist purely to give you a better development experience.
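As a small, self-contained illustration (separate from the bot code; the names here are invented), an interface constrains an object at compile time but leaves no trace in the emitted JavaScript:

```typescript
// The interface exists only at compile time; the emitted JavaScript
// contains just the object literal and the function.
interface IGreeting {
    response: string;
}

function formatGreeting(result: IGreeting): string {
    // VS Code knows `result.response` is a string here.
    return `Hello, ${result.response}`;
}

const greeting: IGreeting = { response: 'World' };
console.log(formatGreeting(greeting)); // → Hello, World
```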

Rather than using module.exports to export the array that contains your waterfall, you use export default, followed by the array. The syntax is slightly different, but the results are the same.

Finally, you're declaring the data type of session and results in each waterfall step to aid your development experience. results uses the interface you created earlier in the file. You'll notice that when you do this, and you start typing results followed by a dot, VS Code will provide IntelliSense and show you response as an available property of type string.

Creating the bot and host

The code to create the bot will be similar to what you would typically write in JavaScript. The main difference you'll notice is the way packages and other items are imported. Add a file named app.ts with the following code:

```typescript
// app.ts
import * as restify from 'restify';
import * as builder from 'botbuilder';
import dialog from './dialog';

const connector = new builder.ChatConnector({
    appId: process.env.MICROSOFT_APP_ID,
    appPassword: process.env.MICROSOFT_APP_PASSWORD
});
const bot = new builder.UniversalBot(connector, dialog);

const server = restify.createServer();
server.post('/api/messages', (bot.connector('*') as builder.ChatConnector).listen());
server.listen(process.env.PORT, () => console.log(`${server.name} listening to ${server.url}`));
```

As you enter the code, you’ll notice that VS Code is able to offer you support throughout the entire process. Because every variable has a type that VS Code can identify, it’s able to provide IntelliSense, unlike when using JavaScript, where the level of support will vary.

Running the code

In order to run the code, it will need to be transcompiled. You can do this by simply running tsc. Because we created a tsconfig.json file, the transcompiler will know what to transcompile, and how to do it. The --watch switch will automatically detect changes to the TypeScript files, and transcompile on the fly.

```bash
tsc --watch
```

If you take a look at the JavaScript files in the built folder, you'll notice they're relatively similar to what you wrote in TypeScript. A couple of key differences: the interface we created wasn't emitted, and the parameter type declarations are also absent from the generated JavaScript. Running the bot is done just as you normally would, using either node or nodemon; just remember the output was written to ./built.

```bash
node built/app.js
```

Next steps

There’s quite a bit that I left on the table when it comes to TypeScript. We could make better use of interfaces with our dialogs, we could create a class for our app, we could …, we could …. My goal with this post was to help get you up and running with TypeScript, and show some of the power that’s made available.

If you want to know more about TypeScript, you can check out the official site, or an edX course.


Working with custom buttons to drive conversations

If I’ve said anything about bots, it’s that they’re apps. They’re just apps with a conversational interface. This style of interface can be extremely powerful, as it allows the user to better express themselves, or “skip to the end” if they already know what it is they’re trying to accomplish. The problem, though, is without a bit of forethought to the design of the bot it’s easy to wind up back in this scenario, where the user isn’t sure what to do next:

Command Prompt

If you're well versed in the set of commands, you can quickly perform any operation you desire. But there is no guidance provided by the system. Just as there's no guidance provided here:

Command Prompt

Buttons are a good thing

We need to guide the user.

Buttons exist for a reason. They succinctly show the user what options are available, and can guide the user towards what they’re looking for. In addition, they help reduce the amount of typing required, which is especially important when talking about someone accessing a bot on a mobile device with a tiny keyboard.

Providing choices

The most obvious place where buttons shine is when providing a list of choices for a user to select from. This might be a shipping method, a category for filtering, or, really, any other set of options. To support a list of choices, BotBuilder provides a choice prompt. The choice prompt, as you might expect, provides the user a list of options for them to choose from, and then provides access to that in the next step of the dialog.

```javascript
// sample waterfall dialog for a choice
(session, args, next) => {
    builder.Prompts.choice(
        session,
        `What color do you want?`,
        ['Red', 'Green', 'Blue'],
        {
            listStyle: builder.ListStyle.button,
            retryPrompt: `Please choose from the list of options`,
            maxRetries: 2,
        }
    );
},
(session, results, next) => {
    if (results.response && results.response.entity)
        session.endConversation(`You chose ${results.response.entity}`);
    else
        session.endConversation(`Sorry, I didn't understand your choice.`);
}
```

The choice prompt limits the user's response to just the list of options you provide. You can limit the number of times the bot will ask the user for a response before moving on to the next step in the waterfall.

While choice is certainly nice for providing a simple list of options, it does force the user into choosing one of those options. As a result, it’s not as easy to use choice when trying to guide the user with a list of options while also allowing them to type free-form, which is what you’ll want to do when the user first starts a session with the bot. In addition, you don’t get control over the interface provided.

Customizing the list of prompts

If you wish to customize the list of prompts, you need to set up a card. This can be an Adaptive Card, or one of the built-in cards such as thumbnail or hero. By using a card, you can give the channel a bit more guidance on how you'd like your list of options to be presented.

To allow the user to select from a list of options, you will add buttons to the card. Buttons can be set to either imBack, meaning the client will send the message back to the bot just as if the user typed it, or postBack, meaning the client will send the message to the bot without displaying it inside the client. Generally speaking, imBack is a better choice, as it makes it clear to the user something has happened, and can give the user a clue as to what to type in the future, should they so decide.

WARNING!!!

The code below is the wrong way to use buttons to provide a list of options, but it’s the most common mistake I see people make when using buttons with Bot Framework.

In the code snippet below, I want you to notice the addition of the buttons using builder.CardAction.imBack, and the call to session.send (where the mistake is).

```javascript
(session, args, next) => {
    const card = new builder.ThumbnailCard(session)
        .text('Please choose from the list of colors')
        .title('Colors')
        .buttons([
            builder.CardAction.imBack(session, 'Red', 'Red'),
            builder.CardAction.imBack(session, 'Blue', 'Blue'),
            builder.CardAction.imBack(session, 'Green', 'Green'),
        ]);
    const message = new builder.Message(session)
        .addAttachment(card);
    session.send(message); // <-- the mistake
},
(session, results, next) => {
    if (results.response && results.response.entity)
        session.endConversation(`You chose ${results.response.entity}`);
    else
        session.endConversation(`Sorry, I didn't understand your choice.`);
}
```

If you added this dialog to a bot and ran it, you’d see the following output:

Repeating buttons

The mistake, as I mentioned above, is at session.send. When using session.send in the middle of a waterfall dialog, the bot is left in a state where it’s not expecting the user to respond. As a result, when the user does respond by clicking on Blue, the bot simply returns back to the current step in the waterfall, and not to the next one. You can click the buttons as long as you’d like, and you’ll see them continuing to pop up.

The correct way to do it

In order for the bot to be in a state that expects user input and continues to the next step of a waterfall, you must use a prompt. When using buttons inside of a card, you can choose either a text or choice prompt. When using a text prompt, the bot can accept any input in addition to the buttons you provided. This can allow the user to be more free-form as needed. choice prompts, however, will limit the user to the list of choices, just as if you created it the traditional way mentioned earlier.

```javascript
// Using a choice prompt with custom buttons
// For simplicity, I removed the retry prompts, but you can continue to use them
// If you wanted to use a text prompt, you'd simply use:
// builder.Prompts.text(session, message);
(session, args, next) => {
    const choices = ['Red', 'Blue', 'Green'];
    const card = new builder.ThumbnailCard(session)
        .text('Please choose from the list of colors')
        .title('Colors')
        .buttons(choices.map(choice => builder.CardAction.imBack(session, choice, choice)));
    const message = new builder.Message(session)
        .addAttachment(card);
    builder.Prompts.choice(session, message, choices);
},
(session, results, next) => {
    if (results.response && results.response.entity)
        session.endConversation(`You chose ${results.response.entity}`);
    else
        session.endConversation(`Sorry, I didn't understand your choice.`);
}
```

Providing a menu

As I mentioned at the beginning of this post[1], one of the keys to a good user experience in a bot is to provide guidance to the user; otherwise, you're just giving them a C:\> prompt. Again, the easiest way to do this is via buttons.

We’ve already seen that imBack behaves just as if the user typed the value manually. We can take advantage of this fact by providing the list of options, and ensuring the values match the intents provided in the bot.

You’ll notice in the code sample below I created a bot with two simple dialogs, and the default dialog sends down the buttons inside of a card. By calling endConversation, the bot sends down the card and closes off the conversation. When the user clicks on a button it’s just as if the user typed in the value, and the bot will then route the request to the appropriate dialog. The user is free at this point to either click one of the provided buttons, or type in whatever command they desire.

```javascript
const bot = new builder.UniversalBot(
    new builder.ChatConnector({
        appId: process.env.MICROSOFT_APP_ID,
        appPassword: process.env.MICROSOFT_APP_PASSWORD
    }),
    (session) => {
        const card = new builder.ThumbnailCard(session)
            .title('Sample bot')
            .text(`Hi there! I'm the sample bot.
You can choose one of the options below
or type in a command of your own
(assuming I support it)`)
            .buttons([
                builder.CardAction.imBack(session, 'Hello', 'Hello'),
                builder.CardAction.imBack(session, 'Greetings', 'Greetings'),
            ]);
        const message = new builder.Message(session)
            .addAttachment(card);
        session.endConversation(message);
    }
);

bot.dialog('Hello', (session) => {
    session.endConversation(`The Hello Dialog`);
}).triggerAction({ matches: /Hello/ });

bot.dialog('Greetings', (session) => {
    session.endConversation(`The Greetings Dialog`);
}).triggerAction({ matches: /Greetings/ });
```

The updated bot now performs as displayed below. In the dialog I started by typing test to trigger the bot. I then clicked on Hello, which displayed the Hello Dialog message. I completed the exchange by typing Hello, which, as you see, sent the same Hello Dialog message.

Introduction with buttons

Conclusion

I’ve said it before, and I’ll certainly say it again - buttons exist for a reason. Buttons can help you provide a good UI/UX for users in any type of application, and bots are no exception. You can use buttons to both limit the amount of typing required, and to help guide the user’s experience with the bot.

[1] This exceedingly long post?


Managing conversations and dialogs in Microsoft Bot Framework using Node.js

Communication with a user via a bot built with Microsoft Bot Framework is managed via conversations, dialogs, waterfalls, and steps. As the user interacts with the bot, the bot will start, stop, and switch between various dialogs in response to the messages the user sends. Knowing how to manage dialogs in Bot Framework is one of the keys to successfully designing and creating a bot.

Dialogs and conversations, defined

At its most basic level, a dialog is a reusable module, a collection of methods, which performs an operation, such as completing an action on the user’s behalf, or collecting information from the user. By creating dialogs you can add reuse to your bot, enable better communication with the user, and simplify what would otherwise be complex logic. Dialogs also contain state specific to the dialog in dialogData.

A conversation is a parent to dialogs, and contains the dialog stack. It also maintains two types of state, conversationData, shared between all users in the conversation, and privateConversationData, which is state data specific to that user.

Waterfalls

Every dialog you create will have a collection of one or more methods that will be executed in a waterfall pattern. As each method completes, the next one in the waterfall will be executed.
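The pattern is easy to see outside the framework. The sketch below is not Bot Framework code; runWaterfall and the Step type are invented for illustration, showing how each step hands its results to the next:

```typescript
type Step = (session: { send: (msg: string) => void },
             results: { response?: string },
             next: (results: { response?: string }) => void) => void;

// A toy waterfall runner: call each step in order, passing the
// previous step's results along via `next`.
function runWaterfall(steps: Step[], session: { send: (msg: string) => void }) {
    let index = 0;
    const next = (results: { response?: string }) => {
        if (index < steps.length) {
            steps[index++](session, results, next);
        }
    };
    next({});
}

const sent: string[] = [];
runWaterfall([
    // Step 1: pretend we prompted the user and got a name back.
    (session, results, next) => next({ response: 'Alice' }),
    // Step 2: use the result collected by step 1.
    (session, results, next) => session.send(`Hello, ${results.response}`),
], { send: (msg) => sent.push(msg) });

console.log(sent[0]); // → Hello, Alice
```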

Dialog Stack

Your bot will maintain a stack of dialogs. The stack works just like a normal LIFO (last in, first out) stack, meaning the last dialog added will be the first one completed, and when a dialog completes, control returns to the previous dialog.
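A toy sketch of the stack behavior (plain arrays standing in for the framework's internals; beginDialog and endDialog here are hypothetical stand-ins):

```typescript
// beginDialog pushes onto the stack; endDialog pops, returning
// control to whichever dialog is now on top.
const dialogStack: string[] = [];

const beginDialog = (name: string) => { dialogStack.push(name); };
const endDialog = () => dialogStack.pop();

beginDialog('default');
beginDialog('AddNumber');
beginDialog('Help');

console.log(endDialog());                         // → Help
console.log(dialogStack[dialogStack.length - 1]); // → AddNumber
```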

Managing dialogs

Bots come in many shapes, sizes, and forms. Some bots are simply front ends to existing APIs, and respond to simple commands. Others are more complex, with back and forth messages between the user and bot, branching based on information collected from the user and the current state of the application. Depending on the requirements for the bot you’re building, you’ll need various tools at your disposal to start and stop dialogs.

Starting dialogs

Dialogs can be started in a few ways. Every bot has a default dialog, sometimes called a root dialog, which is executed when no other dialog has been started, and no other one has been triggered via other means. You can create a dialog that responds globally to certain commands by using triggerAction or beginDialogAction; triggerAction is registered globally to the bot, while beginDialogAction registers the command only for the dialog it's attached to. Finally, you can programmatically start a dialog by calling either beginDialog or replaceDialog, which will add a dialog to the stack or replace the current dialog, respectively.

Ending dialogs and conversations

When a bot reaches the end of a waterfall, the next message from the user will look for the next step in the waterfall. If there is no next step, the bot simply doesn't respond, implicitly ending the conversation or dialog. This can be a confusing experience for the user, as they may need to retype their message to get a response from the bot. It can also be confusing for the developer, as there may be many paths by which a dialog might end, depending on the logic.

As a result, when a conversation or dialog has come to an end, it's a best practice to explicitly call endConversation, endDialog, or endDialogWithResult. endConversation both clears the current dialog stack and resets all data stored in the session, except userData. Both endDialog and endDialogWithResult end the dialog, clear out dialogData, and return control to the previous dialog in the stack. Unlike endDialog, endDialogWithResult allows you to pass arguments into the previous dialog, which will be available in the second parameter of the first method in the waterfall (typically named results).

State management

Ending a conversation or dialog will also remove the associated state data. This is important to remember when deciding where to store state data. The best practices of minimizing scope of state data apply to bots, just as they do to any other application.

The place where state lifespan becomes trickiest is dialogData. If you start a new dialog, that dialog doesn't receive the dialogData of the calling dialog. Likewise, when a dialog completes, the previous dialog doesn't receive the data from the dialog that just ended. You can overcome this by using arguments: endDialogWithResult allows you to pass arguments back to the prior dialog, while both beginDialog and replaceDialog allow you to pass arguments into the new dialog.
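To illustrate why the explicit hand-off matters, here is a small sketch with a plain function standing in for a dialog; childDialog and DialogArgs are invented names:

```typescript
// Each "dialog" gets its own state bag, so values cross the boundary
// only when passed explicitly as arguments.
interface DialogArgs {
    total?: number;
}

// Stand-in for a child dialog started via beginDialog('Child', args):
// it sees only what the caller passed in, not the caller's dialogData.
function childDialog(args: DialogArgs): number {
    return (args.total || 0) + 5;
}

const parentDialogData = { total: 10 };
const childResult = childDialog({ total: parentDialogData.total });
console.log(childResult); // → 15
```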

The sample application

The sample application we will be building through the next set of examples is a simple calculator bot. Our calculator bot will allow the user to enter numbers, and once they say total we’ll display the total and allow them to start all over again. We’ll also want to allow the user to get help at any time, and to cancel as needed. The sample code is provided on GitHub.

Default dialog

Starting with version 3.5 of Microsoft Bot Framework, the default or root dialog is registered as the second parameter in the constructor for UniversalBot. In prior versions, this was done by adding a dialog named /, which led to naming similar to that of URLs, which really isn’t appropriate when naming dialogs.

The default dialog is executed whenever the dialog stack is empty, and no other dialog is triggered via LUIS or another recognizer. (We’ll see how to register dialogs using triggerAction a little later.) As a result, the default dialog should provide some contextual information to the user, such as a list of available commands and an overview of what the bot can perform.

From a design perspective, don’t be afraid to send buttons to the user to help guide them through the experience; bots don’t need to be text only. Buttons are a wonderful interface, as they can make it very clear what options the user can choose from, and limit the possibility of the user making a mistake.

To get started, we’ll set up our default dialog to present the user with two buttons, add and help. For our first pass, we’ll simply echo the user’s selection; we’ll add additional dialogs in the next section. We’ll do this by setting up a two step waterfall, where the first step will prompt the user, and the second will end the conversation.

Default dialog sample code

```javascript
const builder = require('botbuilder');

const connector = new builder.ChatConnector({
    appId: process.env.MICROSOFT_APP_ID,
    appPassword: process.env.MICROSOFT_APP_PASSWORD
});

const bot = new builder.UniversalBot(connector, [
    (session, args, next) => {
        const card = new builder.ThumbnailCard(session);
        card.buttons([
            new builder.CardAction(session).title('Add a number').value('Add').type('imBack'),
            new builder.CardAction(session).title('Get help').value('Help').type('imBack'),
        ]).text(`What would you like to do?`);
        const message = new builder.Message(session);
        message.addAttachment(card);
        session.send(`Hi there! I'm the calculator bot! I can add numbers for you.`);
        const choices = ['Add', 'Help'];
        builder.Prompts.choice(session, message, choices);
    },
    (session, results, next) => {
        session.endConversation(`You chose ${results.response.entity}`);
    },
]);
```


Working with dialogs

One of the biggest challenges when creating a bot is dealing with the fact users can be random. Imagine the following exchange:

```
User: I'd like to make a reservation
Bot: Sure! How many people?
User: Do you have a vegan menu?
Bot: ???
```

This is a common scenario. The user sends a message to the bot. The bot responds. The user gets a new piece of information, in this case their friend is a vegan, and thus asks about a vegan menu. The bot is now stuck, because it wasn’t expecting that response. triggerAction allows you to register a global command of sorts with the bot, and ensure the appropriate dialog is executed for every request.

Naming dialogs

In prior versions of Bot Framework, developers typically started every dialog name with /. This was because when registering the default dialog in earlier versions you named it /. As you've already seen, that's no longer the case starting with version 3.5. As a result, give your dialog a name that appropriately describes the operation the dialog is built to perform.

Registering a dialog

bot.dialog is used to register a dialog. The two parameters you'll provide are the name of the dialog, and the array of methods you wish to execute when the user enters the dialog. Let's create the starter for the AddNumber dialog. For now, we'll leave it with a simple echo, and introduce new functionality as we go forward.

Dialog sample code

```javascript
bot.dialog('AddNumber', [
    (session, args, next) => {
        session.endConversation(`This is the AddNumber dialog`);
    },
]);
```

Using triggerAction to start a dialog

We want to register our AddNumber dialog with the bot so whenever the user types add this dialog will be executed. This is done through the use of triggerAction, which is a method available on Dialog. triggerAction accepts a parameter of type ITriggerActionOptions.

ITriggerActionOptions has a few properties, the most important of which is matches. Matches will either be a regular expression to match a string typed in by the user, such as add in our case, or a string literal if the match will be done through the use of a recognizer, such as one from LUIS.

Let’s update our bot to register AddNumber to be started when the user types add. We’ll remove the second step from the default dialog and take advantage of the behavior of our buttons, which will send the text of the button to the bot, much in the same way as if the user typed it themselves.

triggerAction sample code

```javascript
// just the updated code
const bot = new builder.UniversalBot(connector, [
    (session, args, next) => {
        const card = new builder.ThumbnailCard(session);
        card.buttons([
            new builder.CardAction(session).title('Add a number').value('Add').type('imBack'),
            new builder.CardAction(session).title('Get help').value('Help').type('imBack'),
        ]).text(`What would you like to do?`);
        const message = new builder.Message(session);
        message.addAttachment(card);
        session.send(`Hi there! I'm the calculator bot! I can add numbers for you.`);
        // we can end the conversation here
        // the buttons will provide the appropriate message
        session.endConversation(message);
    },
]);

bot.dialog('AddNumber', [
    (session, args, next) => {
        session.endConversation(`This is the AddNumber dialog`);
    },
]).triggerAction({ matches: /^add$/i });
```


triggerAction notes

triggerAction is a global registration of the command for the bot. If you wish to limit that to an individual dialog, use beginDialogAction, which we’ll discuss later.

Also, triggerAction replaces the entire current dialog stack with the new dialog. While that can be good for AddNumber, that wouldn’t be good for a dialog to provide help. We’ll see a little later how onSelectAction can be used to manage this behavior.

If you execute the bot at this point you’ll notice clicking Add on the buttons, or simply typing it, will cause the bot to send the message This is the AddNumber dialog. You’ll also notice that help, at present, does nothing. We’ll handle that in a bit.

Using replaceDialog to replace the current dialog

Let’s talk a little bit about our logic for AddNumber. We want to prompt the user for a number, add it to our running total, and then ask the user for the next number. Basically, we just need to restart the same dialog over and over again. We can use replaceDialog to perform this action.

In the first step of our waterfall, we’ll check to see if there is a running total available in privateConversationData, and create one if it doesn’t exist. We’ll then prompt the user for the number they want to add.

In the second step, we’ll retrieve the number, add it to our running total, and then start the dialog over again by calling replaceDialog.

replaceDialog sample code

```javascript
bot.dialog('AddNumber', [
    (session, args, next) => {
        let message = null;
        if (!session.privateConversationData.runningTotal) {
            message = `Give me the first number.`;
            session.privateConversationData.runningTotal = 0;
        } else {
            message = `Give me the next number, or say **total** to display the total.`;
        }
        builder.Prompts.number(session, message, { maxRetries: 3 });
    },
    (session, results, next) => {
        if (results.response) {
            session.privateConversationData.runningTotal += results.response;
            session.replaceDialog('AddNumber');
        } else {
            session.endConversation(`Sorry, I don't understand. Let's start over.`);
        }
    },
]).triggerAction({ matches: /^add$/i });
```


replaceDialog notes

replaceDialog takes two parameters, the first being the name of the dialog with which you wish to replace the current dialog, and the second being the arguments for the new dialog. The object you provide as the second parameter will be available in the first function in the new dialog’s waterfall in the second parameter (typically named args).

Using beginDialogAction to localize commands

It doesn’t make a lot of sense for our bot to have a global total command. After all, it’s only valid if we’re currently adding numbers. Using beginDialogAction allows you to register commands specific to that dialog, rather than global to the bot. By using beginDialogAction, we can ensure total is only executed when we’re in the process of running a total.

The syntax for beginDialogAction is similar to triggerAction. You provide the name of the DialogAction you’re creating, the name of the Dialog you wish to start, and the parameters for controlling when the dialog will be started.

beginDialogAction sample code

```javascript
bot.dialog('AddNumber', [
  // existing waterfall code
])
  .triggerAction({ matches: /^add$/i })
  .beginDialogAction('Total', 'Total', { matches: /^total$/ });

bot.dialog('Total', [
  (session, results, next) => {
    session.endConversation(`The total is ${session.privateConversationData.runningTotal}`);
  },
]);
```


beginDialogAction notes

By using endConversation, we reset the entire conversation back to its starting state. This will automatically clear out any privateConversationData, as the conversation has ended.
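As a rough sketch of why that works (plain JavaScript, not SDK code — the session object here is a stand-in for the real one):

```javascript
// Sketch of conversation-scoped state, illustrating why endConversation
// resets the running total. Illustrative model, not Bot Framework code.
const session = {
  privateConversationData: { runningTotal: 12 },
  endConversation(message) {
    console.log(message);
    this.privateConversationData = {};  // conversation state is discarded
  }
};

session.endConversation(`The total is ${session.privateConversationData.runningTotal}`);
// prints: The total is 12
console.log(session.privateConversationData.runningTotal); // undefined - fresh state
```

The next time the user types **add**, the first waterfall step finds no runningTotal and initializes it to zero, exactly as if the conversation had just begun.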

Using onSelectAction to control triggerAction behavior

By default, triggerAction resets the dialog stack, replacing it with the triggered dialog. In the case of AddNumber that’s just fine; the dialog’s logic is designed to continually restart it. But this is problematic when it comes to Help. Needless to say, we don’t want to reset the entire dialog stack when the user types help; we want to allow the user to pick up right where they left off.

Bot Framework provides beginDialog for adding a dialog to the stack. When that dialog completes, control returns to the active step in the prior dialog. Or, in the case of our Help example, it will allow the user to pick up where they left off.

The onSelectAction property on ITriggerActionOptions executes when the bot is about to start the dialog being triggered. By using this event, we can change the way the dialog is started, using beginDialog, which will add the dialog to the stack instead of replacing the stack. The first parameter is the name of the dialog we wish to start, which is provided in args.action, and the second is the args parameter we want to pass into the dialog when it starts. The code sample below will ensure we return control to the prior dialog when this one completes.

onSelectAction sample code

```javascript
bot.dialog('Help', [
  (session, args, next) => {
    session.endDialog(`You can type **add** to add numbers.`);
  }
]).triggerAction({
  matches: /^help/i,
  onSelectAction: (session, args) => {
    session.beginDialog(args.action, args);
  }
});
```


onSelectAction notes

When using beginDialog here, don’t hard code the name of the dialog you’re about to start; use args.action instead. Otherwise, you’ll notice the dialog won’t actually start.

Using beginDialogAction to centralize help messaging

One of the challenges with the help solution we created earlier is it can only provide generic help; whenever the user types help, the exact same message is sent. By using beginDialogAction you can pass parameters to the triggered dialog, allowing you to centralize messaging for help. In our case, we’ll use the name of the current action as the key to the message we want to send.

beginDialogAction to centralize help sample code

```javascript
bot.dialog('AddNumber', [
  // existing waterfall code snipped
])
  .triggerAction({ matches: /^add$/i })
  .beginDialogAction('Total', 'Total', { matches: /^total$/ })
  .beginDialogAction('HelpAddNumber', 'Help', { matches: /^help$/, dialogArgs: { action: 'AddNumber' } });

bot.dialog('Total', [
  (session, results, next) => {
    session.endConversation(`The total is ${session.privateConversationData.runningTotal}`);
  },
]);

bot.dialog('Help', [
  (session, args, next) => {
    let message = '';
    switch (args.action) {
      case 'AddNumber':
        message = 'You can either type the next number, or use **total** to get the total.';
        break;
      default:
        message = 'You can type **add** to add numbers.';
        break;
    }
    session.endDialog(message);
  }
]).triggerAction({
  matches: /^help/i,
  onSelectAction: (session, args) => {
    session.beginDialog(args.action, args);
  }
});
```


Using cancelAction and endConversationAction

If you’ve made it to this point in the article, you already have the skills necessary to create a global cancel operation: you’d add a new dialog, register it with triggerAction, and add a string match for the word cancel. The dialog would then call endConversation with a friendly message, and the user would be able to restart the operation.

However, let’s say you wanted to provide granular support for cancel operations, changing the behavior on different dialogs, or maybe not allowing a cancel on a dialog at all. This is where cancelAction and endConversationAction come into play. Both are tied to a specific dialog: cancelAction cancels just that dialog, while endConversationAction ends the entire conversation.
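To picture the difference, here’s a quick sketch in plain JavaScript (a model of the behavior, not actual framework code):

```javascript
// Contrast the two: cancelAction pops just the current dialog off the
// stack, while endConversationAction clears the whole conversation.
// Illustrative model only; not SDK code.
function cancelDialog(stack) {
  stack.pop();          // only the current dialog ends
  return stack;
}

function endConversation(stack) {
  stack.length = 0;     // the entire stack (and conversation state) goes away
  return stack;
}

console.log(cancelDialog(['root', 'AddNumber']));    // [ 'root' ]
console.log(endConversation(['root', 'AddNumber'])); // []
```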

The second parameter you’ll pass into cancelAction is ICancelActionOptions, which includes the matches and onSelectAction properties we’ve seen before. It also adds confirmPrompt, which, if set, will prompt the user if they actually want to cancel.

endConversationAction sample code

```javascript
bot.dialog('AddNumber', [
  // prior code for AddNumber snipped for clarity
])
  .endConversationAction('CancelAddNumber', 'Operation cancelled', {
    matches: /^cancel$/,
    confirmPrompt: `Are you sure you wish to cancel?`
  });
```


Conclusion

Bot Framework offers many options and methods for managing dialogs and responding to user requests. Harnessing the power provided by dialogs allows you to create bots whose conversations with your users feel more natural.

Acknowledgements

Thank you to Nafis Zaman for the catch on the behavior of cancelAction.


Simple Bot Creation with QnA Maker

Note: this blog assumes you have used Azure to create services in the past

The problem

One of the most compelling scenarios for a bot is to add it to Facebook. A Facebook page is rather static, and finding information about a business on one can be a bit of a challenge. And while users can comment or send a message, the only replies they’ll ever receive are from a human, meaning the owner of the small business needs to monitor Facebook.

Of course, if it’s a small business that the page is representing, there’s a good chance the business doesn’t have the resources to create a bot on their own. Or, even if the business is of a size where they have access to developers, the developers aren’t the domain experts - that’s the salespeople, managers, or other coworkers.

To make a long story short, developers are often required to create the bot, and build the knowledge base the bot will be using to provide answers. This is not an ideal situation.

The solution

Enter QnA Maker.

QnA Maker is a service that can look at an existing, structured FAQ document and extract a set of questions and answers into a knowledge base. The knowledge base is editable through an interface designed for information workers, and is exposed to developers via an easy-to-call REST endpoint.

Getting started

To get started with QnA Maker, head on over to https://qnamaker.ai/. You can create a new service by clicking on New Service. From there, you’ll be able to give your service a name, and point to one or more FAQ pages on the web, or a document - Word, PDF, etc. - containing the questions and answers. After clicking create, the service will do its thing, creating a knowledge base that can be accessed via the endpoint.

Create new service

Managing the knowledge base

The knowledge base is a set of questions and answers. After creating it, you can manage it much in the same way you edit a spreadsheet. You can add new pairs by clicking on Add new QnA pair. You can also edit existing pairs in the table directly. Finally, if you wish to add a new question to an existing answer, you can hover over the question on the left side, click the ellipsis, and choose Add alternate phrasing.

One important thing to note about the knowledge base is that each question and answer pair is an individual entity; there is no parent/child relationship between multiple questions and a single answer. As a result, if you need to provide additional ways to ask a particular question with the same answer, you will need multiple copies of the same answer.
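To picture that structure, here’s a sketch of a knowledge base as a flat list of pairs (the data and the lookup are illustrative, not the service’s actual storage format):

```javascript
// Each entry is an independent question/answer pair; alternate phrasings
// of the same question each carry their own copy of the answer.
const knowledgeBase = [
  { question: 'what are your hours',   answer: 'We are open 9am-5pm, Monday to Friday.' },
  { question: 'when are you open',     answer: 'We are open 9am-5pm, Monday to Friday.' },
  { question: 'where are you located', answer: '123 Main Street.' }
];

// A naive lookup: find the first pair whose question matches the text.
function answerFor(text) {
  const pair = knowledgeBase.find(p => p.question === text.toLowerCase());
  return pair ? pair.answer : null;
}

console.log(answerFor('When are you open'));
// Both phrasings of the hours question resolve to the same duplicated answer.
```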

Managing the knowledge base

Testing and further tweaking the knowledge base

Once you’re happy with the first version of your knowledge base, click Save and retrain to ensure it’s up to date. Then, click Test on the left bar, which will present you with a familiar bot interface. From this interface, you can start testing your bot by typing in questions and seeing the various answers.

You’re also able to update the knowledge base from this interface. For example, if you type a question that’s a little ambiguous, the interface will show you multiple answers on the left side. You can simply click the answer you like the most to update the knowledge base to use that answer for the question you provided.

In addition, after asking a question, and being provided an answer, you can add additional phrasings of the same question on the right side.

Testing and tweaking the knowledge base

Some design notes

First and foremost, remember the eventual user experience for this knowledge base is via a bot. Bots should typically have personality, so don’t be afraid to modify some of the answers from their original form to make it read a bit more like a human typed it out, rather than a straight statement of facts. In addition, make sure you add multiple questions related to hello, hi, help, etc., to introduce your bot and help guide your user to understand the types of questions your knowledge base can answer. Finally, remember that while a single form of a question works well on a FAQ page, users can type the same question in multiple forms. It’s not a bad idea to ask other people to test your knowledge base to ensure you’re able to answer the same question in multiple forms.

And, once you’re ready to make the service available to a bot, click Save and retrain, and then Publish.

Using the knowledge base in a bot

QnA Maker exposes your knowledge base as a simple REST endpoint. You access it via POST, passing a JSON object with a single property, question. The reply is a JSON object with two properties: answer, which contains the answer, and score, a 0-100 integer indicating how confident the service is that it has the right answer. In fact, you can use this endpoint in non-bot services as well.
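The shapes involved can be sketched in plain JavaScript. The helper below builds the request body and applies a score threshold to a simulated response; the threshold of 50 is an illustrative choice on my part, not a service default:

```javascript
// Sketch of the request/response shapes described above. Simulated
// responses stand in for the actual HTTP call.
function buildRequest(question) {
  return JSON.stringify({ question });   // POST body: a single property
}

function pickReply(responseJson, threshold = 50) {
  const { answer, score } = JSON.parse(responseJson);
  // score is 0-100; below the threshold, fall back to a default reply.
  return score >= threshold ? answer : `Sorry, I'm not sure about that one.`;
}

console.log(buildRequest('when are you open'));
// {"question":"when are you open"}

console.log(pickReply('{"answer":"9am-5pm weekdays","score":92}')); // 9am-5pm weekdays
console.log(pickReply('{"answer":"9am-5pm weekdays","score":12}')); // fallback message
```

Falling back on low scores is a useful pattern regardless of the client, since a confidently wrong answer is worse than an honest “I don’t know.”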

Of course, the goal of this blog post is to show how you can deploy this without writing code. To achieve that goal, we’re going to use Azure Bot Services, which is built on top of Azure Functions. Azure Bot Services contains a set of prebuilt templates, including one for QnA Maker.

In the Azure Portal, click New, and then search for Bot Service (preview). The Azure Portal will walk you through creating the website and resource group. After it’s created, and you open the service, you will be prompted to create a bot in Bot Framework. This requires both an ID and a key, which you’ll create by clicking on Create Microsoft App ID and Password.

IMPORTANT: Make sure you copy the password after it’s created; it’s not displayed again! When you click on Finish and go back to Bot Framework, the ID will be copied automatically, but the key will not.

Once you’ve entered the ID and key, you can choose the language (C# or NodeJS), and then the template. The template you’ll want is Question and Answer. When you click Create bot, you’ll be prompted to select your knowledge base (or create a new one).

And you’re done!

And that’s it! Your bot is now on the Bot Framework, ready for testing and for being added to Skype, Facebook, etc. You now have a bot that can answer questions about your company, without having written a single bit of code. In addition, you’ll be able to let the domain experts update the knowledge base without any need for code updates - simply save and retrain, then publish, and your bot is updated.

A couple of last thoughts

While the focus has been on a no-code solution, you are absolutely free to incorporate a QnA Maker knowledge base into an existing bot, or to update the bot you just created to add your own custom code. And if you’re looking for somewhere to get started on creating bots, check out the Bots posts on this very blog, or the MVA I created with Ryan Volum.


Providing help through DialogAction

One of the greatest advantages of the bot interface is it allows the user to type effectively whatever it is they want.

One of the greatest challenges of the bot interface is it allows the user to type effectively whatever it is they want.

We need to guide the user, and to make it easy for them to figure out what commands are available, and what information they’re able to send to the bot. There are a few ways that we can assist the user, including providing buttons and choices. But sometimes it’s just as easy as allowing the user to type help.

Adding global commands

If you’re going to add a help command, you need to make sure the user can type it wherever they are, and trigger the block of code to inform the user what is available to them. Bot Framework allows you to do this by creating a DialogAction. But before we get into creating a DialogAction, let’s discuss the concept of dialogs and conversations in a bot.

Dialogs and conversations

Bots contain a hierarchy of conversations and dialogs, which you get to define.

A dialog is a collection of messages back and forth between the user and the bot to collect information and perform an action on their behalf. A dialog might be the appropriate messages to obtain the type of service the user is interested in, determine which location the user is referring to when asking for store information, or the time the user wants to make a reservation for.

A conversation is a collection of dialogs. The conversation might use a dialog to walk through the steps listed above - service type, location and time - to complete the process of creating an appointment. By using dialogs, you can simplify the bot’s code, and enable reuse.

We will talk more in future blog posts about how to manage dialogs, but for right now this will enable us to create a DialogAction.

What is a DialogAction?

At the end of the day, a DialogAction is a global way of starting a dialog. Unlike a traditional dialog, which is started or stopped based on a flow you define, a DialogAction starts whenever the user types a particular keyword, regardless of where in the flow the user currently is. DialogActions are perfect for adding commands such as help, cancel or representative.
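The matching itself boils down to testing each incoming message against a set of regular expressions. A quick sketch of that idea in plain JavaScript (the command list is illustrative):

```javascript
// Global commands are checked against every message, regardless of where
// the user is in the flow. The patterns here are illustrative examples.
const globalCommands = [
  { name: 'help',           matches: /^help/i },
  { name: 'cancel',         matches: /^cancel$/i },
  { name: 'representative', matches: /^representative$/i }
];

function findGlobalCommand(text) {
  const command = globalCommands.find(c => c.matches.test(text));
  return command ? command.name : null;
}

console.log(findGlobalCommand('help me please')); // help
console.log(findGlobalCommand('cancel'));         // cancel
console.log(findGlobalCommand('load'));           // null - falls through to dialogs
```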

Creating a DialogAction

You register a DialogAction by using the bot function beginDialogAction. beginDialogAction accepts three parameters, a name for the DialogAction, the name of the Dialog you wish to start, and a named parameter with the regular expression the bot should look for when starting the dialog.

```javascript
bot.beginDialogAction('help', '/help', { matches: /^help/ });

bot.dialog('/help', [
  (session) => {
    // whatever you need the dialog to do,
    // such as sending a list of available commands
    session.endDialog('in help');
  }
]);
```

The first line registers a DialogAction named help, calling a dialog named /help. The DialogAction will be launched when the user types anything that begins with the word help.

The next line registers the dialog itself, named /help. This dialog is just like a normal dialog. You could prompt the user at this point for additional information about what they might like, or query the message property from session to determine the full text of what the user typed in order to provide more specific help.

DialogAction flow

The next question is what happens when the /help dialog completes. When endDialog is called, where in the flow will the user be dropped? As it turns out, they’ll pick up right where they left off.

Imagine if we had the following bot:

```javascript
const builder = require('botbuilder');
const connector = new builder.ConsoleConnector();
const bot = new builder.UniversalBot(connector);

const dialog = new builder.IntentDialog()
  .matches(/^load$/i, [
    (session) => {
      builder.Prompts.text(session, 'Please enter the name');
    },
    (session, results) => {
      session.endConversation(`You're looking for ${results.response}`);
    }
  ])
  .onDefault((session) => {
    session.endConversation(`Hi there! I'm a GitHub bot. I can load user profile information if you send the command **load**.`);
  });

bot.dialog('/', dialog);

bot.beginDialogAction('help', '/help', { matches: /^help/ });
bot.dialog('/help', [
  (session) => {
    session.endDialog('This bot allows you to load GitHub data.');
  }
]);

connector.listen();
```

Notice we have an IntentDialog built with a load “command”. This kicks off a simple waterfall dialog which prompts the user for the name of the user they wish to load, and then echoes it back. If you ran the bot, and sent the commands load, followed by help, you’d see the following flow:

User: load
Bot: Please enter the name
User: help
Bot: This bot allows you to load GitHub data.
Bot: Please enter the name
User: GeekTrainer
Bot: You're looking for GeekTrainer

Notice that after the help dialog completes, the user is again prompted to enter the name, picking up right where they left off. This simplifies the injection of the global help command, as you don’t need to track where the user left off and returned; the Bot Framework handles that for you.
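Under the hood this works because dialogs live on a stack: the DialogAction pushes the help dialog on top, and endDialog pops it off, leaving the prior dialog’s active step waiting exactly where it was. A minimal sketch of that stack behavior (a model, not SDK code):

```javascript
// Model of the dialog stack described above; illustrative only.
const dialogStack = [];

function beginDialog(name) { dialogStack.push(name); }
function endDialog() { dialogStack.pop(); }
function activeDialog() { return dialogStack[dialogStack.length - 1]; }

beginDialog('load');   // user types "load"; the waterfall prompts for a name
beginDialog('/help');  // user types "help"; the DialogAction pushes the help dialog
console.log(activeDialog()); // /help

endDialog();           // the help dialog completes
console.log(activeDialog()); // load - the prompt picks up where it left off
```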

Summary

One of the biggest issues in creating a flow with a chat bot is the fact a user can say nearly anything, or could potentially get lost and not know what messages the bot is looking to receive. A DialogAction allows you to add global commands, such as help or cancel, which can create a more elegant flow to the dialog.


Determining Intent Using Dialogs

What did you say?

Bots give users the ability to interact with your app through conversation. As a result, figuring out what the user is trying to say - their intent - is core to every bot you write. There are numerous ways to do this, including regular expressions and external recognizers such as LUIS.

For purposes of this blog post, we’re going to focus our attention on regular expressions. This will give us the ability to focus on design and dialogs without having to worry about training an external service. Don’t worry, though, we’ll absolutely see how to use LUIS, just not in this post.

Dialogs

In Bot Framework, a dialog is the core component to interacting with a user. A dialog is a set of back and forth messages between your bot and the user. In this back and forth you’ll figure out what the user is trying to accomplish, and collect the necessary information to complete the operation on their behalf.

Every dialog you create will have a match. The match will kick off the set of questions you’ll ask the user, and start the user down the process of fulfilling their request.

As mentioned above, there are two ways to “match” or determine the user’s intent, regular expressions or LUIS. Regular expressions are perfect for bots that respond to explicit commands such as create, stop or load. They’re also a great way to offer the user help.
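For explicit commands like these, anchored, case-insensitive patterns work well. A quick sketch (the intents and patterns are illustrative):

```javascript
// Anchors (^ and $) keep commands exact, and the i flag ignores case,
// so "Load" and "LOAD" both match while "download" does not.
const intents = [
  { intent: 'create', pattern: /^create$/i },
  { intent: 'stop',   pattern: /^stop$/i },
  { intent: 'load',   pattern: /^load$/i }
];

function recognize(text) {
  const hit = intents.find(i => i.pattern.test(text.trim()));
  return hit ? hit.intent : 'none';
}

console.log(recognize('LOAD'));     // load
console.log(recognize('download')); // none
```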

Design Note

One big thing to keep in mind when designing a bot is that no natural language processor is perfect. The most common mistake people make with their first bot is allowing the user to type almost anything. This is almost guaranteed to frustrate the user, and it leads to more complex code trying to detect the user’s intent, only to misunderstand a higher percentage of statements.

Generally speaking, you want to guide the user as much as possible, and encourage them to issue terse commands. Not only will this make it easier for your bot to understand what the user is trying to tell it, it actually makes it easier for the user.

Think about a mobile phone, which is one of the most common bot clients. Typing on a small keyboard is a challenge at best, and the user isn’t going to type “I would like to find the profile GeekTrainer” or the like. By using terse commands and utterances, you’ll not only increase the percentage of statements you understand without clarification, you’ll make it easier for the user to interact with your bot. That’s a win/win.

In turn, make it easy for your user to understand what commands are available. By guiding the user through a set of questions, in an almost wizard-like pattern, you’ll increase the chances of success.

Creating dialogs

To determine the user’s intent by using regular expressions or other external recognizers, you use the IntentDialog. IntentDialog exposes a set of handlers via matches, each of which executes one or more functions in response to the detected intent.

Let’s say you wanted to respond to the user’s command of “load”, and send a message in response. You could create a dialog by using the following code:

```javascript
// snippet
let dialog = new builder.IntentDialog()
  .matches(/load/i, (session) => {
    session.send('Load message detected.');
  });
```

matches takes two parameters - a regular expression which will be used to match the message sent by the user, and the function (or array of functions) to be called should there be a match. The function, or event handler if you will, takes three parameters, session, which we saw previously, args, which contains any additional information sent to the function, and next, which can be used to call the next function should we provide more than one in an array. For the moment, the only one that’s important, and the only one we’ve used thus far, is session.
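The next parameter is easier to picture as a chain of functions, where each step either finishes or hands control to the following one. Here’s a simplified model of a waterfall (not the SDK’s actual implementation):

```javascript
// Model of a waterfall: run the steps in order, where calling next()
// advances to the following function. Illustrative only.
function runWaterfall(steps, session, args) {
  let index = 0;
  function next(results) {
    const step = steps[index++];
    if (step) step(session, results, next);
  }
  next(args);
}

const log = [];
runWaterfall([
  (session, args, next) => { log.push('step one'); next({ response: 42 }); },
  (session, results, next) => { log.push(`step two got ${results.response}`); }
], {}, {});

console.log(log); // [ 'step one', 'step two got 42' ]
```

The model also shows why the second step receives the first step’s results as its second parameter: whatever you pass to next arrives there.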

To use this with a bot, you’ll create it and add the dialog like we did previously, only adding in the dialog object rather than a function.

```javascript
// text.js
// full code
const builder = require('botbuilder');
const connector = new builder.ConsoleConnector();
const bot = new builder.UniversalBot(connector);

const dialog = new builder.IntentDialog()
  .matches(/load/i, (session) => {
    session.send('Load message detected.');
  });

bot.dialog('/', dialog);
connector.listen();
```

If you run the code, and send the word load, you’ll notice it sends the expected message.

node text.js
load
// output: Load message detected

Handling default

Over time you’ll add more intents. However, as we mentioned earlier, we want to make sure we are able to give the user a bit of guidance, especially if they send a message that we don’t understand at all. Dialogs support this through onDefault. onDefault, as you might suspect, executes as the default message when no matches are found. onDefault works just like any other handler, accepting one or more functions to execute in response to the user’s intent.

```javascript
// existing code
const dialog = new builder.IntentDialog()
  .matches(/load/i, (session) => {
    session.send('Load message detected.');
  })
  .onDefault((session) => {
    session.endConversation(`Hi there! I'm a GitHub bot. I can load user profile information if you send the command **load**.`);
  });
// existing code
```

You’ll notice you don’t give onDefault a regular expression; as the name implies, it runs whenever nothing else matches. You’ll also notice we used session.endConversation to send the message. endConversation ends the conversation, and the next message starts from the very top. In the case of our help message this is the perfect behavior: we’ve given the user the list of everything they can do. The next message they send, in theory anyway, will be one of those commands, and we’ll want to process it. The easiest way to handle it is to use the existing infrastructure we just created.

If you test the bot you just created, you should see the following:

node text.js
Hello
// output: Hi there! I'm a GitHub bot. I can load user profile information if you send the command load.

Summary

When creating a bot, the first thing you’ll do is determine the user’s intent: what are they trying to accomplish? In a standard app, this is done by the user clicking a button; in a bot, there are no buttons. When you get down to the basics, a bot is a text-based application, and dialogs make it easier to determine the user’s intent.


Getting started with Bots

Introducing the Bot Framework

One of the most common phrases when I’m talking about technology for end users is “meet them where they’re at.” A big reason applications fail to be adopted is they require too large of a change in behavior from the users in question, having to open yet another tool, access another application, etc. We as humans have a tendency to revert to our previously learned behaviors. As a result, if we want to get our users using a new process or application we need to minimize the ask as much as possible.

This is one of the biggest places where bots can shine: they can be placed where our users already are. Users are already using Office, Slack, Skype, etc. A bot can then provide information to the user in the environment they’re already in, without having to open another application. Or, if they want to open an application, the bot can make that easier as well. In addition, the user can interact with the bot in a natural language, reducing the learning curve, making it seem more human, and maybe even fun.

At //build 2016, Microsoft announced the Microsoft Bot Framework, a set of APIs available for .NET and Node.js to make it easier for you to create bots. Microsoft also announced the Language Understanding Intelligent Service (LUIS), which helps break down natural speech into intents and parameters your bot can easily understand.

What I’d like to do over a handful of posts is help get you up and running with a bot of your own. We’ll use Node.js to create a simple “Hello, world!” bot, and then add functionality, allowing it to look up user information in GitHub, and then integrate it with various chat services.

Important notice

The Bot Framework is currently under development. As a result, things are changing. While many of the concepts we’ll talk about will likely remain the same, there may be breaking code changes in the future. You have been warned. ;-)

Getting started

A couple of prerequisites to take care of right up front. We are going to be using Node.js, so you will need to be familiar with JavaScript and have some understanding of Node. There is an MVA on Node if you’re interested. I’m going to assume knowledge of npm as well. Finally, I’ll be using ES6 syntax as appropriate.

With that in mind, let’s create a folder in which to store our code, and install botbuilder.

npm init
npm install --save botbuilder

As for the initialization, I’m not overly concerned with the settings you choose there, as we really just need the package.json file; you can just choose all of the defaults.

Hello, bot

Let’s start with the stock, standard, “Hello, world!”, or, in this case, “Hello, bot!”

Creating an interactive bot requires creating two items, the bot itself, which houses the logic, and the connector, which allows the bot to interact with users through various mechanisms, such as Skype, Slack and Facebook.

In regards to the connector, there are two connectors provided in the framework: the ConsoleConnector, perfect for testing and proofs of concept, as you simply use a Windows console window to interact with your bot, and the ChatConnector, which allows for communication with other clients, such as Slack, Skype, etc. You’ll start with the console connector, as it doesn’t require any client other than the standard Windows console.

As for the bot, you’ll create a simple bot that will send “Hello, bot” as a message. To create the bot, you will pass in the connector you create.

Create a file named text.js, and add the following code:

```javascript
// text.js
const builder = require('botbuilder');
const connector = new builder.ConsoleConnector();
const bot = new builder.UniversalBot(connector);

bot.dialog('/', (session) => {
  session.send('Hello, bot!');
});

connector.listen();
```

Let’s start from the top. The first line is the import of botbuilder, which will act as the factory for many objects we’ll be using, including ConsoleConnector, as you see in the second line.

To create a bot, you need to specify its connector, which is why you’ll create that to start. The connector is used to allow the bot to communicate with the outside world. In our case we’ll be interacting with the bot using the command line, thus ConsoleConnector. Once you’ve created the connector, you can then pass that into the bot’s constructor.

The design of a bot is to interactively communicate with a human through what are known as dialogs. The next line adds in a dialog named /. Dialogs are named similar to folders, so / will be our starting point or root dialog. You can of course add additional dialogs by calling dialog yet again, but more on that in a later post. The second parameter is the callback, which, for now, will accept session. session manages the discussions for the user, and knows the current state. You’ll either use session directly to communicate with the user, or pass it into helper functions to communicate on your behalf.

The simplest method on session is send which, as you might imagine, will send a message to the user. If you run text.js, and type in anything and hit enter (make sure you type something in to activate the bot!), you’ll see the message.

node text.js
// Type: Yo
// Output: Hello, bot!

Interaction note

You need to send a signal to the bot first in order to “wake it up.” When you’re using the command line for initial testing this can be a bit confusing, as you’ll run the application and notice that nothing is displayed on the screen. When you run your bot, just make sure you send it a message to get things started.

Retrieving user input

Obviously, displaying a static message isn’t much of a bot. We want to interact with our user. The first step to doing this is to retrieve the message the user sent us. Conveniently enough, the message is a property of session. The message will allow us to access where it was sent from, the type, and, key to what we’re doing, the text.

Let’s upgrade our simple bot to an echo bot, displaying the message the user sent to us.

```javascript
// text.js (full code)
const builder = require('botbuilder');
const connector = new builder.ConsoleConnector();
const bot = new builder.UniversalBot(connector);

bot.dialog('/', (session) => {
  session.send(session.message.text);
});

connector.listen();
```
connector.listen();

You’ll notice we updated the session.send call to retrieve text from message, which contains the user input. Now if we run it we’ll see the information the user typed in.

node text.js
Hello from the user
// Output: Hello from the user

Wrapup

Bots are a way for users to interact with services in a language that’s natural, and in applications they’re already using. You can integrate a bot with an existing application, such as a web app, or with a tool users are already invested in, such as Slack or Skype. We got started in this blog post with bots by first obtaining the SDK, and then creating a simple bot echo service. From here we can continue to build on what we’ve learned to create truly interactive bots.