Even though I don’t want to put another hurdle in front of someone, it turns out TypeScript isn’t much of a hurdle after all.
It also gives you access to language features such as async and await, which aren’t globally available in ECMAScript.
TypeScript offers many features you expect from modern programming languages, such as static typing, OOP, and better module management.
If you were writing plain JavaScript and typed session, VS Code wouldn’t show you endConversation as an available option, or if it did, it’s because it’s seeing it from another file in your project. The IDE has no way of knowing that session is actually of type Session.
Contrast that with the following bit of TypeScript, where we are able to identify the type:
When you create the TypeScript file, you’ll notice VS Code knows exactly what the
session parameter is, and is able to offer you IntelliSense.
In order to start programming in TypeScript, you will need to install TypeScript. TypeScript is available as an NPM package, and you can simply add it as a developer dependency to your project. Personally, because I use TypeScript extensively, I install it globally.
If you use the Bot Framework Yeoman generator I created you will notice there is a template already available for TypeScript. For purposes of this post, we’ll create everything from scratch, so you can see how it is all brought together.
Create a new folder, and add a
package.json file with the following contents:
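The original listing isn’t reproduced here, but a minimal sketch might look like the following. The package names come from this post; the version numbers are placeholders, so use whatever current versions suit your project:

```json
{
  "name": "typescript-bot",
  "version": "1.0.0",
  "main": "app.js",
  "dependencies": {
    "botbuilder": "^3.9.0",
    "restify": "^4.3.0"
  },
  "devDependencies": {
    "@types/restify": "^2.0.0",
    "typescript": "^2.3.0"
  }
}
```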
The dependencies section is pretty standard for a bot, but you might not be familiar with the devDependencies. The devDependencies contain the types of the various packages we’ll be using. Types are the various interfaces for the objects and classes in a particular package. So
@types/restify contains the interfaces provided by restify. This will add IntelliSense to the project. In the case of botbuilder, we don’t need to add a types file, as the framework is written in TypeScript, and contains all of the necessary types. After saving the file, run the installation process like normal.
Next, you need to configure the transcompiler. Add a file named tsconfig.json with the following content; its files section identifies which files will be transcompiled.
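A minimal sketch of tsconfig.json; the file names assume the app.ts and dialog.ts files created below, and the compiler options are reasonable defaults rather than the post’s exact settings:

```json
{
  "compilerOptions": {
    "target": "es6",
    "module": "commonjs",
    "sourceMap": true
  },
  "files": [
    "app.ts",
    "dialog.ts"
  ]
}
```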
Let’s start by creating a basic dialog. Add a file to your project named
dialog.ts, and add the code you see below. You will notice this is standard Node.js bot code, with a couple of differences.
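The original listing isn’t included here, but a sketch of dialog.ts, assuming the botbuilder v3 API (the prompt text and messages are illustrative), might look like this:

```typescript
import * as builder from 'botbuilder';

// the shape of the prompt result used in the second step
interface IResults {
  response: string;
}

// export the waterfall; each step is typed for IntelliSense
export default [
  (session: builder.Session) => {
    builder.Prompts.text(session, 'What is your name?');
  },
  (session: builder.Session, results: IResults) => {
    session.endConversation(`Hello, ${results.response}!`);
  }
];
```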
First, rather than using require, you use the import statement. The * means you’ll be importing everything from the package, and as allows you to identify an alias. The end result is effectively the same as using const builder = require('botbuilder');, like you would have done traditionally.
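To see the equivalence without installing anything, here is the same import pattern against Node’s built-in path module, standing in for botbuilder:

```typescript
// TypeScript-style import, aliased with "as"
import * as path from "path";
// The traditional equivalent would be:
// const path = require("path");

// Either way, the alias exposes the package's exports
const filename: string = path.basename("/bots/dialog.ts");
```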
Second, you’ll notice the creation of the interface
IResults. You added a single property named
response, and marked it as type
Third, rather than using module.exports to export the array that contains your waterfall, you use export default, followed by the array. The syntax is slightly different, but the result is the same.
Finally, you’re declaring the data type of
results on each of the waterfall steps to aid your development experience.
results is using the interface you created earlier in the file. You’ll notice when you do this, and you start typing
response.results, VS Code will provide IntelliSense, and show you
response as an available property of type string.
To finish things off, create a file named app.ts with the following code:
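A sketch of app.ts under the botbuilder v3 and restify APIs; the environment variable names are the conventional ones, and your own listing may differ:

```typescript
import * as restify from 'restify';
import * as builder from 'botbuilder';
import dialog from './dialog';

// set up the restify server
const server = restify.createServer();
server.listen(process.env.PORT || 3978, () => {
  console.log(`${server.name} listening at ${server.url}`);
});

// create the chat connector and wire it to the messages endpoint
const connector = new builder.ChatConnector({
  appId: process.env.MICROSOFT_APP_ID,
  appPassword: process.env.MICROSOFT_APP_PASSWORD
});
server.post('/api/messages', connector.listen());

// register the waterfall from dialog.ts as the default dialog
const bot = new builder.UniversalBot(connector, dialog);
```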
In order to run the code, it will need to be transcompiled. You can do this by simply running
tsc. Because we created a
tsconfig.json file, the transcompiler will know what to transcompile, and how to do it. The --watch switch will automatically detect changes to the TypeScript files, and transcompile on the fly.
You can then run the generated JavaScript like any other bot, using node or nodemon.
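The full loop, then, looks something like this (file names assume the project created above):

```shell
tsc --watch     # transcompile, and re-run on every save
node app.js     # in a second terminal; or use nodemon app.js
```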
There’s quite a bit that I left on the table when it comes to TypeScript. We could make better use of interfaces with our dialogs, we could create a class for our app, we could …, we could …. My goal with this post was to help get you up and running with TypeScript, and show some of the power that’s made available.
If I’ve said anything about bots, it’s that they’re apps. They’re just apps with a conversational interface. This style of interface can be extremely powerful, as it allows the user to better express themselves, or “skip to the end” if they already know what it is they’re trying to accomplish. The problem, though, is without a bit of forethought to the design of the bot it’s easy to wind up back in this scenario, where the user isn’t sure what to do next:
If you’re well versed in the set of commands you can quickly perform any operation you desire. But there is no guidance provided by the system. Just as they’re no guidance provided here:
We need to guide the user.
Buttons exist for a reason. They succinctly show the user what options are available, and can guide the user towards what they’re looking for. In addition, they help reduce the amount of typing required, which is especially important when talking about someone accessing a bot on a mobile device with a tiny keyboard.
The most obvious place where buttons shine is when providing a list of choices for a user to select from. This might be a shipping method, a category for filtering, or, really, any other set of options. To support a list of choices, BotBuilder provides a choice prompt. The choice prompt, as you might expect, presents the user a list of options to choose from, and then provides access to the selection in the next step of the dialog.
The choice prompt limits the user’s response to just the list of options you provide. You can limit the number of times the bot will ask the user for a response before moving onto the next step in the waterfall.
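As a sketch (botbuilder v3, inside a waterfall step; the prompt text and options are illustrative), a choice prompt with a retry limit looks like this:

```typescript
builder.Prompts.choice(
  session,
  'How would you like your order shipped?',
  ['Ground', 'Two-day air', 'Overnight'],
  // render the choices as buttons, and only re-ask twice
  { listStyle: builder.ListStyle.button, maxRetries: 2 }
);
```

The selection is then available in the next waterfall step as results.response.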
While choice is certainly nice for providing a simple list of options, it does force the user into choosing one of those options. As a result, it’s not as easy to use choice when trying to guide the user with a list of options while also allowing them to type free-form, which is what you’ll want to do when the user first starts a session with the bot. In addition, you don’t get control over the interface provided.
If you wish to customize the list of options, you need to set up a card. This can be an Adaptive Card, or one of the built-in cards, such as thumbnail or hero. By using a card you can provide a bit more guidance to the channel on how you’d like your list of options to be presented.
To allow the user to select from a list of options, you will add buttons to the card. Buttons can be set to either
imBack, meaning the client will send the message back to the bot just as if the user typed it, or
postBack, meaning the client will send the message to the bot without displaying it inside the client. Generally speaking,
imBack is a better choice, as it makes it clear to the user something has happened, and can give the user a clue as to what to type in the future, should they so decide.
The code below is the wrong way to use buttons to provide a list of options, but it’s the most common mistake I see people make when using buttons with Bot Framework.
In the code snippet below, I want you to notice the addition of the buttons using
builder.CardAction.imBack, and the call to
session.send (where the mistake is).
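Here is a sketch of that mistaken dialog (botbuilder v3; the dialog name and text are illustrative):

```typescript
bot.dialog('ColorDialog', [
  (session: builder.Session) => {
    const card = new builder.HeroCard(session)
      .text('What is your favorite color?')
      .buttons([
        builder.CardAction.imBack(session, 'Blue', 'Blue'),
        builder.CardAction.imBack(session, 'Red', 'Red')
      ]);
    const message = new builder.Message(session).addAttachment(card);
    session.send(message); // the mistake: the bot isn't waiting for a reply
  },
  (session: builder.Session, results: any) => {
    session.endConversation(`You chose ${results.response}`);
  }
]);
```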
If you added this dialog to a bot and ran it, you’d see the following output:
The mistake, as I mentioned above, is at
session.send. When using
session.send in the middle of a waterfall dialog, the bot is left in a state where it’s not expecting the user to respond. As a result, when the user does respond by clicking on Blue, the bot simply returns back to the current step in the waterfall, and not to the next one. You can click the buttons as long as you’d like, and you’ll see them continuing to pop up.
In order for the bot to be in a state that expects user input and continues to the next step of a waterfall, you must use a prompt. When using buttons inside of a card, you can use either a text prompt or a choice prompt. When using a
text prompt, the bot can accept any input in addition to the buttons you provided. This can allow the user to be more free-form as needed.
choice prompts, however, will limit the user to the list of choices, just as if you created it the traditional way mentioned earlier.
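The fix is a one-line change: hand the card to a prompt instead of session.send. A sketch using a text prompt (botbuilder v3; names and text are illustrative):

```typescript
bot.dialog('ColorDialog', [
  (session: builder.Session) => {
    const card = new builder.HeroCard(session)
      .text('What is your favorite color?')
      .buttons([
        builder.CardAction.imBack(session, 'Blue', 'Blue'),
        builder.CardAction.imBack(session, 'Red', 'Red')
      ]);
    const message = new builder.Message(session).addAttachment(card);
    // the prompt leaves the bot waiting, so the next step runs on reply
    builder.Prompts.text(session, message);
  },
  (session: builder.Session, results: any) => {
    session.endConversation(`You chose ${results.response}`);
  }
]);
```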
As I mentioned at the beginning of this post, one of the keys to a good user experience in a bot is to provide guidance to the user; otherwise, you’re just giving them a C-prompt. Again, the easiest way to do this is via buttons.
We’ve already seen that
imBack behaves just as if the user typed the value manually. We can take advantage of this fact by providing the list of options, and ensuring the values match the intents provided in the bot.
You’ll notice in the code sample below I created a bot with two simple dialogs, and the default dialog sends down the buttons inside of a card. By calling
endConversation, the bot sends down the card and closes off the conversation. When the user clicks on a button it’s just as if the user typed in the value, and the bot will then route the request to the appropriate dialog. The user is free at this point to either click one of the provided buttons, or type in whatever command they desire.
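A sketch of that bot (botbuilder v3; the connector setup is omitted, and the dialog names and messages are illustrative):

```typescript
const bot = new builder.UniversalBot(connector, [
  (session: builder.Session) => {
    const card = new builder.HeroCard(session)
      .text('What would you like to do?')
      .buttons([
        builder.CardAction.imBack(session, 'Hello', 'Hello'),
        builder.CardAction.imBack(session, 'Goodbye', 'Goodbye')
      ]);
    const message = new builder.Message(session).addAttachment(card);
    // send the card and end the conversation; a button click arrives
    // as if the user typed the value, triggering the matching dialog
    session.endConversation(message);
  }
]);

bot.dialog('HelloDialog', (session: builder.Session) =>
  session.endConversation('Hello Dialog')
).triggerAction({ matches: /^hello$/i });

bot.dialog('GoodbyeDialog', (session: builder.Session) =>
  session.endConversation('Goodbye Dialog')
).triggerAction({ matches: /^goodbye$/i });
```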
The updated bot now performs as displayed below. In the dialog I started by typing test to trigger the bot. I then clicked on Hello, which displayed the Hello Dialog message. I completed the exchange by typing Hello, which, as you see, sent the same Hello Dialog message.
I’ve said it before, and I’ll certainly say it again - buttons exist for a reason. Buttons can help you provide a good UI/UX for users in any type of application, and bots are no exception. You can use buttons to both limit the amount of typing required, and to help guide the user’s experience with the bot.
Communication with a user via a bot built with Microsoft Bot Framework is managed via conversations, dialogs, waterfalls, and steps. As the user interacts with the bot, the bot will start, stop, and switch between various dialogs in response to the messages the user sends. Knowing how to manage dialogs in Bot Framework is one of the keys to successfully designing and creating a bot.
At its most basic level, a dialog is a reusable module, a collection of methods, which performs an operation, such as completing an action on the user’s behalf, or collecting information from the user. By creating dialogs you can add reuse to your bot, enable better communication with the user, and simplify what would otherwise be complex logic. Dialogs also contain state specific to the dialog in dialogData.
A conversation is a parent to dialogs, and contains the dialog stack. It also maintains two types of state, conversationData, shared between all users in the conversation, and privateConversationData, which is state data specific to that user.
Every dialog you create will have a collection of one or more methods that will be executed in a waterfall pattern. As each method completes, the next one in the waterfall will be executed.
Your bot will maintain a stack of dialogs. The stack works just like a normal LIFO (last in, first out) stack, meaning the last dialog added will be the first one completed, and when a dialog completes, control will then return to the previous dialog.
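The LIFO behavior can be sketched with a toy model (an illustration only, not the botbuilder API): beginning a dialog pushes it onto the stack, and ending a dialog pops it, returning control to the previous dialog.

```typescript
// toy dialog stack: names stand in for real dialogs
const dialogStack: string[] = [];

function beginDialog(name: string): void {
  dialogStack.push(name);
}

function endDialog(): string | undefined {
  dialogStack.pop();
  return dialogStack[dialogStack.length - 1]; // the dialog that resumes
}

beginDialog("Default");
beginDialog("AddNumber");
beginDialog("Help");
const resumed = endDialog(); // Help finishes; AddNumber resumes
```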
Bots come in many shapes, sizes, and forms. Some bots are simply front ends to existing APIs, and respond to simple commands. Others are more complex, with back and forth messages between the user and bot, branching based on information collected from the user and the current state of the application. Depending on the requirements for the bot you’re building, you’ll need various tools at your disposal to start and stop dialogs.
Dialogs can be started in a few ways. Every bot has a default, sometimes called a root dialog, which is executed when no other dialog has been started, and no other ones have been triggered via other means. You can create a dialog that responds globally to certain commands by using triggerAction or beginDialogAction. triggerAction is registered globally to the bot, while beginDialogAction registers the command to just that dialog. Finally, you can programmatically start a dialog by calling either beginDialog or replaceDialog, which will allow you to add a dialog to the stack or replace the current dialog, respectively.
When a bot reaches the end of a waterfall, the user’s next message will look for the next step in the waterfall. If there is no next step, the bot simply doesn’t respond, naturally ending the conversation or dialog. This can provide a bit of a confusing experience for the user, as they may need to retype their message to get a response from the bot. It can also be confusing for the developer, as there may be many ways a dialog might end depending on the logic.
As a result, when a conversation or dialog has come to an end, it’s a best practice to explicitly call endConversation, endDialog, or endDialogWithResult. endConversation both clears the current dialog stack and resets all data stored in the session, except userData. Both endDialog and endDialogWithResult end the dialog, clear out dialogData, and return control to the previous dialog in the stack. Unlike endDialog, endDialogWithResult allows you to pass arguments into the previous dialog, which will be available in the second parameter of the first method in the waterfall (typically named results).
Ending a conversation or dialog will also remove the associated state data. This is important to remember when deciding where to store state data. The best practices of minimizing scope of state data apply to bots, just as they do to any other application.
The place where state lifespan becomes trickiest is dialogData. If you start a new dialog, the new dialog doesn’t receive the dialogData from the calling dialog. In addition, when a dialog completes, the previous dialog doesn’t receive the data from the dialog that just ended. You can overcome this by using arguments.
endDialogWithResult allows you to pass arguments to the prior dialog, while both beginDialog and replaceDialog allow you to pass arguments into the new dialog.
The sample application we will be building through the next set of examples is a simple calculator bot. Our calculator bot will allow the user to enter numbers, and once they say total we’ll display the total and allow them to start all over again. We’ll also want to allow the user to get help at any time, and to cancel as needed. The sample code is provided on GitHub.
Starting with version 3.5 of Microsoft Bot Framework, the default or root dialog is registered as the second parameter in the constructor for
UniversalBot. In prior versions, this was done by adding a dialog named
/, which led to naming similar to that of URLs, which really isn’t appropriate when naming dialogs.
The default dialog is executed whenever the dialog stack is empty, and no other dialog is triggered via LUIS or another recognizer. (We’ll see how to register dialogs using
triggerAction a little later.) As a result, the default dialog should provide some contextual information to the user, such as a list of available commands and an overview of what the bot can perform.
From a design perspective, don’t be afraid to send buttons to the user to help guide them through the experience; bots don’t need to be text only. Buttons are a wonderful interface, as they can make it very clear what options the user can choose from, and limit the possibility of the user making a mistake.
To get started, we’ll set up our default dialog to present the user with two buttons, add and help. For our first pass, we’ll simply echo the user’s selection; we’ll add additional dialogs in the next section. We’ll do this by setting up a two step waterfall, where the first step will prompt the user, and the second will end the conversation.
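A sketch of that default dialog (botbuilder v3; the connector setup is omitted, and the card text is illustrative):

```typescript
const bot = new builder.UniversalBot(connector, [
  (session: builder.Session) => {
    const card = new builder.HeroCard(session)
      .text('What would you like to do?')
      .buttons([
        builder.CardAction.imBack(session, 'add', 'add'),
        builder.CardAction.imBack(session, 'help', 'help')
      ]);
    const message = new builder.Message(session).addAttachment(card);
    // prompt (rather than send), so the bot waits for the selection
    builder.Prompts.text(session, message);
  },
  (session: builder.Session, results: any) => {
    // first pass: just echo the selection
    session.endConversation(`You chose ${results.response}`);
  }
]);
```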
One of the biggest challenges when creating a bot is dealing with the fact users can be random. Imagine the following exchange:
This is a common scenario. The user sends a message to the bot. The bot responds. The user gets a new piece of information, in this case their friend is a vegan, and thus asks about a vegan menu. The bot is now stuck, because it wasn’t expecting that response.
triggerAction allows you to register a global command of sorts with the bot, and ensure the appropriate dialog is executed for every request.
In prior versions of Bot Framework, developers typically started every dialog name with /. This was because when registering the default dialog in earlier versions you named it /. As you’ve already seen, that’s not the case starting with version 3.5. As a result, you give your dialog a name that appropriately describes the operation the dialog is built to perform.
bot.dialog is used to register a dialog. The two parameters you’ll provide are the name of the dialog, and the array of methods you wish to execute when the user enters the dialog. Let’s create the starter for add dialog. For now, we’ll leave it with the simple echo, and introduce new functionality as we go forward.
We want to register our AddNumber dialog with the bot so whenever the user types add this dialog will be executed. This is done through the use of
triggerAction, which is a method available on Dialog. triggerAction accepts a parameter of type ITriggerActionOptions, which has a few properties, the most important of which is
matches. Matches will either be a regular expression to match a string typed in by the user, such as add in our case, or a string literal if the match will be done through the use of a recognizer, such as one from LUIS.
Let’s update our bot to register AddNumber to be started when the user types add. We’ll remove the second step from the default dialog and take advantage of the behavior of our buttons, which will send the text of the button to the bot, much in the same way as if the user typed it themselves.
triggerAction is a global registration of the command for the bot. If you wish to limit that to an individual dialog, use
beginDialogAction, which we’ll discuss later.
triggerAction replaces the entire current dialog stack with the new dialog. While that can be good for AddNumber, that wouldn’t be good for a dialog to provide help. We’ll see a little later how
onSelectAction can be used to manage this behavior.
If you execute the bot at this point you’ll notice clicking Add on the buttons, or simply typing it, will cause the bot to send the message This is the AddNumber dialog. You’ll also notice that help, at present, does nothing. We’ll handle that in a bit.
Let’s talk a little bit about our logic for AddNumber. We want to prompt the user for a number, add it to our running total, and then ask the user for the next number. Basically, we just need to restart the same dialog over and over again. We can use
replaceDialog to perform this action.
In the first step of our waterfall, we’ll check to see if there is a running total available in
privateConversationData, and create one if it doesn’t exist. We’ll then prompt the user for the number they want to add.
In the second step, we’ll retrieve the number, add it to our running total, and then start the dialog over again by calling replaceDialog. replaceDialog takes two parameters, the first being the name of the dialog with which you wish to replace the current dialog, and the second being the arguments for the new dialog. The object you provide as the second parameter will be available in the first function in the new dialog’s waterfall in the second parameter (typically named args).
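Putting the two steps together, a sketch of AddNumber (botbuilder v3; the prompt text is illustrative):

```typescript
bot.dialog('AddNumber', [
  (session: builder.Session) => {
    // create the running total if this is the first number
    if (session.privateConversationData.runningTotal === undefined) {
      session.privateConversationData.runningTotal = 0;
    }
    builder.Prompts.number(session, 'What number would you like to add?');
  },
  (session: builder.Session, results: any) => {
    session.privateConversationData.runningTotal += results.response;
    // restart the same dialog, replacing the current instance
    session.replaceDialog('AddNumber');
  }
]).triggerAction({ matches: /^add$/i });
```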
It doesn’t make a lot of sense for our bot to have a global total command. After all, it’s only valid if we’re currently adding numbers. Using
beginDialogAction allows you to register commands specific to that dialog, rather than global to the bot. By using
beginDialogAction, we can ensure total is only executed when we’re in the process of running a total.
The syntax for
beginDialogAction is similar to
triggerAction. You provide the name of the
DialogAction you’re creating, the name of the
Dialog you wish to start, and the parameters for controlling when the dialog will be started.
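A sketch (botbuilder v3): the Total dialog is registered with the bot, but the total command is only recognized while AddNumber is active:

```typescript
bot.dialog('Total', (session: builder.Session) => {
  // ending the conversation also clears privateConversationData
  session.endConversation(
    `The total is ${session.privateConversationData.runningTotal}`);
});

bot.dialog('AddNumber', [
  (session: builder.Session) => {
    builder.Prompts.number(session, 'What number would you like to add?');
  },
  (session: builder.Session, results: any) => {
    session.privateConversationData.runningTotal =
      (session.privateConversationData.runningTotal || 0) + results.response;
    session.replaceDialog('AddNumber');
  }
])
  .triggerAction({ matches: /^add$/i })
  // the DialogAction's name, the dialog to start, and the matching rule
  .beginDialogAction('totalAction', 'Total', { matches: /^total$/i });
```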
By calling endConversation, we reset the entire conversation back to its starting state. This will automatically clear out any privateConversationData, as the conversation has ended.
triggerAction will reset the current dialog stack with the new dialog. In the case of AddNumber that’s just fine; the logic on the dialog is designed for the dialog to continually restart. But this is problematic when it comes to Help. Needless to say, we don’t want to reset the entire set of dialogs when the user types help; we want to allow the user to pick up right where they left off.
Bot Framework provides beginDialog for adding a dialog to the stack. When that dialog completes, it returns control to the active step in the prior dialog. Or, in the case of our Help example, it will allow the user to pick up where they left off.
The onSelectAction property on ITriggerActionOptions executes when the bot is about to start the dialog being triggered. By using this event, we can change the way the dialog is started, using beginDialog, which will add the dialog to the stack instead of replacing the stack. The first parameter is the name of the dialog we wish to start, which is provided in args.action, and the second is the args parameter we want to pass into the dialog when it starts. The code sample below will ensure we return control to the prior dialog when this one completes.
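A sketch of a Help dialog wired up this way (botbuilder v3; the help text is illustrative):

```typescript
bot.dialog('Help', (session: builder.Session) => {
  // endDialog pops Help off the stack, resuming the prior dialog
  session.endDialog('Enter a number to add it to the total, or type "total".');
}).triggerAction({
  matches: /^help$/i,
  onSelectAction: (session: builder.Session, args: any, next: Function) => {
    // push the dialog onto the stack instead of replacing the stack;
    // note the use of args.action rather than a hard coded name
    session.beginDialog(args.action, args);
  }
});
```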
beginDialog, don’t hard code the name of the dialog you’re about to start, but rather use
args.action. Otherwise, you’ll notice the dialog won’t actually start.
One of the challenges with the help solution we created earlier is it can only provide generic help; whenever the user types help the exact same message is sent to the user. By using beginDialogAction you can pass parameters to the triggered dialog, allowing you to centralize messaging for help. In our case, we’ll use the name of the current action as the key to the message we want to send.
If you’ve made it to this point in the article, you already have the skills necessary to create a global cancel operation - you’d add a new dialog, register it with
triggerAction, and add a string match for the word cancel. The dialog would then call
endConversation with a friendly message, and the user would be able to restart the operation.
However, let’s say you wanted to provide granular support for cancel operations, changing the behavior on different dialogs, or maybe not allowing a cancel on a dialog at all. This is where
endConversationAction come into place. Both are tied to a specific dialog, and
cancelAction cancels just the dialog, while
endConversationAction cancels the entire conversation.
The second parameter you’ll pass into cancelAction is ICancelActionOptions, which includes the matches and onSelectAction properties we’ve seen before. It also adds confirmPrompt, which, if set, will ask the user to confirm they actually want to cancel.
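A sketch of cancelAction on AddNumber (botbuilder v3; the messages are illustrative):

```typescript
bot.dialog('AddNumber', [
  (session: builder.Session) => {
    builder.Prompts.number(session, 'What number would you like to add?');
  },
  (session: builder.Session, results: any) => {
    session.privateConversationData.runningTotal =
      (session.privateConversationData.runningTotal || 0) + results.response;
    session.replaceDialog('AddNumber');
  }
]).cancelAction('cancelAddNumber', 'Operation cancelled.', {
  matches: /^cancel$/i,
  // ask before actually cancelling the dialog
  confirmPrompt: 'Are you sure you wish to cancel?'
});
```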
Bot Framework offers many options and methods for managing dialogs and responding to user requests. Harnessing the power provided by dialogs allows you to create bots that can have conversations with your users that feel more natural.
Thank you to Nafis Zaman for the catch on the behavior of
The New York Marathon. There’s really nothing else you need to say to runners and non-runners alike. It’s the largest marathon in the world, and arguably the most prestigious. While it doesn’t have the qualifying cachet that Boston does, it’s a marathon everyone knows, and is on the bucket list of every runner, or at least all the ones I know.
As a Jersey Boy, it’s a race I’ve wanted to do for as long as I can remember, long before I laced up a pair of running shoes and heaved my way around Fiesta Island with a friend for my first “run”. I’d entered the lottery 3 times prior with no luck. So my joy upon seeing that email that contained the word “Congratulations” cannot even begin to be described. I’m honestly getting chills just sitting here thinking back to that day.
Anyone who knows me knows my snake-bitten history with marathon training. This cycle was no exception in that aspect, but there were a few other factors that contributed to a less-than-optimal summer.
For starters, and I’m just going to point the biggest finger at myself, I was frankly just tired. I’d done Grandma’s Marathon at the end of June. For those of you scoring at home (or even if you’re alone), that’s just 4 months before the New York Marathon, or about 20 weeks. That doesn’t give you much “I’m just going to sit on my keester and do nothing” time. The moment I finished my last marathon I was already back in training mode. That was a bit much for me, and I was burned out going into the next round. As such, I wasn’t as committed as I should have been, and it certainly showed.
In addition, my travel schedule was a struggle. While I used to travel full time, my schedule and destinations were relatively predictable, so it was easy to work my training into the week. During the training period I had a handful of oddball trips that threw off everything, including a trip to Japan. As a perfect example, I’d hoped to knock out an 18 miler in Japan - the weather conspired against me, and my body quit after 14. (Actually, it quit after 8, but I pushed through the rest.) While it did give me an opportunity to do 18 with a great friend the next week, it wasn’t where I needed to be.
The week after said 18 I’d intended to do “the 20 miler”. After about 3 miles I had a tendon behind my knee start to complain. I kept thinking it just needed to loosen up, but after I finished 6, and the group I was with got back to the parking lot where we were going to meet more people for the rest of the run, I knew I was done for the day. I tried to stretch, which elicited a stream of curse words that would make a sailor blush. I hopped in the car, had a good cry thinking I wouldn’t be able to run after all. I went to my PT, who threw everything he had at it, rested, compressed, and everything else, in hopes I was able to run.
Amazingly, I was able to meet the one true goal every marathoner has: toeing the starting line. And this time it was at the foot of the Verrazano.
I knew I wanted to stay in the Financial District (FiDi), because it’s both quiet at night, and walking distance to the Staten Island Ferry. I found a nice little Airbnb that was just a few blocks away from the terminal, and thought all was good. Until about 6 weeks before the race when I received an email from the host saying his building’s management wouldn’t let him rent the place out any longer. Anyone who knows Airbnb in New York knows how little the government cares for Airbnb, and how little Airbnb cares for government regulations. So despite Airbnb being one of the sponsors of the event, it was pretty clear to me this wasn’t going to be an option, or at least not a reliable one. (FWIW, I’d suggest avoiding Airbnb in New York for this exact reason.)
Fortunately I managed to get a great rate on the DoubleTree, which is about 3 blocks from the terminal. It’s also a hotel I’d stayed at many times, so I was familiar with both the hotel and the area around it. I’m all about the comfort provided by routine, and this was going to give me exactly that.
I arrived on the Thursday before the race, so I could see Tim Minchin perform, and to get adjusted to the time zone. Landed in Newark, Lyft (speaking of companies with contentious relationships with the government) up to FiDi, and focused on relaxing as much as possible.
I’d been told many times to get to the expo as early in the day, and the week, as possible. Heed this advice! While they take over a convention center floor, and have an amazing amount of real estate, there’s still 50,000 runners that need to make their way through the area, not to mention family and friends they bring along for support.
I got there at about 10:45, 45 minutes after opening, and it was already very busy.
That said, it’s as well organized as an event of this size can be. If you were smart enough to print out your check-in sheet at home you could head straight over to pickup. Or, if you were like me, you head over to a little kiosk, and get a little receipt printout, and then go get your packet. From there it’s over to grab your shirt, where there was a huge line for Men’s Medium. Fortunately I still have a few extra pounds on my frame, so I was grabbing a Large, and was through that pretty quickly.
Next up - swag. Yeah, we’re going to ignore the race fee, and the free shirt they just gave me. I needed more swag. So through the swag store I went, picking up a jacket. And another shirt. And a pint glass. And a hat. (I’m honestly surprised I was that restrained.)
From there it’s on to the main expo floor, where you’ll find vendors selling all manner of snake oil, running gear, and last minute supplies such as gels. I made a bee-line for the CEP section to find a quad sleeve to help my ailing hamstring tendon. Upon acquiring that, I checked out a great little seminar put on by the Whippets running group, who walked everyone through the course. If it’s your first New York Marathon, I can’t recommend attending this enough. As an added bonus, I happened to see someone with a custom bib with his name on it; he pointed me at the station that was doing that, and grabbed ones that said “Chris” and “Jersey”, unsure of which one I was going to wear on race day.
Finally I realized I was tired, and hungry, and needed to get out of there. I spent about 3 hours there, and I’d say that’s probably about average for most people.
My wife took the red-eye on Friday into New York, and my brother caught an early flight from Burlington down. My support crew had arrived. Just having them there gave me great comfort.
We spent Saturday doing a dry run of the three different viewing spots they were going to cheer me on at. It worked well for them, as they got to see the locations, and which trains they needed. It worked well for me, as it gave me great visuals of where I was going to be on the course, and where to look for them.
We also walked about the last 2 miles of the race. While it was much longer than I really wanted to walk, I wanted to see the last mile. In the end, I’m glad that I walked the distance. It allowed me to make a few mental notes in regards to landmarks, and to see the exit and re-entry into Central Park.
It also allowed me to see the hill that is the .2 of the 26.2. Make sure you’re ready for that! They make you work for that medal.
BTW, if you’re looking for a good cheer strategy, take a train out to somewhere close to Barclay’s Center (it was the R train for us). You can see people at the 8 mile mark, just after where all the runners come together (more on that later). From there, hop the 4 train to somewhere along 1st. The crowds start to think towards the 100 blocks; my cheer crew waited for me on 86th. From there they can walk over to 5th for one last cheer. From 5th, you can catch one of the paths across the park around 86th and 5th. I was able to meet up with them on Columbus and 74th. It all worked out perfectly. Granted, you can’t be much faster than about a 4 hour time for the 3x strategy to work, so YMMV.
We bid my brother farewell for the night as he had family obligations (which I managed to dodge.) My wife and I went off to find Japanese food (rice is my pre-race carb of choice), and then off to sleep with visions of PRs dancing in my head.
Now I should mention at this point that I, like many a runner, have a delicate stomach. Part of my plan behind paying the money for the DoubleTree in FiDi was to be in a familiar neighborhood, with familiar shops. During the short period of time we were there I’d asked the Essen deli/bagel shop about their hours. They assured me they were 24/7. Perfect!
I woke up after a pretty good night’s sleep, and began my preparations. I’d already laid out my deflated runner, so I knew what I was wearing, and that I had everything in regards to that. I put everything together, donning my running outfit, filling my water bottles, and tossing on my donation outfit of a sweatshirt, sweatpants and a robe.
Yes, a robe. I mean, if you’re going to be up that early, you may as well have a robe.
After kissing my wife goodbye, and getting the good luck wishes I certainly needed, I roamed over to Essen for my English muffin and peanut butter. Only, the “grill”, so to speak, wasn’t open. OK, deep breath. I’m not going to let this get into my head. So I bought two raw English muffins from the guy behind the counter, and a Kind Bar, which I hoped would be OK with my stomach.
I walked over to the ferry, with hundreds of other runners. If it didn’t already start to hit me that I was about to run the New York Marathon, it became a reality at that point. I chatted with a couple of other runners on the way over. Seeing the Staten Island Ferry sign elicited a couple of tears.
The security presence was obvious, but not overwhelming. My bags were sniffed by a couple of rather cute dogs, and away I went onto the 6:30 ferry. I was originally set for the 6:45 ferry, and I was hoping to see a couple of Seattle Green Lake Running Group (SGLRG) runners, but the draw of just getting to Staten Island was too much. (As it turned out, the ferries after 7:30 started having issues from what I’ve heard, so maybe it’s just as well.)
The ferry was full of runners, save for a few people who were riding it because it’s, well, the Staten Island Ferry, and who weren’t necessarily thrilled we were there. I <3 NY. The ride is relatively quick, and gives you an amazing view of Lady Liberty, which is a wonderful way to start any morning.
Upon arriving on Staten Island it was a bus ride over to the start. Relax. That’s where the line really starts, as well as the waiting. Bring a paper, a copy of Runner’s World, or something to pass the time. Or, just people watch. From the people who’ve done numerous marathons, to the first timers, to everyone else, there are plenty of sights to see. Take it all in. You’re about to run the greatest marathon on the planet (again - sorry, Boston).
We unloaded at Fort Wadsworth, where we were greeted by more security, and more dogs. And then it was time to get prepped for the race.
I’d read a few things in the past that said to get to Fort Wadsworth as late as possible. I landed a good 2 hours before the start, and I felt like that was perfect. Next time I run the race I fully intend on getting over to Staten Island relatively early.
The organizers have this down to a science. They know exactly what they’re doing. The race is broken down into 4 separate start times, and then 3 different colors, and then corrals from there. On top of that, the start area has a ton of real estate on which to spread out. As a result, it oddly feels like a much smaller race than it actually is. There was plenty of space to spread out, to take care of last minute preparations, or to just close your eyes and relax on the grass.
That said, the porta-potty lines are ridiculous. Make sure you give yourself plenty of time to answer nature’s call. In fact, it’s not a terrible idea to take care of things and hop back in line, just in case.
As for me, I worked on my last bits of prep. I assembled the rest of my outfit, attaching my bib and my name bib to my shirt. I drank more and more water. I worked on not letting the fact I was about to run the New York Marathon hit me, with mixed results. Seeing the Verrazano-Narrows Bridge in the distance is hard to ignore.
At some point I should probably talk about my time goal. Every runner has one, despite how much they might deny it. If they are admitting to a time goal, they probably have a faster one they’re not really wanting to make public.
I’m not that runner. I have one simple marathon goal: I want the first number to be a 3. I don’t care if it’s followed by 59:59, my white unicorn starts with the number 3. Just once I want to finish under four hours.
Considering the disjointed training plan I had I wasn’t sure what my body might offer. But during that “taper” period, where I was mostly just trying to not upset that tendon, I was running a comfortable 8:40 pace, or about 30 seconds faster than what I’d need on race day. In fact, the last run I had with my Canadian Running Wife (CRW) featured a push up a hill which made her work, and she’s much faster than I am.
After all of that, I thought I’d be able to finally find that white unicorn.
The race of course starts on the Verrazano-Narrows Bridge. There are two decks on this bridge. And a couple of entrance/exit ramps on the other side in Brooklyn. And, of course, 50,000 humans to try and work with.
As a result, they break things down into waves, colors, and corrals. The corrals are mostly what you’d expect - packs of runners. And the waves are the various start times. Where things are truly different than most races are the colors. There are actually three paths for the first few miles of the race. Two colors, blue and green, take the top deck of the bridge, while orange takes the lower deck. Each of the three colors exits on a different ramp in Brooklyn. Because they’re all at different paces, there’s enough time, and distance, for everyone to naturally spread out before hitting the point in Brooklyn where everyone is brought together. Don’t get me wrong, there will always be people around you, but you’ll never feel like you’re fighting for elbow room.
They called my (now updated) corral of Wave 3, Blue, Corral A. I roamed over and waited. And waited. And waited. They were still unloading wave 2, which took quite a while. I used the time for last minute prep, ditching my robe (sadly). I dropped a Nuun into my water belt bottles, and filled them with water. And I started to get a feel for the weather.
There are few things runners obsess over more than the weather, save for maybe pre-race-porta-potties. You could not have asked for a better day for a marathon. It was in the 50s to start, and sunny. No threat of rain, but there was talk of a little bit of wind, which did hit at times. But really, gorgeous running weather.
They opened the corral, which is really just another brief waiting area before you walk to the start. I caught a glimpse of the 4:00 pacer, who was towards the front of the corral, as I waited in line to, well, take care of business. By the time I got out they were long since gone.
I walked towards the start line with everyone as they released us, working on breathing exercises. Someone sang God Bless America, rather than the anthem, which got my blood going. I kept trying to find that 4:00 sign, wanting a pack to run with. Alas, I wasn’t able to find them. I ditched the sweatsuit and focused on the goal.
Then the cannon went off.
There are two rules for starting a marathon:
- Rule 1: Don’t go out too fast
- Rule 2: See Rule 1
It seems so simple. Especially in New York where the first mile is straight uphill. I mean, really, go slow. In theory, that first mile for someone trying to break 4:00 should still be around 10:00, if not even slower.
But, there’s a cannon. And cameras. And the fact that it’s the New York Marathon.
The adrenaline carried me up the hill in 9:40. I didn’t mean to run that fast, but there I was at the top of the bridge. Amazingly I still couldn’t find that 4:00 pacer, but at that point my concerns were focused on that lovely quad compression sleeve I’d purchased, which was now around my knee.
Compression had been helping my tendon leading up to the race, so I was really hoping to have the sleeve for the race. The sleeve, however, had other plans. I’m no doctor, but I’m pretty sure having all of that around one’s knee is not a good thing. Once we hit the top of the bridge I stopped for a few seconds to pull the sleeve off and ditch it.
Then I focused my attention on relaxing. It’s just mile 2. I need to slow down. 9:40 was not where I wanted to be that first mile, so let me settle in and just enjoy the downhill.
The watch beeped at mile 2 at 8:20. So much for relaxing.
I’d always been told that the New York Marathon was like no other marathon in regards to fans, that there would be fans the entire race. Obviously there are no fans on the Verrazano, but landing in Brooklyn brought the first pack of fans.
It oddly felt at that point like a lot of other early marathon stages. Having run the San Diego Rock-n-Roll, it felt like I’d turned onto Washington St. There were people lining the streets, but only about one deep. And there were bands.
But there was still a different energy. There was a crescendo building.
I tried to settle in. Tried. I wanted nothing more than to just settle into a 9:09 pace (a 4:00 finish). But my legs just refused to go that slow. I was caught up in the energy. The fans and other runners carried me.
The crowds continued to build. Even though I was running in the middle of the street to try to just get into my own head and find my pace, I could still feel the energy they were giving me.
I crossed the 10K mark a full minute ahead of schedule. This was not good. And I knew it wasn’t good. My body started to feel it. The four to six mile section of the course features a steady downhill, which beat up my quads. But I kept hoping my legs would come back to me, and I knew that I had my cheering squad at mile 8.
I focused on the couple of turns that took us through the heart of Brooklyn and towards Abram and Karin. I took the right, drifted towards the left, and saw the green and pink poster boards they had. Giving them both high fives filled me with more energy than I can explain.
The crowds through that area are amazing. They’re 4 to 5 deep. And screaming at the top of their lungs. You feel like an absolute rock star. You’re on top of the world.
Around mile 10 you hit the traditional Jewish part of town. There are still fans there, but it’s more quiet. People are just going on about their day, mostly just ambivalent or annoyed at your presence. It was surreal coming out of such an energy filled section of the course to the exact opposite, and get a glimpse of the town going on about its day.
As for me, well, that was when I started spewing axle grease all over the course. My quads started to give way. As did almost every other subsystem in my body. I felt lightheaded. And nauseated. And miserable. I started walking, weaving a little side to side. While it’s certainly hard to self-diagnose, I’d be willing to bet I was over-hydrated. Whatever it was, I just refused to let it stop me. Slow me down, sure. But I wasn’t going to stop.
I focused, took a deep breath, found some form of a cadence, and kept moving.
You don’t spend much time in Queens. From a marathoner’s perspective, about the only thing you do in Queens is get ready for the bridge. You do have to climb the Pulaski Bridge to get into Queens, but you’re only in the borough long enough to make a few turns.
I did have to walk a bit in Queens. My motivation was still high, and I still had a goal. At this point I knew my white unicorn was gone, but I was still hoping for about a 4:15.
OK, maybe not yet.
Now, full disclaimer: I’m generally not one to swear. But there’s only one way to tell this story, and that’s as follows.
We took the left from Queens onto the Queensboro Fucking Bridge. I’m convinced that’s its real name. The Queensboro Fucking Bridge (QFB).
At this point, my will to live was slowly sucked away.
The QFB has many terrible features.
For starters, there are no fans. There’s no way onto the QFB unless you’re a runner. It’s just you, and the sounds of everyone else around you.
In addition, you’re on the lower deck. That, for me, and many others I’ve talked to, creates this terrible illusion the crest of the hill is just ahead, but it’s not. It seems to just go on forever.
The views are certainly amazing.
But the rest of the experience is disheartening.
I finally got to the end of this interminable bridge, and my quads were truly gone. I had to stop as I got onto 59th to stretch, something I’d never had to do in any marathon prior to New York.
If you’ve read anything about the New York Marathon, you’ve certainly heard about the crowds that await you at the end of the QFB. All of that is real, and it continues for a good couple of miles. Fans will be there 4 to 5 deep. There are truly no words in my vernacular to describe how amazing the atmosphere is through this section of the course.
When you finally take the left onto First you are greeted by unmatched energy, and a view of a sea of runners stretching out as far as one can see. It’s breathtaking. It truly feels at this point like you’re running the New York Marathon.
As for me, it’s now truly a struggle. My goal has been adjusted to just making the New York Times, which prints a special section for the marathon listing off marathoners; I had heard the cutoff is 4:30.
Walk as needed, force myself to run as much as possible. But just keep moving.
I knew Karin and Abram were at 87th, and that was all I was focused on. I saw them, and gave them both a hug. Normally I wouldn’t have done that, but considering most of my time goals were shot, I wanted to take the time to thank them.
I shared my hatred for the QFB with them, and said I was happy I wasn’t the guy I passed a little earlier who’d pooped himself. It’s all about perspective.
From there I kept working my way on to the Bronx, with a single focus - seeing Karin and Abram one more time. I’m not going to say I would have dropped out of the race, but at this point I was rather miserable, and my main reason for staying on the course was to see them one last time on 5th.
As much as I loathe the QFB, and I do, I have to say that the bridge leading into The Bronx has its own special kind of awful. It’s cambered, and, while not long, is just steep enough to truly frustrate you. Or, at least frustrate me.
That said, the fans in The Bronx, while certainly not as numerous as Manhattan, are feverish. They were truly proud of their neighborhood, and wanted you to know it. I appreciated that more than my face showed.
Although, my face only showed misery at that point.
At this point my body is just shot. Mentally, I’m still in the game. After all, I’m running the New York Marathon. I mean, what could be better than that? But I couldn’t manage much beyond a 50/50 run/walk, and even that ratio was a struggle at best. My goal had shifted to just finishing in under five hours.
After climbing the last bridge (finally!!) into Manhattan, I tried to just enjoy the atmosphere. The atmosphere through Harlem was all that I had hoped it would be. There was a church choir out singing, and, again, a prideful neighborhood.
It’s at this point I realize how well the course shows off the city and its neighborhoods. You get a great feel for what makes New York the greatest city on the planet.
It’s also when I come face to face with the hill that is mile 23, up 5th Avenue.
As I mentioned earlier, one of the driving forces I had was to see Karin and Abram. We had made plans for them to be in the mid-to-upper 90s along 5th, on the left side. After taking the quick right, left, left, and right around the park, it was all I was focused on.
The hill was tougher than I expected. That said, the crowds were beyond comprehension.
At this point I’d like to double back to the name tag debate I had in my head before the race of Chris vs Jersey. While I truly hate being called Chris, I was worried that wearing “Jersey” in New York would bring me nothing but heckling. I asked a friend of mine who’d run the race a couple of years prior, Elaine, who told me there is no negativity on race day.
I can safely say she’s right. I heard nothing but either “Jersey Strong!!” or people chanting “Jersey”.
And down 5th, that energy kept me going.
I hung to the left side where we’d agreed to meet, and didn’t see them. While I was disappointed, I kept to the left hoping they’d simply not ventured that far north, while starting to accept the fact I’d missed them.
And then there they were. I gave them both a hug.
Shortly after seeing Karin and Abram I turned right into Central Park. I’d seen the signs the day before, but seeing them on race day brought me to tears.
I love Central Park.
It is my favorite place on the planet to run, full stop.
Seeing that sign not only meant I had less than 3 miles to go, it also brought me into my “running mecca”.
It had been a while since I’d run Central Park, so the walk the day before helped remind me that the path back towards Central Park South is longer than it seems. I focused on the sights, but also on hitting the 40K mark at 4:45. I knew, if nothing else, that I could force myself through 2K in under 15 minutes.
Coming down 5th, and then into Central Park, you’re just surrounded by runners and energy. It’s near deafening. It’s truly special.
Taking the right from Columbus Circle back into the park was one of the best feelings I’ve ever had. I saw the 26 Mile sign and started bawling. After a few steps I realized it’s hard to breathe while crying and managed to contain myself.
Then you hit the uphill that makes up that last 385 yards. It’s tough.
And then the finish. No words can describe finishing a marathon, and nowhere is that more true than in New York.
More tears. More joy. I just finished the New York Marathon.
Ah, but the day’s mileage isn’t done: there’s one more mile to go as you work your way out of the finishing chute. There are 50,000 runners to contend with. As large as Central Park is, they still need to go somewhere. Add to that the fact that there are only certain exit spots from the Park to the City, and you’re looking at a solid mile walk from the finish if you didn’t gear check (which I didn’t.) On top of it all, you have to walk uphill.
After making it through the next mile I met up with Abram and Karin at 75th & Columbus. Karin gave me a huge hug, and Abram handed me a can of Heady Topper. We figured on race day open container laws would be overlooked. :-)
It was then time to go find food and celebrate.
We walked into a German beer hall (Reichenbach Hall), where they cheered every runner who stumbled in. This city embraces the marathon.
And the celebration was in full swing.
When it comes time to do it again (because there will be another go):
- Focus on training
- Focus on hills - lots of hills
- Respect the course
- Go out slower
- Enjoy it all over again!
- Wear your name on your shirt. Yes, you’ll feel a bit dorky, but I’m here to tell you that hearing your name chanted at mile 23 makes all the difference in the world.
- If you’re going to have a cheer crew, work out exactly where they’re going to be. There are a lot of runners, and it’s easier for the runner to spot the fan. Work out the side, the corner, what they’re going to be wearing, what signs they’ll be holding, etc.
- There are porta-potties where the buses pick up at the Staten Island Ferry. If you don’t feel the need to have an actual restroom, there are no lines there. It’s a great place to, well, take care of business.
- Do not check a bag if at all possible. Having to walk through the bag check area is a challenge you don’t want to face when you finish.
- Bring a phone if you can for post-race coordination. Having a Metro card and a $20 is also a good idea.
- Enjoy it. It’s the New York Marathon.
Note: this blog assumes you have used Azure to create services in the past
One of the most compelling scenarios for a bot is to add it to Facebook. A Facebook page is rather static. Finding information about a business on a Facebook page can be a bit of a challenge. And while users can comment, or send a message, the only replies they’ll ever receive are from a human, meaning the owner of the small business needs to monitor Facebook.
Of course, if it’s a small business that the page is representing, there’s a good chance the business doesn’t have the resources to create a bot on their own. Or, even if the business is of a size where they have access to developers, the developers aren’t the domain experts - that’s the salespeople, managers, or other coworkers.
To make a long story short, developers are often required to create the bot, and build the knowledge base the bot will be using to provide answers. This is not an ideal situation.
Enter QnA Maker.
QnA Maker is a service that can look at an existing, structured, FAQ document, and extract a set of question and answers into a knowledge base. The knowledge base is editable through an interface designed for information workers, and is exposed via an easy to call REST endpoint for developers.
To get started with QnA Maker, head on over to https://qnamaker.ai/. You can create a new service by clicking on New Service. From there, you’ll be able to give your service a name, and point to one or more FAQ pages on the web, or a document - Word, PDF, etc. - containing the questions and answers. After clicking create, the service will do its thing, creating a knowledge base that can be accessed via the endpoint.
The knowledge base is a set of questions and answers. After creating it, you can manage it much in the same way you edit a spreadsheet. You can add new pairs by clicking on Add new QnA pair. You can also edit existing pairs in the table directly. Finally, if you wish to add a new question to an existing answer, you can hover over the question on the left side, click the ellipsis, and choose Add alternate phrasing.
One important thing to note about the knowledge base is that each question and answer is an individual entity; there is no parent/child relationship between multiple questions and a single answer. As a result, if you need to provide additional ways to ask a particular question with the same answer, you will need to have multiple copies of the same answer.
Once you’re happy with the first version of your knowledge base, click Save and retrain to ensure it’s up to date. Then, click Test on the left bar, which will present you with a familiar bot interface. From this interface, you can start testing your bot by typing in questions and seeing the various answers.
You’re also able to update the knowledge base from this interface. For example, if you type a question that’s a little ambiguous, the interface will show you multiple answers on the left side. You can simply click the answer you like the most to update the knowledge base to use that answer for the question you provided.
In addition, after asking a question, and being provided an answer, you can add additional phrasings of the same question on the right side.
First and foremost, remember the eventual user experience for this knowledge base is via a bot. Bots should typically have personality, so don’t be afraid to modify some of the answers from their original form to make it read a bit more like a human typed it out, rather than a straight statement of facts. In addition, make sure you add multiple questions related to hello, hi, help, etc., to introduce your bot and help guide your user to understand the types of questions your knowledge base can answer. Finally, remember that while a single form of a question works well on a FAQ page, users can type the same question in multiple forms. It’s not a bad idea to ask other people to test your knowledge base to ensure you’re able to answer the same question in multiple forms.
And, once you’re ready to make the service available to a bot, click Save and retrain, and then Publish.
QnA Maker exposes your knowledge base as a simple REST endpoint. You can access it via POST, passing a JSON object with a single property of question. The reply will be a JSON object with two properties - answer, which contains the answer, and score, which is a 0-100 integer of how sure the service is it has the right answer. In fact, you can use this endpoint in non-bot services as well.
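To make those shapes concrete, here’s a small sketch in TypeScript. The question, answer, and score properties are as described above; the pickAnswer helper and its threshold are my own illustrative additions, not part of the service.

```typescript
// The request body is a JSON object with a single "question" property.
const requestBody = JSON.stringify({ question: 'What are your hours?' });

// The reply carries the answer plus a 0-100 confidence score.
interface QnAResponse {
  answer: string;
  score: number;
}

// Illustrative helper (not part of the service): only trust the answer
// when the service is reasonably confident.
function pickAnswer(response: QnAResponse, minScore = 50): string {
  return response.score >= minScore
    ? response.answer
    : "I'm not sure about that one. Could you try rephrasing?";
}

// A sample reply in the documented shape:
const sample: QnAResponse = { answer: 'We are open 9-5, Monday to Friday.', score: 87 };
console.log(pickAnswer(sample)); // We are open 9-5, Monday to Friday.
console.log(pickAnswer({ answer: 'Maybe?', score: 12 })); // falls back to the rephrase prompt
```

You would POST requestBody to the endpoint shown in the QnA Maker portal for your knowledge base, with your subscription key in the request headers.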
Of course, the goal of this blog post is to show how you can deploy this without writing code. To achieve that goal, we’re going to use Azure Bot Services, which is built on top of Azure Functions. Azure Bot Services contains a set of prebuilt templates, including one for QnA Maker.
In the Azure Portal, click New, and then search for Bot Service (preview). The Azure Portal will walk you through creating the website and resource group. After it’s created, and you open the service, you will be prompted to create a bot in Bot Framework. This requires both an ID and a key, which you’ll create by clicking on Create Microsoft App ID and Password.
IMPORTANT: Make sure you copy the password after it’s created; it’s not displayed again! When you click on Finish and go back to Bot Framework, the ID will be copied automatically, but the key will not.
Once you’ve entered the ID and key, you can choose the language (C# or NodeJS), and then the template. The template you’ll want is Question and Answer. When you click Create bot, you’ll be prompted to select your knowledge base (or create a new one).
And that’s it! Your bot is now on the Bot Framework, ready for testing, to be added to Skype, Facebook, etc. You now have a bot that can answer questions about your company, without having to write a single bit of code. In addition, you’ll be able to allow the domain experts to update the knowledge base without any need for code updates - simply save and retrain, then publish, and your bot is updated.
While the focus has been on a no-code solution, you are absolutely free to incorporate a QnA Maker knowledge base into an existing bot, or to update the bot you just created to add your own custom code. And if you’re looking for somewhere to get started on creating bots, check out the Bots posts on this very blog, or the MVA I created with Ryan Volum.
One of the greatest advantages of the bot interface is it allows the user to type effectively whatever it is they want.
One of the greatest challenges of the bot interface is it allows the user to type effectively whatever it is they want.
We need to guide the user, and to make it easy for them to figure out what commands are available, and what information they’re able to send to the bot. There are a few ways that we can assist the user, including providing buttons and choices. But sometimes it’s just as easy as allowing the user to type help.
If you’re going to add a help command, you need to make sure the user can type it wherever they are, and trigger the block of code to inform the user what is available to them. Bot Framework allows you to do this by creating a DialogAction. But before we get into creating a DialogAction, let’s discuss the concept of dialogs and conversations in a bot.
Bots contain a hierarchy of conversations and dialogs, which you get to define.
A dialog is a collection of messages back and forth between the user and the bot to collect information and perform an action on their behalf. A dialog might be the appropriate messages to obtain the type of service the user is interested in, determine which location the user is referring to when asking for store information, or the time the user wants to make a reservation for.
A conversation is a collection of dialogs. The conversation might use a dialog to walk through the steps listed above - service type, location and time - to complete the process of creating an appointment. By using dialogs, you can simplify the bot’s code, and enable reuse.
We will talk more in future blog posts about how to manage dialogs, but for right now this will enable us to create a DialogAction.
At the end of the day a DialogAction is a global way of starting a dialog. Unlike a traditional dialog, where it will be started or stopped based on a flow you define, a DialogAction is started based on the user typing in a particular keyword, regardless of where in the flow the user currently is. DialogActions are perfect for adding commands such as help, cancel or representative.
You register a DialogAction by using the bot function beginDialogAction, which accepts three parameters: a name for the DialogAction, the name of the dialog you wish to start, and a named parameter with the regular expression the bot should look for when starting the dialog.
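As a sketch of that call (the tiny Bot class below is a stand-in I’ve added so the snippet runs without the botbuilder package; with the real SDK, bot would be your UniversalBot instance):

```typescript
// Minimal stand-in for the botbuilder surface used here, so the sketch
// is self-contained. The real SDK does far more; this only mimics the shape.
class Bot {
  dialogs = new Map<string, (send: (m: string) => void) => void>();
  actions: { name: string; dialog: string; matches: RegExp }[] = [];

  dialog(id: string, handler: (send: (m: string) => void) => void) {
    this.dialogs.set(id, handler);
  }
  beginDialogAction(name: string, dialogId: string, options: { matches: RegExp }) {
    this.actions.push({ name, dialog: dialogId, matches: options.matches });
  }
  // Route a message: any registered action whose regex matches starts its dialog.
  receive(text: string, send: (m: string) => void) {
    const action = this.actions.find(a => a.matches.test(text));
    if (action) this.dialogs.get(action.dialog)!(send);
  }
}

const bot = new Bot();

// First line: register a DialogAction named "help" that starts the "help"
// dialog whenever a message begins with the word help.
bot.beginDialogAction('help', 'help', { matches: /^help/i });

// Next line: register the "help" dialog itself.
bot.dialog('help', send => {
  send('You can type "load" to load a user profile.');
});
```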
The first line registers a DialogAction named help, calling a Dialog named help. The DialogAction will be launched when the user types anything that begins with the word help.
The next line registers a dialog, named help. This dialog is just like a normal dialog. You could prompt the user at this point for additional information about what they might like, or query the message property from session to determine the full text of what the user typed in order to provide more specific help.
The next question is what happens when the help dialog (as ours is named) completes. When
endDialog is called, where in the flow will the user be dropped? As it turns out, they’ll pick up right where they left off.
Imagine if we had the following bot:
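(The original bot used botbuilder’s IntentDialog, Prompts, and the help DialogAction from above; since those calls require the SDK, the self-contained TypeScript sketch below mimics the same flow so it can be run and traced. The Session class and the messages are illustrative, not botbuilder’s actual types.)

```typescript
type Step = (session: Session, input?: string) => void;

class Session {
  output: string[] = [];
  private waterfall: Step[] = [];
  private stepIndex = -1; // -1 means no dialog in progress

  constructor(
    private intents: { matches: RegExp; steps: Step[] }[],
    private actions: { matches: RegExp; handler: (s: Session) => void }[]
  ) {}

  send(text: string) { this.output.push(text); }

  receive(text: string) {
    // DialogActions are global: they run no matter where the user is,
    // and afterwards the interrupted waterfall picks up where it left off.
    for (const a of this.actions) {
      if (a.matches.test(text)) {
        a.handler(this);
        if (this.stepIndex >= 0) this.waterfall[this.stepIndex](this); // re-prompt
        return;
      }
    }
    if (this.stepIndex >= 0) {
      // Mid-waterfall: feed the reply to the next step.
      this.stepIndex++;
      this.waterfall[this.stepIndex](this, text);
      if (this.stepIndex === this.waterfall.length - 1) this.stepIndex = -1;
      return;
    }
    for (const i of this.intents) {
      if (i.matches.test(text)) {
        this.waterfall = i.steps;
        this.stepIndex = 0;
        i.steps[0](this);
        return;
      }
    }
  }
}

// "load" kicks off a two-step waterfall: prompt for a name, then echo it back.
const session = new Session(
  [{
    matches: /^load/i,
    steps: [
      (s) => s.send('What is the name of the user you would like to load?'),
      (s, input) => s.send(`Loading ${input}`),
    ],
  }],
  [{ matches: /^help/i, handler: (s) => s.send('You can type "load" to load a user.') }]
);

session.receive('load');  // prompts for the name
session.receive('help');  // help text, then the prompt again
session.receive('Chris'); // echoes the name back
console.log(session.output);
```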
Notice we have an
IntentDialog built with a load “command”. This kicks off a simple waterfall dialog, which will prompt the user for the name of the user they wish to load, and then echoes it back. If you ran the bot, and sent the commands load, followed by help, you’d see the help message, and then the name prompt once again.
Notice that after the help dialog completes the user is again prompted to enter the name, picking right up where they left off. This simplifies the injection of the global help command, as you don’t need to write code tracking where the user left off and then returned. The Bot Framework handles that for you.
One of the biggest issues in creating a flow with a chat bot is the fact a user can say nearly anything, or could potentially get lost and not know what messages the bot is looking to receive. A DialogAction allows you to add global commands, such as help or cancel, which can create a more elegant flow to the dialog.
Bots give you the ability to allow users to interact with your app through communication. As a result, figuring out what the user is trying to say, or their intent, is core to all bots you write. There are numerous ways to do this, including regular expressions and external recognizers such as LUIS.
For purposes of this blog post, we’re going to focus our attention on regular expressions. This will give us the ability to focus on design and dialogs without having to worry about training an external service. Don’t worry, though, we’ll absolutely see how to use LUIS, just not in this post.
In Bot Framework, a dialog is the core component to interacting with a user. A dialog is a set of back and forth messages between your bot and the user. In this back and forth you’ll figure out what the user is trying to accomplish, and collect the necessary information to complete the operation on their behalf.
Every dialog you create will have a match. The match will kick off the set of questions you’ll ask the user, and start the user down the process of fulfilling their request.
As mentioned above, there are two ways to “match” or determine the user’s intent, regular expressions or LUIS. Regular expressions are perfect for bots that respond to explicit commands such as create, stop or load. They’re also a great way to offer the user help.
One big thing to keep in mind when designing a bot is that no natural language processor is perfect. When people create their first bot, the most common mistake is to allow the user to type almost anything. The challenge is that this is almost guaranteed to frustrate the user, and to lead to more complex code trying to detect the user’s intent, only to misunderstand a higher percentage of statements.
Generally speaking, you want to guide the user as much as possible, and encourage them to issue terse commands. Not only will this make it easier for your bot to understand what the user is trying to tell it, it actually makes it easier for the user.
Think about a mobile phone, which is one of the most common bot clients. Typing on a small keyboard is a challenge at best, and the user isn’t going to type “I would like to find the profile GeekTrainer” or the like. By using terse commands and utterances, you’ll not only increase the percentage of statements you understand without clarification, you’ll make it easier for the user to interact with your bot. That’s a win/win.
In turn, make it easy for your user to understand what commands are available. By guiding the user through a set of questions, in an almost wizard-like pattern, you’ll increase the chances of success.
To determine the user’s intent by using regular expressions or other external recognizers, you use the
IntentDialog. An IntentDialog effectively has a set of events exposed via
matches, which allow you to execute one or more functions in response to the detected intent.
Let’s say you wanted to respond to the user’s command of “load”, and send a message in response. You could create a dialog by using the following code:
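Here’s a library-free sketch of the idea; the matches name mirrors botbuilder’s, but the implementation below is mine, and the single send callback stands in for the handler parameters described next:

```typescript
type Send = (text: string) => void;
type Handler = (send: Send) => void; // stands in for (session, args, next)

const intents: { pattern: RegExp; handler: Handler }[] = [];

// matches pairs a regular expression with the function to call on a match.
function matches(pattern: RegExp, handler: Handler) {
  intents.push({ pattern, handler });
}

// Dispatch an incoming message to the first matching intent.
function dispatch(message: string, send: Send) {
  const intent = intents.find(i => i.pattern.test(message));
  if (intent) intent.handler(send);
}

// Respond to the "load" command with a simple message.
matches(/^load/i, send => send('Loading your profile!'));

const out: string[] = [];
dispatch('load', m => out.push(m));
console.log(out); // [ 'Loading your profile!' ]
```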
matches takes two parameters - a regular expression which will be used to match the message sent by the user, and the function (or array of functions) to be called should there be a match. The function, or event handler if you will, takes three parameters,
session, which we saw previously,
args, which contains any additional information sent to the function, and
next, which can be used to call the next function should we provide more than one in an array. For the moment, the only one that's important, and the only one we've used thus far, is session.
To use this with a bot, you’ll create it and add the dialog like we did previously, only adding in the dialog object rather than a function.
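Sketched out (the dialog is repeated so the snippet stands alone; the regular expression and message are placeholders):

```javascript
const builder = require('botbuilder');

const dialog = new builder.IntentDialog();
dialog.matches(/^load$/i, (session) => session.send('Loading your data...'));

// Create the connector and bot as before, but register the dialog object
// as the root dialog rather than passing a plain function.
const connector = new builder.ConsoleConnector().listen();
const bot = new builder.UniversalBot(connector);
bot.dialog('/', dialog);
```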
If you run the code, and send the word load, you’ll notice it sends the expected message.
Over time you'll add more intents. However, as we mentioned earlier, we want to make sure we give the user a bit of guidance, especially if they send a message that we don't understand at all. Dialogs support this through onDefault. onDefault, as you might suspect, executes when no matches are found.
onDefault works just like any other handler, accepting one or more functions to execute in response to the user’s intent.
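As a hedged sketch (the help text and command list are illustrative):

```javascript
const builder = require('botbuilder');

const dialog = new builder.IntentDialog();
dialog.matches(/^load$/i, (session) => session.send('Loading your data...'));

// Runs when no intent matches; endConversation sends the message and resets
// the conversation, so the user's next message starts from the top again.
dialog.onDefault((session) => {
  session.endConversation('Sorry, I didn\'t understand that. Try: load');
});
```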
You’ll notice you don’t give
onDefault a name because it’s of course also a name. You’ll also notice we used
session.endConversation to send the message.
endConversation ends the conversation, and the next message starts from the very top. In the case of our help message this is the perfect behavior. We’ve given the user the list of everything they can do. The next message they send, in theory anyway, will be one of those commands, and we’ll want to process it. The easiest way to handle it is to use the existing infrastructure we just created.
If you test the bot you just created, you should see the following:
When creating a bot, the first thing you'll do is determine the user's intent: what are they trying to accomplish? In a standard app, this is done by the user clicking a button. In a bot, obviously, there are no buttons. When you get down to the basics, a bot is a text-based application. Dialogs can make it easier to determine the user's intent.
One of the most common phrases when I’m talking about technology for end users is “meet them where they’re at.” A big reason applications fail to be adopted is they require too large of a change in behavior from the users in question, having to open yet another tool, access another application, etc. We as humans have a tendency to revert to our previously learned behaviors. As a result, if we want to get our users using a new process or application we need to minimize the ask as much as possible.
This is one of the biggest places where bots can shine: they can be placed where our users already are. Users are already using Office, Slack, Skype, etc. A bot can then provide information to the user in the environment they’re already in, without having to open another application. Or, if they want to open an application, the bot can make that easier as well. In addition, the user can interact with the bot in a natural language, reducing the learning curve, making it seem more human, and maybe even fun.
At //build 2016, Microsoft announced the Microsoft Bot Framework, a set of APIs available for .NET and Node.js that make it easier for you to create bots. Microsoft also announced the Language Understanding Intelligent Service (LUIS), which helps break down natural speech into intents and parameters your bot can easily understand.
What I’d like to do over a handful of posts is help get you up and running with a bot of your own. We’ll use Node.js to create a simple “Hello, world!” bot, and then add functionality, allowing it to look up user information in GitHub, and then integrate it with various chat services.
The Bot Framework is currently under development. As a result, things are changing. While many of the concepts we’ll talk about will likely remain the same, there may be breaking code changes in the future. You have been warned. ;-)
I'll assume you have Node.js installed, and npm as well. Finally, I'll be using ES6 syntax as appropriate.
With that in mind, let's create a folder in which to store our code, and install botbuilder.
As for the initialization, I’m not overly concerned with the settings you choose there, as we really just need the
package.json file; you can just choose all of the defaults.
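The setup steps might look like the following (the folder name is just an example):

```shell
# Create and enter a project folder, accept the package.json defaults,
# then add the Bot Framework SDK as a dependency
mkdir hello-bot
cd hello-bot
npm init -y
npm install botbuilder --save
```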
Let’s start with the stock, standard, “Hello, world!”, or, in this case, “Hello, bot!”
Creating an interactive bot requires two items: the bot itself, which houses the logic, and the connector, which allows the bot to interact with users through various mechanisms, such as Skype, Slack, and Facebook.
As for the connector, there are two provided in the framework: the ConsoleConnector, perfect for testing and proofs of concept, as you simply use a Windows console window to interact with your bot, and the ChatConnector, which allows for communication with other clients, such as Slack and Skype. You'll start with the ConsoleConnector, as it doesn't require any client beyond the standard console.
As for the bot, you’ll create a simple bot that will send “Hello, bot” as a message. To create the bot, you will pass in the connector you create.
Create a file named
text.js, and add the following code:
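The original listing isn't shown here; a minimal reconstruction matching the walkthrough below would be:

```javascript
// text.js - a minimal console bot built with the Bot Framework SDK
const builder = require('botbuilder');

// The connector lets the bot communicate with the outside world;
// here that's the command line, so we use the ConsoleConnector.
const connector = new builder.ConsoleConnector().listen();

// The bot houses the logic, and takes the connector in its constructor.
const bot = new builder.UniversalBot(connector);

// The root dialog, named '/', sends a static greeting.
bot.dialog('/', (session) => {
  session.send('Hello, bot!');
});
```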
Let’s start from the top. The first line is the import of
botbuilder, which will act as the factory for many objects we’ll be using, including
ConsoleConnector, as you see in the second line.
To create a bot, you need to specify its connector, which is why you’ll create that to start. The connector is used to allow the bot to communicate with the outside world. In our case we’ll be interacting with the bot using the command line, thus
ConsoleConnector. Once you’ve created the connector, you can then pass that into the bot’s constructor.
The design of a bot is to interactively communicate with a human through what are known as
dialogs. The next line adds in a dialog named
/. Dialogs are named similarly to folders, so
/ will be our starting point or root dialog. You can of course add additional dialogs by calling
dialog yet again, but more on that in a later post. The second parameter is the callback, which, for now, will accept session. session manages the discussions for the user, and knows the current state. You'll either use
session directly to communicate with the user, or pass it into helper functions to communicate on your behalf.
The simplest method on session is send, which, as you might imagine, will send a message to the user. If you run
text.js, and type in anything and hit enter (make sure you type something in to activate the bot!), you’ll see the message.
You need to send a signal to the bot first in order to “wake it up.” When you’re using the command line for initial testing this can be a bit confusing, as you’ll run the application and notice that nothing is displayed on the screen. When you run your bot, just make sure you send it a message to get things started.
Obviously, displaying a static message isn’t much of a bot. We want to interact with our user. The first step to doing this is to retrieve the message the user sent us. Conveniently enough, the
message is a property of
session. The message will allow us to access where it was sent from, the type, and, key to what we're doing, the text.
Let’s upgrade our simple bot to an echo bot, displaying the message the user sent to us.
You’ll notice we updated the
session.send call to retrieve the text of the message, which contains the user input. Now if we run it, we'll see the information the user typed in.
Bots are a way for users to interact with services in a language that’s natural, and in applications they’re already using. You can integrate a bot with an existing application, such as a web app, or with a tool users are already invested in, such as Slack or Skype. We got started in this blog post with bots by first obtaining the SDK, and then creating a simple bot echo service. From here we can continue to build on what we’ve learned to create truly interactive bots.
Long road relays seem to be all the rage in running these days. Considering the basic concept is that you get a bunch of your friends together and cover 200 miles in shifts, the appeal is pretty obvious. Well, obvious to runners anyway. ;-) Chances are, if you're a runner, you're familiar with this style of race, and you're probably considering doing one. I just finished my first, and while I'm certainly no expert, I did learn some lessons I wish I'd known before the race. So, I'm going to share them with you.
You will want one outfit per leg. After all, you run your leg, and then rest, either in your van or elsewhere, for the next few hours. Without a change of clothes, you're either going to be sitting in wet, stinky clothes, much to the chagrin of your van-mates, or putting wet running gear back on to head out for your next leg, which is certainly not something you want to do.
2 gallon Ziploc bags are your friends. Not only are they a great way to group together gear, they’re also a great place to store those wet clothes we talked about above. They’ll not only keep the stink contained, they’ll make it easy to keep everything else nice and dry.
When I did my race I meticulously packed each of my three outfits into three separate bags. What I discovered, though, was I wound up swapping things around from my original plans. Next time I’d keep tops in one, bottoms in another, and things like socks in a third. And then just toss your dirty stuff into a single Ziploc.
Make sure you know ahead of time what everyone’s goals are. I did my race with a friend, and we were originally thinking we’d be running a slightly faster training pace. It was only after we joined our team that we discovered everyone wanted to race their legs. We adjusted and rolled with the punches, but it would have been good to have those expectations ahead of time.
Depending on the speed of your runners, you’re not necessarily going to have a lot of time at the exchanges to change. On top of that, the only place you will have to change in private is in a portapotty, and that’s not really the best place to be for anything but the original design of the equipment.
This means, often, the best place to change is going to be in the van. You can set up an area in the back seat, with a couple of towels, that can help give a shield to the person changing and keep the right parts covered. This isn’t to say that you have to give up all privacy, but being comfortable enough to simply have people not look goes a long way to making it easier to get out of those wet clothes or into the next outfit you’re going to wear.
As I mentioned above, there isn’t always a lot of time in the transition areas. You’ll want to have a plan in place on what you’re going to be doing in the transition area. There are three runners you need to support at all times - the one who just finished, the one who is about to start, and the runner that’s currently on the road. You’ll want to make sure you figure out how you’re going to balance all of those runners to make sure everyone has what they need.
If at all possible, have a dedicated driver. Having to run and drive, which I did, makes for a very long day.
Try to keep everyone at some state of ready as you go from transition to transition. Trying to load a van to head to the next exchange can be a bit like herding cats, as someone realizes they need a headlamp, or a reflective vest, or a granola bar, or … This takes time, and makes it that much harder to get out to cheer on your runner, and get to the next exchange in a good amount of time.
Make sure everyone has a headlamp, vest, and back light of their own. You don’t want to worry about trying to share them. And keep those in a separate bag, or maybe in the same bag with your socks (see above). This way you know where everything is at any given time.
Lost car keys are always a risk, as everyone keeps hopping in and out of the van in no particular order. In addition, you'll be passing the keys around as different people take turns driving. Having them on a lanyard, around someone's neck, decreases the chances you'll lose them, and makes it easier to spot who has the keys.
When you’re packing, make sure you have a sleeping bag and a good pad (or small air mattress) to sleep on. You’re not going to have a lot of time to sleep, so you’ll want to make the best of it. Those little creature comforts will make all of the difference in the world. They’ll also give you the flexibility to sleep outside (maybe pack a small tent?) rather than the high school.
While we’re at it, an eye mask and earplugs are an absolute must. Trust me, you absolutely need them.
I didn’t have any of the above, and I was not a happy man come the following morning.
Ragnar (I can’t speak to the others) does a good job of marking the course, but not always. There was one turn in particular where they had everyone running on one side of the road, but the sign to turn was on the opposite side - very easy to miss, and one runner I know did. Bring the map.
In fact, one thing you might want to consider is leaving someone at the challenging corner if you see it while driving along the route to help the runner make the turn. The little bit of lost time to pick up the person left behind is far less than risking losing a runner.
And the other runners as well.
There’s not going to be a lot of runners out while you’re running, and not much in the way of support beyond those running the race. As a runner, you know how much support helps. Give that support to your runner, and the other teams while you’re at it. They’ll appreciate it.
Chances are you’ll be away from home when you finish. You’re not going to want to drive right home afterwards, and why would you even if you could? I mean, you just finished a great race with your new best friends! You should celebrate it.
If you rent a big house you’ll be able to get showers (you’re going to want a shower!), beds, a kitchen for food, etc.
This goes without saying, but enjoy the experience! It’s great being able to see the sun go down, and then come back up. It’s great being out on a country road, at night, running along. And you’ll share laughs, and a great time.
I remember working for a .NET development shop back in 2005. .NET 2.0 was still in its nascent phase, and the team I was on was still relatively new to .NET. We were all trying to figure out best practices, object design, etc. But we were good developers, and knew that it's always best to go with libraries and frameworks that are already written, ones that will simplify the task at hand. The library that was popular among the team was CSLA.
Now I should mention right up front, for full disclosure, that I’m personally not a fan of CSLA. But everyone has their own opinion, and I know many teams have used CSLA to great success. The post below is not about CSLA, but rather about finding the right tool, and the right functionality.
There’s this famous scene in Spinal Tap, a mockumentary about a fictitious rock band, where the lead guitarist, Nigel Tufnel, explains to the reporter how his amp is louder because it goes to 11. When asked why he simply didn’t make 10 louder, Nigel is befuddled and again states the amp goes to 11, which is one more than 10. 11 must be louder, right?
When we discussed using CSLA in our team, one of the arguments that was given in its favor was CSLA had built-in support for remoting, a precursor to WCF. Our project, however, not only had no need for remoting, it would never need remoting. The fact that the framework supported remoting was of no concern to our application.
However, “this one goes up to 11.” “This supports remoting. And isn’t that cool?”
Yes, it’s cool - if we need remoting. Otherwise, we’re simply adding complexity for complexity’s sake.
If you look out over the developer landscape, you'll notice tool upon tool, framework upon framework, all offering some set of functionality, with promises to make your life as a developer easier. Most frameworks and tools will do exactly that. But those tools may come at the cost of increased complexity.
On the web side of things, you have NPM, Grunt, Gulp, Bower, …, to help manage packages, files, workflows, etc. And you have jQuery, Bootstrap, Knockout, Angular, …, to make developing front ends that much easier. And the list goes on and on.
All of those various tools and frameworks have their place, and they can all bring additional power, and help you create applications that much faster. But they can also add unnecessary bloat. And complexity.
Before taking a dependency on a tool or framework, make sure the features it provides are what you actually need. For example, Ember.js has this amazing data store; it’s my favorite feature of that framework. But if you’re not making many Ajax calls and working with data, but rather need to simply update the front end dynamically, why choose that as your framework? Why not use jQuery? Or maybe Knockout?
As a perfect example, I am currently pecking out a sample Node.js application. I dutifully sat down, started adding in various packages, started tweaking my Gulp file, and then stopped myself.
It’s a simple sample application.
Do I need to use LESS? Since I’m just going to be using the default Bootstrap theme, there’s no need for me to worry about pre-processors or the like. Could I use LESS? Sure, but I don’t need it to survive.
If the sample application starts to grow and become more complex, maybe I’ll revisit those decisions. But right now, I don’t need them. Why would I take a dependency on something that offers me features that I don’t need?
Instead, make 10 louder.