Communication with a user via a bot built with Microsoft Bot Framework is managed via conversations, dialogs, waterfalls, and steps. As the user interacts with the bot, the bot will start, stop, and switch between various dialogs in response to the messages the user sends. Knowing how to manage dialogs in Bot Framework is one of the keys to successfully designing and creating a bot.
At its most basic level, a dialog is a reusable module, a collection of methods, which performs an operation, such as completing an action on the user’s behalf, or collecting information from the user. By creating dialogs you can add reuse to your bot, enable better communication with the user, and simplify what would otherwise be complex logic. Dialogs also contain state specific to the dialog in dialogData.
A conversation is a parent to dialogs, and contains the dialog stack. It also maintains two types of state, conversationData, shared between all users in the conversation, and privateConversationData, which is state data specific to that user.
Every dialog you create will have a collection of one or more methods that will be executed in a waterfall pattern. As each method completes, the next one in the waterfall will be executed.
Your bot will maintain a stack of dialogs. The stack works just like a normal LIFO (last in, first out) stack, meaning the last dialog added will be the first one completed, and when a dialog completes, control will return to the previous dialog.
Bots come in many shapes, sizes, and forms. Some bots are simply front ends to existing APIs, and respond to simple commands. Others are more complex, with back and forth messages between the user and bot, branching based on information collected from the user and the current state of the application. Depending on the requirements for the bot you’re building, you’ll need various tools at your disposal to start and stop dialogs.
Dialogs can be started in a few ways. Every bot has a default dialog, sometimes called a root dialog, which is executed when no other dialog has been started, and no other ones have been triggered via other means. You can create a dialog that responds globally to certain commands by using triggerAction or beginDialogAction; triggerAction is registered globally to the bot, while beginDialogAction registers the command with just that dialog. Finally, you can programmatically start a dialog by calling either beginDialog or replaceDialog, which allow you to add a dialog to the stack or replace the current dialog, respectively.
When a bot reaches the end of a waterfall, the next message from the user will cause the bot to look for the next step in the waterfall. If there is no next step, the bot simply doesn't respond, naturally ending the conversation or dialog. This can provide a bit of a confusing experience for the user, as they may need to retype their message to get a response from the bot. It can also be confusing for the developer, as there may be many ways a dialog might end depending on the logic.
As a result, when a conversation or dialog has come to an end, it's a best practice to explicitly call endConversation, endDialog, or endDialogWithResult. endConversation both clears the current dialog stack and resets all data stored in the session, except userData. endDialog and endDialogWithResult both end the current dialog, clear out its dialogData, and return control to the previous dialog in the stack. Unlike endDialog, endDialogWithResult allows you to pass arguments into the previous dialog, which will be available in the second parameter of the first method in the waterfall (typically named results).
Ending a conversation or dialog will also remove the associated state data. This is important to remember when deciding where to store state data. The best practices of minimizing scope of state data apply to bots, just as they do to any other application.
The place where state lifespan becomes trickiest is dialogData. If you start a new dialog, the new dialog doesn't receive the dialogData from the calling dialog. In addition, when a dialog completes, the previous dialog doesn't receive the data from the completed dialog. You can overcome this by using arguments: endDialogWithResult allows you to pass arguments to the prior dialog, while both beginDialog and replaceDialog allow you to pass arguments into the new dialog.
The sample application we will be building through the next set of examples is a simple calculator bot. Our calculator bot will allow the user to enter numbers, and once they say total we’ll display the total and allow them to start all over again. We’ll also want to allow the user to get help at any time, and to cancel as needed. The sample code is provided on GitHub.
Starting with version 3.5 of Microsoft Bot Framework, the default or root dialog is registered as the second parameter in the constructor for UniversalBot. In prior versions, this was done by adding a dialog named /, which led to naming similar to that of URLs, which really isn't appropriate when naming dialogs.
The default dialog is executed whenever the dialog stack is empty, and no other dialog is triggered via LUIS or another recognizer. (We'll see how to register dialogs using triggerAction a little later.) As a result, the default dialog should provide some contextual information to the user, such as a list of available commands and an overview of what the bot can perform.
From a design perspective, don’t be afraid to send buttons to the user to help guide them through the experience; bots don’t need to be text only. Buttons are a wonderful interface, as they can make it very clear what options the user can choose from, and limit the possibility of the user making a mistake.
To get started, we’ll set up our default dialog to present the user with two buttons, add and help. For our first pass, we’ll simply echo the user’s selection; we’ll add additional dialogs in the next section. We’ll do this by setting up a two step waterfall, where the first step will prompt the user, and the second will end the conversation.
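A minimal sketch of that default dialog, assuming the botbuilder package (v3.5 or later) is installed; the prompt text is illustrative:

```javascript
const builder = require('botbuilder');

// ConsoleConnector keeps the sketch self-contained; a ChatConnector
// would be used for real channels
const connector = new builder.ConsoleConnector().listen();

// The default (root) dialog is the second constructor parameter
const bot = new builder.UniversalBot(connector, [
  (session) => {
    // Step one: prompt the user with two buttons
    builder.Prompts.choice(session, 'What would you like to do?',
      ['add', 'help'], { listStyle: builder.ListStyle.button });
  },
  (session, results) => {
    // Step two: echo the selection and end the conversation
    session.endConversation(`You chose: ${results.response.entity}`);
  },
]);
```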
One of the biggest challenges when creating a bot is dealing with the fact users can be random. Imagine the following exchange:
This is a common scenario. The user sends a message to the bot. The bot responds. The user gets a new piece of information, in this case their friend is a vegan, and thus asks about a vegan menu. The bot is now stuck, because it wasn’t expecting that response.
This is where triggerAction comes in. triggerAction allows you to register a global command of sorts with the bot, and ensure the appropriate dialog is executed for every request.
In prior versions of Bot Framework, developers typically started every dialog name with /. This was because when registering the default dialog in earlier versions you named it /. As you’ve already seen, that’s not the case starting with version 3.5. As a result, you give your dialog a name that appropriately describes the operation the dialog is built to perform.
bot.dialog is used to register a dialog. The two parameters you'll provide are the name of the dialog, and the array of methods you wish to execute when the user enters the dialog. Let's create the starter for our AddNumber dialog. For now, we'll leave it with a simple echo, and introduce new functionality as we go forward.
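The starter dialog might look like this, assuming the bot object created earlier:

```javascript
// Starter AddNumber dialog: a one-step waterfall that echoes a message
bot.dialog('AddNumber', [
  (session) => {
    session.send('This is the AddNumber dialog.');
  },
]);
```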
We want to register our AddNumber dialog with the bot so whenever the user types add this dialog will be executed. This is done through the use of triggerAction, which is a method available on Dialog. triggerAction accepts a parameter of type ITriggerActionOptions, which has a few properties, the most important of which is matches. Matches will either be a regular expression to match a string typed in by the user, such as add in our case, or a string literal if the match will be done through the use of a recognizer, such as one from LUIS.
Let’s update our bot to register AddNumber to be started when the user types add. We’ll remove the second step from the default dialog and take advantage of the behavior of our buttons, which will send the text of the button to the bot, much in the same way as if the user typed it themselves.
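A sketch of the updated registration; clicking the Add button sends the text add, which matches the regular expression:

```javascript
// Default dialog now has a single step: just show the buttons
const bot = new builder.UniversalBot(connector, [
  (session) => {
    builder.Prompts.choice(session, 'What would you like to do?',
      ['add', 'help'], { listStyle: builder.ListStyle.button });
  },
]);

// Typing (or clicking) "add" anywhere starts AddNumber
bot.dialog('AddNumber', [
  (session) => {
    session.send('This is the AddNumber dialog.');
  },
]).triggerAction({ matches: /^add$/i });
```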
triggerAction is a global registration of the command for the bot. If you wish to limit that to an individual dialog, use beginDialogAction, which we'll discuss later.
triggerAction replaces the entire current dialog stack with the new dialog. While that can be good for AddNumber, it wouldn't be good for a dialog that provides help. We'll see a little later how onSelectAction can be used to manage this behavior.
If you execute the bot at this point you’ll notice clicking Add on the buttons, or simply typing it, will cause the bot to send the message This is the AddNumber dialog. You’ll also notice that help, at present, does nothing. We’ll handle that in a bit.
Let's talk a little bit about our logic for AddNumber. We want to prompt the user for a number, add it to our running total, and then ask the user for the next number. Basically, we just need to restart the same dialog over and over again. We can use replaceDialog to perform this action.
In the first step of our waterfall, we'll check to see if there is a running total available in privateConversationData, and create one if it doesn't exist. We'll then prompt the user for the number they want to add.
In the second step, we'll retrieve the number, add it to our running total, and then start the dialog over again by calling replaceDialog. replaceDialog takes two parameters, the first being the name of the dialog with which you wish to replace the current dialog, and the second being the arguments for the new dialog. The object you provide as the second parameter will be available in the first function in the new dialog's waterfall in the second parameter (typically named args).
It doesn't make a lot of sense for our bot to have a global total command. After all, it's only valid if we're currently adding numbers. This is where beginDialogAction comes in: it allows you to register commands specific to a dialog, rather than global to the bot. By using beginDialogAction, we can ensure total is only executed when we're in the process of running a total.
The syntax for beginDialogAction is similar to triggerAction. You provide the name of the DialogAction you're creating, the name of the Dialog you wish to start, and the parameters for controlling when the dialog will be started.
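A sketch of registering a total command on AddNumber, along with the Total dialog it starts (the message text is illustrative):

```javascript
bot.dialog('Total', (session) => {
  session.endConversation(
    `The total is ${session.privateConversationData.runningTotal}.`);
});

bot.dialog('AddNumber', [
  (session) => {
    if (session.privateConversationData.runningTotal === undefined) {
      session.privateConversationData.runningTotal = 0;
    }
    builder.Prompts.number(session, 'What number would you like to add?');
  },
  (session, results) => {
    session.privateConversationData.runningTotal += results.response;
    session.replaceDialog('AddNumber');
  },
])
  .triggerAction({ matches: /^add$/i })
  // "total" is only recognized while AddNumber is active
  .beginDialogAction('Total', 'Total', { matches: /^total$/i });
```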
By calling endConversation, we reset the entire conversation back to its starting state. This will automatically clear out any privateConversationData, as the conversation has ended.
triggerAction will reset the current dialog stack with the new dialog. In the case of AddNumber that’s just fine; the logic on the dialog is designed for the dialog to continually restart. But this is problematic when it comes to Help. Needless to say, we don’t want to reset the entire set of dialogs when the user types help; we want to allow the user to pick up right where they left off.
Bot Framework provides beginDialog for adding a dialog to the stack. When that dialog completes, it returns control to the active step in the prior dialog. Or, in the case of our Help example, it will allow the user to pick up where they left off.
The onSelectAction property on ITriggerActionOptions executes when the bot is about to start the dialog being triggered. By using this event, we can change the way the dialog is started, using beginDialog, which will add the dialog to the stack instead of replacing the stack. The first parameter is the name of the dialog we wish to start, which is provided in args.action, and the second is the args parameter we want to pass into the dialog when it starts. The code sample below will ensure we return control to the prior dialog when this one completes.
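For our Help dialog, the registration might look like this (the help text is illustrative):

```javascript
bot.dialog('Help', (session) => {
  session.endDialog('Enter a number to add it to the total, or type "total" to finish.');
}).triggerAction({
  matches: /^help$/i,
  onSelectAction: (session, args, next) => {
    // Push Help onto the stack rather than replacing the stack;
    // args.action holds the name of the dialog being triggered
    session.beginDialog(args.action, args);
  },
});
```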
When calling beginDialog here, don't hard code the name of the dialog you're about to start, but rather use args.action. Otherwise, you'll notice the dialog won't actually start.
One of the challenges with the help solution we created earlier is it can only provide generic help; whenever the user types help the exact same message is sent to the user. By using beginDialogAction you can pass parameters to the triggered dialog, allowing you to centralize messaging for help. In our case, we'll use the name of the current action as the key to the message we want to send.
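One way to sketch this: register a per-dialog help action and use the action name as the lookup key. The dialog names and messages here are illustrative:

```javascript
// Centralized help messages, keyed by the name of the triggering action
const helpMessages = {
  AddNumberHelp: 'Enter a number to add it to the running total, or type "total".',
};

bot.dialog('ContextHelp', (session, args) => {
  // args.action contains the name of the DialogAction that started us
  session.endDialog(helpMessages[args.action] || 'No help available.');
});

bot.dialog('AddNumber', [
  (session) => {
    builder.Prompts.number(session, 'What number would you like to add?');
  },
  (session, results) => {
    session.privateConversationData.runningTotal =
      (session.privateConversationData.runningTotal || 0) + results.response;
    session.replaceDialog('AddNumber');
  },
]).beginDialogAction('AddNumberHelp', 'ContextHelp', { matches: /^help$/i });
```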
If you've made it to this point in the article, you already have the skills necessary to create a global cancel operation - you'd add a new dialog, register it with triggerAction, and add a string match for the word cancel. The dialog would then call endConversation with a friendly message, and the user would be able to restart the operation.
However, let's say you wanted to provide granular support for cancel operations, changing the behavior on different dialogs, or maybe not allowing a cancel on a dialog at all. This is where cancelAction comes into play. cancelAction allows you to register a cancel command for a specific dialog. In addition, cancelAction only cancels the current dialog, not the entire conversation. This gives you a bit more control over how the cancel operation will behave.
The second parameter you'll pass into cancelAction is ICancelActionOptions, which includes the matches and onSelectAction properties we've seen before. It also adds confirmPrompt, which, if set, will prompt the user to confirm they actually want to cancel. By using onSelectAction you're able to end the entire conversation, resetting everything; the default behavior is to just cancel the current dialog.
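A sketch of cancelAction on AddNumber, with a confirmation prompt (the message text is illustrative):

```javascript
bot.dialog('AddNumber', [
  (session) => {
    builder.Prompts.number(session, 'What number would you like to add?');
  },
  (session, results) => {
    session.privateConversationData.runningTotal =
      (session.privateConversationData.runningTotal || 0) + results.response;
    session.replaceDialog('AddNumber');
  },
]).cancelAction('CancelAdd', 'Operation cancelled.', {
  matches: /^cancel$/i,
  // If set, the user is asked to confirm before the dialog is cancelled
  confirmPrompt: 'Are you sure you want to cancel?',
});
```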
Bot Framework offers many options and methods for managing dialogs and responding to user requests. Harnessing the power provided by dialogs allows you to create bots that can have conversations with your users that feel more natural.
Note: this blog post assumes you have used Azure to create services in the past.
One of the most compelling scenarios for a bot is to add it to Facebook. A Facebook page is rather static. Finding information about a business on a Facebook page can be a bit of a challenge. And while users can comment, or send a message, the only replies they'll ever receive are from a human, meaning the owner of the small business needs to monitor Facebook.
Of course, if it’s a small business that the page is representing, there’s a good chance the business doesn’t have the resources to create a bot on their own. Or, even if the business is of a size where they have access to developers, the developers aren’t the domain experts - that’s the salespeople, managers, or other coworkers.
To make a long story short, developers are often required to create the bot, and build the knowledge base the bot will be using to provide answers. This is not an ideal situation.
Enter QnA Maker.
QnA Maker is a service that can look at an existing, structured, FAQ document, and extract a set of question and answers into a knowledge base. The knowledge base is editable through an interface designed for information workers, and is exposed via an easy to call REST endpoint for developers.
To get started with QnA Maker, head on over to https://qnamaker.ai/. You can create a new service by clicking on New Service. From there, you’ll be able to give your service a name, and point to one or more FAQ pages on the web, or a document - Word, PDF, etc. - containing the questions and answers. After clicking create, the service will do its thing, creating a knowledge base that can be accessed via the endpoint.
The knowledge base is a set of questions and answers. After creating it, you can manage it much in the same way you edit a spreadsheet. You can add new pairs by clicking on Add new QnA pair. You can also edit existing pairs in the table directly. Finally, if you wish to add a new question to an existing answer, you can hover over the question on the left side, click the ellipsis, and choose Add alternate phrasing.
One important thing to note about the knowledge base is that each question and answer is an individual entity; there is no parent/child relationship between multiple questions and a single answer. As a result, if you need to provide additional ways to ask a particular question with the same answer, you will need to have multiple copies of the same answer.
Once you’re happy with the first version of your knowledge base, click Save and retrain to ensure it’s up to date. Then, click Test on the left bar, which will present you with a familiar bot interface. From this interface, you can start testing your bot by typing in questions and seeing the various answers.
You’re also able to update the knowledge base from this interface. For example, if you type a question that’s a little ambiguous, the interface will show you multiple answers on the left side. You can simply click the answer you like the most to update the knowledge base to use that answer for the question you provided.
In addition, after asking a question, and being provided an answer, you can add additional phrasings of the same question on the right side.
First and foremost, remember the eventual user experience for this knowledge base is via a bot. Bots should typically have personality, so don’t be afraid to modify some of the answers from their original form to make it read a bit more like a human typed it out, rather than a straight statement of facts. In addition, make sure you add multiple questions related to hello, hi, help, etc., to introduce your bot and help guide your user to understand the types of questions your knowledge base can answer. Finally, remember that while a single form of a question works well on a FAQ page, users can type the same question in multiple forms. It’s not a bad idea to ask other people to test your knowledge base to ensure you’re able to answer the same question in multiple forms.
And, once you’re ready to make the service available to a bot, click Save and retrain, and then Publish.
QnA Maker exposes your knowledge base as a simple REST endpoint. You can access it via POST, passing a JSON object with a single property of question. The reply will be a JSON object with two properties - answer, which contains the answer, and score, which is a 0-100 integer of how sure the service is it has the right answer. In fact, you can use this endpoint in non-bot services as well.
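For example, from Node.js the call might look like the sketch below. The endpoint URL, knowledge base ID, and subscription-key header name are placeholders; copy the real values from the Publish page of your QnA Maker service.

```javascript
// Placeholders: copy the real endpoint and key from the Publish page
const KB_ENDPOINT = 'https://<your-service>/knowledgebases/<kbId>/generateAnswer';
const KB_KEY = '<your-subscription-key>';

// The service expects a JSON object with a single `question` property
function buildBody(question) {
  return JSON.stringify({ question });
}

// On success, the reply is { answer: '...', score: 0-100 }
async function ask(question) {
  const res = await fetch(KB_ENDPOINT, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      // Header name may vary by service version; check the Publish page
      'Ocp-Apim-Subscription-Key': KB_KEY,
    },
    body: buildBody(question),
  });
  return res.json();
}
```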
Of course, the goal of this blog post is to show how you can deploy this without writing code. To achieve that goal, we’re going to use Azure Bot Services, which is built on top of Azure Functions. Azure Bot Services contains a set of prebuilt templates, including one for QnA Maker.
In the Azure Portal, click New, and then search for Bot Service (preview). The Azure Portal will walk you through creating the website and resource group. After it’s created, and you open the service, you will be prompted to create a bot in Bot Framework. This requires both an ID and a key, which you’ll create by clicking on Create Microsoft App ID and Password.
IMPORTANT: Make sure you copy the password after it’s created; it’s not displayed again! When you click on Finish and go back to Bot Framework, the ID will be copied automatically, but the key will not.
Once you’ve entered the ID and key, you can choose the language (C# or NodeJS), and then the template. The template you’ll want is Question and Answer. When you click Create bot, you’ll be prompted to select your knowledge base (or create a new one).
And that's it! Your bot is now on the Bot Framework, ready for testing, to be added to Skype, Facebook, etc. You now have a bot that can answer questions about your company, without having to write a single bit of code. In addition, you'll be able to allow the domain experts to update the knowledge base without any need for code updates - simply save and retrain, then publish, and your bot is updated.
While the focus has been on a no-code solution, you are absolutely free to incorporate a QnA Maker knowledge base into an existing bot, or to update the bot you just created to add your own custom code. And if you’re looking for somewhere to get started on creating bots, check out the Bots posts on this very blog, or the MVA I created with Ryan Volum.
One of the greatest advantages of the bot interface is it allows the user to type effectively whatever it is they want.
One of the greatest challenges of the bot interface is it allows the user to type effectively whatever it is they want.
We need to guide the user, and to make it easy for them to figure out what commands are available, and what information they’re able to send to the bot. There are a few ways that we can assist the user, including providing buttons and choices. But sometimes it’s just as easy as allowing the user to type help.
If you’re going to add a help command, you need to make sure the user can type it wherever they are, and trigger the block of code to inform the user what is available to them. Bot Framework allows you to do this by creating a DialogAction. But before we get into creating a DialogAction, let’s discuss the concept of dialogs and conversations in a bot.
Bots contain a hierarchy of conversations and dialogs, which you get to define.
A dialog is a collection of messages back and forth between the user and the bot to collect information and perform an action on their behalf. A dialog might be the appropriate messages to obtain the type of service the user is interested in, determine which location the user is referring to when asking for store information, or the time the user wants to make a reservation for.
A conversation is a collection of dialogs. The conversation might use a dialog to walk through the steps listed above - service type, location and time - to complete the process of creating an appointment. By using dialogs, you can simplify the bot’s code, and enable reuse.
We will talk more in future blog posts about how to manage dialogs, but for right now this will enable us to create a DialogAction.
At the end of the day a DialogAction is a global way of starting a dialog. Unlike a traditional dialog, where it will be started or stopped based on a flow you define, a DialogAction is started based on the user typing in a particular keyword, regardless of where in the flow the user currently is. DialogActions are perfect for adding commands such as help, cancel or representative.
You register a DialogAction by using the bot function beginDialogAction, which accepts three parameters: a name for the DialogAction, the name of the Dialog you wish to start, and a named parameter with the regular expression the bot should look for when starting the dialog.
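A sketch of that registration, assuming a bot object created as shown in earlier posts; the help text itself is illustrative:

```javascript
// Register a DialogAction named "help" that starts the "help" dialog
// whenever the user's message begins with the word help
bot.beginDialogAction('help', 'help', { matches: /^help/i });

// The dialog the action starts; session.message.text contains the
// full text the user typed, should you want to provide specific help
bot.dialog('help', (session) => {
  session.endDialog('Type "load" to load a user profile.');
});
```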
The first line registers a DialogAction named help, calling a Dialog named help. The DialogAction will be launched when the user types anything that begins with the word help.
The next line registers a dialog, named help. This dialog is just like a normal dialog. You could prompt the user at this point for additional information about what they might like, query the message property from session to determine the full text of what the user typed in order to provide more specific help.
The next question is what happens when the help Dialog (as it's named in our case) completes. When endDialog is called, where in the flow will the user be dropped? As it turns out, they'll pick up right where they left off.
Imagine if we had the following bot:
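Based on the description that follows, the bot in question looks something like this (the prompt and echo text are illustrative):

```javascript
const builder = require('botbuilder');

const connector = new builder.ConsoleConnector().listen();
const bot = new builder.UniversalBot(connector);

// Root dialog: an IntentDialog with a "load" command
const intents = new builder.IntentDialog();
intents.matches(/^load/i, [
  (session) => {
    builder.Prompts.text(session, 'What is the name of the user you would like to load?');
  },
  (session, results) => {
    session.endDialog(`You asked to load ${results.response}.`);
  },
]);
bot.dialog('/', intents);

// The global help DialogAction from earlier
bot.beginDialogAction('help', 'help', { matches: /^help/i });
bot.dialog('help', (session) => {
  session.endDialog('Type "load" to load a user profile.');
});
```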
Notice we have an IntentDialog built with a load "command". This kicks off a simple waterfall dialog which will prompt the user for the name of the user they wish to load, and then echoes it back. If you ran the bot, and sent the commands load, followed by help, you'd see the following flow:
Notice that after the help dialog completes the user is again prompted to enter the name, picking right up where you left off. This simplifies the injection of the global help command, as you don’t need to code in where the user left, and then returned. The Bot Framework handles that for you.
One of the biggest issues in creating a flow with a chat bot is the fact a user can say nearly anything, or could potentially get lost and not know what messages the bot is looking to receive. A DialogAction allows you to add global commands, such as help or cancel, which can create a more elegant flow to the dialog.
Bots give you the ability to allow users to interact with your app through communication. As a result, figuring out what the user is trying to say, or their intent, is core to all bots you write. There are numerous ways to do this, including regular expressions and external recognizers such as LUIS.
For purposes of this blog post, we’re going to focus our attention on regular expressions. This will give us the ability to focus on design and dialogs without having to worry about training an external service. Don’t worry, though, we’ll absolutely see how to use LUIS, just not in this post.
In Bot Framework, a dialog is the core component to interacting with a user. A dialog is a set of back and forth messages between your bot and the user. In this back and forth you’ll figure out what the user is trying to accomplish, and collect the necessary information to complete the operation on their behalf.
Every dialog you create will have a match. The match will kick off the set of questions you’ll ask the user, and start the user down the process of fulfilling their request.
As mentioned above, there are two ways to “match” or determine the user’s intent, regular expressions or LUIS. Regular expressions are perfect for bots that respond to explicit commands such as create, stop or load. They’re also a great way to offer the user help.
One big thing to keep in mind when designing a bot is that no natural language processor is perfect. When people create their first bot, the most common mistake is to allow the user to type almost anything. The challenge is that this is almost guaranteed to frustrate the user, and lead to more complex code trying to detect the user's intent, only to misunderstand a higher percentage of statements.
Generally speaking, you want to guide the user as much as possible, and encourage them to issue terse commands. Not only will this make it easier for your bot to understand what the user is trying to tell it, it actually makes it easier for the user.
Think about a mobile phone, which is one of the most common bot clients. Typing on a small keyboard is a challenge at best, and the user isn’t going to type “I would like to find the profile GeekTrainer” or the like. By using terse commands and utterances, you’ll not only increase the percentage of statements you understand without clarification, you’ll make it easier for the user to interact with your bot. That’s a win/win.
In turn, make it easy for your user to understand what commands are available. By guiding the user through a set of questions, in an almost wizard-like pattern, you’ll increase the chances of success.
To determine the user's intent by using regular expressions or other external recognizers, you use the IntentDialog. IntentDialog effectively has a set of events exposed via matches, which allow you to execute at least one function in response to the detected event.
Let’s say you wanted to respond to the user’s command of “load”, and send a message in response. You could create a dialog by using the following code:
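A sketch of such a dialog; the reply text is illustrative:

```javascript
const builder = require('botbuilder');

const dialog = new builder.IntentDialog();

// Respond whenever the user's message starts with "load"
dialog.matches(/^load/i, (session, args, next) => {
  session.send('Loading the profile...');
});
```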
matches takes two parameters - a regular expression which will be used to match the message sent by the user, and the function (or array of functions) to be called should there be a match. The function, or event handler if you will, takes three parameters: session, which we saw previously; args, which contains any additional information sent to the function; and next, which can be used to call the next function should we provide more than one in an array. For the moment, the only one that's important, and the only one we've used thus far, is session.
To use this with a bot, you’ll create it and add the dialog like we did previously, only adding in the dialog object rather than a function.
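A sketch of the wiring; note the dialog object, rather than a function, is registered as the root dialog:

```javascript
const builder = require('botbuilder');

const connector = new builder.ConsoleConnector().listen();

const dialog = new builder.IntentDialog();
dialog.matches(/^load/i, (session) => {
  session.send('Loading the profile...');
});

// Register the IntentDialog object as the root dialog
const bot = new builder.UniversalBot(connector);
bot.dialog('/', dialog);
```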
If you run the code, and send the word load, you’ll notice it sends the expected message.
Over time you'll add more intents. However, as we mentioned earlier, we want to make sure we are able to give the user a bit of guidance, especially if they send a message that we don't understand at all. Dialogs support this through onDefault. onDefault, as you might suspect, executes as the default handler when no matches are found, and it works just like any other handler, accepting one or more functions to execute in response to the user's intent.
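A sketch of an onDefault handler on the IntentDialog created above; the message text is illustrative:

```javascript
// Executed whenever no registered match fires
dialog.onDefault((session) => {
  session.endConversation('Sorry, I didn\'t understand that. Try typing "load".');
});
```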
You'll notice you don't give onDefault a match; it is, of course, the default. You'll also notice we used session.endConversation to send the message.
endConversation ends the conversation, and the next message starts from the very top. In the case of our help message this is the perfect behavior. We’ve given the user the list of everything they can do. The next message they send, in theory anyway, will be one of those commands, and we’ll want to process it. The easiest way to handle it is to use the existing infrastructure we just created.
If you test the bot you just created, you should see the following:
When creating a bot, the first thing you’ll do is determine what the user’s intent is; what are they trying to accomplish? This is done in a standard app by the user clicking on a button. Obviously, there are no buttons. When you get down to the basics, a bot is a text based application. Dialogs can make it easier to determine the user’s intent.
One of the most common phrases when I’m talking about technology for end users is “meet them where they’re at.” A big reason applications fail to be adopted is they require too large of a change in behavior from the users in question, having to open yet another tool, access another application, etc. We as humans have a tendency to revert to our previously learned behaviors. As a result, if we want to get our users using a new process or application we need to minimize the ask as much as possible.
This is one of the biggest places where bots can shine: they can be placed where our users already are. Users are already using Office, Slack, Skype, etc. A bot can then provide information to the user in the environment they’re already in, without having to open another application. Or, if they want to open an application, the bot can make that easier as well. In addition, the user can interact with the bot in a natural language, reducing the learning curve, making it seem more human, and maybe even fun.
At //build 2016 Microsoft announced the Microsoft Bot Framework, a set of APIs available for .NET and Node.js, to make it easier for you to create bots. In addition, we also announced Language Understanding Intelligent Service (LUIS), which helps break down natural speech into intents and parameters your bot can easily understand.
What I’d like to do over a handful of posts is help get you up and running with a bot of your own. We’ll use Node.js to create a simple “Hello, world!” bot, and then add functionality, allowing it to look up user information in GitHub, and then integrate it with various chat services.
The Bot Framework is currently under development. As a result, things are changing. While many of the concepts we’ll talk about will likely remain the same, there may be breaking code changes in the future. You have been warned. ;-)
This post assumes you have Node.js installed, and npm as well. Finally, I'll be using ES6 syntax as appropriate.
With that in mind, let's create a folder in which to store our code, initialize it with npm init, and install the Bot Builder SDK with npm install botbuilder --save. As for the initialization, I'm not overly concerned with the settings you choose there, as we really just need the package.json file; you can just choose all of the defaults.
Let’s start with the stock, standard, “Hello, world!”, or, in this case, “Hello, bot!”
Creating an interactive bot requires creating two items, the bot itself, which houses the logic, and the connector, which allows the bot to interact with users through various mechanisms, such as Skype, Slack and Facebook.
In regards to the connector, there are two connectors provided in the framework - a ConsoleConnector, perfect for testing and proof of concept, as you simply use a Windows console window to interact with your bot, and the ChatConnector, which allows for communication with other clients, such as Slack, Skype, etc. You'll start with the console connector, as it doesn't require any client other than the standard Windows console.
As for the bot, you’ll create a simple bot that will send “Hello, bot” as a message. To create the bot, you will pass in the connector you create.
Create a file named `text.js`, and add the following code:
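The Bot Framework SDK has changed over time, and `botbuilder` isn’t something you can run without installing it first, so here is a small stub that mimics the shape the post describes - a console connector, a bot, and a root (`/`) dialog. Treat it as a sketch of the pattern, not the actual `botbuilder` API; your real `text.js` would start with `const builder = require('botbuilder');` instead of the stub classes.

```javascript
// Stub stand-ins for the two pieces described in the text: a connector that
// would read console input, and a bot that routes messages to dialogs.
const sent = [];  // collects outgoing messages so we can see what the bot said

class ConsoleConnectorStub {
    listen() { return this; }  // the real connector would start reading stdin
}

class TextBotStub {
    constructor(connector) {
        this.connector = connector;
        this.dialogs = {};
    }
    dialog(name, callback) {  // register a dialog, e.g. the root dialog '/'
        this.dialogs[name] = callback;
    }
    receive(text) {  // simulate a user message arriving from the connector
        const session = {
            message: { text },
            send: (msg) => { sent.push(msg); console.log(msg); },
        };
        this.dialogs['/'](session);  // nothing on the dialog stack => root runs
    }
}

const connector = new ConsoleConnectorStub().listen();
const bot = new TextBotStub(connector);

// The root dialog: whatever the user types, reply with a greeting.
bot.dialog('/', (session) => {
    session.send('Hello, bot!');
});

bot.receive('hi');  // the "type anything to wake the bot up" step
```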
Let’s start from the top. The first line is the import of `botbuilder`, which will act as the factory for many objects we’ll be using, including the `ConsoleConnector`, as you see in the second line.
To create a bot, you need to specify its connector, which is why you’ll create that to start. The connector is used to allow the bot to communicate with the outside world. In our case we’ll be interacting with the bot using the command line, thus the `ConsoleConnector`. Once you’ve created the connector, you can then pass that into the bot’s constructor.
The design of a bot is to interactively communicate with a human through what are known as dialogs. The next line adds in a dialog named `/`. Dialogs are named similar to folders, so `/` will be our starting point, or root dialog. You can of course add additional dialogs by calling `dialog` yet again, but more on that in a later post. The second parameter is the callback, which, for now, will accept a single parameter named `session`. `session` manages the discussion with the user, and knows the current state. You’ll either use `session` directly to communicate with the user, or pass it into helper functions to communicate on your behalf.
The simplest method on `session` is `send`, which, as you might imagine, will send a message to the user. If you run `text.js`, and type in anything and hit enter (make sure you type something in to activate the bot!), you’ll see the message.
You need to send a signal to the bot first in order to “wake it up.” When you’re using the command line for initial testing this can be a bit confusing, as you’ll run the application and notice that nothing is displayed on the screen. When you run your bot, just make sure you send it a message to get things started.
Obviously, displaying a static message isn’t much of a bot. We want to interact with our user. The first step to doing this is to retrieve the message the user sent us. Conveniently enough, the message is a property of `session`. The message will allow us to access where it was sent from, the type, and, key to what we’re doing, the `text`.
Let’s upgrade our simple bot to an echo bot, displaying the message the user sent to us.
You’ll notice we updated the `session.send` call to retrieve `session.message.text`, which contains the user input. Now if we run it we’ll see the information the user typed in.
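To see the echo logic in isolation, here it is with a stub session standing in for the real `botbuilder` one (a sketch of the pattern, not the SDK’s actual objects): the user’s input arrives on `session.message.text`, and the dialog echoes it back through `session.send`.

```javascript
// Echo version of the root dialog, with a stub session in place of the
// real botbuilder one. The user's input arrives as session.message.text.
const sent = [];

function rootDialog(session) {
    session.send(`You said: ${session.message.text}`);
}

// Simulate the framework delivering a user message to the root dialog:
const session = {
    message: { text: 'Hello, bot!' },
    send: (msg) => { sent.push(msg); console.log(msg); },
};
rootDialog(session);  // prints: You said: Hello, bot!
```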
Bots are a way for users to interact with services in a language that’s natural, and in applications they’re already using. You can integrate a bot with an existing application, such as a web app, or with a tool users are already invested in, such as Slack or Skype. We got started in this blog post with bots by first obtaining the SDK, and then creating a simple bot echo service. From here we can continue to build on what we’ve learned to create truly interactive bots.
I remember working for a .NET development shop back in 2005. .NET 2.0 was still in its nascent phase, and the team I was on was still relatively new to .NET. We were all trying to figure out best practices, object design, etc. But we were good developers, and knew that it’s always best to go with libraries and frameworks that are already written, ones that will simplify the task at hand. The library that was popular among the team was CSLA.
Now I should mention right up front, for full disclosure, that I’m personally not a fan of CSLA. But everyone has their own opinion, and I know many teams have used CSLA to great success. The post below is not about CSLA, but rather about finding the right tool, and the right functionality.
There’s this famous scene in Spinal Tap, a mockumentary about a fictitious rock band, where the lead guitarist, Nigel Tufnel, explains to the reporter how his amp is louder because it goes to 11. When asked why he simply didn’t make 10 louder, Nigel is befuddled and again states the amp goes to 11, which is one more than 10. 11 must be louder, right?
When we discussed using CSLA in our team, one of the arguments that was given in its favor was CSLA had built-in support for remoting, a precursor to WCF. Our project, however, not only had no need for remoting, it would never need remoting. The fact that the framework supported remoting was of no concern to our application.
However, “this one goes up to 11.” “This supports remoting. And isn’t that cool?”
Yes, it’s cool - if we need remoting. Otherwise, we’re simply adding complexity for complexity’s sake.
If you look out over the developer landscape, you’ll notice tool upon tool, framework upon framework, all offering some set of functionality, with promises to make your life as a developer easier. And most frameworks and tools will do exactly that. But, those tools may come at a cost in increased complexity.
On the web side of things, you have NPM, Grunt, Gulp, Bower, …, to help manage packages, files, workflows, etc. And you have jQuery, Bootstrap, Knockout, Angular, …, to make developing front ends that much easier. And the list goes on and on.
All of those various tools and frameworks have their place, and they can all bring additional power, and help you create applications that much faster. But they can also add unnecessary bloat. And complexity.
Before taking a dependency on a tool or framework, make sure the features it provides are what you actually need. For example, Ember.js has this amazing data store; it’s my favorite feature of that framework. But if you’re not making many Ajax calls and working with data, but rather need to simply update the front end dynamically, why choose that as your framework? Why not use jQuery? Or maybe Knockout?
As a perfect example, I am currently pecking out a sample NodeJS application. I dutifully sat down and started adding in various packages, started tweaking my Gulp file, and then stopped myself.
It’s a simple sample application.
Do I need to use LESS? Since I’m just going to be using the default Bootstrap theme, there’s no need for me to worry about pre-processors or the like. Could I use LESS? Sure, but I don’t need it to survive.
If the sample application starts to grow and become more complex, maybe I’ll revisit those decisions. But right now, I don’t need them. Why would I take a dependency on something that offers me features that I don’t need?
Instead, make 10 louder.
Long road relays seem to be all the rage in running these days. Considering the basic concept is you get a bunch of your friends together and cover 200 miles in shifts, the appeal is pretty obvious. Well, obvious to runners anyway. ;-) Chances are if you’re a runner you’re familiar with this style of race, and you’re probably considering doing one. I just finished my first, and while I’m certainly no expert, I did learn some lessons that I wish I’d known about before the race. So, I’m going to share them with you.
You will want one outfit per leg. After all you run your leg, and then rest, either in your van or elsewhere, for the next few hours. You’re either going to be wearing wet, stinky clothes, much to the chagrin of your van-mates, or you’re going to be putting on wet running gear to head out for your next leg, which is certainly not something you want to do.
2-gallon Ziploc bags are your friends. Not only are they a great way to group gear together, they’re also a great place to store those wet clothes we talked about above. They’ll not only keep the stink contained, they’ll make it easy to keep everything else nice and dry.
When I did my race I meticulously packed each of my three outfits into three separate bags. What I discovered, though, was I wound up swapping things around from my original plans. Next time I’d keep tops in one, bottoms in another, and things like socks in a third. And then just toss your dirty stuff into a single Ziploc.
Make sure you know ahead of time what everyone’s goals are. I did my race with a friend, and we were originally thinking we’d be running a slightly faster training pace. It was only after we joined our team that we discovered everyone wanted to race their legs. We adjusted and rolled with the punches, but it would have been good to have those expectations ahead of time.
Depending on the speed of your runners, you’re not necessarily going to have a lot of time at the exchanges to change. On top of that, the only place you will have to change in private is in a portapotty, and that’s not really the best place to be for anything but the original design of the equipment.
This means, often, the best place to change is going to be in the van. You can set up an area in the back seat, with a couple of towels, that can help give a shield to the person changing and keep the right parts covered. This isn’t to say that you have to give up all privacy, but being comfortable enough to simply have people not look goes a long way to making it easier to get out of those wet clothes or into the next outfit you’re going to wear.
As I mentioned above, there isn’t always a lot of time in the transition areas. You’ll want to have a plan in place on what you’re going to be doing in the transition area. There are three runners you need to support at all times - the one who just finished, the one who is about to start, and the runner that’s currently on the road. You’ll want to make sure you figure out how you’re going to balance all of those runners to make sure everyone has what they need.
If at all possible, have a dedicated driver. Having to run and drive, which I did, makes for a very long day.
Try to keep everyone at some state of ready as you go from transition to transition. Trying to load a van to head to the next exchange can be a bit like herding cats, as someone realizes they need a headlamp, or a reflective vest, or a granola bar, or … This takes time, and makes it that much harder to get out to cheer on your runner, and get to the next exchange in a good amount of time.
Make sure everyone has a headlamp, vest, and back light of their own. You don’t want to worry about trying to share them. And keep those in a separate bag, or maybe in the same bag with your socks (see above). This way you know where everything is at any given time.
Lost car keys are always a risk as everyone keeps hopping in and out of the van in sporadic order. In addition, you’ll be passing the keys around as different people are driving or going into the van. Having them on a lanyard, and around someone’s neck, decreases the chances you’ll lose them, and makes it easier to spot who has the keys.
When you’re packing, make sure you have a sleeping bag and a good pad (or small air mattress) to sleep on. You’re not going to have a lot of time to sleep, so you’ll want to make the best of it. Those little creature comforts will make all of the difference in the world. They’ll also give you the flexibility to sleep outside (maybe pack a small tent?) rather than in the high school.
While we’re at it, an eye mask and earplugs are an absolute must. Trust me, you absolutely need them.
I didn’t have any of the above, and I was not a happy man come the following morning.
Ragnar (I can’t speak to the others) does a good job of marking the course, but not always. There was one turn in particular where they had everyone running on one side of the road, but the sign to turn was on the opposite side - very easy to miss, and one runner I know did. Bring the map.
In fact, one thing you might want to consider is leaving someone at the challenging corner if you see it while driving along the route to help the runner make the turn. The little bit of lost time to pick up the person left behind is far less than risking losing a runner.
Cheer on your runner - and the other runners as well. There’s not going to be a lot of runners out while you’re running, and not much in the way of support beyond those running the race. As a runner, you know how much support helps. Give that support to your runner, and the other teams while you’re at it. They’ll appreciate it.
Chances are you’ll be away from home when you finish. You’re not going to want to drive right home afterwards, and why would you even if you could? I mean, you just finished a great race with your new best friends! You should celebrate it.
If you rent a big house you’ll be able to get showers (you’re going to want a shower!), beds, a kitchen for food, etc.
This goes without saying, but enjoy the experience! It’s great being able to see the sun go down, and then come back up. It’s great being out on a country road, at night, running along. And you’ll share laughs, and a great time.
18 months. That’s how long it’s been since my last marathon. I’ve battled many an injury: shin splints, back issues, and my IT band, the last of which sidelined me since last January. It’s been a long struggle back, and fortunately I have many friends who’ve given me more love and support than I could possibly ask for.
Grandma’s Marathon, a race which I’ve run in the past, a race which I love, and a race to which I have a connection, as my wife and I went to college at University of Minnesota Duluth (UMD), seemed like the perfect race for my triumphant return. Well, at least it seemed that way.
First up, let’s talk a little bit about the race itself. It’s called Grandma’s Marathon because the title sponsor is a local restaurant/bar named, fittingly enough, Grandma’s. They’ve been the title sponsor since the second year the race was held, 39 years ago. That made the 2016 running the 40th annual event, which also meant there was a cool medal. It’s all about the cool medals when you’re a runner.
Because it’s based in Duluth, MN, a town of 100,000 people, and sees a minimum of 15,000 people combined running the half or the full, it takes over the town. The hotels are in full gouge mode over the weekend, with even the Bates Motel asking for no less than $300/night, but they open up all of the local college campuses for the runners. There are signs up all over town referencing the runners. And everyone you talk to just assumes you’re there for the race. It’s a wonderful atmosphere.
Grandma’s is a mini-Boston. Granted, I’ve never run Boston, and probably won’t ever be fast enough to do so, but there are striking similarities. It’s a point-to-point, starting from a small town and running into the city. It’s a net downhill, with hills throughout, and a good climb at mile 22, called Lemon Drop Hill, although none of the hills are as bad as the ones you’d see in Boston. And, until the end, there isn’t much in the way of turns, as it follows an old highway into town, meaning all of the turns are sweeping curves.
Our 2016 experience started with my wife, Karin, our good friend, Susan, and myself all flying into Minneapolis, with intentions to drive to Duluth, on Thursday. We all happened to be flying in from different places, Karin from our house in Seattle, Susan from her house in Ottawa, and I from a conference in Boston. Originally we were all supposed to land within about 5 minutes of one another, but in what would set the tone for the weekend, things didn’t go as planned - my flight out of Boston was delayed a good couple of hours. Fortunately, Karin and Susan were able to find dinner, and I was able to eat on the plane, so we were able to grab the car and just start driving to Duluth. We avoided rush hour, and made good time into Duluth. We checked into the dorms and fell fast asleep.
Because Karin and I attended college in Duluth, and lived there for 4 years, we wanted to share a little bit of our past with Susan. This meant starting Friday morning, the day before the race, with breakfast at the Perkins we all would hang out at as college kids. We showed Susan a couple of neat views of Lake Superior, the lake we’d all be running along the following day.
The race finishes around the DECC, which is where the race expo is, and the last mile is where the race organizers “make you work for it”. You wind up running past the finishing line twice, before finally making the last turn towards it. There’s also a quick, but steep, hill to cross a bridge to get over to the DECC, and a couple of tight turns you want to be aware of in advance. We took the opportunity to walk the last mile and familiarize ourselves, particularly Susan, with the finish. This was the best 30 minutes we’d spend that day, and it’d pay dividends during the race. If you decide to do Grandma’s, I can’t recommend checking out that last mile enough, otherwise it will bite you - I promise.
The expo itself was your stock expo, with all of the various tchotchkes you might want to find. What made this unique was Dick Beardsley, a Minnesota boy who set the course record at 2:09:36, a record which stood for over 30 years, was signing autographs. I needed an autograph! I can tell you Dick is probably the nicest guy you’ll ever meet, and a great ambassador for the sport. He would talk to everyone for as long as they wanted, and truly relished the time to meet with fellow runners.
After the expo, and realizing we’d walked about 10K, we decided it was time to find lunch and get off our feet. Lunch location? Erbert’s & Gerbert’s, a Duluth institution. We roamed back to campus to take over one of the lounges and watch Spirit of the Marathon, because you have to. We found dinner at a great little Italian restaurant to complete our carbo-loading. Then it was time for sleep, with visions of PRs dancing in our heads.
Now, let me back up just a little bit to talk about the weather obsessing that is standard for any race. When we first started checking, it looked like it might be a touch warm (highs right around 70F), but otherwise OK. Then it looked like it’d be hot. And then rain. The night before, it looked like the rain was going to miss us, and we were back to a little warm, no wind. We thought everything would be good.
So we thought…
Grandma’s Marathon uses a flag system to indicate the weather risk - green, yellow, red, and then black, with black meaning “extremely high risk”.
Karin boarded her 4:45 bus for the half marathon, which starts at 6:15a. Her race temps were a bit warm, but she finished the race just as they put out the yellow flag. She had a great race, even if it was a bit slower than her previous race here.
As for me and Susan, well… Things started out looking promising. We boarded the bus nice and early, and were among the first to arrive at the starting space, a car dealership. We got a picture by the starting corrals, took pictures of the green (yes, green) flag, indicating good race conditions, and found a nice spot on the grass to chill before the race. Because we were among the first to arrive, we had first crack at the portapotties, which is every runner’s dream. We went through our normal pre-race routines, and chatted about strategy and expectations.
We waited for about as long as we could before fighting our way down to gear check, and then into the corral area. As we pushed our way towards our areas in the corral, we heard the announcer, who clearly wasn’t a runner, talk about how beautiful the day was, at a pleasant 68 degrees. 68 degrees is about 20 degrees too warm for most runners I know. About 10 minutes later, he announced it was 72. What he didn’t add was the fact the humidity was around 80%. It was going to be a slog, and I was beginning to sweat just standing there. I didn’t know the heat that was ahead of me.
A bit of background on me. I’ve run 3 marathons in the past, and I’ve yet to run what I’d call a good race. In each race I had some form of a collapse, at varying spots along the race. I know I have a 3:59 in me, but I’ve yet to coax it out of my body. Going into the race, this was my goal. This was my race. This was my time. Mother Nature, unfortunately, had other ideas.
I bid Susan farewell at the 4:00 “corral”(1), and let her fight the rest of the way to her 3:45 area. When I got there I started looking around for the 4:00 pacer, along with the rest of the runners in the area, to no avail. I discovered later that all pacers under 4:15 were towards the front of the starter’s chute, which did nothing to help those of us who lined up where we were supposed to line up. Grandma’s gets nearly everything right during race weekend, but this was a huge blunder on their part. It was at this point I realized I wasn’t going to have a pacer, and thus wasn’t going to have a pack to run with.
It was the latter part that really bothered me. I’m a social runner. I like to chat, to have camaraderie, to share the experience with those around me. When there’s a pacer, there’s automatically a pack and a sense of community. Without a pacer, well, it’s every runner for themselves. So while I did chat with a few runners at the start, once the gun went off and we reached the start line, everyone went off by themselves.
I have to say I’m rather proud of myself. I have a history of going out too fast, which is easy to do on this course, as the first couple of miles are downhill. Plus, considering the lack of a community, I could have easily latched on to a runner who was going too fast. But, when the one woman I was talking with took off a bit faster than the 9:09 pace I wanted, I let her go off and do her own thing, and settled into my pace.
And find my pace I did. I went through the first 10K at about 17 total seconds fast, or about a 9:06 pace. My overall pace from mile to mile didn’t vary by more than 8 seconds. I could not be happier with how that first 10K went. And while that green flag was a yellow flag at the first water station, I was still feeling good.
But… When I saw that yellow flag at 5K I knew this was a sign of things to come. I knew at some point we were going to see that red flag. It was too early in the day, the sun was bright overhead, and it was only going to get warmer. My body, however, felt great at the 10K mark. I was going to ride this wave for as long as I could.
Around that 10K mark I passed the 4:15 pace group, which I thought was rather strange. I didn’t know what had happened with the pace groups until after the race, but passing a pace group of any sort does give you a bit of a burst of energy, which I took. I did give a thought to falling in with the group, knowing the heat was only going to continue to beat me down over the course of the day, but again - my body was feeling good.
I mentioned earlier I’d run 3 marathons, and during each marathon you learn different lessons. My last marathon, Carlsbad, was an unmitigated disaster, with me having to push to finish under the 5:00 mark.(2) At mile 10 of that race my quads were shredded and I didn’t have anything. But I decided to push it, and when I hit mile 16 I had nothing left. I walked about 8 of those last 10 miles.
With that lesson in mind, I’d already made the decision to do check-ins with my body at every 10K, and adjust as needed. I cruised through that first 10K, and was settled in for a good race. I found my pace, my stride, and knew the effort that was needed to maintain right around that 9:09 pace. My watch showed me at 9:06, and all was good.
Then things started to change. I went through mile 7 and 8 a touch slow, but nothing that had me overly concerned. I noticed the overall pace on my watch start to creep up to 9:07. Then 9:08. And then I went through mile 10, at the same effort I’d done the previous 9 miles, at a 9:21 pace. It was at this point when I realized it just wasn’t going to be there. Now, yes, 12 seconds off pace isn’t much. But with the temperature starting to rise, and the sun still beating down, I decided to just settle in and enjoy the run rather than shooting for a PR. In the end this turned out to be a good decision, as the 11-mile water station flew what I was expecting to see: red flag. Yes, it was that hot.
From here forward it was all about keeping cool, staying positive, and enjoying the race as best as I could. I was going to listen to my body, try to run from balloon to balloon (mile to mile), and walk through the water stations. Unfortunately, I did wind up walking more than I’d hoped, but I was able to run a lot more than I could at Carlsbad, further reinforcing my decision to bag the PR at mile 11.
At this point I’d like to thank the race organizers for doing an amazing job with the aid stations, and the good people of Duluth, MN for helping keep us cool. The aid stations are easily the best I’ve ever seen at a race, and I’ve experienced many races in my time. Every single station was well staffed and supplied. You had plenty of time to get your water or PowerAde. The layout was water, PowerAde, ice, sponge, water. Yes - two water tables at each station, and PowerAde at each station. I don’t know of another race that does that. And the sponges and ice were plentiful as we all tried to keep ourselves cool, or as cool as we could.
The good people of Duluth did their best as well, setting out cooling showers, or just simply hanging outside of their house with a hose. They all understood to not simply douse the runners, but rather let the runners come to them. One of my highlights was around mile 22, when I went running towards a guy with a hose and said “HIT ME!”, which he did with about 4 gallons of water. It was amazing!
But, back to the race. I hit the halfway mark at 2:02, which made me smile, knowing I had a good time until then, but also knowing what was ahead of me time-wise. At mile 15, there was a loud BANG. It turned out the blue balloons, indicating the half marathon miles, were exploding from the heat. Many of the balloons met the same fate, although this was the only explosion I witnessed. My running partner in crime, Susan, also said she heard one. Whether you saw the explosions or not, you noticed the aftermath, as there were few blue balloons left on the course as the day wore on.
The turn into the city was welcomed by everyone, as the fans really start to pick up during that stretch, carrying you through to the finish. The mile through downtown Duluth is amazing, with an energy that can’t be described, as the fans, three to four deep, cheer you through towards that last mile.
Remember that last mile? Boy was I glad we walked through it the day before, as I knew exactly what to expect when we turned left on 5th to cross the bridge towards the DECC. I focused on doing the loop behind the DECC, past the USS Irvin, past the finish line for the second time, and then around one last little loop to head into the finish - at 4:41, and in 79-degree heat. I’ve never been so happy to see 4:41 on my watch, as I had to fight to get it below 4:45.
I crossed the finish line in full celebration mode, knowing what I’d overcome that day. I was decorated with my medal, found my finisher’s shirt, and was then embraced by Susan, with a very knowing expression of “we survived!” As it turned out Susan was about 25 minutes off the time she was hoping for, and battled to get there as well. Susan then led me to Karin, and another wonderful embrace.
After the obligatory post-race photos, the two of them, having been in the area for a while, had already found their first round of post-race refreshments. They helped me find what I needed (a chocolate milk and banana to start), and my gear. Grandma’s offers changing tents, which both Susan and I took advantage of, getting out of our clothes, which were soaking wet. Next stop: beer tent, for a well-earned cold one. Karin, who’d been hanging out in the finishing area the longest, then declared she was hungry, which sent us off on a search for food. Needless to say Canal Park, where the race finishes, was absolutely packed. But with a little bit of a walk we were able to find a restaurant that had immediate seating. We settled in, had a couple of bites, and beer number two. :-) We swapped stories, commiserated, and slowly started to recover from a long, hot, but successful day.
Grandma’s remains my favorite marathon, and the heat does nothing to change my opinion of this great race. Yeah, it was hot, but the organizers handled it with aplomb. The only issue was the pacers, which I mentioned above. If you are going to run this race, book your hotel, and your pre-race meal, nice and early.
And pray for rain.
(1) There aren’t corrals, just signs denoting the expected finishing times.
(2) Speed is relative. For some, 4:00 is slow. To me, 5:00 is slow. To each their own strengths, etc.
… oh my!
Lately, one of the most common questions I’ve received from my students is, basically, “What in the world is this syntax, and what does it mean?”
Well - it’s a lambda statement. But of course if you haven’t seen one, you need a bit more information than that.
Quite frequently, a full explanation would simply take too long, or send the class down another path far away from the topic at hand. What I want to do with this post is answer that question fully.
I’m going to answer the question by using one of my biggest philosophies when it comes to training, which is to explain it in the way that I understood it, in whatever method it was that made it finally click for me. In this case, that means stepping all the way back to the beginnings and showing essentially the progression that got us to where we are today.
That’s going to take a little while, so bear with me.
Trust me, we’ll get there.
Let’s take a simple class called `Customer` that’s defined below:
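The original listing didn’t survive in this copy of the post; as an illustrative stand-in (sketched in JavaScript rather than the post’s C#, with names of my own choosing), the shape being described is roughly:

```javascript
// A JavaScript parallel of the Customer class described in the text: two
// properties and a simple constructor. (This is my own sketch; the post's
// original listing was C#.)
class Customer {
    constructor(firstName, lastName) {
        this.firstName = firstName;
        this.lastName = lastName;
    }
    toString() {
        return `${this.lastName}, ${this.firstName}`;
    }
}

const ada = new Customer('Ada', 'Lovelace');
console.log(ada.toString()); // Lovelace, Ada
```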
Pretty straightforward. A couple of properties, a simple constructor. Looks good. Now let’s create a couple, put them into a list, and display them.
Again - pretty straightforward. And the output will of course be exactly what we’d expect:
Now let’s see if we can sort those customers. Fortunately, `List` has a `Sort` method. Let’s update line 7 to call `Sort` (code below) and run it to see what happens.
Well, that wasn’t ideal… In a nutshell, what the runtime is trying to tell us is that we told it to sort a list of customers, but it has no idea how to sort our customers. Makes sense. How do we tell it to sort our customers? Well - by implementing `IComparable`.
`IComparable` is an interface with one method - `CompareTo`. `CompareTo` returns an integer based on the following criteria:
- If the current object (`this`) is less than the other object, return a negative number
- If the current object (`this`) is equal to the other object, return 0
- If the current object (`this`) is greater than the other object, return a positive number
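The listings in this post are C#, but the same negative/zero/positive contract is easy to see in JavaScript, where `Array.prototype.sort` accepts a comparator with exactly this behavior (a parallel sketch of mine, not the post’s code):

```javascript
// The same three-way contract, written as a standalone comparator function:
function compareByLastName(a, b) {
    if (a.lastName < b.lastName) return -1; // a sorts before b
    if (a.lastName > b.lastName) return 1;  // a sorts after b
    return 0;                               // equal: relative order unchanged
}

const people = [{ lastName: 'Lovelace' }, { lastName: 'Babbage' }];
people.sort(compareByLastName);
console.log(people.map(p => p.lastName)); // [ 'Babbage', 'Lovelace' ]
```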
One thing that you’ll notice is that every primitive type (and string) in .NET already implements `IComparable`. This means we can take advantage of those implementations. Let’s update our `Customer` class to implement `IComparable` and sort by `LastName`.
The breakdown looks a bit like this:
- Line 8: see if the object is null. If it is, move it to the top of the list.
- Line 10: convert the object to a `Customer`.
- Line 11: if it turns out that `obj` is a `Customer`, use the `CompareTo` method on `LastName`.
- Line 13: if it turns out that `obj` is not a `Customer`, throw an exception.
If we run the code again, we now get the result that we were hoping for:
But… If you look at the `CompareTo` implementation, we’re having to cast the `obj` parameter to `Customer`. Why can’t we just tell the `IComparable` interface that we want people to pass in a `Customer` and be done with it?
Fortunately - we can. The way that we do this is by using generics. In a nutshell, generics allow you to pass a type as you would a variable. So at design time we tell `IComparable` to prepare itself for `Customer` objects. Let’s update our `Customer` class again.
Cool - we have a Customer class that can be sorted.
But… It can only be sorted one way - by `LastName`. If we wanted to sort by `FirstName`, well - that’d require updating our class. Having to update our class every time we need a different sort order would be a pain.
Fortunately – we don’t have to. The .NET Framework also includes an IComparer interface. The difference between the two is that IComparable is for sorting that specific class (which is why we implemented it on Customer itself), while IComparer is for sorting other classes – a utility, if you will. Let’s create a class that can sort Customer objects by FirstName.
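A sketch of such a comparer. The class name CustomerSorter comes from later in the text; the parameter names are my assumptions, and the Customer class is the one from earlier, repeated here so the sketch compiles on its own.

```csharp
using System.Collections.Generic;

public class Customer
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

// A utility class whose only job is sorting Customer objects by FirstName
public class CustomerSorter : IComparer<Customer>
{
    public int Compare(Customer lhs, Customer rhs)
    {
        // Same idea as before: lean on string's comparison logic
        return lhs.FirstName.CompareTo(rhs.FirstName);
    }
}
```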
The main difference, besides comparing by FirstName, is that we have two parameters of type Customer and we’re comparing one to the other. But the logic is still basically the same.
To use the new sorter, we simply pass a new instance of the object into the Sort method on List.
When we run the code now, we get everything sorted by FirstName.
But… If we have to create a new class every single time we need to change the sort order, well – that’s going to stink. There’s gotta be a better way.
Fortunately, there is. The .NET Framework gives us the ability to use delegates, which allow us to pass methods like we would objects.
Let’s take a look at the Compare() method that we created on our CustomerSorter class. You’ll notice that it takes two parameters, each of type Customer, and returns an integer. That’s it. And when we call Sort and pass in the CustomerSorter, it simply calls that method. Why can’t we just pass a method into Sort?
This is where that delegate, an object that points to a method, comes into play. Since it can be used like an object and passed in as a parameter, we can just tell Sort to call that method directly. This is really the same thing we were doing before by passing in an instance of CustomerSorter to the Sort method, only this time we’re just passing in a method. Our method just needs to match the same signature – two Customer parameters, returning an integer.
Let’s add a method to our Program class that will use the same logic as our CustomerSorter. Want to know a secret? I simply copied and pasted the Compare method from the CustomerSorter class, changed the accessibility to private, made it static, and gave it a new name.
Now we just need to tell the Sort method to use our new method.
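That might look like the following. The method name CompareByFirstName is my own invention – the original’s name isn’t shown – and this works because List&lt;Customer&gt;.Sort accepts any method matching the Comparison&lt;Customer&gt; delegate.

```csharp
using System.Collections.Generic;

public class Customer
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

public class Program
{
    // Matches the Comparison<Customer> delegate signature:
    // two Customer parameters in, an int out
    private static int CompareByFirstName(Customer lhs, Customer rhs)
    {
        return lhs.FirstName.CompareTo(rhs.FirstName);
    }

    public static void SortByFirstName(List<Customer> customers)
    {
        // Pass the method itself into Sort - no sorter class needed
        customers.Sort(CompareByFirstName);
    }
}
```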
And as before, the result is the same – sorted by FirstName.
But… If we need to sort in several different orders, we don’t want to have to create a method each and every time. Wouldn’t it be nice if we could just inject the logic right into the call to Sort?
Fortunately, we can. We do this by creating an anonymous method. An anonymous method is a method that has no name. We just create the method signature by using a delegate, and pass it right into the Sort method.
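Assuming customers is the List&lt;Customer&gt; we’ve been working with, the anonymous-method version might look like this:

```csharp
// The delegate keyword stands in for the method name,
// and no return type is specified - Sort already knows it's an int
customers.Sort(delegate (Customer lhs, Customer rhs)
{
    return lhs.FirstName.CompareTo(rhs.FirstName);
});
```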
Just as before, we have the same logic (yes, I copied and pasted). We’re declaring the method much as we normally would, with only a couple of differences. First, we’re using the keyword delegate instead of a method name. Second, we’re not specifying a return type. The reason is that Sort already knows what the return type is – an integer – so the compiler doesn’t require it.
And, as before, the result is the same – sorted by FirstName.
But… Why do we have to specify the types of the parameters? After all, we already told the list right up front that we were using Customer objects.
Fortunately, we don’t. This is where lambda expressions come into play. A lambda expression is just like an anonymous method, only with a couple more assumptions. In our case, since we know, and the compiler knows, that we are only dealing with Customer objects, and we need to return an integer, we’re just going to declare our variables and move on.
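The lambda version might look like this – same assumed customers list and property names:

```csharp
// No delegate keyword, no parameter types, no return type -
// the compiler infers all of it from List<Customer>.Sort
customers.Sort((lhs, rhs) => lhs.FirstName.CompareTo(rhs.FirstName));
```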
The main difference between this and our anonymous method is the syntax and the fact that we’re not declaring data types. Since everyone, including the compiler, knows that lhs and rhs can only be Customer objects, we don’t have to declare their types. The => is just the syntax that indicates the start of the method, or lambda expression.
And, as before, the result is the same – sorted by FirstName.
But… What about the normal situation where all we want to do is just say, “Hey .NET - sort this by ___ for me.”? Do we really have to create a method, even in a lambda expression, every single time?
Fortunately, we don’t. This is where Language Integrated Query (LINQ) comes into play.
In each of our method implementations, our code has simply taken advantage of the logic in the String class. LINQ will do the same thing for us. We simply tell LINQ, “Hey – sort by this”, and it’ll handle the translation for us.
The first step to using LINQ is to leave behind the Sort method that we’ve come to know and love. Unfortunately, Sort doesn’t support LINQ.
When Microsoft introduced LINQ, they also introduced something called extension methods. Extension methods are a way of adding a method to an existing class without having to inherit from that class. The method that was added to the List class – or more specifically, to IEnumerable – for LINQ was OrderBy. With OrderBy we can, by using a lambda statement, simply tell LINQ the property we want to sort by. So the new code looks like this:
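A sketch of what that code might be, assuming the customers list from earlier:

```csharp
using System;
using System.Linq; // OrderBy is an extension method defined in System.Linq

// OrderBy returns a new sorted sequence; customers itself is untouched.
// Nothing is actually sorted until we enumerate the results.
var sorted = customers.OrderBy(c => c.FirstName);

foreach (var customer in sorted)
{
    Console.WriteLine($"{customer.FirstName} {customer.LastName}");
}
```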
A couple of things to notice here.
First up, unlike Sort, OrderBy doesn’t update the list in place – it returns a new, sorted sequence and leaves the original list untouched.
Second is the fact that we’re not returning an integer. Again, LINQ will handle the translation for us. As long as we specify a property whose type implements IComparable, it will use that for the sorting.
Third is that LINQ uses deferred execution. In other words, it won’t actually do the sort until we use the results. In this case, we’re doing this when we use the foreach loop.
And now when we run the code we of course get the same results.
And that’s what that lambda statement is all about. It’s really just letting someone else create the query for us based on a couple of assumptions about our code and the types that we’re using.
From here, things actually get cooler. We of course have LINQ query syntax, which allows us to write something similar to a SQL query. And LINQ can also translate queries for other environments, like SQL.
But for now, we’ll leave it here, with a very propeller-head view of lambda statements. Hopefully this helped bring together what’s happening behind the scenes, and why we’re able to take the shortcuts we can take.