A couple of days ago I read a very cool blog post by Scott Hanselman about Monospaced Programming Fonts with Ligatures.

I had never heard of the term ligatures before, but he explains it quite well. Ligatures are ‘characters’ formed by combining multiple individual characters into one. Apparently this is quite common in Arabic script. The thing that matters here, though, is that you can also use them inside your development environment!

In order to use ligatures, just install the Fira Code font (or any other font which supports ligatures) on your development machine and you are ready to go! It might be a good idea to place the zip file in your OneDrive folder, so it’s available on all of your machines.

Visual Studio supports ligatures automatically; you just have to select a font which has them. So, head over to the Fonts and Colors settings and change the Plain Text font to `Fira Code Retina`.

image

Note, I had to use the `Fira Code Retina` font. I first tried the `Fira Code` font, but I didn’t see any ligatures pop up. There aren’t many differences between the non-retina and the retina version, so just use the one which works best for you.

After configuring this you will see the ligatures automatically applied to your codebase (restarting Visual Studio may be required).

image

Keep in mind, this is just a representation of characters. Your actual code doesn’t change!

For Visual Studio Code you have to enable ligatures explicitly. Change the following settings in your User Settings file.

    "editor.fontFamily": "'Fira Code Retina', Consolas",
    "editor.fontLigatures": true

After saving the file you should see the changes directly. If not, restart Code also.

image

Pretty slick, right?


Are there any other fonts I should look at? I’m currently quite happy with Fira Code, but I’m open to other great suggestions!


Sidenote: for people wondering which color scheme I’m using in Visual Studio, it’s the Xamarin Studio style.

Lately, I’ve been busy learning more about creating serverless solutions. Because my main interest lies within the Microsoft Azure stack I surely had to check out the Azure Functions offering.

Azure Functions enable you to create serverless solutions which are completely event-based. As they live within the Azure space, you can easily integrate with all of the other Azure services, like Service Bus, Cosmos DB and Storage, but also with external services like SendGrid and GitHub!

All of these integrations are fine and all, but seeing Azure Functions in action is still easiest with regular HTTP triggers. You can just navigate with a browser (or Postman) to a URL and your function will be activated immediately. I guess most people will create these kinds of functions in order to learn to work with them; at least, that’s what I did.

Creating your Azure Functions App

In order to create Azure Functions, you first have to create a so-called Function App in the Azure Portal. Creating such an app is quite easy; the only thing you have to think about is which type of Hosting Plan you want to use. At this time there are two options: the Consumption Plan and the App Service Plan.

image

For your regular “Hello World”-function it doesn’t really matter, but there are a few important differences between the two.

If you want to experience the full power of a serverless compute solution, you want to use the Consumption plan. This plan creates instances to host your Azure Functions on-demand, depending on the number of incoming events. Even when there is a super-high load on your system, this plan scales automatically.
The other main advantage is, you will only pay for the functions if they actually do something.
As you might remember, these two advantages are, in my opinion, the main reasons for people to move to serverless offerings.

However, using the App Service plan also has some advantages. The main one is utilizing the full power of your virtual machines without unexpected (high) costs. With the App Service plan, your function apps run on the App Service virtual machines you might already have deployed in your subscription (Basic, Standard and Premium). This means your Azure Functions can share the same (underlying) virtual machines as your websites. Using this plan might save you some money in the end, because you get to utilize the (unused) compute you are already paying for. Running these functions won’t cost anything extra, aside from the extra bandwidth of course.
Another advantage is that your functions will be able to run continuously, or nearly continuously. The App Service plan is useful in scenarios where you need a lot of long-running compute. Keep in mind you need to enable the Always On setting in your App Service if you want your functions to run continuously.

There are some other little differences between the two plans, but the mentioned differences are most important, to me at least.

Do remember to enable Application Insights for your Function App. It’s already an awesome monitoring platform, but the integration with Azure Functions makes it even more amazing! I can’t think of a valid reason not to enable it, especially since it is also quite cheap.

After you’ve completed the creation of your Function App you can navigate to it in the portal. A Function App acts much like a container for one or more Azure Functions, so you can place multiple Azure Functions into a single Function App. For monitoring purposes it might be useful to place the functions for a single functional use-case into one Function App.
You can of course put all of your functions inside one app. This doesn’t really matter at the moment; it’s a matter of taste.

Your first Azure Function

If you are just starting with Azure Functions and serverless computing I’d advise checking out the portal and creating a new function from over there. Of course, this isn’t a recommended practice if you want to get serious about developing a serverless solution, but it allows you to take baby steps into this technology space. A recommended practice would be using an ARM template or a CLI script.
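To give you an idea of the CLI route, provisioning a Function App on the Consumption plan can look roughly like the script below. All resource names are placeholders I made up, and the commands follow the Azure CLI 2.0 syntax, so double-check them against the current documentation.

    # Sketch: create a Function App on the Consumption plan (names are placeholders)
    az group create --name functions-demo-rg --location westeurope

    # A Function App needs a storage account for its internal state
    az storage account create --name functionsdemostore123 \
        --resource-group functions-demo-rg --sku Standard_LRS

    # Create the Function App itself, hosted on the Consumption plan
    az functionapp create --name functions-demo-app \
        --resource-group functions-demo-rg \
        --storage-account functionsdemostore123 \
        --consumption-plan-location westeurope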

From inside the Function App you have the possibility to create new functions.

image

Currently, the primary languages of choice are C#, JavaScript and F#. This is just to get you started, because more languages are supported already (Node.js, Python, PHP) and more are coming. There’s even an initiative to support R scripts in Azure Functions!

For now I’ll go with the C# function, because that’s my ‘native’ programming language.

After this function is created you are presented with an in-browser code editor from which you can start coding and running your function.

image

This function is placed in a file called run.csx. The csx extension belongs to C# scripts (check out scriptcs.net), much like ps1 belongs to PowerShell scripts.
It should now be clear this Azure Function is ‘just’ a script file with an entry point. This entry point is much like the Main method in the Program.cs file of a console application.

Because we have created an HTTP hook/endpoint, you should return a valid HTTP response, as you can see in the script. If you want to test your function, the portal has you covered: just press the Test button. Even though testing in the portal is cool, it’s even cooler to try it out in your browser. If everything is set up correctly you will be able to navigate to https://[yourFunctionApp].azurewebsites.net/api/HttpTriggerCSharp1?code=[someCode] and receive the response, which should be `Please pass a name on the query string or in the request body`. You can copy the proper URL from the Get function URL link in the portal.
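For reference, the script the portal generates looks roughly like the snippet below. I’m paraphrasing the template from memory, so treat it as a sketch rather than the exact generated code.

    using System.Net;

    public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
    {
        log.Info("C# HTTP trigger function processed a request.");

        // Look for a 'name' parameter on the query string first...
        string name = req.GetQueryNameValuePairs()
            .FirstOrDefault(q => string.Compare(q.Key, "name", true) == 0)
            .Value;

        // ...and fall back to the request body.
        if (name == null)
        {
            dynamic data = await req.Content.ReadAsAsync<object>();
            name = data?.name;
        }

        return name == null
            ? req.CreateResponse(HttpStatusCode.BadRequest, "Please pass a name on the query string or in the request body")
            : req.CreateResponse(HttpStatusCode.OK, "Hello " + name);
    }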

Management & settings

Azure Functions are really, really, really short-lived App Services. They are deployed in the same Azure App Service ecosystem, so you can leverage the same management possibilities which are available to your regular App Services.

On the Platform features tab you are able to navigate to the most useful management features of your Function App.

image

I really like this page; it’s much better and clearer compared to the configuration ‘blade’ of regular Azure services. Hopefully this design will be implemented in the other services also!

Keep in mind to configure CORS properly if you want to use your HTTP function from within a JavaScript application!

All other features presented over here are also important of course. I especially like the direct link to the Kudu site, from which you can do even more management!

Another setting, which is in preview at the moment, is enabling deployment slots. Yes, deployment slots for your Azure Functions! This works exactly the same like you are used to with the regular App Services. I’ve configured one of my Function Apps to use the deployment slots. By enabling deployment slots you can now deploy the `develop` branch to a development slot and the `master` branch to the production slot of the Function App.

image

If for some reason you want to disable a specific function, just navigate to the Functions leaf in the treeview, where you are able to disable (and re-enable) the different functions individually.

image

Creating real functions

Creating functions from within the Azure Portal isn’t a very good idea in real life, especially since you don’t have any version control, quality gates, or continuous integration & deployment in place. That’s why it’s a good idea not to use the browser as your primary coding environment. For a professional development experience you have multiple options at hand.

The easiest option is to use Visual Studio 2017. You need version 15.3 (or higher), which is still in preview at this moment. When you are done installing this version you should be able to install the Azure Functions Tools for Visual Studio 2017 from the Visual Studio Marketplace.
Doing so will enable you to choose a new project template called Azure Functions. You can add multiple Azure Functions to this project. Currently, there already is an extensive list of events to which you can subscribe. I’m sure the list will grow in the future, but for now it will suffice for a great deal of solutions.
image

After having chosen the event of your choice you can change some settings, like Access Rights. If you want your HttpTrigger to be accessed by anonymous users from the web, you need to set it to Anonymous instead of Function. No worries if you forgot to do this, it’s something you can set from inside your code also.

When comparing the created functions (the one from the portal and from Visual Studio) you will notice a couple of differences.

using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Azure.WebJobs.Host;

namespace FunctionApp1
{
    public static class Function1
    {
        [FunctionName("HttpTriggerCSharp")]
        public static async Task<HttpResponseMessage> Run(
            [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequestMessage req,
            TraceWriter log)
        {
            //do stuff
        }
    }
}

First of all, your function is now wrapped inside a namespace and a (static) class. This makes organizing and integrating the code with your current codebase much easier.

Another thing you might notice are the extra attributes added to the function.

The Run method now has a FunctionName attribute, which tells the Function App what the name of the function will be. Do note: if multiple functions share the same FunctionName, only the latest one found gets an entry point in the generated function.json file, overriding the others.
Also, the first parameter, the HttpRequestMessage, now has an HttpTrigger attribute stating how the function can/should be triggered. In this case the function can be triggered via an HTTP GET or POST and requires a function key (AuthorizationLevel.Function). Because of these attributes it is easier to change the behavior of the functions later on; you aren’t dependent on choices made in some wizard.

I already mentioned the function.json file briefly. This file is used to populate the Function App with your functions. If you’ve explored the portal a bit, you might have seen this file already after creating the initial function.

image

This file contains all configuration for the provided functions within the Function App. The function.json file from the first function script contains the following information.

{
  "disabled": false,
  "bindings": [
    {
      "authLevel": "function",
      "name": "req",
      "type": "httpTrigger",
      "direction": "in"
    },
    {
      "name": "$return",
      "type": "http",
      "direction": "out"
    }
  ]
}

Now compare it with the one generated by the function in Visual Studio.

{
  "bindings": [
    {
      "type": "httpTrigger",
      "methods": [
        "get",
        "post"
      ],
      "authLevel": "function",
      "direction": "in",
      "name": "req"
    },
    {
      "name": "$return",
      "type": "http",
      "direction": "out"
    }
  ],
  "disabled": false,
  "scriptFile": "..\\FunctionApp1.dll",
  "entryPoint": "FunctionApp1.Function1.Run"
}

As you can see, the file which is generated in Visual Studio contains information about the entry point for the function.

Note: you won’t see this file in your project; it’s located in the assembly output folder after building the project.

In the above example I’ve used Visual Studio to create functions. It is also possible to use any other IDE or editor, but keep in mind you’ll have to use the Azure Functions CLI tooling in those environments.
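In case you want to go down that route, the workflow is roughly the sketch below. The package and command names are from the 1.x-era tooling, so verify them against the current documentation.

    # Install the Azure Functions CLI tooling (distributed via npm at the time of writing)
    npm install -g azure-functions-core-tools

    func init          # scaffold a new Function App in the current folder
    func new           # add a new function based on a template
    func host start    # run the Function App locally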

Debugging

When using a proper IDE, like Visual Studio, you are used to debugging your software from within the IDE.

This isn’t any different with Azure Functions. When pressing the F5-button a command line application will start with your functions loaded inside.

image

This application will start a small webserver which emulates the Function App. When working with HTTP triggers you can easily navigate to the provided endpoints. Of course, you can also work with any other event as long as you are able to trigger them.
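For example, assuming the local host listens on its default port 7071 and your function is named HttpTriggerCSharp (both assumptions on my part), triggering it is as simple as:

    curl "http://localhost:7071/api/HttpTriggerCSharp?name=World"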

At BUILD 2017 it was also announced you can do live-production-debugging of your Azure Functions from within Visual Studio. In other words: Connecting to the production environment and setting breakpoints in the code.
The crowd went wild, because this is quite cool. However, do you want to do this? It’s nice there is a possibility for you to leverage such a feature, but in most cases I would frown quite a bit if someone suggested doing this.
I do have to note live-debugging Azure Functions production code isn’t as dangerous as doing so with a ‘normal’ web application. Normally when you do this, the complete thread is paused and no one is able to continue using the site; this is one of many reasons why you never want to do this. With a serverless model this isn’t the case. Each function spins up a new instance/thread, so if you set a breakpoint in one of the instances, all other instances can still continue to work.
Still, take caution when considering to do this!

Deployment

We are all professional developers, so we also want to leverage the latest and greatest continuous integration & deployment tools. When working in the Microsoft & Azure stack it’s quite common to use either TFS or Azure Release Management for building your assemblies.

Because your Azure Functions project still produces an assembly, which should be deployed along with your function.json file, it is still possible to use the normal CI/CD solutions for your serverless solution.

If you don’t feel like setting up such a build environment you can still use a different continuous deployment feature which the App Services model brings to us.

On the Platform features tab click on the Deployment options setting.

image

This will navigate you to the blade from where you can setup your continuous deployment.

image

Using this feature you are able to deploy every commit of a specific branch to the specified application slot.

Setting up this feature is quite easy if you are using a common version control system located in the cloud, like VSTS, GitHub, Bitbucket, or even Dropbox and OneDrive.

I’ve set up one of my applications with VSTS integration. Every time I push some changes to the `master` branch, the changes are being built and deployed to the specified slot.

image

When clicking on a specific deployment, you can even see the details of this deployment and redeploy if needed.

image

All in all quite a cool feature if you want continuous deployment but don’t want to set up TFS or Azure Release Management. The underlying technology still uses Azure Release Management, but you don’t have to worry about it anymore.

If you are thinking of using Azure Functions in a professional environment I highly recommend using a proper CI/CD tool! The continuous deployment option is quite alright (and better than publishing your app from within Visual Studio), but one of the major downsides is that you can’t ‘promote’ a build to a different slot.
You can only push changes to a branch, and those will get built. This isn’t something you want in your company, but that’s a completely different blog post, unrelated to serverless or Azure Functions.

Hope this helps you out a bit starting with Azure Functions.

In my previous post I described how to use Application Insights within a new web application. Most of us aren’t working on a greenfield project, so new solutions have to be integrated with the old.

The project I’m working on uses log4net for logging messages, exceptions, etc. In order for us to use Application Insights, we had to search for a solution to integrate both. After having done some research on the subject we discovered this wasn’t a big problem.

The software we are working on consists of Windows Services and Console Applications, so we shouldn’t add the Application Insights package for web applications. For these kinds of applications it’s enough to just add the core package to your project(s).

image

Once added, we are creating the TelemetryClient in the Main of the application.

private static void Main(string[] args)
{
	var telemetryClient = new TelemetryClient { InstrumentationKey = "[InstrumentationKey]" };

	/*Do our application logic*/

	telemetryClient.Flush();
}

You will notice we are setting the InstrumentationKey property explicitly. That’s because we don’t use an ApplicationInsights.config file, like in the web application example, and this key has to be specified in order to start logging.

The final Flush call makes sure all pending messages are pushed to the Application Insights portal right away.
You might not see your messages in the portal immediately; we discovered this while testing out the libraries. The reason for this is probably some caching in the portal or all messages being handled by some queue mechanism on the backend (an assumption on my behalf). So don’t be afraid: your messages will show up within a few seconds/minutes.

After having wired up Application Insights, we still had to hook it into log4net. When browsing through the NuGet packages we noticed the Log4NetAppender for Application Insights.

After adding this package to our solution, the only thing we had to do was add the new log4net appender to the configuration.

<root>
    <level value="INFO" />
    <appender-ref ref="LogFileAppender" />
    <appender-ref ref="ColoredConsoleAppender" />
    <appender-ref ref="aiAppender" />
</root>
<appender name="aiAppender" type="Microsoft.ApplicationInsights.Log4NetAppender.ApplicationInsightsAppender, Microsoft.ApplicationInsights.Log4NetAppender">
    <layout type="log4net.Layout.PatternLayout">
        <conversionPattern value="%message%newline" />
    </layout>
</appender>

This appender will make sure all log4net messages will be sent to the Application Insights portal.

As you can see, adding Application Insights to existing software is rather easy. I would really recommend using one of these modern, cloud-based logging solutions. It’s much easier than ploughing through endless log files on disk or creating your own solution.

Some time ago Application Insights became available as a preview in the Azure portal. Application Insights helps you monitor the state of an application, server, clients, etc. As said, it’s still in preview, but it’s rather stable and very easy to use and implement in your applications.
The documentation is still being worked on, but with all the getting-started guides on the Microsoft site you can kick-start your project in a couple of minutes.

The main reason for me to dig into Application Insights is that we still had to implement a proper logging solution for our applications which are migrating to the Azure cloud. As it happened, Application Insights had just become available, and because of the tight Azure integration we were really eager to check it out (not saying you can’t use it with non-Azure software, of course).

If you want to start using Application Insights, the first thing you have to do is create a new Application Insights application. You will be asked what type of application you want to monitor.

image

It’s wise to select the proper application type over here, because a lot of settings and graphs will be set up for you depending on this choice. For this example I’ll choose the ASP.NET web application. After waiting a few minutes the initial (empty) dashboard will be shown.

image

As you can see, the dashboard is filled with (empty) graphs useful for a web application. In order to see some data over here, you need to push something to this portal. To do this, you’ll need the instrumentation key of this specific Application Insights instance. This instrumentation key can be found within the properties of the created resource.

image

Now let’s head to Visual Studio, create a new MVC Web Application and add the Microsoft.ApplicationInsights.Web NuGet package to the project.

image

This package will add the ApplicationInsights.config file to the web project in which you have to specify the instrumentation key.

<InstrumentationKey>2397f37c-669d-41a6-9466-f4fef226fe41</InstrumentationKey>

The web application is now ready to log data to the Application Insights portal.
You might also want to track some custom events, exceptions, metrics, etc. in the Application Insights portal. To do this, just create a new TelemetryClient in your code and start pushing data. An example is shown below.

public class HomeController : Controller
{
	public ActionResult Index()
	{
		var tc = new TelemetryClient();
		tc.TrackPageView("Index");

		return View();
	}

	public ActionResult About()
	{
		ViewBag.Message = "Your application description page.";
		var tc = new TelemetryClient();
		tc.TrackTrace("This is a test tracing message.");
		tc.TrackEvent("Another event in the About page.");
		tc.TrackException(new Exception("My own exception"));
		tc.TrackMetric("Page", 1);
		tc.TrackPageView("About");

		return View();
	}

	public ActionResult Contact()
	{
		ViewBag.Message = "Your contact page.";
			
		var tc = new TelemetryClient();
		tc.TrackPageView("Contact");

		return View();
	}
}

Running the website and navigating between the different pages (Home, About and Contact) will result in the web application pushing data to Application Insights. Navigating back to the selected Application Insights resource will show the results in the dashboard.

image

You can click on any of the graphs over here and see more details about the data. I’ve clicked on the Server Requests graph to zoom a bit deeper into the requests.

image

As you can see, a lot of data is sent to the portal: response time, response code, size. The custom messages, like the exceptions, are also pushed to the Application Insights portal. You can see these when zooming in on the Failures graph.

image

As you can see in the details of an exception, a lot of extra data is added which will make it easier to analyze the exceptions.

This post talked about adding the Application Insights libraries to an ASP.NET Web Application via a NuGet package, but that’s not the only place you can use this tooling. It’s also possible to add the Application Insights API for JavaScript applications to your application. This way you are able to push events to the portal from the front end. Awesome, right?

Of course there are plenty of other logging solutions available, but Application Insights is definitely a great contender in this space in my opinion.

On some installations of Visual Studio 2010, 2012 or 2013 I’m confronted with strange behavior. One of these strange things is black lines in the Watch Window of Visual Studio, just like the screenshot below (this isn’t my screenshot; I’ve ‘borrowed’ it from someone else).

image

Normally this has something to do with the graphics driver, but updating these drivers doesn’t always work. There’s also a workaround for this problem, described on the MSDN forum.

The workaround is:

  1. Go to Tools > Options > Environment > Fonts and Colors
  2. Search for the settings for [Watch, Locals, and Autos Tool Windows]
  3. For the Text item, set Item Foreground to Default

The options will look something like the image below.

image

After you’ve set this option you’ll be able to see the text in your watch window again. The lines will be white again with black text.