You might remember me writing a post on how you can set up your site with SSL while using Let’s Encrypt and Azure App Services.

Well, as it goes, the same post applies to Azure Functions. You just have to do some extra work for it, but it’s not very hard.

Simon Pedersen, the author of the Azure Let’s Encrypt site extension, has done some work in explaining the steps on his GitHub wiki page. This page is based on some old screenshots, but it still applies.

The first thing you need to do is create a new function which will handle the ACME challenge. This function will look something like this.

using System.IO;
using System.Net;
using System.Net.Http;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Azure.WebJobs.Host;

public static class LetsEncrypt
{
    [FunctionName("letsencrypt")]
    public static HttpResponseMessage Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = "letsencrypt/{code}")]
        HttpRequestMessage req,
        string code,
        TraceWriter log)
    {
        log.Info($"C# HTTP trigger function processed a request. {code}");

        // Read the challenge file the Let's Encrypt site extension has written to disk.
        var content = File.ReadAllText(@"D:\home\site\wwwroot\.well-known\acme-challenge\" + code);

        // Return the challenge content as plain text, which is what the ACME validation expects.
        var resp = new HttpResponseMessage(HttpStatusCode.OK);
        resp.Content = new StringContent(content, System.Text.Encoding.UTF8, "text/plain");
        return resp;
    }
}

As you can see, this function reads the ACME challenge file from the disk of the App Service it is running on and returns its content. Because Azure Functions run inside an App Service (even functions in a Consumption plan), this works just fine. The principal (created in the earlier post) can create these types of files, so everything will work perfectly.

This isn’t all we have to do, because the URL of this function is not the URL the ACME challenge will use to retrieve the appropriate response. In order to actually use the site extension you need to add a new proxy to your Function App. Proxies are still in preview, but very usable! The proxy you create has to forward the URL `/.well-known/acme-challenge/[someCode]` to your Azure Function. The end result will look something like the following proxy.

"acmechallenge": {
  "matchCondition": {
    "methods": [ "GET", "POST" ],
    "route": "/.well-known/acme-challenge/{rest}"
  },
  "backendUri": "https://%WEBSITE_HOSTNAME%/api/letsencrypt/{rest}"
}
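
For reference, this entry lives in the `proxies.json` file in the root of your Function App. A complete file, assuming the standard proxies schema, might look like this.

{
  "$schema": "http://json.schemastore.org/proxies",
  "proxies": {
    "acmechallenge": {
      "matchCondition": {
        "methods": [ "GET", "POST" ],
        "route": "/.well-known/acme-challenge/{rest}"
      },
      "backendUri": "https://%WEBSITE_HOSTNAME%/api/letsencrypt/{rest}"
    }
  }
}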

Publish your new function and proxy to the Function App and you are good to go!

If you haven’t done this before, be sure to follow all of the steps mentioned in the earlier post! Providing the appropriate application settings should be easy now and if you just follow each step of the wizard you’ll see a green bar when the certificate is successfully requested and installed!

This makes my minifier service even more awesome, because now I can finally use HTTPS without browsers complaining the certificate isn’t valid.

(Almost) no one likes writing code meant to store data in a repository, a queue or a blob. Let alone the code to trigger your logic when some event occurs in one of those places. Luckily for us, the Azure Functions team has decided to use bindings for this.
By leveraging the power of bindings, you don’t have to write your own logic to store or retrieve data. Azure Functions provides all of this functionality out of the box!

Bindings give you the possibility to retrieve data (strongly typed if you want) from HTTP calls, blob storage events, queues, Cosmos DB events, etc. Not only does this work for input, but also for output. Say you want to store some object in a queue or repository, you can just use an output binding in your Azure Function to make this happen. Awesome, right?

Most of the documentation and blog posts out there state you should define your bindings in a file called `function.json`. An example of these bindings is shown in the block below.

{
  "bindings": [
    {
      "name": "order",
      "type": "queueTrigger",
      "direction": "in",
      "queueName": "myqueue-items",
      "connection": "MY_STORAGE_ACCT_APP_SETTING"
    },
    {
      "name": "$return",
      "type": "table",
      "direction": "out",
      "tableName": "outTable",
      "connection": "MY_TABLE_STORAGE_ACCT_APP_SETTING"
    }
  ]
}

The above sample specifies an input binding for a queue and an output binding for Table Storage. While this works perfectly, it’s not the way you want to implement this when using C# (or F# for that matter), especially if you are using Visual Studio!

How to use bindings with Visual Studio

To set up a function binding via Visual Studio you just have to specify some attributes on the input parameters of your code. These attributes will make sure the `function.json` file is created when the code is compiled.

After creating your first Azure Function via Visual Studio you will get a function with these attributes immediately. For my URL Minifier solution I’ve used the following HttpTrigger.

[FunctionName("Get")]
public static async Task<HttpResponseMessage> Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "{slug}")] HttpRequestMessage req, string slug,
    TraceWriter log)

Visual Studio (well, actually the Azure Functions tooling) will make sure this gets translated into a binding block which looks like this.

"bindings": [
  {
    "type": "httpTrigger",
    "route": "{slug}",
    "methods": [
      "get"
    ],
    "authLevel": "anonymous",
    "name": "req"
  }
],

You can do this for every type of trigger which is available at the moment.

Sadly, this type of development hasn’t been described a lot in the various blog posts and documentation, but with a bit of effort you can find out how to implement most bindings yourself.

I haven’t worked with all of the different types of bindings yet.
One which I found quite hard to implement is the output binding for a Cosmos DB repository. Though, in hindsight it was rather easy to do once you know what to look for. What worked for me is creating an Azure Function via the portal first and checking which type of binding it uses. This way I found out that for a Cosmos DB output binding you need to use the `DocumentDBAttribute`. This attribute needs a couple of arguments, like the database name, the collection name and of course the actual connection string. After providing all of the necessary information your Cosmos DB output binding should look something like the one below.

[FunctionName("Create")]
public static HttpResponseMessage Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "create")]HttpRequestMessage req, 
    [DocumentDB("TablesDB", "minified-urls", ConnectionStringSetting = "Minified_ConnectionString", CreateIfNotExists = true)] out MinifiedUrl minifiedUrl,
    TraceWriter log)

Notice I had to remove the `async` keyword? That’s because you can’t use `async` if there is an out-parameter.
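
If you do want to keep your method `async`, a common alternative is binding the output to an `IAsyncCollector<T>` instead of an out-parameter. A minimal sketch of what that could look like for this function (the request parsing is hypothetical):

[FunctionName("CreateAsync")]
public static async Task<HttpResponseMessage> Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "create")]HttpRequestMessage req,
    [DocumentDB("TablesDB", "minified-urls", ConnectionStringSetting = "Minified_ConnectionString", CreateIfNotExists = true)] IAsyncCollector<MinifiedUrl> minifiedUrls,
    TraceWriter log)
{
    // Deserialize the request body into the object we want to store (hypothetical model).
    var minifiedUrl = await req.Content.ReadAsAsync<MinifiedUrl>();

    // Queue the document for insertion into Cosmos DB.
    await minifiedUrls.AddAsync(minifiedUrl);

    return req.CreateResponse(HttpStatusCode.OK);
}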

The thing I had the most trouble with is finding out which value should be in the `ConnectionStringSetting`. If you head down to the Connection String tab of your Cosmos DB in the Azure portal you will find a connection string in the following format.

DefaultEndpointsProtocol=https;AccountName=[myCosmosDb];AccountKey=[myAccountKey];TableEndpoint=https://[myCosmosDb].documents.azure.com

If you use this setting, you’ll be greeted with a `NullReferenceException` for a `ServiceEndpoint`. After having spent quite a bit of time troubleshooting this issue, I figured the problem probably was the value I used in the `ConnectionStringSetting`.
Having tried a couple of things I finally discovered you have to specify the setting as follows:

AccountEndpoint=https://[myCosmosDb].documents.azure.com:443/;AccountKey=[myAccountKey];

Running the function will work like a charm now.

I’m pretty sure this will not be the only ‘quirk’ you will come across when using the bindings, but as long as we can all share the information it will become easier in the future!

Where will I store the secrets?

When using attributes it might appear you can’t retrieve your secrets via the application settings anymore. Well, the team has you covered!

You can just use your regular application settings, as long as you stick to the naming convention where the values have to be uppercase and use underscores for separation. So instead of hardcoding the values “TablesDB” and “minified-urls” inside my earlier code snippet, one can also use the following.

[FunctionName("Create")]
public static HttpResponseMessage Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "create")]HttpRequestMessage req, 
    [DocumentDB("MY-DATABASE", MY-COLLECTION", ConnectionStringSetting = Minified_ConnectionString", CreateIfNotExists = true)] out MinifiedUrl minifiedUrl,
    TraceWriter log)

By convention, the actual values will now be retrieved via the application settings.
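
To illustrate, the corresponding entries in the Application Settings of the Function App would then look something like this (the values shown here are just examples matching the earlier snippets).

MY-DATABASE = TablesDB
MY-COLLECTION = minified-urls
Minified_ConnectionString = AccountEndpoint=https://[myCosmosDb].documents.azure.com:443/;AccountKey=[myAccountKey];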

Awesome!

Yeah, but Application Settings aren’t very secure

True!

I’ve already written about this in an earlier post. While using the Application Settings is fine for storing some configuration data, you don’t want to put secrets over there. Secrets should be stored inside Azure Key Vault.

Of course, you can’t use Azure Key Vault in these attributes.
Lucky for us, the Azure Functions team still has us covered with an awesome feature called imperative bindings! The sample code is enough to get us cracking on creating a binding where the connection secrets are still stored inside Azure Key Vault (or somewhere else for that matter).

Because I’m using a Cosmos DB connection, I need to specify the `DocumentDBAttribute` inside the `Binder`. Something else you should note: when you want to use an output binding, you can’t just create a binding to a `MinifiedUrl` object. If you only specify the object type, the `Binder` will assume it’s an input binding.
If you want an output binding, you need to specify the binding as an `IAsyncCollector<T>`. Check out the code below to see what you need to do in order to use the `DocumentDBAttribute` in combination with imperative bindings.

// Retrieving the secret from Azure Key Vault via a helper class
var connectionString = await secret.Get("CosmosConnectionStringSecret");
// Setting the AppSetting run-time with the secret value, because the Binder needs it
ConfigurationManager.AppSettings["CosmosConnectionString"] = connectionString;

// Creating an output binding
var output = await binder.BindAsync<IAsyncCollector<MinifiedUrl>>(new DocumentDBAttribute("TablesDB", "minified-urls")
{
    CreateIfNotExists = true,
    // Specify the AppSetting key which contains the actual connection string information
    ConnectionStringSetting = "CosmosConnectionString",
});

// Create the MinifiedUrl object
var create = new CreateUrlHandler();
var minifiedUrl = create.Execute(data);

// Adding the newly created object to Cosmos DB
await output.AddAsync(minifiedUrl);
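
For completeness: the `binder` used above is just an extra parameter on the function itself. A minimal sketch of what the signature might look like in this scenario:

[FunctionName("Create")]
public static async Task<HttpResponseMessage> Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "create")]HttpRequestMessage req,
    Binder binder,
    TraceWriter log)
{
    // The body shown above goes here.
}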

As you can see, there’s a lot of extra code in the body of the function. We have to give up some of the simplicity in order to make the code and configuration a bit more secure, but that’s worth it in my opinion.

If you want to check out the complete codebase of this solution, please check out the GitHub repository, it contains all of the code.

I need more bindings!

Well, a couple of days ago there was an amazing announcement: you can now create your own bindings! Donna Malayeri has some sample code available on GitHub on how to create a Slack binding. There is also a documentation page in the making on how to create these types of bindings.

At this time this feature is still in preview, but if you need some binding which isn’t available at the moment, be sure to check this out. I can imagine this will become quite popular once it has been released. Just imagine creating bindings to existing CRM systems, databases, your own SMTP services, etc.

Awesome stuff in the making!

In the past couple of years the software industry has come a long way in professionalizing the development environment. One of the things which has improved significantly is automating the builds and being able to continuously deploy software.

Having a continuous integration and -deployment environment is the norm nowadays, which means I (and probably you as a reader) want to have this when creating Azure Functions too!

There are dozens of build servers and deployment tools available, but because Azure Functions are highly likely to be deployed in Microsoft Azure, it makes sense to use Visual Studio Team Services with Release Management. I’m not saying you can’t pull this off with any of the other deployment environments, but for me it doesn’t make sense because I already have a VSTS environment and this integrates quite well.

In order for you to deploy your Function App, the first thing you have to make sure of is having an environment (resource group) in your Azure subscription to deploy to. It is advised to use ARM templates for this. There is one big problem with ARM templates though: I genuinely dislike ARM templates. It’s something about the JSON, the long list of variables and ‘magic’ values you have to write down all over the place.
For this reason I first started checking out how to deploy Azure Functions using PowerShell scripts. In the past (3 to 4 years ago) I used a lot of PowerShell scripts to automatically set up and deploy my Azure environments. It is easy to debug, understand and extend. A quick search on the internet showed me the ‘new’ cmdlets you have to use nowadays to spin up a nice resource group and app service. Even though this looked like a very promising deployment strategy, it did feel a bit dirty and hacky. 
In the end I have decided to use ARM templates. Just because I dislike ARM templates doesn’t mean they are a bad thing per se. Also, I noticed these templates have become first-class citizens if you want to deploy software into Azure.

Creating your ARM template

If you are some kind of Azure wizard, you can probably create the templates by yourself. Most of us probably don’t have that level of expertise, so there’s an easier way to get you started.

What I do is head down to the portal, create a resource group with everything which is necessary, like the Function App, and extract the ARM template afterwards. Downloading the ARM template is somewhat hidden in the portal, but lucky for us, someone has already asked on Stack Overflow where to find this feature. Once you know where this functionality resides, it makes a bit more sense why the portal team has decided to put it over there.

First of all, you have to navigate to the resource group for which you want to extract an ARM template.

On this overview page you’ll see a link beneath the headline Deployments. Click on it and you’ll be navigated to a page listing all the deployments which have occurred in your resource group.

Just pick the one you are interested in. In our case it’s the deployment which has created and populated our Function App.

On the detail page of this deployment you’ll see some information which you have specified yourself while creating the Function App. There’s also the option to view the template which Azure has used to create your Function App.

Just click on this link and you will be able to see the complete template, along with the parameters used and most important, there’s the option to download the template!

After downloading the template you’ll see a lot of files in the zip-file. You won’t be needing most of them as they are helper files to deploy the template to Azure. Because we will be using VSTS, we only need the parameters.json and template.json files.

The template.json file contains all the information which is necessary for, in our case, the Function App. Below is the one used for my deployment.

{
    "$schema": "http://schema.management.azure.com/schemas/2014-04-01-preview/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "name": {
            "type": "String"
        },
        "storageName": {
            "type": "String"
        },
        "location": {
            "type": "String"
        },
        "subscriptionId": {
            "type": "String"
        }
    },
    "resources": [
        {
            "type": "Microsoft.Web/sites",
            "kind": "functionapp",
            "name": "[parameters('name')]",
            "apiVersion": "2016-03-01",
            "location": "[parameters('location')]",
            "properties": {
                "name": "[parameters('name')]",
                "siteConfig": {
                    "appSettings": [
                        {
                            "name": "AzureWebJobsDashboard",
                            "value": "[concat('DefaultEndpointsProtocol=https;AccountName=',parameters('storageName'),';AccountKey=',listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageName')), '2015-05-01-preview').key1)]"
                        },
                        {
                            "name": "AzureWebJobsStorage",
                            "value": "[concat('DefaultEndpointsProtocol=https;AccountName=',parameters('storageName'),';AccountKey=',listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageName')), '2015-05-01-preview').key1)]"
                        },
                        {
                            "name": "FUNCTIONS_EXTENSION_VERSION",
                            "value": "~1"
                        },
                        {
                            "name": "WEBSITE_CONTENTAZUREFILECONNECTIONSTRING",
                            "value": "[concat('DefaultEndpointsProtocol=https;AccountName=',parameters('storageName'),';AccountKey=',listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageName')), '2015-05-01-preview').key1)]"
                        },
                        {
                            "name": "WEBSITE_CONTENTSHARE",
                            "value": "[concat(toLower(parameters('name')), 'b342')]"
                        },
                        {
                            "name": "WEBSITE_NODE_DEFAULT_VERSION",
                            "value": "6.5.0"
                        }
                    ]
                },
                "clientAffinityEnabled": false
            },
            "dependsOn": [
                "[resourceId('Microsoft.Storage/storageAccounts', parameters('storageName'))]"
            ]
        },
        {
            "type": "Microsoft.Storage/storageAccounts",
            "name": "[parameters('storageName')]",
            "apiVersion": "2015-05-01-preview",
            "location": "[parameters('location')]",
            "properties": {
                "accountType": "Standard_LRS"
            }
        }
    ]
}

A fairly readable JSON file, aside from all the magic api versions, types, etc.

The contents of the parameters.json file are a bit more understandable. It contains the key-value pairs which are being referenced in the template file.

{
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "name": {},
        "storageName": {},
        "location": {},
        "subscriptionId": {}
    }
}

The template file uses the format parameters('name') to reference a parameter from the parameters.json file.

These files are important, so you want to add them somewhere next to or inside the solution where your functions reside. Be sure to add them to source control, because you’ll need these files in VSTS later on.

For now the above template file is fine, but it’s more awesome to add something to it for a personal touch. I’ve done this by adding a new appSetting in the file.

"appSettings": [
    // other entries
    {
        "name": "MyValue",
        "value": "[parameters('myValue')]"
    }

Also, don’t forget to add myValue to the parameters file and to the parameters section in the header of the template file, otherwise you won’t be able to use it.
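
To make this explicit, the parameter has to be declared in both files. In template.json, under parameters, you would add:

"myValue": {
    "type": "String"
}

And in parameters.json, under parameters:

"myValue": {}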

In short, if you want to use continuous deployment for your solution, use ARM templates and get started by downloading them from the portal. Now let’s continue to the fun part!

Set up your continuous integration for the Functions!

Setting up the continuous integration of your software solution is actually the easy part! VSTS has matured quite a lot over time, so all we have to do is pick the right template, point it to the right sources and you are (almost) done.

Picking the correct template is the hardest part. You have to pick the ASP.NET Core (.NET Framework) template. If you choose a different template you will struggle setting it up, especially if you are unfamiliar with VSTS.

This template contains all the useful steps and settings you need to build and deploy your Azure Functions.

It should be quite easy to configure these steps. You can integrate VSTS with every popular source control provider. I’m using GitHub, so I’ve configured it so VSTS can connect to the repository.

Note I’ve also selected the Clean option because I stumbled across some issues when deploying the sources. These errors were totally my fault, so you can just keep it turned off.

The NuGet restore step is pretty straightforward and you don’t have to change anything on it.

The next step, Build solution, is the most important one, because it will not only build your solution, but also create an artifact from it. The default setting is already set up properly, but for completeness I’ve added it below. This will tell MSBuild to create a package called WebApp.zip after building the solution.

/p:DeployOnBuild=true /p:WebPublishMethod=Package /p:PackageAsSingleFile=true /p:SkipInvalidConfigurations=true /p:DesktopBuildPackageLocation="$(build.artifactstagingdirectory)\WebApp.zip" /p:DeployIisAppPath="Default Web Site"

The next important step is Publish Artifact.
You don’t really have to change anything over here, but it’s good to know where your artifacts get published after the build.

Of course, you can change stuff over here if you really want to.

One thing I neglected to mention is the build agent you want to use. The best agent to build your Azure Function on (at the moment) is the Hosted VS2017 agent.

This agent is hosted by Microsoft, so you don’t have to configure anything for it which makes it very easy to use. Having this build agent hosted by Microsoft also means you don’t have any control over it, so if you want to build something on a .NET framework which isn’t supported (yet), you just have to set up your own build agent.

When you are finished setting up the build tasks be sure to add your repository trigger to the build.

If you forget to do this the build will not be triggered automatically whenever someone pushes to the repository.

That’s all there is to it for setting up your continuous integration for Azure Functions. Everything works out of the box, if you select the correct template in the beginning.

Deploy your Azure Functions continuously!

Now that we have the continuous integration build in place we are ready to deploy the builds. If you are already familiar with Release Management it will be fairly easy to deploy your Azure Functions to Azure.

I had zero experience with Release Management, so I had to find it out the hard way!

The first thing you want to do when creating a new release pipeline is adding the necessary artifacts. This means adding the artifacts from your CI build, where the source type will be Build and all other options will speak for themselves.

The next, not so obvious, artifact is the repository where your parameters.json and template.json files are located. These files aren’t stored in the artifact file from the build, so you have to retrieve them some other way.

Lucky for us we are using a GitHub repository, and there’s a source type available called GitHub in Release Management. Therefore we can just add a new source type and configure it to point to the GitHub location of choice.

This will make sure the necessary template.json and parameters.json files are available when deploying the Azure Functions.

Next up is adding the environments to your pipeline. In my case I wanted to have a different environment for each slot (develop & production), but I can imagine this will differ per situation. Most of the customers I meet have several Azure subscriptions, each meant to facilitate the work for a specific stage (Dev, Test, Acceptance, Production). This isn’t the case in my setup; everything is nice and cozy in a single subscription.

Adding an environment isn’t very hard, just add a new one and choose the Azure App Service Deployment template.

There are dozens of other templates which are all very useful, but not necessary for my little automated deployment pipeline.

Just fill out the details in the Deploy Azure App Service task and you are almost done.

The main thing to remember is to select the zip-file which was created as an artifact from our CI build and to check the Deploy to slot option, as we want to deploy these Azure Functions to the develop slot.

If you are satisfied with this, good! But remember we still have the ARM template?

Yes, we want to make sure the Azure environment is up and running before we deploy our software. Because of this, you have to add one task to this phase, called Azure Resource Group Deployment.

This is the task where we need our linked artifacts from the GitHub repository.

The paths to the Template and Template parameters are the most important settings in this step, as these will make sure your Azure environment (resource group) is set up correctly.

The easiest way to get the correct path is to use the modal dialog which appears if you press the button behind the input box.

One thing you might notice over here is the option to Override template parameters. This is the main reason why you want to use VSTS Release Management (or any other deployment server). All this boilerplate is there so we can specify the parameters (secrets) for each environment, without having to store them in some kind of repository.

Just to test it I’ve overridden one of the parameters, myValue, with the text “VSTS Value” to make sure the updating actually happens.
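
In case you haven’t used this box before: it takes a string of `-parameterName value` pairs. With the parameters from the template above, the override could look something like this (all values here are made-up examples).

-name myfunctionapp-dev -storageName myfunctionsstorage -location "West Europe" -myValue "VSTS Value"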

Another thing to note is I’ve set the Deployment mode to Incremental, as I just want to update my deployments, not create a completely new Function App.

All of the other options don’t need much explanation at this time.

One thing I have failed to mention is adding the continuous deployment trigger to the pipeline. In your pipeline, click on the Trigger circle and enable it.

This will make sure each time a build has succeeded, a new deployment will occur to the Development slot (in my case).

This is all you need to know to deploy your Azure Functions (or any other Azure App Service for that matter). For the sake of completeness it would make sense to add another environment to your pipeline, call it Production and make sure the same artifacts get deployed to the production slot. This environment and its tasks will look very similar to the Develop environment, so I won’t repeat the steps over here. Just remember to choose the correct override parameters when deploying to the production slot. You don’t want to mess this up.

Now what?

The continuous integration & deployment steps are finished, so we can start deploying our software. If you already have some CI builds, you can create a new release in the releases tab.

This will be a manual release, but you can also choose to push some changes to your repository and make everything automated.

I’ve done a couple of releases to the develop environment already, which are all shown in the overview of the specific release.

Over in the portal you will also notice your Azure Functions will be in read-only mode, because continuous integration is turned on.

But, remember we added the MyValue parameter to our ARM template? It is now also shown inside the Application settings of our Functions App!

This is an awesome way of storing secrets inside your release pipeline! Or even better, store your secrets in Azure Key Vault and add your Client Id and Client Secret to the Application Settings via the release pipeline, like I described in an earlier post.

I know I’ll keep using VSTS for my Azure Functions deployments from now on. It’s awesome and I can only recommend you do the same!

Lately, I’ve been busy learning more about creating serverless solutions. Because my main interest lies within the Microsoft Azure stack I surely had to check out the Azure Functions offering.

Azure Functions enable you to create serverless solutions which are completely event-based. As they are located within the Azure space, you can integrate easily with all of the other Azure services, like for example the Service Bus, Cosmos DB and Storage, but also with external services like SendGrid and GitHub!

All of these integrations are fine and all, but seeing Azure Functions perform in action is still easiest with regular HTTP triggers. You can just navigate with a browser (or Postman) to a URL and your function will be activated immediately. I guess most people will create these kinds of functions in order to learn to work with them; at least that’s what I did.

Creating your Azure Functions App

In order to create Azure Functions, you first have to create a so-called Function App in the Azure Portal. Creating such an app is quite easy; the only thing you have to think about is which type of hosting plan you want to use. At this time there are two options: the Consumption Plan or the App Service Plan.

For your regular “Hello World”-function it doesn’t really matter, but there are a few important differences between the two.

If you want to experience the full power of a serverless compute solution, you want to use the Consumption plan. This plan creates instances to host your Azure Functions on-demand, depending on the number of incoming events. Even when there is a super-high load on your system, this plan scales automatically.
The other main advantage is you will only pay for your functions if they actually do something.
As you might remember, these two advantages are, in my opinion, the main benefits for people to move to serverless offerings.

However, using the App Service plan also has some advantages. The main advantage is utilizing the full power of your virtual machines without having unexpected (high?) costs. With the App Service plan, your function apps run on the App Service virtual machines you might already have deployed in your subscription (Basic, Standard and Premium). This means you can share the same (underlying) virtual machines of your websites with your Azure Functions. Using this plan might save you some money in the end, because you are already paying for the (unused) compute which you are able to utilize now. Running these functions won’t cost anything extra, aside from the extra bandwidth of course.
Another advantage is your functions will be able to run continuously, or nearly continuously. The App Service plan is useful in scenarios where you need a lot of long-running compute. Keep in mind you need to enable the Always On setting in your App Service if you want your functions to run continuously.

There are some other little differences between the two plans, but the mentioned differences are most important, to me at least.

Do remember to enable Application Insights for your Functions App. It’s already an awesome monitoring platform, but the integration with Azure Functions makes it even more amazing! I can’t think of a valid reason not to enable it, because it is also quite cheap.

After having completed the creation of your Functions App you are able to navigate to it in the portal. A Functions App acts much like a container for one or more Azure Functions. This way you are able to place multiple Azure Functions into a single Functions App. It might be useful for monitoring if you are placing functions for a single functional use-case into one Functions App.
You can of course put all of your functions inside one App. This doesn’t really matter at the moment. It’s a matter of taste.

Your first Azure Function

If you are just starting with Azure Functions and serverless computing I’d advise checking out the portal and creating a new function from over there. Of course, it isn’t a recommended practice if you want to get serious about developing a serverless solution, but this way you are able to take baby steps into this technology space. A recommended practice would be using an ARM template or CLI script.

From inside the Function App you have the possibility to create new functions.

Currently, the primary languages of choice are C#, JavaScript and F#. This is just to get you started, because there are more languages supported already (Node.js, Python, PHP) and more are coming. There’s even an initiative to support R scripts in Azure Functions!

For now I’ll go with the C# function, because that’s my ‘native’ programming language.

After this function is created you are presented with an in-browser code editor from which you can start coding and running your function.

This function is placed in a file called run.csx. The csx extension belongs to C# scripts (check out scriptcs.net), much like ps1 belongs to PowerShell scripts.
It should now be clear this Azure Function is ‘just’ a script file with an entry point. This entry point is much like the Main-method in the Program.cs file of your console application.
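
To give you an idea, the run.csx the portal generates for a C# HTTP trigger looks roughly like this (reproduced from memory, so details may differ per runtime version).

using System.Net;

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
{
    log.Info("C# HTTP trigger function processed a request.");

    // Parse the name from the query string.
    string name = req.GetQueryNameValuePairs()
        .FirstOrDefault(q => string.Compare(q.Key, "name", true) == 0)
        .Value;

    // Or fall back to the request body.
    dynamic data = await req.Content.ReadAsAsync<object>();
    name = name ?? data?.name;

    return name == null
        ? req.CreateResponse(HttpStatusCode.BadRequest, "Please pass a name on the query string or in the request body")
        : req.CreateResponse(HttpStatusCode.OK, "Hello " + name);
}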

Because we have created an HTTP hook/endpoint, you should return a valid HTTP response, like you can see in the script. If you want to test your function, the portal has you covered by pressing the Test button. Even though testing in the portal is cool, it’s even cooler to try it out in your browser. If everything is set up correctly you will be able to navigate to the URL https://[yourFunctionApp].azurewebsites.net/api/HttpTriggerCSharp1?code=[someCode] and receive the content, which should be `Please pass a name on the query string or in the request body`. You can extract the proper URL from the Get function URL link in the portal.
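
Testing from the command line works just as well; something like the following should return that message (or a greeting, if you pass a name).

curl "https://[yourFunctionApp].azurewebsites.net/api/HttpTriggerCSharp1?code=[someCode]&name=Azure"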

Management & settings

Azure Functions are really, really, really short-lived App Services. They are also deployed in the same Azure App Service ecosystem, so you can leverage the same management possibilities which are available to your regular App Services.

On the Platform features tab you are able to navigate to most useful management features of your Functions App.

I really like this page; it’s much better and clearer compared to the configuration ‘blade’ of regular Azure services. Hopefully this design will be implemented for the other services also!

Keep in mind to configure CORS properly if you want to use your HTTP function from within a JavaScript application!

All other features presented over here are also important of course. I especially like the direct link to the Kudu site, from which you can do even more management!

Another setting, which is in preview at the moment, is enabling deployment slots. Yes, deployment slots for your Azure Functions! This works exactly the same like you are used to with the regular App Services. I’ve configured one of my Function Apps to use the deployment slots. By enabling deployment slots you can now deploy the `develop` branch to a development slot and the `master` branch to the production slot of the Function App.

If for some reason you want to disable the usage of a specific function, just navigate to the Functions leaf in the treeview and you are able to disable (and re-enable) the different functions individually.

Creating real functions

Creating functions from within the Azure Portal isn’t a very good idea in real life. Especially since you don’t have any version control, quality gates, continuous integration & deployment in place. That’s why it’s a good idea not to use the browser as your primary coding environment. For a professional development experience you have multiple options at hand.

The easiest option is to use Visual Studio 2017. You need version 15.3 (or higher) of Visual Studio, which is still in preview at this moment. When you are done installing this version you should be able to install the Azure Functions Tools for Visual Studio 2017 from the Visual Studio Marketplace on your machine.
Doing so will enable you to choose a new project template called Azure Functions. You can add multiple Azure Functions to this project. Currently, there already is an extensive list of events available to which you can subscribe. I’m sure the list will grow in the future, but for now it will suffice for a great deal of solutions.
After having chosen the event of your choice you can change some settings, like Access Rights. If you want your HttpTrigger to be accessed by anonymous users from the web, you need to set it to Anonymous instead of Function. No worries if you forget to do this; it’s something you can set from inside your code also.

When comparing the created functions (the one from the portal and from Visual Studio) you will notice a couple of differences.

namespace FunctionApp1
{
    public static class Function1
    {
        [FunctionName("HttpTriggerCSharp")]
        public static async Task<HttpResponseMessage> Run([HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)]HttpRequestMessage req, TraceWriter log)
        {
            //do stuff
        }
    }
}

First of all, your function is now wrapped inside a namespace and a (static) class. This makes organizing and integrating the code with your current codebase much easier.

Another thing you might notice are the extra attributes added to the function.

The Run-method now has an attribute FunctionName, which will tell the Function App what the name of the function will be. Do note: if multiple functions have the same FunctionName, one will override the other when deployed, because the function.json file will create an entry point for the latest function it finds with a specific name.
Also, the first parameter, the HttpRequestMessage, now has an HttpTrigger-attribute stating how the function can/should be triggered. In this case the function can only be triggered via an HTTP GET or POST, and the caller needs to provide a function key (that’s what AuthorizationLevel.Function means). Because of these attributes it is easier to change the behavior of the functions later on. You aren’t dependent on choices made in some wizard.

I already mentioned the function.json file briefly. This file is used to populate the Function App with your functions. If you’ve explored the portal a bit, you might have seen this file already after creating the initial function.

This file contains all configuration for the provided functions within the Function App. The function.json file from the first function script contains the following information.

{
  "disabled": false,
  "bindings": [
    {
      "authLevel": "function",
      "name": "req",
      "type": "httpTrigger",
      "direction": "in"
    },
    {
      "name": "$return",
      "type": "http",
      "direction": "out"
    }
  ]
}

Now compare it with the one generated by the function in Visual Studio.

{
  "bindings": [
    {
      "type": "httpTrigger",
      "methods": [
        "get",
        "post"
      ],
      "authLevel": "function",
      "direction": "in",
      "name": "req"
    },
    {
      "name": "$return",
      "type": "http",
      "direction": "out"
    }
  ],
  "disabled": false,
  "scriptFile": "..\\FunctionApp1.dll",
  "entryPoint": "FunctionApp1.Function1.Run"
}

As you can see, the file which is generated in Visual Studio contains information about the entry point for the function.

Note: you won’t see this file in your project; it’s located in the assembly output folder after building the project.

In the above example I’ve used Visual Studio to create functions. It is also possible to use any other IDE for this; you just have to take into account that you’ll have to use the Azure Functions CLI tooling in those environments.
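
A minimal sketch of that workflow, assuming the Azure Functions Core Tools are installed via npm (the package name has changed over time, so check the current documentation):

npm install -g azure-functions-core-tools
func init
func new
func host start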

Debugging

When using a proper IDE, like Visual Studio, you are used to debugging your software from within the IDE.

This isn’t any different with Azure Functions. When pressing the F5-button a command line application will start with your functions loaded inside.

This application will start a small webserver which emulates the Function App. When working with HTTP triggers you can easily navigate to the provided endpoints. Of course, you can also work with any other event as long as you are able to trigger them.

At BUILD 2017 it was also announced you can do live-production-debugging of your Azure Functions from within Visual Studio. In other words: Connecting to the production environment and setting breakpoints in the code.
The crowd went wild, because this is quite cool. However, do you want to do this? It’s nice there is a possibility for you to leverage such a feature, but in most cases I would frown quite a bit if someone suggested doing this.
I do have to note live-debugging Azure Function production code isn’t as dangerous as a ‘normal’ web application. Normally when you do this, the complete thread is paused and no one is able to continue on the site. This is one of many reasons why you never want to do this. With a serverless model this isn’t the case. Each function spins up a new instance/thread, so if you set a breakpoint in one of the instances, all other instances can still continue to work.
Still, take caution when considering to do this!

Deployment

We are all professional developers, so we also want to leverage the latest and greatest continuous integration & deployment tools. When working in the Microsoft & Azure stack it’s quite common to use either TFS or Azure Release Management for building your assemblies.

Because your Azure Functions project still produces an assembly, which should be deployed along with your function.json file, it is also still possible to use the normal CI/CD solutions for your serverless solution.

If you don’t feel like setting up such a build environment you can still use a different continuous deployment feature which the App Services model brings to us.

On the Platform features tab click on the Deployment options setting.

This will navigate you to the blade from where you can setup your continuous deployment.

Using this feature you are able to deploy every commit of a specific branch to the specified application slot.

Setting up this feature is quite easy, if you are using a common version control system which is located in the cloud, like VSTS, GitHub, Bitbucket or even Dropbox and OneDrive.

I’ve set up one of my applications with VSTS integration. Every time I push some changes to the `master` branch, the changes are being built and deployed to the specified slot.

When clicking on a specific deployment, you can even see the details of this deployment and redeploy if needed.

All in all quite a cool feature if you want to use continuous deployment, but don’t want to set up TFS or Azure Release Management. The underlying technology still uses Azure Release Management, but you don’t have to worry about it anymore.

If you are thinking of using Azure Functions in your professional environment I highly recommend using a proper CI/CD tool! The continuous deployment option is quite alright (and better than publishing your app from within Visual Studio), but one of the major downsides is you can’t ‘promote’ a build to a different slot.
You can only push changes to a branch and those will get built. This isn’t something you want in your company, but that’s a completely different blog post and unrelated to serverless or Azure Functions.

Hope this helps you out a bit starting with Azure Functions.

You’ve probably heard a lot of talk around a new buzzword `serverless`. It’s a pretty confusing name for an awesome technology/technique.

The main reason the word `serverless` isn’t a very good one is because it implies there aren’t any servers when using this technique. I found a fairly funny CommitStrip about this topic.

 

Source: http://www.commitstrip.com/en/2017/04/26/servers-there-are-no-servers-here

But what does the term mean then?
Well, it means you don’t have to worry about servers anymore. You just upload your software to the cloud provider of your choice and it runs on-demand/by-request. As Mark Russinovich said in an interview with InfoWorld: “I don’t have to worry about the servers. The platform gives me the resources as I need them.” Of course the hardware, operating system, webserver, firewall, etc. are all still there, but as a developer or operations person you don’t really have to care about them.

Isn’t this the same as PaaS from a couple of years ago?

The answer is: Yes and no!

Yes, there are a lot of similarities and the serverless offerings from each cloud provider are based upon their current PaaS offerings. Therefore, you could call it an evolution of PaaS.

No, because the ideology is a bit different.
Adrian Cockcroft (AWS) describes the difference rather well in a tweet:
”If your PaaS can efficiently start instances in 20ms that run for half a second, then call it serverless.”
https://twitter.com/adrianco/status/736553530689998848

These numbers aren’t set in stone and there are a couple of other ‘rules’ for a serverless solution, but it’s a good elevator pitch.

Does it scale?

Something which differs quite a lot is the scaling aspect of your solution. With the regular PaaS offerings you still have to think a bit about scaling. Most of the time you can choose the server plan/pricing plan, whether you want auto-scaling enabled and how many servers should be spun up when certain criteria are met. Still quite a bit of administration.

For serverless solutions you don’t have to think about scaling. It scales out-of-the-box! Each request is handled individually by a server and, if necessary, each request can be handled by a different server. Because you don’t have any knowledge of the underlying server architecture, you won’t get billed for it either. The only thing you’ll see on the bill is the number of executions of your software, and you get billed accordingly.
So if your cloud provider thinks it’s a good idea to deploy your software on 1000 expensive servers, no problem! You only get billed per execution cycle of your software. This is quite different from the original PaaS offerings, as you get billed per server/plan with those solutions.

I’ve heard about containers, is this the same?

No, actually not.

Containers are something completely different. They are somewhat of a combination of IaaS and PaaS offerings. Of course, it is possible to create serverless solutions with containers, but it isn’t the main use case.

So, what is it exactly?

Saying ‘you don’t have to care about servers’ and ‘start within 20ms and run half a second’ doesn’t really say what a serverless solution really is.

A well-known synonym for serverless is Functions as a Service (FaaS). This term says it all. Your functions are deployed and run as a service, just like your websites, webservices, webhooks, webjobs, etc.

So instead of creating a fully operational application which has multiple services, some web logic, backend connections and maybe even an API layer, you will create dozens of super small services which are able to do their own little thing.
If you’ve paid attention, you might see a lot of similarities compared with a microservices architecture. That’s because FaaS or serverless solutions are also known as nanoservices: even more fine-grained services compared to regular microservices.

When running these functions you, as a developer, only have to care about the logic of the function. Not the management of the servers or other application logic which might be running somewhere. This is one of the biggest advantages of serverless solutions, your code will be much easier to understand and test!

Most of the time these functions are triggered by an external event. This could be an HTTP request, but also a message being placed on a queue or a file being created on a storage location.

Sample

I found a nice image which represents a very simple design for uploading an image to, and retrieving it from, some backend system.

Over here you see a user making an API call (HTTP) to a function in order to upload an image. This function stores the sent image to some storage location, and another function is triggered by this event to store a reference to the file in a storage table. When the user requests the image again via an API call (HTTP), a function checks the storage table and returns the requested image.
As you can see, each function is only responsible for doing one single, simple thing. This will make testing and maintaining each function much easier compared to a big service which is responsible for doing multiple things.
Of course, the functionality described above can also be created in a single small microservice. A microservice is also easier to test and maintain compared to a big monolithic solution. Whether you should choose a nanoservices or microservices solution is up to you, but there are some additional benefits when you choose a nanoservices/serverless solution, which I’ll cover a bit later. You can also use both nano- and microservices in your overall solution, so you can use the best technology for each use-case. This will make your software very loosely coupled, performant and robust (if done well).

Some benefits

Each function should be stateless and immutable. Every time a function is invoked, a new function will be spun up and destroyed afterwards. This makes testing your function very easy as every invocation should have the same result, if the inputs are the same.
If you do need to save state, it should be stored somewhere else, like an external storage device or database.

Functions are highly scalable by design. Each function is very short-lived, which has two reasons.
First, each cloud provider has some limit on how long a function can run. This limit will differ a bit over time, but it’s good advice to do stuff as fast as possible.
The second reason is because your functions are billed per invocation, but also on duration. For Microsoft Azure this is €0.000014 per GB-s (gigabyte-seconds). This doesn’t sound like much, and it isn’t, but it will add up in the end. So if you are able to change your function from running 1 second to 200 ms, you will immediately save 80% on next month’s bill (this is a bit simplified of course).
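
A quick back-of-the-envelope example to make the GB-s math concrete (ignoring the monthly free grant and the separate per-execution charge):

1,000,000 executions × 0.5 GB × 1.0 s = 500,000 GB-s × €0.000014 ≈ €7.00
1,000,000 executions × 0.5 GB × 0.2 s = 100,000 GB-s × €0.000014 ≈ €1.40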

So, in short, serverless is about very small, stateless and immutable services which are capable of doing one thing within a very small timeframe.

Are there any other benefits?

We already covered these services should be small and do a single thing, which will make the services easy to understand and maintain. We also covered you will be billed per execution, which can be a benefit because you don’t have to pay for your services if no one is using them.

Another benefit might be the deployment of your software. Each function can be deployed individually and placed as a new version next to the existing functions. When your function is deployed (copy-pasted) to the cloud provider, you can just route all requests to the new function. If, for some reason, the new version doesn’t work properly, you just have to route the traffic back to the old function and everything should be working again.
From a cost-perspective you don’t even have to remove the old versions of the functions/services, because they aren’t invoked anymore. And as you remember, if a function isn’t invoked, it doesn’t cost any money! Of course, from an operational perspective you might want to remove the old versions of a service after some period to keep a proper overview.

Something else you might benefit from is better performance of your code. Because you will be billed per second of usage, your company might benefit from optimizing code wherever possible. Every optimization will directly impact the bill at the end of the month.

What are the downsides?

As with every technology or technique, there are also a couple of (important) downsides for using a serverless solution. Not all of these downsides are exclusive to serverless solutions, but loosely coupled, distributed, solutions in general. I’ll cover a couple of them below.

Code duplication

Just as with a microservices solution, all functions should be able to do one thing and do it without relying on any other functions. This will result in a lot of services having similar logic, like communicating to a database, service bus, file system, etc.

This doesn’t have to be a bad thing overall, but we have learned to avoid code duplication if possible. Luckily there are a couple of ways to avoid code duplication, but it is something to keep in mind. The serverless offerings of the major cloud providers are still maturing, so sharing code will get easier and better over time.

Multiple server calls

You can create serverless endpoints which react on HTTP methods like GET, POST, etc. Because of this, it’s possible to let your client application (possibly a single page application) be fully dependent on these small functions. This will result in an enormous number of HTTP calls to the backend, because each function should only do 1 single thing.

Eventually, this will result in a slow client application as it has to wait on all those calls to finish.

If you want to use a serverless architecture it might be a good idea to use an API gateway solution. This gateway will act as a proxy between a client call and the multiple function calls in the backend. This way the client only has to do a single call and doesn’t have to bother itself with the internals of the backend system. Of course, this will add some additional complexity to the overall architecture. However, this will be necessary in order to create performant solutions.

State

As said, functions in a serverless solution are short-lived. This means they can’t hold state for longer than their execution time. Every time a function is finished it will be destroyed along with all its in-memory state. If a new call is made, a new object/function will be generated.

So if you need to manage state between multiple invocations, you will have to use some external state management. This will, by definition, be much slower compared to in-memory state. Therefore, using a serverless design will not be beneficial to all kind of solutions.

Testing

Your functions only do a single thing; therefore it should be easy to create unit tests for them. Creating integration- and regression tests is quite a different story. You are fully dependent on events being triggered (webhooks, HTTP calls, service bus messages, etc.). In order to test if your overall solution is working properly you will have to jump through a couple of hoops in order to get this working.

This isn’t a problem unique to a serverless design, but also goes for a microservices design or any other distributed system. As these solution designs are gaining a lot of popularity in the past couple of years and the years to come, I’m sure tooling will become available to make this kind of testing easier. For now, I’m not aware of any good tooling to facilitate this kind of integration- or regression testing so you have to think of something yourself.

Monitoring

Having a lot of moving parts takes its toll on your monitoring tools. Monitoring a couple of virtual machines, services or websites is quite easy these days. The tooling has matured a lot over time and most system administrators are quite proficient with it.

Having hundreds (or maybe even thousands) of small services and functions in your solution architecture will probably result in making some changes to your monitoring software. It will become quite cumbersome to manage all of these services in the same way. You also don’t care much for memory, CPU and I/O anymore and will probably be more interested in the overall throughput of messages and events in your system.

The monitoring tooling which is currently available hasn’t fully matured yet to facilitate your serverless (or microservices) design. This can be a major problem for your company and I think it is one of the most important things to think about when designing your system.

Development tooling

Creating serverless solutions (small functions) which do just a single thing isn’t very difficult. Most of the time this will just be a single class or module (or a couple of them) which you can develop like ‘regular’ software and deploy as a small function.

This sounds quite easy, but it would be nice to have some proper integration in your development environment. All major cloud providers are working on this and it has matured quite a bit in the past couple of months/years. Still, there is a long way to go.

One of the most important features which has been worked on a lot is the continuous integration and deployment of your functions. The major ALM software- and cloud providers have worked hard to get this working in order to deploy your serverless solution in a professional way.

Where to go next?

As I already mentioned, all major cloud providers have some kind of serverless offering.

Microsoft has Azure Functions and Azure Logic Apps, Amazon has their Lambda solution and Google has a Cloud Functions offering.

Each of those offerings provides similar ways of creating functions and a serverless design, so I’d advise checking out the documentation of the cloud provider of your choosing. I’ll check out Azure Functions and Logic Apps.

I’m quite a fan of using micro- and nanoservices in my solution designs and try to incorporate them whenever it makes sense.
The regular IaaS and PaaS solutions won’t disappear any time soon. They still have their place in your solution design. But as I’ve written before, use the right tool for the right job.