In the past couple of years the software industry has come a long way in professionalizing the development environment. One of the things which has improved significantly is automating builds and being able to deploy software continuously.

Having a continuous integration and deployment environment is the norm nowadays, which means I (and probably you as a reader) want to have this when creating Azure Functions too!

There are dozens of build servers and deployment tools available, but because Azure Functions will most likely be deployed to Microsoft Azure, it makes sense to use Visual Studio Team Services with Release Management. I'm not saying you can't pull this off with any of the other deployment environments, but for me it doesn't make sense because I already have a VSTS environment and it integrates quite well.

In order to deploy your Function App, the first thing you have to make sure of is having an environment (resource group) in your Azure subscription to deploy to. Using ARM templates is the advised approach for this. There is one big problem with ARM templates though: I genuinely dislike them. It's something about the JSON, the long list of variables and the 'magic' values you have to write down all over the place.
For this reason I first started checking out how to deploy Azure Functions using PowerShell scripts. In the past (3 to 4 years ago) I used a lot of PowerShell scripts to automatically set up and deploy my Azure environments. They are easy to debug, understand and extend. A quick search on the internet showed me the 'new' cmdlets you have to use nowadays to spin up a nice resource group and app service. Even though this looked like a very promising deployment strategy, it did feel a bit dirty and hacky.
In the end I have decided to use ARM templates. Just because I dislike ARM templates doesn’t mean they are a bad thing per se. Also, I noticed these templates have become first-class citizens if you want to deploy software into Azure.

Creating your ARM template

If you are some kind of Azure wizard, you can probably create the templates by yourself. Most of us probably don’t have that level of expertise, so there’s an easier way to get you started.

What I do is head over to the portal, create a resource group with everything which is necessary, like the Function App, and extract the ARM template afterwards. Downloading the ARM template is somewhat hidden in the portal, but lucky for us, someone has already asked on Stack Overflow where to find this feature. Once you know where this functionality resides, it makes a bit more sense why the portal team has decided to put it there.

First of all, you have to navigate to the resource group for which you want to extract an ARM template.

image

On this overview page you'll see a link beneath the Deployments heading. Click on it and you'll be navigated to a page listing all the deployments which have occurred on your resource group.

Just pick the one you are interested in. In our case it’s the deployment which has created and populated our Function App.

On the detail page of this deployment you’ll see some information which you have specified yourself while creating the Function App. There’s also the option to view the template which Azure has used to create your Function App.

image 

Just click on this link and you will be able to see the complete template, along with the parameters used and, most importantly, the option to download the template!

image

After downloading the template you’ll see a lot of files in the zip-file. You won’t be needing most of them as they are helper files to deploy the template to Azure. Because we will be using VSTS, we only need the parameters.json and template.json files.

The template.json file contains all the information which is necessary for, in our case, the Function App. Below is the one used for my deployment.

{
    "$schema": "http://schema.management.azure.com/schemas/2014-04-01-preview/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "name": {
            "type": "String"
        },
        "storageName": {
            "type": "String"
        },
        "location": {
            "type": "String"
        },
        "subscriptionId": {
            "type": "String"
        }
    },
    "resources": [
        {
            "type": "Microsoft.Web/sites",
            "kind": "functionapp",
            "name": "[parameters('name')]",
            "apiVersion": "2016-03-01",
            "location": "[parameters('location')]",
            "properties": {
                "name": "[parameters('name')]",
                "siteConfig": {
                    "appSettings": [
                        {
                            "name": "AzureWebJobsDashboard",
                            "value": "[concat('DefaultEndpointsProtocol=https;AccountName=',parameters('storageName'),';AccountKey=',listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageName')), '2015-05-01-preview').key1)]"
                        },
                        {
                            "name": "AzureWebJobsStorage",
                            "value": "[concat('DefaultEndpointsProtocol=https;AccountName=',parameters('storageName'),';AccountKey=',listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageName')), '2015-05-01-preview').key1)]"
                        },
                        {
                            "name": "FUNCTIONS_EXTENSION_VERSION",
                            "value": "~1"
                        },
                        {
                            "name": "WEBSITE_CONTENTAZUREFILECONNECTIONSTRING",
                            "value": "[concat('DefaultEndpointsProtocol=https;AccountName=',parameters('storageName'),';AccountKey=',listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageName')), '2015-05-01-preview').key1)]"
                        },
                        {
                            "name": "WEBSITE_CONTENTSHARE",
                            "value": "[concat(toLower(parameters('name')), 'b342')]"
                        },
                        {
                            "name": "WEBSITE_NODE_DEFAULT_VERSION",
                            "value": "6.5.0"
                        }
                    ]
                },
                "clientAffinityEnabled": false
            },
            "dependsOn": [
                "[resourceId('Microsoft.Storage/storageAccounts', parameters('storageName'))]"
            ]
        },
        {
            "type": "Microsoft.Storage/storageAccounts",
            "name": "[parameters('storageName')]",
            "apiVersion": "2015-05-01-preview",
            "location": "[parameters('location')]",
            "properties": {
                "accountType": "Standard_LRS"
            }
        }
    ]
}

A fairly readable JSON file, aside from all the magic API versions, types, etc.

The contents of the parameters.json file are a bit more understandable. It contains the key-value pairs which are being referenced in the template file.

{
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "name": {},
        "storageName": {},
        "location": {},
        "subscriptionId": {}
    }
}

The template file uses the format parameters('name') to reference a parameter; the actual values are supplied via the parameters.json file.

These files are important, so you want to add them next to or inside the solution where your functions also reside. Be sure to add them to source control because you'll need these files in VSTS later on.

For now the above template file is fine, but it’s more awesome to add something to it for a personal touch. I’ve done this by adding a new appSetting in the file.

"appSettings": [
    // other entries
    {
        "name": "MyValue",
        "value": "[parameters('myValue')]"
    }

Also, don't forget to declare myValue in the parameters section of the template file and add it to the parameters file, otherwise you won't be able to use it.
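To make that concrete, here is roughly what the additions look like, based on the files shown above. In the parameters section at the top of template.json you declare the new parameter next to the existing ones:

"myValue": {
    "type": "String"
}

And in parameters.json it gets its own entry, just like the other parameters:

"myValue": {}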

In short, if you want to use continuous deployment for your solution, use ARM templates and get started by downloading them from the portal. Now let’s continue to the fun part!

Set up your continuous integration for the Functions!

Setting up the continuous integration of your software solution is actually the easy part! VSTS has matured quite a lot over time, so all we have to do is pick the right template, point it to the right sources and you are (almost) done.

Picking the correct template is the hardest part. You have to pick the ASP.NET Core (.NET Framework) template. If you choose a different template and are unfamiliar with VSTS, you will struggle to set it up.

clip_image001

This template contains all the useful steps and settings you need to build and deploy your Azure Functions.

image

It should be quite easy to configure these steps. You can integrate VSTS with every popular source control provider. I’m using GitHub, so I’ve configured it so VSTS can connect to the repository.

image

Note that I've also selected the Clean option because I stumbled across some issues when deploying the sources. These errors were totally my fault, so you can just keep it turned off.

The NuGet restore step is pretty straightforward and you don’t have to change anything on it.

The next step, Build solution, is the most important one, because it will not only build your solution, but also create an artifact from it. The default MSBuild arguments are already set up properly, but for completeness I've added them below. They tell MSBuild to create a package called WebApp.zip after building the solution.

/p:DeployOnBuild=true /p:WebPublishMethod=Package /p:PackageAsSingleFile=true /p:SkipInvalidConfigurations=true /p:DesktopBuildPackageLocation="$(build.artifactstagingdirectory)\WebApp.zip" /p:DeployIisAppPath="Default Web Site"

The next important step is Publish Artifact.
You don’t really have to change anything over here, but it’s good to know where your artifacts get published after the build.

image

Of course, you can change stuff over here if you really want to.

One thing I neglected to mention is the build agent you want to use. The best agent to build your Azure Function on (at the moment) is the Hosted VS2017 agent.

image

This agent is hosted by Microsoft, so you don't have to configure anything for it, which makes it very easy to use. Having this build agent hosted by Microsoft also means you don't have any control over it, so if you want to build something on a .NET Framework version which isn't supported (yet), you'll have to set up your own build agent.

When you are finished setting up the build tasks be sure to add your repository trigger to the build.

image

If you forget to do this the build will not be triggered automatically whenever someone pushes to the repository.

That's all there is to setting up your continuous integration for Azure Functions. Everything works out of the box if you select the correct template in the beginning.

Deploy your Azure Functions continuously!

Now that we have the continuous integration build in place we are ready to deploy the builds. If you are already familiar with Release Management it will be fairly easy to deploy your Azure Functions to Azure.

I had zero experience with Release Management, so I had to find out the hard way!

The first thing you want to do when creating a new release pipeline is adding the necessary artifacts. This means adding the artifacts from your CI build, where the source type will be Build and all other options will speak for themselves.

image

The next, not so obvious, artifact is the repository where your parameters.json and template.json files are located. These files aren't stored in the artifact from the build, so you have to retrieve them some other way.

Lucky for us, we are using a GitHub repository and Release Management has a source type called GitHub. Therefore we can just add a new artifact with this source type and configure it to point to the GitHub repository of choice.

image

This will make sure the necessary template.json and parameters.json files are available when deploying the Azure Functions.

Next up is adding the environments to your pipeline. In my case I wanted to have a different environment for each slot (develop & production), but I can imagine this will differ per situation. Most of the customers I meet have several Azure subscriptions, each meant to facilitate the work for a specific state (Dev, Test, Acceptance, Production). This isn’t the case in my setup, everything is nice and cozy in a single subscription.

Adding an environment isn’t very hard, just add a new one and choose the Azure App Service Deployment template.

image

There are dozens of other templates which are all very useful, but not necessary for my little automated deployment pipeline.

Just fill out the details in the Deploy Azure App Service task and you are almost done.

image

The main thing to remember is to select the zip-file which was created as an artifact by our CI build and to check the Deploy to slot option, as we want to deploy these Azure Functions to the develop slot.

If you are satisfied with this, good! But remember we still have the ARM template?

Yes, we want to make sure the Azure environment is up and running before we deploy our software. Because of this, you have to add one task to this phase, called Azure Resource Group Deployment.

image

This is the task where we need our linked artifacts from the GitHub repository.

The paths to the Template and Template parameters are the most important settings in this step, as these will make sure your Azure environment (resource group) is set up correctly.

The easiest way to get the correct path is to use the modal dialog which appears when you press the button behind the input box.

image

image

One thing you might notice over here is the option to Override template parameters. This is the main reason why you want to use VSTS Release Management (or any other deployment server). All this boilerplate work is done so we can specify the parameters (secrets) per environment, without having to store them in some kind of repository.

Just to test it I’ve overridden one of the parameters, myValue, with the text “VSTS Value” to make sure the updating actually happens.
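For reference, the Override template parameters field takes the values in a command-line style, so in my case the override for this single parameter looks something like this:

-myValue "VSTS Value"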

Another thing to note is I've set the Deployment mode to Incremental, as I just want to update my deployments, not create a completely new Function App.

All of the other options don’t need much explanation at this time.

One thing I have failed to mention is adding the continuous deployment trigger to the pipeline. In your pipeline click on the Trigger-circle and Enable it, like you can see below.

image

This will make sure each time a build has succeeded, a new deployment will occur to the Development slot (in my case).

This is all you need to know to deploy your Azure Functions (or any other Azure App Service for that matter). For the sake of completeness it would make sense to add another environment to your pipeline, call it Production and make sure the same artifacts get deployed to the production slot. This environment and its tasks will look very similar to the Develop environment, so I won't repeat the steps over here. Just remember to choose the correct override parameters when deploying to the production slot. You don't want to mess this up.

Now what?

The continuous integration & deployment steps are finished, so we can start deploying our software. If you already have some CI builds, you can create a new release in the releases tab.

image

This will be a manual release, but you can also choose to push some changes to your repository and make everything automated.

I've done a couple of releases to the develop environment already, which are all shown in the overview of the specific release.

image

Over in the portal you will also notice your Azure Functions are in read-only mode, because continuous integration is turned on.

image

But remember we added the MyValue parameter to our ARM template? It is now also shown inside the Application settings of our Functions App!

image

This is an awesome way of storing secrets inside your release pipeline! Or even better: store your secrets in Azure Key Vault and add your Client Id and Client Secret to the Application Settings via the release pipeline, like I described in an earlier post.

I know I'll keep using VSTS for my Azure Functions deployments from now on. It's awesome and I can only recommend you do the same!

As with almost every application there is a point where you have to work with some kind of secret, like for example a connection string to a database. There are multiple ways to retrieve these secrets and this isn’t any different with Azure Functions.

If you have set up a continuous deployment build within Visual Studio Release Management you can just substitute the values in your build, which makes it easy, transparent and consistent to add and change the values.

A different approach is to provide the secrets by yourself in the application settings of your App Service. These Application Settings are already used by the Azure Functions in order to specify some settings which are necessary to run properly.

image

While I have nothing against either approach, there is a better option available for retrieving secrets in your cloud software: Azure Key Vault. Azure Key Vault is a secure store which helps you safeguard the keys and secrets used in your cloud applications.
One of the many advantages of using Azure Key Vault, compared to the alternatives, is having the possibility to revoke access to specific secrets for an application or user. This makes it much easier to lock down an application or user if it has been compromised. One other advantage is having a central location for all your keys & secrets and being able to update them when necessary.
As with almost any Azure service you get detailed logging of usage, which means your operations team is able to monitor the usage of the keys and secrets in more detail.

Setting up Azure Key Vault does have a bit of a learning curve, so I'll describe all the necessary steps below.
There are two (proper) ways of working with Azure Key Vault from within your application or Azure Function. One is using a client secret, the other is working with a certificate. While I do think working with certificates is more secure, I don't use them often for my personal projects. The main reason is that certificates expire, so you have to renew them, which requires more management than I want for side-projects. However, if you are building something which is going to run in production for a customer, do have a look at working with certificates!

What do we need?

If you are choosing to work with a client secret, you need 3 things.

A Client Id, a Client Secret and a URL to the location of your secret.

The Client Id and Client Secret will be stored within Azure Active Directory. The actual secret (the connection string in our case) will, obviously, be stored within Azure Key Vault.

One of the most common secrets we use in application development is a connection string to some kind of database. This is why I will be saving a Cosmos DB connection string in the Key Vault. Just create a Cosmos DB instance and retrieve the connection string from it. It should look similar to the one below.

DefaultEndpointsProtocol=https;AccountName=[myAccount];AccountKey=[myAccountKey];TableEndpoint=https://[myTableEndpoint].documents.azure.com

Setting up Azure Key Vault

Creating an Azure Key Vault works just like any other service in the Azure ecosystem. Either use PowerShell, an ARM template or the portal. In this post I’ll use the portal to explain stuff, but you can do it with whichever has your preference.

I’ve chosen the default options for a Key Vault. I guess the Standard tier is good enough for most people and projects. Do keep in mind to add yourself (or a different administrator) to the principals.

image

If you remove yourself from the Access policy principals you won’t be able to access the Key Vault, which can be troublesome if you want to configure it.

Wait until your Key Vault has been created and add your secret connection string to it.

Navigate to the Secrets blade and choose to Add the secret.

image

The Upload option defaults to Certificate, but in order to add our connection string we need to change this to Manual.

image

After changing the option you will see some fields which look familiar, Name and Value. Fill the Name field with a proper description of your secret and the Value with the actual secret.

For connection strings you don't really need to set an activation or expiration date, but it's a nice option to have if you are working with secrets which expire from time to time.

When the secret is created in the Vault you can click on it and see some details. A very cool feature is the ability to create a 'New Version' of the secret. This makes a lot of sense if you have secrets which expire.

image

When you click on the current version of the secret another blade will show up with even more details.

image

This is the blade which has all the interesting data. For starters, it contains the `Secret Identifier`, which is the URL of the secret value stored in the Key Vault. You can also check what the actual secret value is in this blade!

Each version of a secret has its own Secret Identifier. Because of this, it is very easy to lock down a specific application: just create a new version of the secret, update it in all other applications and disable the old Secret Identifier endpoint. Using this strategy, the application which hasn't been updated will no longer be able to retrieve the secret and will therefore (probably) stop working correctly.
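For reference, a Secret Identifier is just a URL, following the pattern below (placeholders used instead of real names); the version id at the end is what makes it possible to disable a single version of a secret:

https://[myKeyVault].vault.azure.net/secrets/[mySecretName]/[versionId]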

This is all we have to do in the Azure Key Vault, for now.

Configuring the Active Directory

In order to access the Key Vault you need to be some kind of approved principal. This can either be a user or an application. Because Azure Functions (or any other service) aren’t people, we will have to create applications in Azure Active Directory.

First, navigate to your Active Directory and select `App registrations`.

image

You don’t need to have an existing application yet. These steps are necessary to create the appropriate identifiers and secrets your application can use later on.
There aren't any real rules for naming your application, but I saw some other examples suffixing them with `-ad`, so I did the same. This is a convention which makes it clear these are Active Directory-configured applications.

image

For most services you will be hosting in the cloud, choosing the application type `Web app / API` is sufficient. The other option is `Native`, which should only be used for applications which are installed on a system.

You might be surprised by the `Sign-on URL` at the bottom. For the purpose of storing and retrieving secrets you don’t really need this, so you can write whatever URL you want over here.

After creating the application you will see a blade with some details about it.

image

The Application Id you see in this overview is the Client Id, used later on for authentication with the Key Vault.

Now it’s time to create a (secret) key for retrieving the connection string.

image

Creating a key is one of the most important steps in the process.

image

The description of the key doesn't really matter, just use something which makes it clear what it is used for. In our case, the expiry date can be set to Never, which resolves to 31-12-2299. I won't live to see this date, so it's practically Never to me.

The Value will be created when pressing the Save button on the top. Copy this Value, because you will not be able to see it again. This value is the Secret Key you will need later on!

The configuration of your Active Directory application is now finished.

Back to the Key Vault

Now that we have an application present in the Active Directory, we can add this to the list of approved Principals in the Key Vault.

Navigate back to the Key Vault and search for the `Access policies` option.

image

Over here you have the option to add a new principal by searching for the application just created.

image

Make sure you limit the permission set you give this application. For now the application only needs to retrieve secrets from the Key Vault. Therefore, selecting the Get permission for Secrets is good enough.

Save the changes you made to the access policies and you are good to go!

The hardest part is over and we can start coding our Azure Function!

Creating the Azure Function

In an earlier post I already mentioned you can write your Azure Functions in the portal or an IDE like Visual Studio. Because Visual Studio offers the best developer experience I’ll be using this for all of my Azure Functions development.

I've tried to keep things as simple as possible, so I'm not doing anything fancy in this sample. All this function does is retrieve the connection string from the Key Vault and return it to the requesting user.

using System.Configuration;
using System.Net;
using System.Net.Http;
using Microsoft.Azure.KeyVault;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Azure.WebJobs.Host;
using Microsoft.IdentityModel.Clients.ActiveDirectory;

namespace Minifier
{
    public static class Get
    {
        [FunctionName("Get")]
        public static HttpResponseMessage Run(
            [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "{slug}")] HttpRequestMessage req, string slug, TraceWriter log)
        {
            // The Application Id of the Azure AD application
            var clientId = ConfigurationManager.AppSettings["ClientId"];
            // The Value of the Key you created in the Azure AD application
            var clientSecret = ConfigurationManager.AppSettings["ClientKey"];
            // The Secret Identifier of your secret value
            var connectionstringUrl = ConfigurationManager.AppSettings["ConnectionstringUrl"];

            // Creating the Key Vault client
            var keyVault = new KeyVaultClient(async (authority, resource, scope) =>
            {
                var authContext = new AuthenticationContext(authority);
                var credential = new ClientCredential(clientId, clientSecret);
                var token = await authContext.AcquireTokenAsync(resource, credential);
                return token.AccessToken;
            });

            // Retrieving the connection string from Key Vault
            var connectionstring = keyVault.GetSecretAsync(connectionstringUrl).Result.Value;

            return req.CreateResponse(HttpStatusCode.OK, "The connection string is: " + connectionstring);
        }
    }
}

This is all you have to do in your function. The most difficult part is creating the `KeyVaultClient`, but you can just copy-paste this piece of sample code to get it up and running!

Now you might think: “Hey, I see you are still retrieving data from the application settings, that’s not very safe, is it?”. Very observant indeed!
However, there isn't a very good way around this. You still need to identify yourself to Azure Key Vault, which means storing a secret somewhere. As I mentioned, the best thing is to add these settings somewhere in the delivery pipeline; this way as few people as possible know the actual values.
This is also one of the reasons why it is better to work with certificates. Certificates aren’t stored somewhere in plain text, but installed on a system. Because of this you can manage them a bit better, compared to secrets stored in a configuration file or portal.

Still, if someone manages to 'steal' your client secret and identifier, you will be able to make sure they can't do anything with it. Remove (or restrict) the principal in your Key Vault and the application will be 'crippled' right away.

That's all for working with Azure Key Vault in combination with Azure Functions. It's not very different (if at all) from a normal web application, which makes it an even better solution for storing secrets in the cloud.

Lately, I’ve been busy learning more about creating serverless solutions. Because my main interest lies within the Microsoft Azure stack I surely had to check out the Azure Functions offering.

Azure Functions enable you to create serverless solutions which are completely event-based. As they live within the Azure space, you can integrate easily with all of the other Azure services, like the Service Bus, Cosmos DB and storage, but also with external services like SendGrid and GitHub!

All of these integrations are fine and all, but seeing Azure Functions perform in action is still easiest with regular HTTP triggers. You can just navigate with a browser (or Postman) to a URL and your function will be activated immediately. I guess most people will create these kinds of functions in order to learn to work with them; at least that's what I did.

Creating your Azure Functions App

In order to create Azure Functions, you first have to create a so-called Function App in the Azure Portal. Creating such an app is quite easy; the only thing you have to think about is which type of hosting plan you want to use. At this time there are two options: the Consumption plan or the App Service plan.

image

For your regular “Hello World”-function it doesn’t really matter, but there are a few important differences between the two.

If you want to experience the full power of a serverless compute solution, you want to use the Consumption plan. This plan creates instances to host your Azure Functions on-demand, depending on the number of incoming events. Even when there is a super-high load on your system, this plan scales automatically.
The other main advantage is you only pay for your functions when they actually do something.
As you might remember, these two advantages are, in my opinion, the main benefits for people to move to serverless offerings.

However, using the App Service plan also has some advantages. The main advantage is utilizing the full power of your virtual machines and not having unexpected (high?) costs. With the App Service plan, your function apps run on the App Service virtual machines you might already have deployed in your subscription (Basic, Standard and Premium). This means your Azure Functions can share the same (underlying) virtual machines as your websites. Using this plan might save you some money in the end, because you are already paying for the (unused) compute which you are now able to utilize. Running these functions won't cost anything extra, aside from the extra bandwidth of course.
Another advantage is your functions will be able to run continuously, or nearly continuously. The App Service plan is useful in scenarios where you need a lot of long-running compute. Keep in mind you need to enable the Always On setting in your App Service if you want your functions to run continuously.

There are some other little differences between the two plans, but the mentioned differences are most important, to me at least.

Do remember to enable Application Insights for your Functions App. It’s already an awesome monitoring platform, but the integration with Azure Functions makes it even more amazing! I can’t think of a valid reason not to enable it, because it is also quite cheap.

After having completed the creation of your Functions App you are able to navigate to it in the portal. A Functions App acts much like a container for one or more Azure Functions, which means you can place multiple Azure Functions into a single Functions App. For monitoring purposes it might be useful to place the functions for a single functional use-case into one Functions App.
You can of course put all of your functions inside one App. This doesn’t really matter at the moment. It’s a matter of taste.

Your first Azure Function

If you are just starting with Azure Functions and serverless computing, I'd advise checking out the portal and creating a new function from over there. Of course, it isn't a recommended practice if you want to get serious about developing a serverless solution, but this way you are able to take baby steps into this technology space. A recommended practice would be using an ARM template or CLI script.

From inside the Function App you have the possibility to create new functions.

image

Currently, the primary languages of choice are C#, JavaScript and F#. This is just to get you started, because there are more languages supported already (node.js, python, PHP) and more are coming. There’s even an initiative to support R scripts in Azure Functions!

For now I’ll go with the C# function, because that’s my ‘native’ programming language.

After this function is created you are presented with an in-browser code editor from which you can start coding and running your function.

image

This function is placed in a file called run.csx. The csx extension belongs to C# scripts (check out scriptcs.net), much like ps1 belongs to PowerShell scripts.
It should now be clear this Azure Function is ‘just’ a script file with an entry point. This entry point is much like the Main-method in the Program.cs file of your console application.
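To give you an idea, the generated run.csx for an HTTP trigger looks roughly like the snippet below. I'm reproducing it from the portal template, so the exact details might differ a bit.

using System.Net;

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
{
    log.Info("C# HTTP trigger function processed a request.");

    // Try to read the 'name' parameter from the query string
    string name = req.GetQueryNameValuePairs()
        .FirstOrDefault(q => string.Compare(q.Key, "name", true) == 0)
        .Value;

    // Fall back to the request body
    dynamic data = await req.Content.ReadAsAsync<object>();
    name = name ?? data?.name;

    // Always return a valid HTTP response
    return name == null
        ? req.CreateResponse(HttpStatusCode.BadRequest, "Please pass a name on the query string or in the request body")
        : req.CreateResponse(HttpStatusCode.OK, "Hello " + name);
}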

Because we have created an HTTP hook/endpoint, you should return a valid HTTP response, like you can see in the script. If you want to test your function, the portal has you covered with the Test button. Even though testing in the portal is cool, it's even cooler to try it out in your browser. If everything is set up correctly you will be able to navigate to the URL https://[yourFunctionApp].azurewebsites.net/api/HttpTriggerCSharp1?code=[someCode] and receive the response, which should be `Please pass a name on the query string or in the request body`. You can extract the proper URL from the Get function URL link in the portal.

Management & settings

Azure Functions are really, really, really short-lived App Services. They are also deployed in the same Azure App Service ecosystem, therefore you can leverage the same management possibilities which are available to your regular App Services.

On the Platform features tab you are able to navigate to most useful management features of your Functions App.

image

I really like this page; it's much better and clearer than the configuration 'blade' of regular Azure services. Hopefully this design will be implemented for the other services also!

Keep in mind to configure CORS properly if you want to use your HTTP function from within a JavaScript application!

All other features presented over here are also important of course. I especially like the direct link to the Kudu site, from which you can do even more management!

Another setting, which is in preview at the moment, is enabling deployment slots. Yes, deployment slots for your Azure Functions! This works exactly the same as you are used to with the regular App Services. I've configured one of my Function Apps to use deployment slots. By enabling deployment slots you can now deploy the `develop` branch to a development slot and the `master` branch to the production slot of the Function App.

image

If for some reason you want to disable the usage of a specific function, just navigate to the Functions leaf in the treeview and you are able to disable (and re-enable) the different functions individually.

image

Creating real functions

Creating functions from within the Azure Portal isn’t a very good idea in real life. Especially since you don’t have any version control, quality gates, continuous integration & deployment in place. That’s why it’s a good idea not to use the browser as your primary coding environment. For a professional development experience you have multiple options at hand.

The easiest option is to use Visual Studio 2017. You need version 15.3 (or higher) of Visual Studio, which is still in preview at this moment. When you are done installing this version you should be able to install the Azure Functions Tools for Visual Studio 2017 from the Visual Studio Marketplace on your machine.
Doing so will enable you to choose a new project template called Azure Functions. You can add multiple Azure Functions to this project. Currently, there already is an extensive list of events to which you can subscribe. I'm sure the list will grow in the future, but for now it will suffice for a great deal of solutions.
image

After having chosen the event of your choice you can change some settings, like Access Rights. If you want your HttpTrigger to be accessible to anonymous users from the web, you need to set it to Anonymous instead of Function. No worries if you forget to do this; it's something you can set from inside your code as well.

When comparing the created functions (the one from the portal and from Visual Studio) you will notice a couple of differences.

using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Azure.WebJobs.Host;

namespace FunctionApp1
{
    public static class Function1
    {
        [FunctionName("HttpTriggerCSharp")]
        public static async Task<HttpResponseMessage> Run(
            [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequestMessage req, TraceWriter log)
        {
            // do stuff and always return a valid HTTP response
            return req.CreateResponse(HttpStatusCode.OK);
        }
    }
}

First of all, your function is now wrapped inside a namespace and a (static) class. This makes organizing and integrating the code with your current codebase much easier.

Another thing you might notice is the extra attributes added to the function.

The Run-method now has a FunctionName attribute, which tells the Function App what the name of the function will be. Do note that having multiple functions with the same FunctionName means one will override the other when deployed; the generated function.json file will contain an entry point for the last function it finds with a specific name.
Also, the first parameter, the HttpRequestMessage, now has an HttpTrigger attribute stating how the function can/should be triggered. In this case the function can be triggered via an HTTP GET or POST, and because the authorization level is set to Function, the caller has to provide a function key. Because of these attributes it is easier to change the behavior of the functions later on; you aren't dependent on choices made in some wizard.

I already mentioned the function.json file briefly. This file is used to populate the Function App with your functions. If you’ve explored the portal a bit, you might have seen this file already after creating the initial function.

image

This file contains all configuration for the provided functions within the Function App. The function.json file from the first function script contains the following information.

{
  "disabled": false,
  "bindings": [
    {
      "authLevel": "function",
      "name": "req",
      "type": "httpTrigger",
      "direction": "in"
    },
    {
      "name": "$return",
      "type": "http",
      "direction": "out"
    }
  ]
}

Now compare it with the one generated by the function in Visual Studio.

{
  "bindings": [
    {
      "type": "httpTrigger",
      "methods": [
        "get",
        "post"
      ],
      "authLevel": "function",
      "direction": "in",
      "name": "req"
    },
    {
      "name": "$return",
      "type": "http",
      "direction": "out"
    }
  ],
  "disabled": false,
  "scriptFile": "..\\FunctionApp1.dll",
  "entryPoint": "FunctionApp1.Function1.Run"
}

As you can see, the file which is generated in Visual Studio contains information about the entry point for the function.

Note: you won't see this file in your project; it's generated in the assembly output folder after building the project.

In the above example I've used Visual Studio to create the functions. It is also possible to use any other IDE for this, but you do have to take into consideration that you'll have to use the Azure Functions CLI tooling in those environments.
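If I remember correctly, the CLI tooling is distributed via npm, so getting a local Functions host up and running comes down to something like this (run the second command from the folder containing your function app):

npm install -g azure-functions-core-tools
func host start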

Debugging

When using a proper IDE, like Visual Studio, you are used to debugging your software from within the IDE.

This isn’t any different with Azure Functions. When pressing the F5-button a command line application will start with your functions loaded inside.

image

This application will start a small webserver which emulates the Function App. When working with HTTP triggers you can easily navigate to the provided endpoints. Of course, you can also work with any other event as long as you are able to trigger them.

At BUILD 2017 it was also announced you can do live-production-debugging of your Azure Functions from within Visual Studio. In other words: Connecting to the production environment and setting breakpoints in the code.
The crowd went wild, because this is quite cool. However, do you want to do this? It’s nice there is a possibility for you to leverage such a feature, but in most cases I would frown quite a bit if someone suggested doing this.
I do have to note that live-debugging Azure Functions production code isn't as dangerous as doing so on a 'normal' web application. Normally when you do this, the process is paused and no one is able to continue using the site, which is one of the many reasons why you never want to do this. With a serverless model this isn't the case: each invocation spins up a new instance/thread, so if you hit a breakpoint in one of the instances, the other instances can still continue to work.
Still, take caution when considering to do this!

Deployment

We are all professional developers, so we also want to leverage the latest and greatest continuous integration & deployment tools. When working in the Microsoft & Azure stack it’s quite common to use either TFS or Azure Release Management for building your assemblies.

Because your Azure Functions project still produces an assembly, which should be deployed along with your function.json file, it is also still possible to use the normal CI/CD solutions for your serverless solution.

If you don’t feel like setting up such a build environment you can still use a different continuous deployment feature which the App Services model brings to us.

On the Platform features tab click on the Deployment options setting.

image

This will navigate you to the blade from where you can setup your continuous deployment.

image

Using this feature you are able to deploy every commit of a specific branch to the specified application slot.

Setting up this feature is quite easy if you are using a common version control system hosted in the cloud, like VSTS, GitHub or Bitbucket, or even Dropbox and OneDrive.

I’ve set up one of my applications with VSTS integration. Every time I push some changes to the `master` branch, the changes are being built and deployed to the specified slot.

image

When clicking on a specific deployment, you can even see the details of this deployment and redeploy if needed.

image

All in all quite a cool feature if you want to use continuous deployment, but don’t want to set up TFS or Azure Release Management. The underlying technology still uses Azure Release Management, but you don’t have to worry about it anymore.

If you are thinking of using Azure Functions in your professional environment, I highly recommend using a proper CI/CD tool! The continuous deployment option is quite alright (and better than publishing your app from within Visual Studio), but one of the major downsides is you can't 'promote' a build to a different slot.
You can only push changes to a branch and those will get built. This isn’t something you want in your company, but that’s a completely different blog post and unrelated to serverless or Azure Functions.

Hope this helps you out a bit starting with Azure Functions.

There are dozens of blog posts, articles and books talking about microservices. Some of them talk about the design, other on how to implement and even others talk about why and when to use them.

This post will be a combination of them all. I won’t claim to be the all-time-expert on the matter, but I have read quite a bit on the subject, attended some talks and have had the honor to design (and implement) such a solution a couple of years ago.

First and foremost, it’s important to understand a microservices design is just another standard architectural design pattern. This pattern can help you to create a high-performance, scalable software solution, but it can also bankrupt your company!

The short explanation

If you don’t have much time to read, or don’t really want to, here’s the elevator pitch for microservices:

It’s a set of small (independent) services, each of them able to carry out their own (functional/business) responsibility without having direct dependencies to other services.

The long explanation

Of course, such a short explanation is a bit…short, to say the least.

The general overview

The microservices pattern is a combination of several other, well-known, patterns which we probably have been using for a couple of years now. Take for example the Service-oriented Architecture, Event-driven architecture, Database per Service, API Gateway / Backend for Frontend and many more. All of them combined can form an architecture which has multiple small services, all operating individually.

There’s an awesome picture on microservices.io (copied over here for completeness) drawing lines to dozens of patterns which are related to a microservices architecture.

It's recommended to have some experience with a couple of these patterns before trying to implement a full microservices architecture. Even if you and your team have lots of experience, developing and maintaining this kind of solution design can still prove to be complicated over time. Managing all these small services and keeping an overview requires quite some (software) management skills.

So, why would I want all this complexity?

An excellent question!

Keeping track of all those small, independent services, which might or might not have some loosely coupled dependency on each other, can become quite a burden. Why would you ever want to encumber yourself with such a design?

Well, let’s take a step back and look at how we (I’m guessing about 98% of the software industry) have been developing software solutions in the past couple of decades. Most of the solutions are very big software systems with multiple layers, tiers, thousands of classes (I’m coming from an OO-background over here), specific boundaries, tightly coupled services, etc.

Many of us have been developing and maintaining these kinds of solutions, and we all know it can be very hard for new team members to learn these systems, but also to add, change or migrate functionality.

This is called a monolithic architecture. A nice diagram is shown below.

All of the business logic, data access, I/O operations and other stuff is bundled into one big system. Most of the time the Presentation/View layer is also put inside this big block.

I’m not saying designing and developing a monolithic software solution architecture is a bad thing. Each architectural pattern has its own use. Nowadays, most developers understand this type of architecture style as it has been quite a common way to design your software in the past couple of years/decades.

The major downside of this monolithic software architecture is changing different parts of the system. As said, most of the time all of the components are tightly coupled to each other and it becomes harder and harder to change functionality.

Let us not forget the deployment and testing strategies of these massive software solutions. I’ve been at a couple of companies where they got the deployment and testing in such a state they can deploy their monolithic software multiple times per day without breaking a sweat. However, most of the companies I’ve consulted at aren’t near this level of professionalism and deploying a new version also stresses out the teams.

If you are experiencing the issues mentioned above, or want to steer your software development in a more 'agile' way, changing your solution design to a microservices architecture might be a way to solve this. The idea behind this architecture is that you create dozens of small services which each focus on one specific thing. This means these services will probably be easy for the development team(s) to understand, but also easy to test and to deploy to the different environments.
Keep in mind though, migrating from a monolithic architecture to a microservices architecture isn't without risk and requires a lot of discipline and experience from your development & operations teams. Most of the time I'd advise against such a migration!

When it’s easy to develop, test and deploy your solution, it should also be easier to change certain areas of it. This is the biggest win for a microservices architecture, in my opinion.

Some people say another big win is 'you can write each service in the language/framework of your choice'. While this is true, I don't think this should be the goal of your organization. If you are developing each service in a different programming language or framework, you lose one of the biggest advantages: being able to easily develop and maintain all the services.
Imagine you have 10 different services written in .NET Framework 4.6, .NET Core 1.0, .NET Core 2.0, Java, Ruby, Python, Node.js, PHP and PowerShell. This means all of the developers need to learn these languages and the different software designs of the services. The maintainability of such a solution will be worse compared to a large monolithic software solution written in a single language/framework. This is one of the reasons I'm a big believer in sticking to one or two languages per software solution. There might be a language which is a better fit for a specific problem, but if the overall solution becomes even more complicated to understand, I'll pass and prefer the technique we are already using in other areas of the system.

So to conclude on the ‘why do you want all this complexity?’-question, it’s because this type of software design can make your software easier to develop, extend, test and deploy. All of these factors can cause a faster release-lifecycle, which means more business revenue. More business revenue means more profit!

So how do I get there?

Are you convinced creating a microservices solution might be a good idea for your next/current software solution?

If so, there are a couple of things to consider.

First and foremost it’s important your development team has basic understanding of about 60% of the patterns mentioned in the earlier diagram. Preferably more!

Now there's another thing to consider: your software & network infrastructure.
While it's completely possible to develop and deploy a microservices solution on-premises, I'd advise you to go with a major cloud provider, like Microsoft Azure, Amazon AWS or the Google Cloud Platform. I'm a Microsoft fanboy, so I'll advise Microsoft Azure of course!
The main reason to choose a major cloud provider is the 24/7 support and the fact that they offer quite a lot of services which can help you create a proper microservices solution.
As I mentioned, it's possible to deploy all of these services on-premises, but this will add additional load on your system administrators, as they will have to learn and maintain all of the services and techniques you want to use in your software. This will also add some additional complexity to your deployment strategy, and maintaining the software will probably be harder as well.

Now that you are ready to create your microservices solution in the cloud you can start designing your first service. This first service will work like a charm and you will probably be able to deploy it within a couple of hours/days, maybe 2 weeks if you are unlucky! The development of the second service will probably go quite well also. Both of those services can operate individually without having any dependencies between them.
As the project moves along you'll notice it would be useful if the services could share information with each other. This thought is where a lot of microservices architectures go to die. The moment one of the team members thinks it's a good idea to call a different service from the service they are working on, you are in trouble. Think about it for a moment…


If you choose to let the services work directly with each other, wouldn’t this be the same as a distributed monolithic architecture? In short: a monolith with latency?
Yes, it is!

That's why it's so important to understand the rules (or: guidelines) of a microservices architecture. I'll repeat the most important one again as it is so fundamental to the cause: each service is able to carry out its own (functional/business) responsibility without having direct dependencies on other services.

So what does this mean?

This means each (micro)service is able to carry out whatever it needs to do, without needing any (external) service. With 'service' I mean some other API or software service which lives in a different context.

This doesn’t mean you can’t use a database, cache, queue, etc. in your service. These are all systems which are necessary in modern development environments and help you create a stable, robust, scalable and performant service.

How would you design such a solution?

A couple of years ago, I was lucky enough to be hired as a solution architect on a project which was a perfect candidate for a microservices architecture. I won’t go into details over here, because it’s a confidential project.
At that time the hype of this architecture style was at its highest and there weren’t a lot of best-practices publicly available yet. However, because the project was such a perfect fit, I decided to go this route.

To this day I'm still quite happy with the overall design, even though there are some details I would have done differently in hindsight, which I can't share a lot of info on.

Creating one or two services which can operate individually isn’t much of a problem. In this case it makes sense each service has some kind of service layer (REST API), business layer and probably even a data layer, your typical 3-layer software architecture.
Because you want to access data quickly a caching mechanism might be introduced (Redis Cache) and you need a ‘normal’ database of course. Your first option might be a SQL Server database, which we all know and love. However, when you think about it, why would you choose a relational database type, like SQL Server, if all you are interested in is retrieving and storing data for your own little service?
It might be a better idea to store the data in some kind of NoSQL database (DocumentDB / Cosmos DB). This way you can do a single get or insert statement whenever you are working with data, as shown in the sketch below. No more need for denormalizing & normalizing data, joins, strange foreign keys, etc. By moving to a NoSQL database you can improve performance quite a bit.
Note: I still love SQL Server, but it might not be the most logical database to store data for your microservice. Think about it and decide for yourself!
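To illustrate, here is a minimal sketch of what such a single get or insert statement looks like with the DocumentDB/Cosmos DB SDK. The endpoint, key, database and collection names are made up for the example, and things like partitioning and error handling are left out.

using System;
using System.Threading.Tasks;
using Microsoft.Azure.Documents.Client;

public class MessageRepository
{
    // Hypothetical endpoint and key; these come from your own Cosmos DB account
    private readonly DocumentClient _client =
        new DocumentClient(new Uri("https://my-microservice-db.documents.azure.com:443/"), "[myAccountKey]");

    public Task StoreAsync(object message)
    {
        // A single insert statement, no joins or mapping layers involved
        return _client.CreateDocumentAsync(
            UriFactory.CreateDocumentCollectionUri("messagesdb", "messages"), message);
    }

    public async Task<T> GetAsync<T>(string id)
    {
        // A single get statement, straight on the document id
        var response = await _client.ReadDocumentAsync(
            UriFactory.CreateDocumentUri("messagesdb", "messages", id));
        return (T)(dynamic)response.Resource;
    }
}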

What I've described so far isn't very special and is probably how you have been designing your software already, with the exception of the database, because it's highly likely you have one (maybe two) database(s) in your solution design which is shared between all application logic.

The next part might be a bit new to you or your team: using some kind of service bus to communicate between services.
I already mentioned each service should be able to work independently of other services, but it might still be useful to share data between these services. We don't want any direct dependencies between these services, so this communication is abstracted via the service bus. If something happens in a service, it sends a message to the service bus. If there happens to be a service which is interested in this message, it can pick it up and work with the information.


Example
Imagine you have a ProductManagement-service and a Billing-service. If a product manager changes the name of a product from DocumentDB to Cosmos DB in the ProductManagement-service, you also want to reflect this name change in the Billing-service in order to create proper bills. This means the ProductManagement-service will send an update message to the service bus. The Billing-service is able to pick up the message and will update its own data store with the necessary changes.
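A minimal sketch of what the sending side could look like, using Azure Service Bus topics; the topic name, connection string placeholder and message shape are assumptions for this example.

using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;
using Newtonsoft.Json;

public class ProductNameChangedPublisher
{
    // Hypothetical topic; the connection string comes from your own Service Bus namespace
    private readonly TopicClient _topicClient =
        new TopicClient("[serviceBusConnectionString]", "product-events");

    public Task PublishAsync(string productId, string oldName, string newName)
    {
        var body = JsonConvert.SerializeObject(new { productId, oldName, newName });
        var message = new Message(Encoding.UTF8.GetBytes(body));

        // A property subscribers can filter on
        message.UserProperties["MessageType"] = "ProductNameChanged";

        return _topicClient.SendAsync(message);
    }
}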


One thing you have to make sure of is that messages can be picked up by multiple subscribers (a logging service, an audit service, other business services, etc.). The service bus should support some kind of pub/sub mechanism, for example Azure Service Bus topics and subscriptions.
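And a sketch of a subscribing side, for example the Billing-service. The subscription name is an assumption, the subscription is assumed to already exist, and the SQL filter shows how a service only picks up the messages it is interested in.

using System;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;

public class BillingProductEventsListener
{
    // Hypothetical names; the topic and subscription live in your own Service Bus namespace
    private readonly SubscriptionClient _subscriptionClient =
        new SubscriptionClient("[serviceBusConnectionString]", "product-events", "billing");

    public async Task StartAsync()
    {
        // Replace the default 'accept everything' rule with a filter for the messages we care about
        await _subscriptionClient.RemoveRuleAsync(RuleDescription.DefaultRuleName);
        await _subscriptionClient.AddRuleAsync(
            new RuleDescription("ProductNameChanged", new SqlFilter("MessageType = 'ProductNameChanged'")));

        _subscriptionClient.RegisterMessageHandler(HandleMessageAsync,
            new MessageHandlerOptions(OnExceptionAsync) { AutoComplete = true });
    }

    private Task HandleMessageAsync(Message message, CancellationToken token)
    {
        // Update the Billing-service's own data store with the new product name
        var body = Encoding.UTF8.GetString(message.Body);
        Console.WriteLine($"Received product change: {body}");
        return Task.CompletedTask;
    }

    private Task OnExceptionAsync(ExceptionReceivedEventArgs args)
    {
        Console.WriteLine(args.Exception.Message);
        return Task.CompletedTask;
    }
}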

A simplified diagram of the system I was designing a couple of years ago looks very similar to the following picture.

image

Over here I’ve highlighted the Messages Service API in green to make it stand out. This service handles everything related to messaging (sending, retrieving, marking as read, archiving, etc.).

Whenever an action is triggered within this service, it updates both its own persistent storage (the repository) and its cache. When both of these repositories are updated, a message is sent to the service bus. This message contains some details on the action executed, which might be useful to other services. Not all information sent will be useful to all interested services, but it’s not the responsibility of the Messages Service to keep track of this. The Messages Service only takes care of itself and passes along some information about the executed action in case some other service is interested in it.

In the diagram above you can see a Messages Service backend, which keeps track of all mutations of Messages and stores them in some kind of reporting repository. There are also a couple of other worker services and an API service, which are interested in Message mutations. Each of these services can subscribe to specific or all Message actions posted on the service bus.
None of the services are dependent on each other, but this way they can still share data. If Interested worker system #1 sends some kind of message to the service bus later on, any other service can act upon this message as well.
Most service bus systems will allow you to filter on certain properties of the messages sent, which means your service will only act on messages it’s interested in.
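With Azure Service Bus Topics this filtering is done with rules on a subscription. Again a rough sketch with made-up names, assuming the AzureRM.ServiceBus module and a custom MessageType property set on the messages being sent; verify the parameter names for your module version.

# Only deliver messages with a matching MessageType property to the billing-service subscription.
New-AzureRmServiceBusRule -ResourceGroupName 'my-resource-group' -NamespaceName 'my-servicebus-namespace' -TopicName 'product-mutations' -SubscriptionName 'billing-service' -Name 'product-updates-only' -SqlExpression "MessageType = 'ProductUpdated'"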
Also note that the Messages Service API subscribes itself to any Message mutations posted on the service bus as well. This means it can update its own repository and cache, if necessary, when messages are retrieved from the service bus. This might be useful if any other service needs to post something which the Messages Service API will have to return to connected clients.
You will understand why this might be necessary when implementing a Backend for Frontend system. I’d try to avoid this in the beginning, but when your solution grows and you want more specific APIs for different clients, this will become more important.

Wow, while typing this I noticed how confusing this Message scenario might be, because I’m writing about Messages (with a capital M) as actual object types and messages (lowercase) as any type of object posted as a ‘message’ on the service bus. Excuse me if this strikes you as confusing.

What about latency?

This is one of the most commonly heard complaints when designing or talking about a microservices architecture. Because there are so many services, each doing their own thing, people expect there to be a lot of latency and assume retrieving data will be quite slow.

If this is the case in your solution, you’ve probably made a few mistakes in the overall design. When following the design from the example above, you’ll see each service has a direct connection to a repository (and probably even a cache) nearby. This repository is designed specifically for this service, which means it should be blazing fast when querying or updating the data store. Overall, I think this design should be faster compared to your traditional (monolithic?) design, because the data sits close to the service tier, so it doesn’t have to ‘travel’ very far.

Remember, your service should be able to run independently of other services. So, effectively, there is no added cross-service latency as long as a service only needs its own data.

If your service is dependent on information from other services, it will retrieve that information via the service bus. Retrieving information via the service bus (or any other messaging system) does introduce latency, most of the time just a couple of milliseconds. We can’t do much about this, aside from keeping it in mind when designing the system. None of the services will have (true) real-time access to updated data from a different service, but all of the services will be eventually consistent. It’s up to you to determine whether a latency of multiple milliseconds (maybe even seconds in a worst-case scenario) is something your services can afford when updating their repository data.

Managing the messages and system health

This is the hard part when designing, developing and maintaining a microservices architecture.

When having hundreds of small services, each doing their own little thing, it’s important to have proper management tooling in place. Most major cloud providers offer such services, like Azure Monitor and Azure Application Insights.

In order to properly use these services you have to implement some logging mechanism in your services. Logging is very important when developing this kind of solution architecture. No one wants to check each service individually just to find out whether it has any errors.
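Creating a place to send that telemetry to is the easy part. Below is a small sketch using the AzureRM.ApplicationInsights cmdlets to create an Application Insights resource and read its instrumentation key; the names are placeholders and wiring up the actual telemetry client inside each service is still up to you.

# Create an Application Insights resource and grab the instrumentation key the services will log against.
$appInsights = New-AzureRmApplicationInsights -ResourceGroupName 'my-resource-group' -Name 'messages-service-insights' -Location 'West Europe'
$appInsights.InstrumentationKey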

With the logging in place you will be able to create awesome dashboards telling you whether something is wrong or not. Even the latency and load of your services can be measured. Still, this requires some practice and you shouldn’t think too lightly about this matter. Keeping a good overview of your overall system performance and health is very, very important!

These operational insights are not something you should start considering near the end of the project. Creating logs and other insightful information should be implemented right from the beginning if you want useful and meaningful information.

To conclude

Implementing a microservices solution can be quite a pain in the beginning, but once you get the hang of it you will notice it will become easier. I’m a big fan of this kind of software design, if implemented properly!

There are dozens of other factors you have to consider which I haven’t mentioned in this post. If you like, I can dedicate some other posts on these matters, just drop me a comment and I’ll see what I can write up.

It has become increasingly important to have your site secured via some kind of certificate. Even your Google ranking is affected by it.

The main problem with SSL/TLS certificates is the fact that most of them cost money. Now, I don’t have a problem with paying some money for something like a certificate, but it would cost quite a lot if I wanted to set this up for all of my sites and domains. In theory it’s possible to create a self-signed certificate and publish your site with it, but that’s not a very good idea, as no one besides yourself trusts a self-signed certificate.

Luckily Let’s Encrypt is helping us, poor content creators, out. Let’s Encrypt is a rather new Certificate Authority, backed by Mozilla among others, which offers a free, open and automated service to create certificates. Their Getting Started guide contains some details on how to set this up for your website or hosting provider.

This is all fun and games, but when hosting your site(s) in the Azure App Service ecosystem you can’t do much with the steps defined in the Getting Started guide. At least, I couldn’t make any sense of them.

There’s a developer who has been so kind to create a so-called site extension for Azure App Service, named Azure Let’s Encrypt. It comes in two flavors, for x86 and x64 systems. Depending on which platform you have deployed your site to, you need to install the one corresponding to that platform.

image

In order to access these site extensions you’ll have to log in to the Kudu environment of your site (https://yourAzureSite.scm.azurewebsites.net/SiteExtensions/#gallery).

Once it’s installed you can launch the extension, which will take you to its configuration area. This screen shows a number of fields, most of which have to be filled with the correct data.

image

This form can look quite impressive if you are not familiar with these things. I wasn’t very familiar with these terms either, but Nik Molnar has a nice post with some details on the matter.

He mentions you should first create two new application settings in the Azure Portal for the App Service you want to enable SSL on. The names of these settings are AzureWebJobsStorage and AzureWebJobsDashboard. Both should contain a connection string to your Azure Storage account, which will look something like the following: DefaultEndpointsProtocol=https;AccountName=[storage account name];AccountKey=[storage account key]. The storage account is necessary for the WebJobs the site extension creates in order to refresh the SSL certificate.
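If you don’t feel like copying the storage account key from the portal by hand, you can compose the connection string in PowerShell as well. A small sketch, assuming the AzureRM module and placeholder names for the resource group and storage account; on older versions of the module the output of Get-AzureRmStorageAccountKey is shaped slightly differently.

$storageResourceGroup = 'my-storage-resource-group'
$storageAccountName = 'mystorageaccount'

# Retrieve the primary key of the storage account and build the connection string from it.
$key = (Get-AzureRmStorageAccountKey -ResourceGroupName $storageResourceGroup -Name $storageAccountName)[0].Value
$connectionString = "DefaultEndpointsProtocol=https;AccountName=$storageAccountName;AccountKey=$key"
$connectionString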

Next up is the hardest part: creating a Service Principal and retrieving a Client Id and Client Secret for it. In this context, a Service Principal can be seen as an identity (a kind of service account) the site extension uses to authenticate against your Azure subscription.
If you already have a Service Principal you can reuse, you can skip the step of creating one. I still had to add one to my subscription. The following script creates the Azure AD application it will be based on.

# The identifier URI doesn't have to resolve to anything; it just has to be unique within your tenant.
$uri = 'http://mysubdomain.mydomain.nl'
$password = 'SomeStrongPassword'

# Create the Azure AD application. Note: depending on your AzureRM version, -Password may expect a SecureString instead of a plain string.
$app = New-AzureRmADApplication -DisplayName 'MySNP' -HomePage $uri -IdentifierUris $uri -Password $password

Needless to say, your PowerShell session needs to be logged in to your Azure subscription before you are able to run this command.

You are now ready to create the Service Principal for this application and assign it a role with the following commands.

# Create the service principal for the application.
New-AzureRmADServicePrincipal -ApplicationId $app.ApplicationId

# Grant it the Contributor role. If this fails right away, wait a few seconds; the new service principal can take a moment to propagate.
New-AzureRmRoleAssignment -RoleDefinitionName Contributor -ServicePrincipalName $app.ApplicationId

These commands create the Service Principal for the application and grant it the Contributor role, so it has enough permissions to install and configure certificates.
Make sure you write down the $app.ApplicationId, as it will be used as the Client Id in the site extension later on.
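While you are still logged in to this PowerShell session, you can also look up the subscription id and tenant you will need in a minute. A small sketch; on older AzureRM versions the properties on the context object are named slightly differently.

# Show the subscription and tenant of the current Azure context.
$context = Get-AzureRmContext
$context.Subscription.Id   # goes into letsencrypt:SubscriptionId
$context.Tenant.Id         # the tenant id; the settings below use the AAD domain name (yoursubscription.onmicrosoft.com) for letsencrypt:Tenant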

Now that we have all the information to configure the Let’s Encrypt site extension, we are ready to set it up for our App Service. In order to make this even easier, configure the following application settings in your App Service.

letsencrypt:Tenant - your AAD domain (yoursubscription.onmicrosoft.com)
letsencrypt:SubscriptionId - your Azure subscription id (can be found on the Overview page of the App Service)
letsencrypt:ClientId - the $app.ApplicationId we saved earlier
letsencrypt:ClientSecret - the $password value used in the earlier script
letsencrypt:ResourceGroupName - the name of the resource group your App Service is created in
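You can add these settings by hand in the portal, but scripting them keeps things repeatable. Below is a rough sketch using the AzureRM web app cmdlets with placeholder names; note that Set-AzureRmWebApp replaces the whole app settings collection, which is why the existing settings are copied over first.

$resourceGroup = 'my-resource-group'
$appName = 'my-app-service'

# Copy the existing app settings, because Set-AzureRmWebApp overwrites the complete collection.
$webApp = Get-AzureRmWebApp -ResourceGroupName $resourceGroup -Name $appName
$settings = @{}
foreach ($setting in $webApp.SiteConfig.AppSettings) { $settings[$setting.Name] = $setting.Value }

# Add the Let's Encrypt settings from the list above.
$settings['letsencrypt:Tenant'] = 'yoursubscription.onmicrosoft.com'
$settings['letsencrypt:SubscriptionId'] = '<your subscription id>'
$settings['letsencrypt:ClientId'] = $app.ApplicationId.ToString()
$settings['letsencrypt:ClientSecret'] = $password
$settings['letsencrypt:ResourceGroupName'] = $resourceGroup

Set-AzureRmWebApp -ResourceGroupName $resourceGroup -Name $appName -AppSettings $settings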

After installing the site extension, navigate to the configuration page. If all is set up correctly, the fields will all be prefilled with the correct information.

image

Check the settings and adjust them if necessary. When you are sure everything is set up correctly, proceed to the next page.

This next page isn’t very interesting at this time, so we can continue to the final page of the wizard.

image

Nothing very special over here, just make sure to fill out your e-mail address so you are able to receive notifications from Let’s Encrypt when necessary.

Also good to note: don’t check the Use Staging option. By checking this box the extension will use the staging (test) API of Let’s Encrypt and will not create a valid certificate for you.

Press the big blue button and your site will be available with the certificate after a few moments. The extension uses the challenge-response system of Let’s Encrypt to create a certificate for you. This means it will create a couple of directories in the wwwroot folder of your App Service. This folder structure will look like .well-known\acme-challenge.

image

If this succeeds, Let’s Encrypt is able to create your certificate.

The challenge-response folders are hard-coded in the extension. If you run your site in a subfolder, like public, website, build, etc., you have to specify this via a special variable. You can add the letsencrypt:WebRootPath key to the application settings and specify the site folder as the value, for example site\wwwroot\public. This is very important to remember. I had forgotten about this on one of my sites and couldn’t figure out why the creation of the SSL certificate didn’t work.

Now that you know how to secure your sites with a free certificate, go set this up right away!