So, one of my previous customers reached out to me a couple of weeks ago with a question about how to use dependency injection in their AutoMapper profiles. For this project the profiles were dynamically loaded inside the application using MEF, and Autofac was used for dependency injection.

The way you would normally load all of these profiles is by using the `AddProfiles` method when initializing AutoMapper. The code would look similar to the following excerpt.

private static void RegisterAutomapperDefault(IEnumerable<Assembly> assemblies)
{
    AutoMapper.Mapper.Initialize(cfg =>
    {
        cfg.AddProfiles(assemblies);
    });
}

This works fine on most occasions and is, to my knowledge, the recommended approach.

When you start thinking about using dependency injection (constructor injection in this case), you might want to rethink your mapping profile first. If you need dependencies when mapping the properties of one object to the properties of another, it probably means there’s too much logic going on in the mapping.

Of course, if you do need this, one thing you might want to consider is using custom type converters or custom value resolvers. You can use dependency injection (constructor injection) in these converters and resolvers by adding a single line in the `Initialize` method of AutoMapper.

private static void RegisterAutomapperDefault(IContainer container, IEnumerable<Assembly> assemblies)
{
    AutoMapper.Mapper.Initialize(cfg =>
    {
        cfg.ConstructServicesUsing(container.Resolve);

        cfg.AddProfiles(assemblies);
    });
}

Now if you still feel like you need to do constructor injection inside your mapping `Profile` classes, that’s also quite possible, but please think it through before doing so.

In order to get this working, I first created a new `Profile` class which injects an `IConvertor`, like below.

public class MyProfile : Profile
{
    public MyProfile(IConvertor convertor)
    {
        CreateMap<Model, ViewModel>()
            .ForMember(dest => dest.Id, opt => opt.MapFrom(src => src.Identifier))
            .ForMember(dest => dest.Name, opt => opt.MapFrom(src => convertor.Execute(src.SomeText)))
            ;
    }
}

What you need to do now is register all of the `Profile` implementations with your IoC framework, in this case Autofac. To do this, you have to do some reflection magic. The code below retrieves all `Profile` implementations from the assemblies whose names start with “Some”.

public static IContainer Autofac()
{
    var containerBuilder = new ContainerBuilder();

    // Register the dependencies...
    containerBuilder.RegisterType<Convertor>().As<IConvertor>();


    var loadedProfiles = RetrieveProfiles();
    containerBuilder.RegisterTypes(loadedProfiles.ToArray());

    var container = containerBuilder.Build();

    RegisterAutoMapper(container, loadedProfiles);

    return container;
}

/// <summary>
/// Scan all referenced assemblies to retrieve all `Profile` types.
/// </summary>
/// <returns>A collection of <see cref="AutoMapper.Profile"/> types.</returns>
private static List<Type> RetrieveProfiles()
{
    var assemblyNames = Assembly.GetExecutingAssembly().GetReferencedAssemblies()
        .Where(a => a.Name.StartsWith("Some"));
    var assemblies = assemblyNames.Select(an => Assembly.Load(an));
    var loadedProfiles = ExtractProfiles(assemblies);
    return loadedProfiles;
}

private static List<Type> ExtractProfiles(IEnumerable<Assembly> assemblies)
{
    var profiles = new List<Type>();
    foreach (var assembly in assemblies)
    {
        var assemblyProfiles = assembly.ExportedTypes.Where(type => type.IsSubclassOf(typeof(Profile)));
        profiles.AddRange(assemblyProfiles);
    }
    return profiles;
}

All of this code is just to register your mapping profiles with Autofac, so Autofac can resolve them when initializing AutoMapper. To register your mapping profiles in AutoMapper you need to use a specific overload of the `AddProfile` method which takes a `Profile` instance instead of a type.

/// <summary>
/// Over here we iterate over all <see cref="Profile"/> types and resolve them via the <see cref="IContainer"/>.
/// This way the `AddProfile` method will receive an instance of the found <see cref="Profile"/> type, which means
/// all dependencies will be resolved via the <see cref="IContainer"/>.
/// </summary>
private static void RegisterAutoMapper(IContainer container, IEnumerable<Type> loadedProfiles)
{
    AutoMapper.Mapper.Initialize(cfg =>
    {
        cfg.ConstructServicesUsing(container.Resolve);
                
        foreach (var profile in loadedProfiles)
        {
            var resolvedProfile = container.Resolve(profile) as Profile;
            cfg.AddProfile(resolvedProfile);
        }
                
    });
}

You can see I’m resolving all of the loaded profiles via Autofac and adding each resolved instance to AutoMapper.

This takes quite a bit of effort, but resolving your profiles like this will give you the possibility to do any kind of dependency injection inside your AutoMapper code.


Just remember, as I’ve written before: “Just because you can, doesn’t mean you should!”
Still, I wanted to show you how this can be done, as it’s kind of cool. If you want to check out the complete solution, head over to my GitHub repository for this project.

You might remember me writing a post on how you can set up your site with SSL while using Let’s Encrypt and Azure App Services.

Well, as it goes, the same post applies to Azure Functions. You just have to do some extra work for it, but it’s not very hard.

Simon Pedersen, the author of the Azure Let’s Encrypt site extension, has done some work in explaining the steps on his GitHub wiki page. This page is based on some old screenshots, but it still applies.

The first thing you need to do is create a new function which will be able to do the ACME challenge. This function will look something like this.

public static class LetsEncrypt
{
    [FunctionName("letsencrypt")]
    public static HttpResponseMessage Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = "letsencrypt/{code}")]
        HttpRequestMessage req, 
        string code, 
        TraceWriter log)
    {
        log.Info($"C# HTTP trigger function processed a request. {code}");

        var content = File.ReadAllText(@"D:\home\site\wwwroot\.well-known\acme-challenge\" + code);
        var resp = new HttpResponseMessage(HttpStatusCode.OK);
        resp.Content = new StringContent(content, System.Text.Encoding.UTF8, "text/plain");
        return resp;
    }
}

As you can see, this function reads the ACME challenge file from the disk of the App Service it is running on and returns its content. Because Azure Functions run in an App Service (even the functions in a Consumption plan), this is perfectly possible. The principal (created in the earlier post) can create these types of files, so everything will work just fine.

This isn’t all we have to do, because the URL of this function is not the URL the ACME challenge will use to retrieve the appropriate response. In order to actually use this site extension you need to add a new proxy to your Function App. Proxies are still in preview, but very usable! The proxy you have to create will redirect the URL `/.well-known/acme-challenge/[someCode]` to your Azure Function. The end result will look something like the following proxy.

"acmechallenge": {
  "matchCondition": {
    "methods": [ "GET", "POST" ],
    "route": "/.well-known/acme-challenge/{rest}"
  },
  "backendUri": "https://%WEBSITE_HOSTNAME%/api/letsencrypt/{rest}"
}
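
For reference, this entry lives in the proxies.json file of your Function App, under the top-level proxies object. A minimal sketch of the complete file:

{
  "$schema": "http://json.schemastore.org/proxies",
  "proxies": {
    "acmechallenge": {
      "matchCondition": {
        "methods": [ "GET", "POST" ],
        "route": "/.well-known/acme-challenge/{rest}"
      },
      "backendUri": "https://%WEBSITE_HOSTNAME%/api/letsencrypt/{rest}"
    }
  }
}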

Publish your new function and proxy to the Function App and you are good to go!

If you haven’t done this before, be sure to follow all of the steps mentioned in the earlier post! Providing the appropriate application settings should be easy now and if you just follow each step of the wizard you’ll see a green bar when the certificate is successfully requested and installed!

This makes my minifier service even more awesome, because now I can finally use HTTPS without getting warnings that the certificate isn’t valid.

(Almost) no one likes writing code meant to store data in a repository, a queue or a blob, let alone code to trigger your logic when some event occurs in one of those places. Luckily for us the Azure Functions team has decided to use bindings for this.
By leveraging the power of bindings, you don’t have to write your own logic to store or retrieve data. Azure Functions provides all of this functionality out of the box!

Bindings give you the possibility to retrieve data (strongly typed if you want) from HTTP calls, blob storage events, queues, Cosmos DB events, etc. Not only does this work for input, but also for output. Say you want to store some object in a queue or repository, you can just use an output binding in your Azure Function to make this happen. Awesome, right?

Most of the documentation and blog posts out there state you should define your bindings in a file called `function.json`. An example of these bindings is shown in the block below.

{
  "bindings": [
    {
      "name": "order",
      "type": "queueTrigger",
      "direction": "in",
      "queueName": "myqueue-items",
      "connection": "MY_STORAGE_ACCT_APP_SETTING"
    },
    {
      "name": "$return",
      "type": "table",
      "direction": "out",
      "tableName": "outTable",
      "connection": "MY_TABLE_STORAGE_ACCT_APP_SETTING"
    }
  ]
}

The above sample specifies an input binding for a queue and an output binding for Table Storage. While this works perfectly, it’s not the way you want to implement this when using C# (or F# for that matter), especially if you are using Visual Studio!

How to use bindings with Visual Studio

To set up a function binding via Visual Studio you just have to specify some attributes on the input parameters of your code. These attributes will make sure the `function.json` file is created when the code is compiled.

After creating your first Azure Function via Visual Studio you will get a function with these attributes immediately. For my URL Minifier solution I’ve used the following HttpTrigger.

[FunctionName("Get")]
public static async Task<HttpResponseMessage> Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "{slug}")] HttpRequestMessage req, string slug,
    TraceWriter log)

Visual Studio (well, actually the Azure Functions tooling) will make sure this gets translated to a binding block which looks like this.

"bindings": [
  {
    "type": "httpTrigger",
    "route": "{slug}",
    "methods": [
      "get"
    ],
    "authLevel": "anonymous",
    "name": "req"
  }
],

You can do this for every type of trigger which is available at the moment.
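
For instance, the queue trigger from the function.json sample shown earlier would look roughly like this when expressed with attributes (`Order` being a hypothetical POCO):

[FunctionName("ProcessOrder")]
public static void Run(
    [QueueTrigger("myqueue-items", Connection = "MY_STORAGE_ACCT_APP_SETTING")] Order order,
    TraceWriter log)
{
    // The queue message is deserialized into the Order object for us.
    log.Info("C# queue trigger processed an order.");
}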

Sadly, this type of development hasn’t been described much in the various blog posts and documentation, but with a bit of effort you can figure out how to implement most bindings yourself.

I haven’t worked with all of the different types of bindings yet.
One which I found quite hard to implement was the output binding for a Cosmos DB repository, though in hindsight it was rather easy once you know what to look for. What worked for me was creating an Azure Function via the portal first and checking which type of binding it uses. This way I found out that for a Cosmos DB output binding you need to use the `DocumentDBAttribute`. This attribute needs a couple of arguments, like the database name, the collection name and of course the actual connection string. After providing all of the necessary information your Cosmos DB output binding should look something like the one below.

[FunctionName("Create")]
public static HttpResponseMessage Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "create")]HttpRequestMessage req, 
    [DocumentDB("TablesDB", "minified-urls", ConnectionStringSetting = "Minified_ConnectionString", CreateIfNotExists = true)] out MinifiedUrl minifiedUrl,
    TraceWriter log)

Notice I had to remove the `async` keyword? That’s because you can’t use `async` if there is an out-parameter.
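
If you do want to keep the method `async`, an alternative sketch (using the same attribute) is to bind the output to an `IAsyncCollector<MinifiedUrl>` instead of an out parameter:

[FunctionName("Create")]
public static async Task<HttpResponseMessage> Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "create")]HttpRequestMessage req,
    [DocumentDB("TablesDB", "minified-urls", ConnectionStringSetting = "Minified_ConnectionString", CreateIfNotExists = true)] IAsyncCollector<MinifiedUrl> minifiedUrls,
    TraceWriter log)

Adding a document then becomes a matter of calling `await minifiedUrls.AddAsync(minifiedUrl);`.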

The thing I had the most trouble with was finding out which value should be in the `ConnectionStringSetting`. If you head down to the Connection String tab of your Cosmos DB in the Azure portal you will find a connection string in the following format.

DefaultEndpointsProtocol=https;AccountName=[myCosmosDb];AccountKey=[myAccountKey];TableEndpoint=https://[myCosmosDb].documents.azure.com

If you use this setting, you’ll be greeted with a `NullReferenceException` for a `ServiceEndpoint`. After having spent quite a bit of time troubleshooting this issue, I figured the problem was probably the value I used in the `ConnectionStringSetting`.
Having tried a couple of things I finally discovered you have to specify the setting as follows:

AccountEndpoint=https://[myCosmosDb].documents.azure.com:443/;AccountKey=[myAccountKey];

Running the function will work like a charm now.

I’m pretty sure this will not be the only ‘quirk’ you will come across when using the bindings, but as long as we can all share the information it will become easier in the future!

Where will I store the secrets?

When using attributes you can’t write code to retrieve your secrets at runtime, because attribute values have to be compile-time constants. Well, the team has you covered!

You can just use your regular application settings, as long as you stick to a naming convention where the values are uppercase and use underscores for separation. So instead of hardcoding the values “TablesDB” and “minified-urls” inside my earlier code snippet, one can also use the following.

[FunctionName("Create")]
public static HttpResponseMessage Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "create")]HttpRequestMessage req, 
    [DocumentDB("MY-DATABASE", MY-COLLECTION", ConnectionStringSetting = Minified_ConnectionString", CreateIfNotExists = true)] out MinifiedUrl minifiedUrl,
    TraceWriter log)

By convention, the actual values will now be retrieved via the application settings.
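
In other words, your application settings would contain entries along these lines (a sketch, keeping the placeholders used earlier):

MY_DATABASE = TablesDB
MY_COLLECTION = minified-urls
Minified_ConnectionString = AccountEndpoint=https://[myCosmosDb].documents.azure.com:443/;AccountKey=[myAccountKey];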

Awesome!

Yeah, but Application Settings aren’t very secure

True!

I’ve already written about this in an earlier post. While Application Settings are fine for storing some configuration data, you don’t want to put secrets in there. Secrets should be stored inside Azure Key Vault.

Of course, you can’t use Azure Key Vault in these attributes.
Lucky for us, the Azure Functions team still has us covered, with an awesome feature called imperative bindings! The sample code is enough to get us cracking on creating a binding where the connection secrets are still stored inside Azure Key Vault (or somewhere else for that matter).

Because I’m using a Cosmos DB connection, I need to specify the `DocumentDBAttribute` inside the `Binder`. Something else you should note: when you want an output binding, you can’t just create a binding to a `MinifiedUrl` object. If you only specify the object type, the `Binder` will assume it’s an input binding.
If you want an output binding, you need to specify the binding as an `IAsyncCollector<T>`. Check out the code below to see what you need to do in order to use the `DocumentDBAttribute` in combination with imperative bindings.

// Retrieving the secret from Azure Key Vault via a helper class
var connectionString = await secret.Get("CosmosConnectionStringSecret");
// Setting the AppSetting run-time with the secret value, because the Binder needs it
ConfigurationManager.AppSettings["CosmosConnectionString"] = connectionString;

// Creating an output binding
var output = await binder.BindAsync<IAsyncCollector<MinifiedUrl>>(new DocumentDBAttribute("TablesDB", "minified-urls")
{
    CreateIfNotExists = true,
    // Specify the AppSetting key which contains the actual connection string information
    ConnectionStringSetting = "CosmosConnectionString",
});

// Create the MinifiedUrl object
var create = new CreateUrlHandler();
var minifiedUrl = create.Execute(data);

// Adding the newly created object to Cosmos DB
await output.AddAsync(minifiedUrl);
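
For context, this snippet lives inside a function which receives the `Binder` as a regular parameter next to the trigger. As a sketch, the signature looks like this:

[FunctionName("Create")]
public static async Task<HttpResponseMessage> Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "create")]HttpRequestMessage req,
    Binder binder,
    TraceWriter log)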

As you can see, there’s a lot of extra code in the body of the function. We have to give up some of the simplicity in order to make the code and configuration a bit more secure, but that’s worth it in my opinion.

If you want to check out the complete codebase of this solution, please check out the GitHub repository, it contains all of the code.

I need more bindings!

Well, a couple of days ago there was an amazing announcement: you can now create your own bindings! Donna Malayeri has some sample code available on GitHub on how to create a Slack binding. There is also a documentation page in the making on how to create these types of bindings.

At this time this feature is still in preview, but if you need some binding which isn’t available at the moment, be sure to check this out. I can imagine this will become quite popular once it has been released. Just imagine creating bindings to existing CRM systems, databases, your own SMTP services, etc.

Awesome stuff in the making!

In the past couple of years the software industry has come a long way in professionalizing the development environment. One of the things which has improved significantly is automating builds and being able to continuously deploy software.

Having a continuous integration and deployment environment is the norm nowadays, which means I (and probably you as a reader too) want to have this when creating Azure Functions!

There are dozens of build servers and deployment tools available, but because Azure Functions are highly likely to be deployed in Microsoft Azure, it makes sense to use Visual Studio Team Services with Release Management. I’m not saying you can’t pull this off with any of the other deployment environments, but for me it doesn’t make sense because I already have a VSTS environment and this integrates quite well.

In order to deploy your Function App, the first thing you have to make sure of is having an environment (resource group) in your Azure subscription to deploy to. It is advised to use ARM templates for this. There is one big problem with ARM templates though: I genuinely dislike them. It’s something about the JSON, the long list of variables and ‘magic’ values you have to write down all over the place.
For this reason I first started checking out how to deploy Azure Functions using PowerShell scripts. In the past (3 to 4 years ago) I used a lot of PowerShell scripts to automatically set up and deploy my Azure environments. They are easy to debug, understand and extend. A quick search on the internet showed me the ‘new’ cmdlets you have to use nowadays to spin up a nice resource group and app service. Even though this looked like a very promising deployment strategy, it did feel a bit dirty and hacky.
In the end I decided to use ARM templates. Just because I dislike ARM templates doesn’t mean they are a bad thing per se. Also, I noticed these templates have become first-class citizens if you want to deploy software into Azure.

Creating your ARM template

If you are some kind of Azure wizard, you can probably create the templates by yourself. Most of us probably don’t have that level of expertise, so there’s an easier way to get you started.

What I do is head down to the portal, create a resource group with everything which is necessary, like the Function App, and extract the ARM template afterwards. Downloading the ARM template is somewhat hidden in the portal, but lucky for us, someone has already asked on Stack Overflow where to find this feature. Once you know where this functionality resides, it makes a bit more sense why the portal team has decided to put it over there.

First of all, you have to navigate to the resource group for which you want to extract an ARM template.

On this overview page you’ll see a link beneath the headline Deployments. Click on it and you’ll be navigated to a page listing all the deployments which have occurred on your resource group.

Just pick the one you are interested in. In our case it’s the deployment which has created and populated our Function App.

On the detail page of this deployment you’ll see some information which you have specified yourself while creating the Function App. There’s also the option to view the template which Azure has used to create your Function App.

Just click on this link and you will be able to see the complete template, along with the parameters used and, most importantly, the option to download the template!

After downloading the template you’ll see a lot of files in the zip file. You won’t be needing most of them, as they are helper files for deploying the template to Azure. Because we will be using VSTS, we only need the parameters.json and template.json files.

The template.json file contains all the information which is necessary for, in our case, the Function App. Below is the one used for my deployment.

{
    "$schema": "http://schema.management.azure.com/schemas/2014-04-01-preview/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "name": {
            "type": "String"
        },
        "storageName": {
            "type": "String"
        },
        "location": {
            "type": "String"
        },
        "subscriptionId": {
            "type": "String"
        }
    },
    "resources": [
        {
            "type": "Microsoft.Web/sites",
            "kind": "functionapp",
            "name": "[parameters('name')]",
            "apiVersion": "2016-03-01",
            "location": "[parameters('location')]",
            "properties": {
                "name": "[parameters('name')]",
                "siteConfig": {
                    "appSettings": [
                        {
                            "name": "AzureWebJobsDashboard",
                            "value": "[concat('DefaultEndpointsProtocol=https;AccountName=',parameters('storageName'),';AccountKey=',listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageName')), '2015-05-01-preview').key1)]"
                        },
                        {
                            "name": "AzureWebJobsStorage",
                            "value": "[concat('DefaultEndpointsProtocol=https;AccountName=',parameters('storageName'),';AccountKey=',listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageName')), '2015-05-01-preview').key1)]"
                        },
                        {
                            "name": "FUNCTIONS_EXTENSION_VERSION",
                            "value": "~1"
                        },
                        {
                            "name": "WEBSITE_CONTENTAZUREFILECONNECTIONSTRING",
                            "value": "[concat('DefaultEndpointsProtocol=https;AccountName=',parameters('storageName'),';AccountKey=',listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageName')), '2015-05-01-preview').key1)]"
                        },
                        {
                            "name": "WEBSITE_CONTENTSHARE",
                            "value": "[concat(toLower(parameters('name')), 'b342')]"
                        },
                        {
                            "name": "WEBSITE_NODE_DEFAULT_VERSION",
                            "value": "6.5.0"
                        }
                    ]
                },
                "clientAffinityEnabled": false
            },
            "dependsOn": [
                "[resourceId('Microsoft.Storage/storageAccounts', parameters('storageName'))]"
            ]
        },
        {
            "type": "Microsoft.Storage/storageAccounts",
            "name": "[parameters('storageName')]",
            "apiVersion": "2015-05-01-preview",
            "location": "[parameters('location')]",
            "properties": {
                "accountType": "Standard_LRS"
            }
        }
    ]
}

A fairly readable JSON file, aside from all the magic API versions, types, etc.

The contents of the parameters.json file are a bit more understandable. It contains the key-value pairs which are being referenced in the template file.

{
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "name": {},
        "storageName": {},
        "location": {},
        "subscriptionId": {}
    }
}

The template file uses the format parameters('name') to reference a parameter from the parameters.json file.

These files are important, so you want to add them somewhere next to or inside the solution where your functions reside. Be sure to add them to source control, because you’ll need these files in VSTS later on.

For now the above template file is fine, but it’s more awesome to add something to it for a personal touch. I’ve done this by adding a new appSetting in the file.

"appSettings": [
    // other entries
    {
        "name": "MyValue",
        "value": "[parameters('myValue')]"
    }

Also, don’t forget to add myValue to the parameters file and to the parameters block at the top of the template file, otherwise you won’t be able to use it. A sketch of both additions follows below.
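
In template.json, under "parameters", you add the declaration:

"myValue": {
    "type": "String"
}

and in parameters.json a matching (empty) entry:

"myValue": {}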

In short, if you want to use continuous deployment for your solution, use ARM templates and get started by downloading them from the portal. Now let’s continue to the fun part!

Set up your continuous integration for the Functions!

Setting up the continuous integration of your software solution is actually the easy part! VSTS has matured quite a lot over time, so all you have to do is pick the right template, point it to the right sources and you are (almost) done.

Picking the correct template is the hardest part. You have to pick the ASP.NET Core (.NET Framework) template. If you choose a different template and are unfamiliar with VSTS, you will struggle setting it up.

This template contains all the useful steps and settings you need to build and deploy your Azure Functions.

It should be quite easy to configure these steps. You can integrate VSTS with every popular source control provider. I’m using GitHub, so I’ve configured it so VSTS can connect to the repository.

Note I’ve also selected the Clean option, because I stumbled across some issues when deploying the sources. Those errors were totally my fault, so you can just keep it turned off.

The NuGet restore step is pretty straightforward and you don’t have to change anything about it.

The next step, Build solution, is the most important one, because it will not only build your solution but also create an artifact from it. The default setting is already set up properly, but for completeness I’ve added it below. This tells MSBuild to create a package called WebApp.zip after building the solution.

/p:DeployOnBuild=true /p:WebPublishMethod=Package /p:PackageAsSingleFile=true /p:SkipInvalidConfigurations=true /p:DesktopBuildPackageLocation="$(build.artifactstagingdirectory)\WebApp.zip" /p:DeployIisAppPath="Default Web Site"

The next important step is Publish Artifact.
You don’t really have to change anything over here, but it’s good to know where your artifacts get published after the build.

Of course, you can change stuff over here if you really want to.

One thing I neglected to mention is the build agent you want to use. The best agent to build your Azure Function on (at the moment) is the Hosted VS2017 agent.

This agent is hosted by Microsoft, so you don’t have to configure anything for it, which makes it very easy to use. Having this build agent hosted by Microsoft also means you don’t have any control over it, so if you want to build something on a .NET framework version which isn’t supported (yet), you have to set up your own build agent.

When you are finished setting up the build tasks be sure to add your repository trigger to the build.

If you forget to do this the build will not be triggered automatically whenever someone pushes to the repository.

That’s all there is to it for setting up your continuous integration for Azure Functions. Everything works out of the box, if you select the correct template in the beginning.

Deploy your Azure Functions continuously!

Now that we have the continuous integration build in place we are ready to deploy the builds. If you are already familiar with Release Management it will be fairly easy to deploy your Azure Functions to Azure.

I had zero experience with Release Management, so I had to find out the hard way!

The first thing you want to do when creating a new release pipeline is add the necessary artifacts. This means adding the artifacts from your CI build, where the source type will be Build; all other options speak for themselves.

The next, not so obvious, artifact is the repository where your parameters.json and template.json files are located. These files aren’t stored in the artifact file from the build, so you have to retrieve them some other way.

Lucky for us we are using a GitHub repository, and Release Management has a source type available called GitHub. Therefore we can just add a new source and configure it to point to the GitHub location of choice.

This will make sure the necessary template.json and parameters.json files are available when deploying the Azure Functions.

Next up is adding the environments to your pipeline. In my case I wanted to have a different environment for each slot (develop & production), but I can imagine this will differ per situation. Most of the customers I meet have several Azure subscriptions, each meant to facilitate the work for a specific stage (Dev, Test, Acceptance, Production). This isn’t the case in my setup; everything is nice and cozy in a single subscription.

Adding an environment isn’t very hard: just add a new one and choose the Azure App Service Deployment template.

There are dozens of other templates which are all very useful, but not necessary for my little automated deployment pipeline.

Just fill out the details in the Deploy Azure App Service task and you are almost done.

The main thing to remember is to select the zip file which was created as an artifact from our CI build and to check the Deploy to slot option, as we want to deploy these Azure Functions to the develop slot.

If you are satisfied with this, good! But remember we still have the ARM template?

Yes, we want to make sure the Azure environment is up and running before we deploy our software. Because of this, you have to add one task to this phase, called Azure Resource Group Deployment.

This is the task where we need our linked artifacts from the GitHub repository.

The paths to the Template and Template parameters are the most important settings in this step, as these will make sure your Azure environment (resource group) is set up correctly.

The easiest way to get the correct path is to use the modal dialog which appears when you press the button behind the input box.

One thing you might notice over here is the option to Override template parameters. This is the main reason why you want to use VSTS Release Management (or any other deployment server). All this boilerplating is done so we can specify the parameters (secrets) for each environment, without having to store them in some kind of repository.

Just to test it I’ve overridden one of the parameters, myValue, with the text “VSTS Value” to make sure the updating actually happens.
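
In the Override template parameters box this boils down to a single line; the task uses a PowerShell-style `-name value` syntax:

-myValue "VSTS Value"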

Another thing to note is that I’ve set the Deployment mode to Incremental, as I just want to update my deployments, not create a completely new Function App.

All of the other options don’t need much explanation at this time.

One thing I have failed to mention is adding the continuous deployment trigger to the pipeline. In your pipeline, click on the Trigger circle and enable it.

This will make sure each time a build has succeeded, a new deployment will occur to the Development slot (in my case).

This is all you need to know to deploy your Azure Functions (or any other Azure App Service for that matter). For the sake of completeness it makes sense to add another environment to your pipeline, call it Production, and make sure the same artifacts get deployed to the production slot. This environment and its tasks will look very similar to the Develop environment, so I won’t repeat the steps over here. Just remember to choose the correct override parameters when deploying to the production slot. You don’t want to mess this up.

Now what?

The continuous integration & deployment steps are finished, so we can start deploying our software. If you already have some CI builds, you can create a new release in the releases tab.

This will be a manual release, but you can also choose to push some changes to your repository and make everything automated.

I’ve done a couple of releases to the develop environment already, which are all shown in the overview of the specific release.

Over in the portal you will also notice your Azure Functions will be in read-only mode, because continuous integration is turned on.

But, remember we added the MyValue parameter to our ARM template? It is now also shown inside the Application settings of our Function App!

This is an awesome way of storing secrets inside your release pipeline! Or even better: store your secrets in Azure Key Vault and add your Client Id and Client Secret to the Application Settings via the release pipeline, like I described in an earlier post.

I know I’ll keep using VSTS for my Azure Functions deployments from now on. It’s awesome and I can only recommend you do the same!

As with almost every application, there is a point where you have to work with some kind of secret, for example a connection string to a database. There are multiple ways to retrieve these secrets, and this isn’t any different with Azure Functions.

If you have set up a continuous deployment build within Visual Studio Release Management you can just substitute the values in your build, which makes it easy, transparent and consistent to add and change the values.

A different approach is to provide the secrets by yourself in the application settings of your App Service. These Application Settings are already used by the Azure Functions in order to specify some settings which are necessary to run properly.

While I have nothing against either one approach, there is a better option available to retrieve secrets in your cloud software. This option is called Azure Key Vault. The Azure Key Vault is a secure store which helps you safeguard keys and secrets used in your cloud applications.
One of the many advantages of using Azure Key Vault, compared to the alternatives, is having the possibility to revoke access to specific secrets for an application or user. This makes it much easier to lock down an application or user if it has been compromised. One other advantage is having a central location for all your keys & secrets and being able to update them when necessary.
As with almost any Azure service you get detailed logging of usage, which means your operations team is able to monitor the usage of the keys and secrets in more detail.

Setting up Azure Key Vault does have a bit of a learning curve, so I’ll post all of the necessary steps below.
There are two (proper) ways of working with the Azure Key Vault from within your application or Azure Function. One is by using a client secret, the other is to work with a certificate. While I do think working with certificates is more secure, I don’t use them often for my personal projects. The main reason being certificates will expire so you have to renew them, which requires more management, which is something I rarely want to do for side-projects. However, if you are building something which is going to run in production for a customer, do have a look at working with certificates!

What do we need?

If you are choosing to work with a client secret, you need 3 things.

A Client Id, a Client Secret and a URL to the location of your secret.

The Client Id and Client Secret will be stored within Azure Active Directory. The actual secret will, obviously, be stored within the Azure Key Vault.

One of the most common secrets we use in application development is a connection string to some kind of database. This is why I will be saving a Cosmos DB connection string in the Key Vault. Just create a Cosmos DB instance and retrieve the connection string from it. It should look similar to the one below.

DefaultEndpointsProtocol=https;AccountName=[myAccount];AccountKey=[myAccountKey];TableEndpoint=https://[myTableEndpoint].documents.azure.com

Setting up Azure Key Vault

Creating an Azure Key Vault works just like any other service in the Azure ecosystem. Either use PowerShell, an ARM template or the portal. In this post I’ll use the portal to explain stuff, but you can do it with whichever has your preference.

I’ve chosen the default options for a Key Vault. I guess the Standard tier is good enough for most people and projects. Do keep in mind to add yourself (or a different administrator) to the principals.

If you remove yourself from the Access policy principals you won’t be able to access the Key Vault, which can be troublesome if you want to configure it.

Wait until your Key Vault has been created and add your secret connection string to it.

Navigate to the Secrets blade and choose to Add the secret.

The Upload option is defaulted to Certificate, but in order to add our connection string we need to change this to Manual.

After changing the option you will see some fields which look familiar, Name and Value. Fill the Name field with a proper description of your secret and the Value with the actual secret.

For connection strings you don’t really need to set an activation or expiration date, but it’s a nice option to have if you are working with secrets which expire from time to time.

When the secret is created in the Vault you can click on it and see some details. A very cool feature is the option to create a ‘New Version’ of the secret. This makes a lot of sense if you have secrets which expire.

When you click on the current version of the secret another blade will show up with even more details.

This is the blade which holds all the interesting data. For starters it contains the `Secret Identifier`, which is the location of the secret value stored in the Key Vault. You can also check what the actual secret value is in this blade!

Each version of a secret has its own Secret Identifier. Because of this, it is very easy to lock down a specific application: just create a new version of the secret, update it in all other applications and disable the old Secret Identifier endpoint. Using this strategy, the application which hasn’t been updated will not be able to retrieve the secret anymore, and will therefore probably stop operating (correctly).

This is all we have to do in the Azure Key Vault, for now.

Configuring the Active Directory

In order to access the Key Vault you need to be some kind of approved principal. This can either be a user or an application. Because Azure Functions (or any other service) aren’t people, we will have to create applications in Azure Active Directory.

First, navigate to your Active Directory and select `App registrations`.

You don’t need to have an existing application yet. These steps are necessary to create the appropriate identifiers and secrets your application can use later on.
There aren’t any real rules for naming your application, but I saw some other examples post-fixing them with `-ad`, so I added this myself also. This is a convention to make it clear these are Active Directory configured applications.

For most services you will be hosting in the cloud, choosing the application type `Web app / API` is sufficient. The other option, `Native`, should only be used for applications which are installed on a system.

You might be surprised by the `Sign-on URL` at the bottom. For the purpose of storing and retrieving secrets you don’t really need this, so you can write whatever URL you want over here.

After creating the application you will see a blade with some details about it.

The Application Id you see in this overview is the Client Id, which is used later on for authentication with the Key Vault.

Now it’s time to create a (secret) key for retrieving the connection string.

Creating a key is one of the most important steps in the process.

The description of the key doesn’t really matter, just use something which makes it clear what it is used for. In our case, the expiry date can be set to Never, which will re-define itself to 31-12-2299. I won’t live to see this date, so it’s practically Never to me.

The Value will be created when pressing the Save button at the top. Copy this Value, because you will not be able to see it again. This value is the Secret Key you will need later on!

The configuration of your Active Directory application is now finished.

Back to the Key Vault

Now that we have an application present in the Active Directory, we can add this to the list of approved Principals in the Key Vault.

Navigate back to the Key Vault and search for the `Access policies` option.

Over here you have the option to add a new principal by searching for the application just created.

Make sure you limit the permission set you give this application. For now the application only needs to retrieve secrets from the Key Vault. Therefore, selecting the Get permission for Secrets is good enough.

Save the changes you made to the access policies and you are good to go!

The hardest part is over and we can start coding our Azure Function!

Creating the Azure Function

In an earlier post I already mentioned you can write your Azure Functions in the portal or in an IDE like Visual Studio. Because Visual Studio offers the best developer experience, I’ll be using it for all of my Azure Functions development.

I’ve tried to keep things as simple as possible, therefore not doing anything fancy in this sample. All this function does is retrieve the connection string from the Key Vault and return it to the requesting user.

using System.Configuration;
using System.Net;
using System.Net.Http;
using Microsoft.Azure.KeyVault;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Azure.WebJobs.Host;
using Microsoft.IdentityModel.Clients.ActiveDirectory;

namespace Minifier
{
    public static class Get
    {
        [FunctionName("Get")]
        public static HttpResponseMessage Run(
            [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "{slug}")] HttpRequestMessage req, string slug, TraceWriter log)
        {
            // The Application Id of the Azure AD application
            var clientId = ConfigurationManager.AppSettings["ClientId"];
            // The Value of the Key you created in the Azure AD application
            var clientSecret = ConfigurationManager.AppSettings["ClientKey"];
            // The Secret Identifier of your secret value
            var connectionstringUrl = ConfigurationManager.AppSettings["ConnectionstringUrl"];

            // Creating the Key Vault client
            var keyVault = new KeyVaultClient(async (authority, resource, scope) =>
            {
                var authContext = new AuthenticationContext(authority);
                var credential = new ClientCredential(clientId, clientSecret);
                var token = await authContext.AcquireTokenAsync(resource, credential);
                return token.AccessToken;
            });

            // Retrieving the connection string from Key Vault
            var connectionstring = keyVault.GetSecretAsync(connectionstringUrl).Result.Value;

            return req.CreateResponse(HttpStatusCode.OK, "The connection string is: " + connectionstring);
        }
    }
}

This is all you have to do in your function. The most difficult part is creating the `KeyVaultClient`, but you can just copy-paste this piece of sample code to get it up and running!
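
For reference, the function above expects three application settings. A sketch with placeholders, where the ConnectionstringUrl is the Secret Identifier copied from the Key Vault blade:

ClientId = [the Application Id of the AD application]
ClientKey = [the Key value copied when creating the key]
ConnectionstringUrl = https://[myVault].vault.azure.net/secrets/[mySecretName]/[version]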

Now you might think: “Hey, I see you are still retrieving data from the application settings, that’s not very safe, is it?”. Very observant indeed!
However, there isn’t a very good way around this. You still need to identify yourself with the Azure Key Vault, which means storing a secret somewhere. As I mentioned, the best thing is to add these settings somewhere in the delivery pipeline; this way as few people as possible know the actual values.
This is also one of the reasons why it is better to work with certificates. Certificates aren’t stored somewhere in plain text, but installed on a system. Because of this you can manage them a bit better, compared to secrets stored in a configuration file or the portal.

Still, if someone managed to ‘steal’ your client secrets and identifiers, you will be able to make sure they can’t do anything with it. Remove (or restrict) the principal in your Key Vault and the application will be ‘crippled’ right away.

That’s all for working with the Azure Key Vault in combination with Azure Functions. It’s not very different (if at all) from a normal web application, which makes it an even better solution for storing secrets in the cloud.