You might remember me writing about how to warm up your App Service instances when moving between slots. The applicationInitialization-element is supported on nearly every IIS web server nowadays and works great, until it doesn’t.

I’ve been working on a project which is designed as, what I’d like to call, a distributed monolith. To give you an oversimplified overview, here’s what we have.

image

First off we have a single page web application which communicates directly with an ASP.NET Web API, which in turn communicates with a backend WCF service, which in turn communicates with a bunch of other services. You can probably imagine I’m not very happy with this kind of design, but I can’t do much about it at the moment.

One of the problems with this design is the cold start whenever a service is being deployed.
Since we’re deploying continuously to test & production there are a lot of cold starts. Using the applicationInitialization-element really helped with spinning up our App Services, but we were still facing some slowness whenever the WCF service was deployed to any of our environments. This service is deployed to an ‘old-fashioned’ Cloud Service, so we figured the applicationInitialization-element should just work, as it’s still running on IIS.

After having added some logging to our warmup endpoints in the WCF service we quickly discovered these methods were never hit whenever the service was being deployed or swapped. Strange!

I started looking at the WCF configuration and, according to it, HTTP GET requests should just work as expected.

<behaviors>
 <serviceBehaviors>
  <behavior name="MyBehavior">
    <serviceMetadata httpsGetEnabled="true" httpGetEnabled="true" />
  </behavior>
 </serviceBehaviors>
</behaviors>

The thing is, we apparently also have some XML transformation going on for all of our cloud environments, so these attributes were set to `false` for both the Test and Production services. This means we can’t do a warmup request via the applicationInitialization-element as it’s doing a normal GET request to the specified endpoints.
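For reference, such a transform could look something like the sketch below. This isn’t our actual transform file, just an illustration of the idea, assuming a standard XDT transform (Web.Release.config) and the behavior name from the snippet above.

<!-- Web.Release.config (sketch); the xdt namespace is declared on the root <configuration> element -->
<system.serviceModel>
  <behaviors>
    <serviceBehaviors>
      <behavior name="MyBehavior">
        <serviceMetadata httpsGetEnabled="false" httpGetEnabled="false"
                         xdt:Transform="SetAttributes(httpsGetEnabled,httpGetEnabled)" />
      </behavior>
    </serviceBehaviors>
  </behaviors>
</system.serviceModel>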

Since we still need this WCF service to be hot right after a deployment and VIP swap, I had to come up with something else. There are multiple things you can do at this point, like creating a PowerShell script which runs right after a deployment, doing a smoke test in your release pipeline which touches all instances, etc. None of them felt quite right.

What I came up with is extending the WebRole.OnStart method! This way I know for sure my code is executed every time the WCF service starts. Plus, all of the warmup source code is located in the same project as the service implementation, which makes it easier to track. In order to do a warmup request you need to do a couple of things. First off, we need to figure out the local IP address of the currently running instance. Once we’ve retrieved this IP address we can use it to do an HTTP request to our (local) warmup endpoint. As I mentioned earlier, HTTP GET is disabled via our configuration, so we need to use a different method, like POST.
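To give you an idea of where this code lives, here’s a minimal sketch of the WebRole override calling the warmup method. The Warmup method itself is shown further down in this post.

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // Running the warmup here keeps the instance in the Busy state,
        // so the load balancer won't send traffic to it until we're done.
        Warmup();

        return base.OnStart();
    }

    // The Warmup() method is shown further down in this post.
}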

There’s an excellent blog post by Kevin Williamson on how to implement such a feature. In this post he states the following:

If your website takes several minutes to warmup (either standard IIS/ASP.NET warmup of precompilation and module loading, or warming up a cache or other app specific tasks) then your clients may experience an outage or random timeouts.  After a role instance restarts and your OnStart code completes then your role instance will be put back in the load balancer rotation and will begin receiving incoming requests.  If your website is still warming up then all of those incoming requests will queue up and time out.  If you only have 2 instances of your web role then IN_0, which is still warming up, will be taking 100% of the incoming requests while IN_1 is being restarted for the Guest OS update.  This can lead to a complete outage of your service until your website is finished warming up on both instances.  It is recommended to keep your instance in OnStart, which will keep it in the Busy state where it won't receive incoming requests from the load balancer, until your warmup is complete.

He also posts a small code snippet which can be used as inspiration for implementation, so I went and used this piece of code.

One of the problems you will face when implementing this is that your IoC container isn’t spun up yet. This doesn’t have to be a problem per se, but I wanted to add some logging to this code in order to check whether everything was working correctly (or not). Well, that isn’t possible!
In order to add some kind of logging, I had to resort to writing entries to the Windows Event Log. This isn’t ideal of course, but it’s the cleanest way I could come up with. By adding entries to the event log you ‘only’ have to enable RDP on your cloud service in order to check what has happened. Needless to say, the RDP settings are reset every time you deploy a new update to the cloud service, so enabling it is quite safe in our scenario.

Adding this logging to my solution really saved the day, because after having added the HTTP request to the OnStart method I still couldn’t see the warmup events being triggered. By checking the Event Log I quickly discovered this had to do with the certificate installed on the endpoint. The error I was facing told me the following:

Could not establish a trust relationship for the SSL/TLS secure channel

This makes sense of course, as the certificate is registered on the hostname and we’re now making a request directly to the internal IP address, which obviously is different (service.customer.com !== 12.34.56.78). Removing the certificate check is rather easy, but you should only do this when you’re 100% sure it’s the right thing to do. If you remove the certificate check on a global scope you’re opening yourself up to a massive amount of problems!

For future reference, here’s a piece of code which resembles what I came up with.

private void Warmup()
{
    string loggingPrefix = $"{nameof(WebRole)} - {nameof(Warmup)} - ";

    using (var eventLog = new EventLog("Application"))
    {
        eventLog.Source = "Application";
        eventLog.WriteEntry($"{loggingPrefix}Starting {nameof(Warmup)}.", EventLogEntryType.Information);

        IPHostEntry ipEntry = Dns.GetHostEntry(Dns.GetHostName());
        string ip = null;
        foreach (IPAddress ipaddress in ipEntry.AddressList)
        {
            if (ipaddress.AddressFamily == AddressFamily.InterNetwork)
            {
                ip = ipaddress.ToString();
                eventLog.WriteEntry($"{loggingPrefix}Found IPv4 address `{ip}`.", EventLogEntryType.Information);
            }
        }
        string urlToPing = $"https://{ip}/V1/MyService.svc/WarmUp";
        HttpWebRequest req = (HttpWebRequest)WebRequest.Create(urlToPing);
        req.Method = "POST";
        req.ContentLength = 0;
        req.ContentType = "application/my-webrole-startup";

        RemoveCertificateValidationToMakeRequestOnInstanceInsteadOfPublicHostname(req);

        try
        {
            eventLog.WriteEntry($"{loggingPrefix}Posting to `{urlToPing}`.", EventLogEntryType.Information);
            // We only need to hit the warmup endpoint; the response itself isn't used.
            using (req.GetResponse())
            {
            }
        }
        catch (WebException webException)
        {
            // Warmup failed for some reason.
            eventLog.WriteEntry($"{loggingPrefix}Posting to endpoint `{urlToPing}` failed for reason: {webException.Message}.", EventLogEntryType.Error);
        }
        eventLog.WriteEntry($"{loggingPrefix}Finished {nameof(Warmup)}.", EventLogEntryType.Information);
    }
}

private static void RemoveCertificateValidationToMakeRequestOnInstanceInsteadOfPublicHostname(HttpWebRequest req)
{
    req.ServerCertificateValidationCallback = delegate { return true; };
}

This piece of code is based on the sample provided by Kevin Williamson, with some Event Log logging added to it, a POST request, and the certificate check removed.

Hope it helps you when facing a similar issue!

As I mentioned in my earlier post, there are two options available to you out of the box for logging: you can either use the `TraceWriter` or the `ILogger`. While this is fine when you are doing some small projects or Functions, it can become a problem if you want your Azure Functions to reuse logic or modules which were developed earlier and are used in different projects, a Web API for example.

In these shared class libraries you are probably leveraging the power of a ‘full-blown’ logging library. While it is possible to wire up a secondary logging instance in your Azure Function, it’s better to use something which is already available to you, like the `ILogger` or the `TraceWriter`.

I’m a big fan of the log4net logging library, so this post is about using log4net with Azure Functions. That said, you can apply the same principle to any other logging framework; only the implementation will be a bit different.

Creating an appender

One way to extend the logging capabilities of log4net is by creating your own logging appender. You are probably already using some default file appender or console appender in your projects. Because there isn’t an out-of-the-box appender for the `ILogger` yet, you have to create one yourself.

Creating an appender isn’t very hard. Make sure you have log4net added to your project and create a new class which derives from `AppenderSkeleton`. Having done so you are notified the `Append`-method should be implemented, which makes sense. The most basic implementation of an appender which is using the `ILogger` looks pretty much like the following.

internal class FunctionLoggerAppender : AppenderSkeleton
{
    private readonly ILogger logger;

    public FunctionLoggerAppender(ILogger logger)
    {
        this.logger = logger;
    }
    protected override void Append(LoggingEvent loggingEvent)
    {
        switch (loggingEvent.Level.Name)
        {
            case "DEBUG":
                this.logger.LogDebug($"{loggingEvent.LoggerName} - {loggingEvent.RenderedMessage}");
                break;
            case "INFO":
                this.logger.LogInformation($"{loggingEvent.LoggerName} - {loggingEvent.RenderedMessage}");
                break;
            case "WARN":
                this.logger.LogWarning($"{loggingEvent.LoggerName} - {loggingEvent.RenderedMessage}");
                break;
            case "ERROR":
                this.logger.LogError($"{loggingEvent.LoggerName} - {loggingEvent.RenderedMessage}");
                break;
            case "FATAL":
                this.logger.LogCritical($"{loggingEvent.LoggerName} - {loggingEvent.RenderedMessage}");
                break;
            default:
                this.logger.LogTrace($"{loggingEvent.LoggerName} - {loggingEvent.RenderedMessage}");
                break;
        }
    }
}

Easy, right?

You probably notice the injected `ILogger` in the constructor of this appender. That’s actually the ‘hardest’ part of setting up this thing, because it means you can only add this appender in a context where the ILogger has been instantiated!

Using the appender

Not only am I a big fan of log4net, but Autofac is also on my shortlist of favorite libraries.
In order to use Autofac and log4net together you can use the LoggingModule from the Autofac documentation page. I’m using this module all the time in my projects, with some changes if necessary.

Azure Functions doesn’t support the default app.config and web.config files, which means you can’t use the default XML configuration block which is used in a ‘normal’ scenario. It is possible to load a configuration file yourself and provide it to log4net, but there are easier (& cleaner) implementations.

What I’ve done is pass along the Azure Functions `ILogger` to the module I mentioned earlier and configure log4net to use this newly created appender.

public class LoggingModule : Autofac.Module
{
    public LoggingModule(ILogger logger)
    {
        log4net.Config.BasicConfigurator.Configure(new FunctionLoggerAppender(logger));
    }
// All the other (default) LoggingModule stuff
}

// And for setting up the dependency container

internal class Dependency
{
    internal static IContainer Container { get; private set; }
    public static void CreateContainer(ILogger logger)
    {
        if (Container == null)
        {
            var builder = new ContainerBuilder();
            builder.RegisterType<Do>().As<IDo>();
            builder.RegisterModule(new LoggingModule(logger));
            Container = builder.Build();
        }
    }
}

I do find it a bit dirty to pass along the `ILogger` throughout the code. If you want to use this in a production system, please make the appropriate changes to make this a bit cleaner.

You probably notice I’m storing the Autofac container in a static variable. This is to make sure the wiring of my dependencies is only done once per instance of my Azure Function. Function instances are reused quite often and it’s a waste of resources to spin up a complete dependency container per invocation (IMO).

Once you’re done setting up your IoC and logging, you can use any piece of code which is using the log4net `ILog` implementations and still see the results in your Azure Functions tooling!
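To make this a bit more concrete, the snippet below is a rough sketch of how it could come together. The queue trigger and the `IDo`/`Do` members are made up for this example; the point is that the class library keeps using plain log4net while the output still shows up in the Functions logs.

public class Do : IDo
{
    private static readonly ILog Log = LogManager.GetLogger(typeof(Do));

    public void Execute(string message)
    {
        // This ends up in the Azure Functions logs via the FunctionLoggerAppender.
        Log.Info($"Processing '{message}'.");
    }
}

[FunctionName("Process")]
public static void Run(
    [QueueTrigger("incoming-items")] string message,
    ILogger log)
{
    // Wire up the container (and thereby log4net) once per instance.
    Dependency.CreateContainer(log);

    // Class library code resolved from the container keeps using its own log4net loggers.
    var doSomething = Dependency.Container.Resolve<IDo>();
    doSomething.Execute(message);
}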

If you are running locally, you might not see anything being logged in your local Azure Functions emulator. This was a known issue in the previous version of the tooling; there is a (now closed) issue on GitHub. Install the latest version of the tooling (1.0.12 at this time) and you’ll be able to see your log messages from the class library.

image

Of course, you can also check the logging in the Azure Portal if you want to. There are multiple ways to find the log messages, but the easiest option is probably the Log-window inside your Function.

image


Well, that’s all there is to it!

By using an easy-to-write appender you can reuse your class libraries between multiple projects and still have all the necessary logging available to you. I know this’ll help me in some of my projects!
If you want to see all of the source code on this demo project, it’s available on my GitHub page: https://github.com/Jandev/log4netfunction

Warming up your web applications and websites is something which we have been doing for quite some time now and will probably be doing for the next couple of years also. This warmup is necessary to ‘spin up’ your services, like the just-in-time compiler, your database context, caches, etc.

I’ve worked in several teams where we had solved the warming up of a web application in different ways. Running smoke-tests, pinging some endpoint on a regular basis, making sure the IIS application recycle timeout is set to infinite and some more creative solutions.
Luckily you don’t need to resort to these kinds of solutions anymore. There is built-in functionality inside IIS and the ASP.NET framework. Just add an `applicationInitialization`-element inside the `system.webServer`-element in your web.config file and you are good to go! This configuration will look very similar to the following block.

<system.webServer>
  ...
  <applicationInitialization>
    <add initializationPage="/Warmup" />
  </applicationInitialization>
</system.webServer>

What this will do is invoke a call to the /Warmup-endpoint whenever the application is being deployed/spun up. Quite awesome, right? This way you don’t have to resort to those arcane solutions anymore and just use the functionality which is delivered out of the box.

The above works quite well most of the time.
However, we were noticing some strange behavior while using this for our Azure App Services. The App Services weren’t ‘hot’ when a new version was deployed and swapped. This probably isn’t much of a problem if you’re only deploying your application once per day, but it does become a problem when your application is being deployed multiple times per hour.

In order to investigate why the initialization of the web application wasn’t working as expected I needed to turn on some additional monitoring in the App Service.
The easiest way to do this is to turn on Failed Request Tracing in the App Service and make sure all requests are logged in these log files. Turning on Failed Request Tracing is rather easy; it can be enabled inside the Azure Portal.

image

In order to make sure all requests are logged in this log file, all HTTP status codes have to be stored, from all possible areas. This requires a bit of configuration in the web.config file. The trace-element will have to be added, along with the traceFailedRequests-element.

<tracing>
  <traceFailedRequests>
    <clear/>
    <add path="*">
      <traceAreas>
        <add provider="WWW Server"
             areas="Authentication,Security,Filter,StaticFile,CGI,Compression,Cache,RequestNotifications,Module,Rewrite,iisnode"
             verbosity="Verbose" />
      </traceAreas>
      <failureDefinitions statusCodes="200-600" />
    </add>
  </traceFailedRequests>
</tracing>

As you can see I’ve configured this to trace all status codes from 200 to 600, which covers all possible HTTP response codes.

Once these settings were configured correctly I was able to do some tests between the several tiers and configurations in an App Service. I had read a post by Ruslan Y stating the use of slot settings might help in our problems with the warmup functionality.
In order to test this I’ve created an App Service for all of the tiers we are using, Free and Standard, in order to see what happens exactly when deploying and swapping the application.
All of the deployments have been executed via TFS Release Management, but I’ve also checked whether a right-click deployment from Visual Studio resulted in different logs. I was glad to see both resulted in the same entries in the log files.

Free

I first tested my application in the Free App Service (F1). After the application was deployed I navigated to the Kudu site to download the trace logs.

Much to my surprise I couldn’t find anything in the logs. There were a lot of log files, but none of them contained anything which closely resembled something like a warmup event. This does validate the earlier linked post, stating we should be using slot settings.

You probably think something like “That’s all fun and games, but deployment slots aren’t available in the Free tier”. That’s a valid thought, but you can configure slot settings even if you can’t do anything useful with them.

So I added a slot setting to see what would happen when deploying. After deploying the application I downloaded the log files again and was happy to see a warmup event being triggered.

<EventData>
  <Data Name="ContextId">{00000000-0000-0000-0000-000000000000}</Data>
  <Data Name="Headers">
    Host: localhost
    User-Agent: IIS Application Initialization Warmup
  </Data>
</EventData>

This is what you want to see, a request by a user agent called `IIS Application Initialization Warmup`.

Somewhere later in the file you should see a different EventData block with your configured endpoint(s) inside it.

<EventData>
  <Data Name="ContextId">{00000000-0000-0000-0000-000000000000}</Data>
  <Data Name="RequestURL">/Warmup</Data>
</EventData>

If you have multiple warmup endpoints you should be able to see each of them in a different EventData-block.

Well, that’s about everything for the Free tier, as you can’t do any actual swapping there.

Standard

On the Standard App Service I started with a baseline test by just deploying the application without any slots and slot settings.

After deploying the application to the App Service without a slot setting, I did see a warmup event in the logs. This is quite strange to me, as there wasn’t a warmup event in the logs for the Free tier. This means there are some differences between the Free and Standard tiers regarding this warmup functionality.

After having performed the standard deployment, I also tested the other common scenarios.
The second scenario I tried was deploying the application to the Staging slot and pressing the Swap VIP button on the Azure portal. Both of these environments (staging & production) didn’t have a slot setting specified.
So, I checked the log files again and couldn’t find a warmup event or anything which closely resembled this.

This means deploying directly to the Production slot DOES trigger the warmup, but deploying to the Staging slot and execute a swap DOESN’T! Strange, right?

Let’s see what happens when you add a slot setting to the application.
Well, just like Ruslan Y’s post states, if there is a slot setting the warmup is triggered after swapping your environment. This actually makes sense, as your website has to ‘restart’ after swapping environments if there is a slot setting. This restart also triggers the warmup, like you would expect when starting a site in IIS.

How to configure this?

Based on these tests it appears you probably always want to configure a slot setting, even if you are on the Free tier, when using the warmup functionality.

Configuring slot settings is quite easy if you are using ARM templates to deploy your resources. First of all you need to add a setting which will be used as a slot setting. If you don’t have one already, just add something like `Environment` to the `properties` block in your template.

"properties": {
  ...
  "Environment": "ProductionSlot"
}

Now for the trickier part, actually defining a slot setting. Just paste in the code block below.

{
  "apiVersion": "2015-08-01",
  "name": "slotconfignames",
  "type": "config",
  "dependsOn": [
    "[resourceId('Microsoft.Web/Sites', parameters('mySiteName'))]"
  ],
  "properties": {
    "appSettingNames": [ "Environment" ]
  }
}

When I added this to the template I got red squigglies underneath `slotconfignames` in Visual Studio, but you can ignore them as this is a valid setting name.

What the code block above does is telling your App Service the application setting `Environment` is a slot setting.

After deploying your application with these ARM-template settings you should see this setting inside the Azure Portal with a checked checkbox.

image

Some considerations

If you want to use the Warmup functionality, do make sure you use it properly. Use the warmup endpoint(s) to ‘start up’ your database connection, fill your caches, etc.
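As an illustration, a warmup endpoint could look something like the sketch below. The `MyDbContext` and `CacheWarmer` types are hypothetical; the idea is simply to touch the expensive dependencies before real traffic arrives.

public class WarmupController : ApiController
{
    [HttpGet]
    [Route("Warmup")]
    public async Task<IHttpActionResult> Get()
    {
        // Touch the database so the connection pool and the EF model are initialized.
        using (var context = new MyDbContext())
        {
            await context.Database.ExecuteSqlCommandAsync("SELECT 1");
        }

        // Fill the caches the application depends on (hypothetical helper).
        await CacheWarmer.PrimeAsync();

        return Ok("Warmed up");
    }
}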

Another thing to keep in mind is that the swapping (or deployment) of an App Service is only completed after all of the warmup endpoint(s) have finished executing. This means that if you have some code which takes multiple seconds to execute, it will ‘delay’ the deployment.

To conclude, please use the warmup-functionality provided by IIS (and Azure) instead of those old-fashioned methods and if deploying to an App Service, just add a slot setting to make sure it always triggers.

Hope the above helps if you have experienced similar issues and don’t have the time to investigate the issue.

You might remember me writing a post on how you can set up your site with SSL while using Let’s Encrypt and Azure App Services.

Well, as it goes, the same post applies for Azure Functions. You just have to do some extra work for it, but it’s not very hard.

Simon Pedersen, the author of the Azure Let’s Encrypt site extension, has done some work in explaining the steps on his GitHub wiki page. This page is based on some old screenshots, but it still applies.

The first thing you need to do is create a new function which is able to respond to the ACME challenge. This function will look something like this.

public static class LetsEncrypt
{
    [FunctionName("letsencrypt")]
    public static HttpResponseMessage Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = "letsencrypt/{code}")]
        HttpRequestMessage req, 
        string code, 
        TraceWriter log)
    {
        log.Info($"C# HTTP trigger function processed a request. {code}");

        var content = File.ReadAllText(@"D:\home\site\wwwroot\.well-known\acme-challenge\" + code);
        var resp = new HttpResponseMessage(HttpStatusCode.OK);
        resp.Content = new StringContent(content, System.Text.Encoding.UTF8, "text/plain");
        return resp;
    }
}

As you can see, this function will read the ACME challenge file from the disk of the App Service it is running on and return its content. Because Azure Functions run in an App Service (even the functions in a Consumption plan), this is very possible. The Principal (created in the earlier post) can create these types of files, so everything will work just perfectly.

This isn’t all we have to do, because the URL of this function is not the URL the ACME challenge will use to retrieve the appropriate response. In order for you to actually use this site extension you need to add a new proxy to your Function App. Proxies are still in preview, but very usable! The proxy you have to create will have to redirect the URL `/.well-known/acme-challenge/[someCode]` to your Azure Function. The end result will look something like the following proxy.

"acmechallenge": {
  "matchCondition": {
    "methods": [ "GET", "POST" ],
    "route": "/.well-known/acme-challenge/{rest}"
  },
  "backendUri": "https://%WEBSITE_HOSTNAME%/api/letsencrypt/{rest}"
}
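For reference, in the Function App’s proxies.json file this fragment sits inside the top-level `proxies` object, so the complete file would look roughly like this (the `$schema` line is optional):

{
  "$schema": "http://json.schemastore.org/proxies",
  "proxies": {
    "acmechallenge": {
      "matchCondition": {
        "methods": [ "GET", "POST" ],
        "route": "/.well-known/acme-challenge/{rest}"
      },
      "backendUri": "https://%WEBSITE_HOSTNAME%/api/letsencrypt/{rest}"
    }
  }
}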

Publish your new function and proxy to the Function App and you are good to go!

If you haven’t done this before, be sure to follow all of the steps mentioned in the earlier post! Providing the appropriate application settings should be easy now and if you just follow each step of the wizard you’ll see a green bar when the certificate is successfully requested and installed!

image_thumb5

This makes my minifier service even more awesome, because now I can finally use HTTPS, without getting messages the certificate isn’t valid.

(Almost) no one likes writing code meant to store data to a repository, queues, or blobs, let alone code to trigger your own logic when some event occurs in one of those places. Luckily for us the Azure Functions team has decided to use bindings for this.
By leveraging the power of bindings, you don’t have to write your own logic to store or retrieve data. Azure Functions provides all of this functionality out of the box!

Bindings give you the possibility to retrieve data (strongly typed if you want) from HTTP calls, blob storage events, queues, Cosmos DB events, etc. Not only does this work for input, but also for output. Say you want to store some object in a queue or repository, you can just use an output binding in your Azure Function to make this happen. Awesome, right?

Most of the documentation and blogposts out there state you should define your bindings in a file called `function.json`. An example of these bindings is shown in the block below.

{
  "bindings": [
    {
      "name": "order",
      "type": "queueTrigger",
      "direction": "in",
      "queueName": "myqueue-items",
      "connection": "MY_STORAGE_ACCT_APP_SETTING"
    },
    {
      "name": "$return",
      "type": "table",
      "direction": "out",
      "tableName": "outTable",
      "connection": "MY_TABLE_STORAGE_ACCT_APP_SETTING"
    }
  ]
}

The above sample specifies an input binding for a Queue and an output binding for Table Storage. While this works perfectly, it’s not the way you want to implement this when using C# (or F# for that matter), especially if you are using Visual Studio!

How to use bindings with Visual Studio

To set up a function binding via Visual Studio you just have to specify some attributes on the parameters of your code. These attributes will make sure the `function.json` file is created when the code is being compiled.

After creating your first Azure Function via Visual Studio you will get a function with these attributes immediately. For my URL Minifier solution I’ve used the following HttpTrigger.

[FunctionName("Get")]
public static async Task<HttpResponseMessage> Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "{slug}")] HttpRequestMessage req, string slug,
    TraceWriter log)

Visual Studio (well, actually the Azure Functions tooling) will make sure this gets translated to a binding block which looks like this.

"bindings": [
  {
    "type": "httpTrigger",
    "route": "{slug}",
    "methods": [
      "get"
    ],
    "authLevel": "anonymous",
    "name": "req"
  }
],

You can do this for every type of trigger which is available at the moment.
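For instance, the queue-to-table sample from the function.json shown earlier could be expressed with attributes along these lines. This is a sketch; the `Order` and `OrderEntity` types are made up for the example, while the queue, table and setting names match the earlier JSON.

public class Order
{
    public string Id { get; set; }
    public string Customer { get; set; }
}

public class OrderEntity
{
    public string PartitionKey { get; set; }
    public string RowKey { get; set; }
    public string Customer { get; set; }
}

[FunctionName("ProcessOrder")]
[return: Table("outTable", Connection = "MY_TABLE_STORAGE_ACCT_APP_SETTING")]
public static OrderEntity Run(
    [QueueTrigger("myqueue-items", Connection = "MY_STORAGE_ACCT_APP_SETTING")] Order order,
    TraceWriter log)
{
    log.Info($"Processing order {order.Id}.");

    // The returned entity is stored in `outTable` by the table output binding.
    return new OrderEntity { PartitionKey = "orders", RowKey = order.Id, Customer = order.Customer };
}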

Sadly, this type of development hasn’t been described a lot in the various blogposts and documentation, but with a bit of effort you can find out how to implement most bindings by yourself.

I haven’t worked with all of the different types of bindings yet.
One which I found quite hard to implement is the output binding for a Cosmos DB repository. Though, in hindsight, it was rather easy to do once you know what to look for. What worked for me is creating an Azure Function via the portal first and seeing which type of binding it uses. This way I found out that for a Cosmos DB output binding you need to use the `DocumentDBAttribute`. This attribute needs a couple of values, like the database name, collection name and of course the actual connection string. After providing all of the necessary information your Cosmos DB output binding should look something like the one below.

[FunctionName("Create")]
public static HttpResponseMessage Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "create")]HttpRequestMessage req, 
    [DocumentDB("TablesDB", "minified-urls", ConnectionStringSetting = "Minified_ConnectionString", CreateIfNotExists = true)] out MinifiedUrl minifiedUrl,
    TraceWriter log)

Notice I had to remove the `async` keyword? That’s because you can’t use `async` if there is an out-parameter.
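If you do want to keep the method `async`, the output binding should also accept an `IAsyncCollector<T>` parameter instead of the out-parameter. A sketch of what that signature could look like (I haven’t used this variant in this project):

[FunctionName("Create")]
public static async Task<HttpResponseMessage> Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "create")]HttpRequestMessage req,
    [DocumentDB("TablesDB", "minified-urls", ConnectionStringSetting = "Minified_ConnectionString", CreateIfNotExists = true)] IAsyncCollector<MinifiedUrl> minifiedUrls,
    TraceWriter log)

Inside the function body you would then call `await minifiedUrls.AddAsync(minifiedUrl);` to store the document.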

The thing I had the most trouble with is finding out which value should be in the `ConnectionStringSetting`. If you head down to the Connection String tab of your Cosmos DB in the Azure portal you will find a connection string in the following format.

DefaultEndpointsProtocol=https;AccountName=[myCosmosDb];AccountKey=[myAccountKey];TableEndpoint=https://[myCosmosDb].documents.azure.com

If you use this value, you’ll be greeted with a `NullReferenceException` for a `ServiceEndpoint`. After having spent quite a bit of time troubleshooting this issue I figured I probably had to use some other value in the `ConnectionStringSetting`.
Having tried a couple of things I finally discovered you have to specify the setting as follows:

AccountEndpoint=https://[myCosmosDb].documents.azure.com:443/;AccountKey=[myAccountKey];
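For completeness, this is roughly how that could look as an application setting when running locally, in a local.settings.json file (the account name and key are placeholders; `Minified_ConnectionString` is the setting name used in the earlier attribute):

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "Minified_ConnectionString": "AccountEndpoint=https://[myCosmosDb].documents.azure.com:443/;AccountKey=[myAccountKey];"
  }
}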

Running the function will work like a charm now.

I’m pretty sure this will not be the only ‘quirk’ you will come across when using the bindings, but as long as we can all share the information it will become easier in the future!

Where will I store the secrets?

When using attributes it might seem you can’t rely on retrieving your secrets via application settings or the like. Well, the team has you covered!

You can just use your regular application settings, as long as you stick to a naming convention where the values are uppercase and use underscores for separation. So instead of hardcoding the values “TablesDB” and “minified-urls” inside my earlier code snippet, one can also use the following.

[FunctionName("Create")]
public static HttpResponseMessage Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "create")]HttpRequestMessage req, 
    [DocumentDB("MY-DATABASE", MY-COLLECTION", ConnectionStringSetting = Minified_ConnectionString", CreateIfNotExists = true)] out MinifiedUrl minifiedUrl,
    TraceWriter log)

By convention, the actual values will now be retrieved via the application settings.

Awesome!

Yeah, but Application Settings aren’t very secure

True!

I’ve already written about this in an earlier post. While Application Settings are fine for storing some configuration data, you don’t want to put secrets in there. Secrets should be stored inside Azure Key Vault.

Of course, you can’t use Azure Key Vault in these attributes.
Lucky for us, the Azure Functions team still got us covered with an awesome feature called Imperative bindings! The sample code is enough to get us cracking on creating a binding where the connection secrets are still stored inside Azure Key Vault (or somewhere else for that matter).

Because I’m using a Cosmos DB connection, I need to specify the `DocumentDBAttribute` inside the `Binder`. Something else you should note: when you want to use an output binding, you can’t just create a binding to a `MinifiedUrl` object. If you only specify the object type, the `Binder` will assume it’s an input binding.
If you want an output binding, you need to specify the binding as an `IAsyncCollector<T>`. Check out the code below to see what you need to do in order to use the `DocumentDBAttribute` in combination with imperative bindings.

// Retrieving the secret from Azure Key Vault via a helper class
var connectionString = await secret.Get("CosmosConnectionStringSecret");
// Setting the AppSetting run-time with the secret value, because the Binder needs it
ConfigurationManager.AppSettings["CosmosConnectionString"] = connectionString;

// Creating an output binding
var output = await binder.BindAsync<IAsyncCollector<MinifiedUrl>>(new DocumentDBAttribute("TablesDB", "minified-urls")
{
    CreateIfNotExists = true,
    // Specify the AppSetting key which contains the actual connection string information
    ConnectionStringSetting = "CosmosConnectionString",
});

// Create the MinifiedUrl object (`data` is the deserialized request payload, not shown here)
var create = new CreateUrlHandler();
var minifiedUrl = create.Execute(data);

// Adding the newly created object to Cosmos DB
await output.AddAsync(minifiedUrl);

As you can see, there’s a lot of extra code in the body of the function. We have to give up some of the simplicity in order to make the code and configuration a bit more secure, but that’s worth it in my opinion.

If you want to check out the complete codebase of this solution, please check out the GitHub repository, it contains all of the code.

I need more bindings!

Well, a couple of days ago there was an amazing announcement: you can now create your own bindings! Donna Malayeri has some sample code available on GitHub on how to create a Slack binding. There is also a documentation page in the making on how to create these types of bindings.

At this time this feature is still in preview, but if you need some binding which isn’t available at the moment, be sure to check this out. I can imagine this will become quite popular once it has been released. Just imagine creating bindings to existing CRM systems, databases, your own SMTP services, etc.

Awesome stuff in the making!