Creating a solution with multiple small services is great, of course: it provides you with a lot of flexibility and scalability.

There are, however, a couple of things you have to think about when designing and developing a solution composed of multiple services. One of them is how to implement proper logging: for an actual production system you need this in place in order to monitor and debug the overall solution.

As developers using Azure Functions, we are already blessed with some logging mechanisms and tools out of the box! They’re pretty basic, but they get the job done and will make your life much easier when the need arises to analyze a production issue.

Logging in your function

When first creating your Azure Function you will probably see a parameter called `log` being passed into your `Run` method.

public static class Function1
{
    [FunctionName("Function1")]
    public static async Task<HttpResponseMessage> Start(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = null)]
        HttpRequestMessage req, 
        TraceWriter log)
    {
        log.Info("C# HTTP trigger function processed a request.");

        // Logic

        return req.CreateResponse(HttpStatusCode.OK);
    }
}

This `log` parameter can be used to do some simple logging inside your Azure Function. When invoking the function you’ll see the specified log entry in your console.

[10-4-2018 19:12:12] Function started (Id=a669fc9a-b15b-44b9-a863-47a303aca6b6)
[10-4-2018 19:12:13] Executing 'Function1' (Reason='This function was programmatically called via the host APIs.', Id=a669fc9a-b15b-44b9-a863-47a303aca6b6)
[10-4-2018 19:12:13] C# HTTP trigger function processed a request.

All of the other common logging levels, like `Error`, `Warning` and `Verbose`, are also implemented in the `TraceWriter`.
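
For example, each level maps to a method on the `TraceWriter`:

log.Verbose("Detailed diagnostic output.");
log.Warning("Something unexpected happened, but execution can continue.");
log.Error("Something failed.");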

After having deployed the Azure Function to Azure you will also be able to see the log messages in the Portal.

2018-04-10T19:30:09.112 [Info] Function started (Id=71cead2b-3f90-4c04-8c82-42f6a885491a)
2018-04-10T19:30:09.127 [Info] C# HTTP trigger function processed a request.
2018-04-10T19:30:09.205 [Info] Function completed (Success, Id=71cead2b-3f90-4c04-8c82-42f6a885491a, Duration=87ms)

Checking out the log messages inside the log monitor is nice and all, but the best part here is the (automatic) integration with Application Insights! If you have activated this feature you will be able to see all logs in the Application Insights Search blade.


I highly recommend using Application Insights to monitor your application, along with all of its logging, as it’s an awesome piece of technology with endless possibilities for your DevOps teams. I’ll share more of my experience with Application Insights in a later post, because it’s too much to cover in a single one.

Using the `TraceWriter` is a nice way to get you started with logging, but it’s probably not something you want to use in an actual application.

Introducing the `ILogger`

A different way to start logging is by using the `ILogger` from the `Microsoft.Extensions.Logging` namespace. You can just replace the earlier mentioned `TraceWriter` with the `ILogger`, change the logging methods and be done with it. The Azure Functions runtime will make sure some concrete implementation is being injected which will act similarly to the `TraceWriter`.
The main benefit will be the improved testability of your Azure Function as this parameter is much easier to mock.
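
To illustrate, the earlier sample looks something like this with the `ILogger` (a sketch; only the parameter type and the logging call change):

public static class Function1
{
    [FunctionName("Function1")]
    public static async Task<HttpResponseMessage> Start(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = null)]
        HttpRequestMessage req, 
        ILogger log)
    {
        // Same message as before, now via the ILogger extension methods
        log.LogInformation("C# HTTP trigger function processed a request.");

        // Logic
        return req.CreateResponse(HttpStatusCode.OK);
    }
}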

One thing I did notice though is that the `ILogger` doesn’t output any logging to the Azure Functions console when running locally. There’s a (still open) issue on GitHub which mentions this bug. But don’t worry, you can see all the expected logging in the available tooling of the Azure portal.

As mentioned, the `ILogger` is a better solution if you want to use Azure Functions in your actual project, because it offers better testability. However, the functionality is quite limited and you can’t extend much on the `ILogger` at this time. If you already have some kind of logging mechanism or framework elsewhere in your solution, you probably don’t want to introduce this new logging implementation. My go-to logging framework is log4net and I’d like to use it for the logging inside my Azure Functions as well. In order to do so you need to configure a thing or two when setting up log4net. It isn’t very complicated, but it is something to keep in mind when setting up your Azure Functions.
I’ll describe the necessary steps in a later post.

For now my main advice is just to use the `ILogger` instead of the `TraceWriter`. Both work very well, but `ILogger` offers some advantages compared to the `TraceWriter`.

In a couple of weeks, on the 22nd of February, I’ll be talking at a free event organized by 4DotNet and SnelStart called Move Up with Azure. I’m not the only one speaking there: there’s also a great session by Henry Been (SnelStart) and an awesome talk from Christos Matskas (Microsoft).

I myself will be talking about how to create a serverless solution using Azure Functions. This is of course a very broad subject, so I’d like to know what you think I should focus on and what you would like to see covered in this session.

Some areas I’ll be covering for sure are a short introduction to the serverless paradigm, how to design and create a scalable architecture, using built-in functionality offered by Azure Functions to make your life easier, working with Visual Studio to get stuff done and of course how to test your solution.
There are a lot of other subjects I can also cover and deep-dive into, for example Durable Functions, performance, and common patterns & principles. Feel free to comment here if you have a specific interest in something related to serverless or Azure Functions.

Hope to see you on the 22nd of February in Nieuwegein. Be sure to register on Eventbrite to get your free ticket for this event!

Using certificates to secure, sign and validate information has become a common practice in the past couple of years. Therefore, it makes sense to use them in combination with Azure Functions as well.

As Azure Functions are hosted on top of an Azure App Service this is quite possible, but you do have to configure something before you can start using certificates.

Adding your certificate to the Function App

Let’s just start at the beginning, in case you are wondering how to add these certificates to your Function App. Adding certificates is ‘hidden’ on the SSL blade in the Azure portal. Here you can add SSL certificates, but also regular certificates.


Keep in mind though, if you are going to use certificates in your own project, please just add them to Azure Key Vault in order to keep them secure. Using the Key Vault is the preferred way to work with certificates (and secrets).

For the purpose of this post I’ve just clicked the Upload Certificate link, which will prompt you with a new blade from which you can upload a private or public certificate.


You will be able to see the certificate’s thumbprint, name and expiration date on the SSL blade if it has been added correctly.


There was a time when you couldn’t use certificates if your Azure Functions were running on a Consumption plan. Luckily this issue has been resolved, which means we can now use our uploaded certificates in both the Consumption and the App Service plan.

Configure the Function App

As I wrote before, in order to use certificates in your code there is one little configuration matter which has to be addressed. By default the Function App (read: App Service) is locked down quite nicely, which means you can’t retrieve certificates from the certificate store.

The code I’m using to retrieve a certificate from the store is shown below.

private static X509Certificate2 GetCertificateByThumbprint()
{
    // Certificates uploaded to an App Service end up in the CurrentUser/My store
    var store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
    store.Open(OpenFlags.ReadOnly | OpenFlags.OpenExistingOnly);
    var certificateCollection = store.Certificates.Find(X509FindType.FindByThumbprint, CertificateThumbprint, false);

    store.Close();

    foreach (var certificate in certificateCollection)
    {
        if (certificate.Thumbprint == CertificateThumbprint)
        {
            return certificate;
        }
    }
    throw new CryptographicException("No certificate found with thumbprint: " + CertificateThumbprint);
}

Note: if you upload a certificate to your App Service, Azure will place it inside the `CurrentUser/My` store.

Running this code right now will result in an empty `certificateCollection`, and therefore a `CryptographicException` being thrown. In order to get access to the certificate store we need to add an application setting called `WEBSITE_LOAD_CERTIFICATES`. The value of this setting can be one or more certificate thumbprints (comma-separated), or just an asterisk (*) to allow any certificate to be loaded.
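
For example, to make every uploaded certificate available to your code:

WEBSITE_LOAD_CERTIFICATES = *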

After having added this single application setting the above code will run just fine and return the certificate matching the thumbprint.

Using the certificate

Using certificates to sign or validate values isn’t rocket science, but strange things can occur! This was also the case when I wanted to use my own self-signed certificate in a function.

I was loading my certificate from the store and using its private key to sign a message, like in the code below.

private static string SignData(X509Certificate2 certificate, string message)
{
    using (var csp = (RSACryptoServiceProvider)certificate.PrivateKey)
    {
        // Resolve the OID for SHA-256 and sign the UTF-8 bytes of the message
        var hashAlgorithm = CryptoConfig.MapNameToOID("SHA256");
        var signature = csp.SignData(Encoding.UTF8.GetBytes(message), hashAlgorithm);
        return Convert.ToBase64String(signature);
    }
}

This code worked perfectly, until I started running it inside an Azure Function (or any other App Service, for that matter). There I was confronted with the following exception.

System.Security.Cryptography.CryptographicException: Invalid algorithm specified.
    at System.Security.Cryptography.CryptographicException.ThrowCryptographicException(Int32 hr)
    at System.Security.Cryptography.Utils.SignValue(SafeKeyHandle hKey, Int32 keyNumber, Int32 calgKey, Int32 calgHash, Byte[] hash, Int32 cbHash, ObjectHandleOnStack retSignature)
    at System.Security.Cryptography.Utils.SignValue(SafeKeyHandle hKey, Int32 keyNumber, Int32 calgKey, Int32 calgHash, Byte[] hash)
    at System.Security.Cryptography.RSACryptoServiceProvider.SignHash(Byte[] rgbHash, Int32 calgHash)
    at System.Security.Cryptography.RSACryptoServiceProvider.SignData(Byte[] buffer, Object halg)

So, an `Invalid algorithm specified`? Sounds strange, as this code runs perfectly fine on my local system and any other system I ran it on.

After having done some research on the matter, it appears the underlying Crypto API is choosing the wrong Cryptographic Service Provider (CSP). From what I’ve read, the framework is picking CSP number 1 instead of CSP 24, which is necessary for SHA-256. Apparently there have been some changes on this matter in the Windows XP SP3 era, so I don’t know why this is still a problem with our (new) certificates. Then again, I’m no expert on the matter.

If you are experiencing the above problem, the best solution is to request new certificates created with the `Microsoft Enhanced RSA and AES Cryptographic Provider` (CSP 24). If you aren’t in the position to request or use these new certificates, there is a way to overcome the issue.

You can still load and use the current certificate, but you need to export all of the properties and create a new `RSACryptoServiceProvider` with the contents of this certificate. This way you can specify which CSP you want to use along with your current certificate.
The necessary code is shown in the block below.

private static string SignData(X509Certificate2 certificate, string message)
{
    using (var csp = (RSACryptoServiceProvider)certificate.PrivateKey)
    {
        var hashAlgorithm = CryptoConfig.MapNameToOID("SHA256");

        // Export the private key and import it into a new provider which
        // explicitly uses CSP 24 (Microsoft Enhanced RSA and AES)
        var privateKeyBlob = csp.ExportCspBlob(true);
        var cp = new CspParameters(24);
        var newCsp = new RSACryptoServiceProvider(cp);
        newCsp.ImportCspBlob(privateKeyBlob);

        var signature = newCsp.SignData(Encoding.UTF8.GetBytes(message), hashAlgorithm);
        return Convert.ToBase64String(signature);
    }
}

Do keep in mind, this is something you want to use with caution. Being able to export all properties of a certificate, including the private key, isn’t something you want to expose to your code very often. So if you are in need of such a solution, please consult with your security officer(s) before implementing!

As I mentioned, the code block above works fine inside an App Service and also when running inside an Azure Function on the App Service plan. If you are running your Azure Functions in the Consumption plan, you are out of luck!
Running this code will result in the following exception message.

Microsoft.Azure.WebJobs.Host.FunctionInvocationException: Exception while executing function: Sign ---> System.Security.Cryptography.CryptographicException: Key not valid for use in specified state.
   at System.Security.Cryptography.CryptographicException.ThrowCryptographicException(Int32 hr)
   at System.Security.Cryptography.Utils.ExportCspBlob(SafeKeyHandle hKey, Int32 blobType, ObjectHandleOnStack retBlob)
   at System.Security.Cryptography.Utils.ExportCspBlobHelper(Boolean includePrivateParameters, CspParameters parameters, SafeKeyHandle safeKeyHandle)
   at Certificates.Sign.SignData(X509Certificate2 certificate, String xmlString)
   at Certificates.Sign.Run(HttpRequestMessage req, String message, TraceWriter log)
   at lambda_method(Closure , Sign , Object[] )
   at Microsoft.Azure.WebJobs.Host.Executors.MethodInvokerWithReturnValue`2.InvokeAsync(TReflected instance, Object[] arguments)
   at Microsoft.Azure.WebJobs.Host.Executors.FunctionInvoker`2.d__9.MoveNext()

My guess is this has something to do with the nature of the Consumption plan and it being a ‘real’ serverless implementation. I haven’t looked into the specifics yet, but not having access to server resources makes sense.

It has taken me quite some time to figure this out, so I hope it helps you a bit!

You might remember me writing a post on how you can set up your site with SSL while using Let’s Encrypt and Azure App Services.

Well, as it goes, the same post applies to Azure Functions. You just have to do some extra work, but it’s not very hard.

Simon Pedersen, the author of the Azure Let’s Encrypt site extension, has done some work in explaining the steps on his GitHub wiki page. This page is based on some old screenshots, but it still applies.

The first thing you need to do is create a new function which will be able to do the ACME challenge. This function will look something like this.

public static class LetsEncrypt
{
    [FunctionName("letsencrypt")]
    public static HttpResponseMessage Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = "letsencrypt/{code}")]
        HttpRequestMessage req, 
        string code, 
        TraceWriter log)
    {
        log.Info($"C# HTTP trigger function processed a request. {code}");

        var content = File.ReadAllText(@"D:\home\site\wwwroot\.well-known\acme-challenge\" + code);
        var resp = new HttpResponseMessage(HttpStatusCode.OK);
        resp.Content = new StringContent(content, System.Text.Encoding.UTF8, "text/plain");
        return resp;
    }
}

As you can see, this function reads the ACME challenge file from the disk of the App Service it is running on and returns its content. Because Azure Functions run in an App Service (even functions in a Consumption plan), this is quite possible. The Principal (created in the earlier post) can create these types of files, so everything will work just perfectly.

This isn’t all we have to do though, because the URL of this function is not the URL the ACME challenge will use to retrieve the appropriate response. In order to actually use this site extension you need to add a new proxy to your Function App, defined in the `proxies.json` file. Proxies are still in preview, but very usable! The proxy has to redirect the URL `/.well-known/acme-challenge/[someCode]` to your Azure Function. The end result will look something like the following proxy.

"acmechallenge": {
  "matchCondition": {
    "methods": [ "GET", "POST" ],
    "route": "/.well-known/acme-challenge/{rest}"
  },
  "backendUri": "https://%WEBSITE_HOSTNAME%/api/letsencrypt/{rest}"
}

Publish your new function and proxy to the Function App and you are good to go!

If you haven’t done this before, be sure to follow all of the steps mentioned in the earlier post! Providing the appropriate application settings should be easy now and if you just follow each step of the wizard you’ll see a green bar when the certificate is successfully requested and installed!


This makes my minifier service even more awesome, because now I can finally use HTTPS without getting messages that the certificate isn’t valid.

(Almost) no one likes writing code meant to store data in a repository, a queue or a blob, let alone code to trigger your logic when some event occurs in one of those places. Luckily for us the Azure Functions team has decided to use bindings for this.
By leveraging the power of bindings, you don’t have to write your own logic to store or retrieve data. Azure Functions provides all of this functionality out of the box!

Bindings give you the possibility to retrieve data (strongly typed if you want) from HTTP calls, blob storage events, queues, Cosmos DB events, etc. Not only does this work for input, but also for output. Say you want to store some object in a queue or repository: you can just use an output binding in your Azure Function to make this happen. Awesome, right?

Most of the documentation and blog posts out there state you should define your bindings in a file called `function.json`. An example of these bindings is shown in the block below.

{
  "bindings": [
    {
      "name": "order",
      "type": "queueTrigger",
      "direction": "in",
      "queueName": "myqueue-items",
      "connection": "MY_STORAGE_ACCT_APP_SETTING"
    },
    {
      "name": "$return",
      "type": "table",
      "direction": "out",
      "tableName": "outTable",
      "connection": "MY_TABLE_STORAGE_ACCT_APP_SETTING"
    }
  ]
}

The above sample specifies an input binding for a queue and an output binding for some Table storage. While this works perfectly, it’s not the way you want to implement this when using C# (or F#, for that matter), especially if you are using Visual Studio!

How to use bindings with Visual Studio

To set up a function binding via Visual Studio you just have to specify some attributes on the input parameters of your code. These attributes will make sure the `function.json` file is created when the code is compiled.

After creating your first Azure Function via Visual Studio you will get a function with these attributes immediately. For my URL Minifier solution I’ve used the following HttpTrigger.

[FunctionName("Get")]
public static async Task<HttpResponseMessage> Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "{slug}")] HttpRequestMessage req, string slug,
    TraceWriter log)

Visual Studio (well, actually the Azure Functions tooling) will make sure this gets translated to a binding block which looks like this.

"bindings": [
  {
    "type": "httpTrigger",
    "route": "{slug}",
    "methods": [
      "get"
    ],
    "authLevel": "anonymous",
    "name": "req"
  }
],

You can do this for every type of trigger which is available at the moment.
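
For example, the queue trigger from the `function.json` sample at the beginning of this post maps to an attribute like the one below (a sketch; the function name is made up, while the queue and connection names come from that sample).

[FunctionName("ProcessOrder")]
public static void Run(
    [QueueTrigger("myqueue-items", Connection = "MY_STORAGE_ACCT_APP_SETTING")] string order,
    TraceWriter log)
{
    // The queue message is automatically bound to the 'order' parameter
    log.Info($"Processing queue item: {order}");
}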

Sadly, this style of development hasn’t been described much in the various blog posts and documentation, but with a bit of effort you can find out how to implement most bindings yourself.

I haven’t worked with all of the different types of bindings yet.
One I found quite hard to implement was the output binding to a Cosmos DB repository, though in hindsight it was rather easy once you know what to look for. What worked for me was creating an Azure Function via the portal first and checking which type of binding it uses. This way I found out that for a Cosmos DB output binding you need the `DocumentDBAttribute`. This attribute needs a couple of values, like the database name, the collection name and of course the actual connection string. After providing all of the necessary information your Cosmos DB output binding should look something like the one below.

[FunctionName("Create")]
public static HttpResponseMessage Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "create")]HttpRequestMessage req, 
    [DocumentDB("TablesDB", "minified-urls", ConnectionStringSetting = "Minified_ConnectionString", CreateIfNotExists = true)] out MinifiedUrl minifiedUrl,
    TraceWriter log)

Notice I had to remove the `async` keyword? That’s because you can’t use `async` if there is an out-parameter.
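
If you do want to keep your function `async`, you can bind the output to an `IAsyncCollector<T>` instead of using an out-parameter. A sketch, reusing the attribute values from above:

[FunctionName("Create")]
public static async Task<HttpResponseMessage> Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "create")]HttpRequestMessage req, 
    [DocumentDB("TablesDB", "minified-urls", ConnectionStringSetting = "Minified_ConnectionString", CreateIfNotExists = true)] IAsyncCollector<MinifiedUrl> minifiedUrls,
    TraceWriter log)
{
    // AddAsync queues the document for insertion, so async/await works again
    await minifiedUrls.AddAsync(new MinifiedUrl());
    return req.CreateResponse(HttpStatusCode.Created);
}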

The thing I had the most trouble with was finding out which value should go in the `ConnectionStringSetting`. If you head to the Connection String tab of your Cosmos DB in the Azure portal you will find a connection string in the following format.

DefaultEndpointsProtocol=https;AccountName=[myCosmosDb];AccountKey=[myAccountKey];TableEndpoint=https://[myCosmosDb].documents.azure.com

If you use this setting, you’ll be prompted with a `NullReferenceException` for a `ServiceEndpoint`. After having spent quite a bit of time troubleshooting this issue, I figured the problem probably was the value I used in the `ConnectionStringSetting`.
Having tried a couple of things I finally discovered you have to specify the setting as follows:

AccountEndpoint=https://[myCosmosDb].documents.azure.com:443/;AccountKey=[myAccountKey];

Running the function will work like a charm now.

I’m pretty sure this will not be the only ‘quirk’ you will come across when using the bindings, but as long as we can all share the information it will become easier in the future!

Where will I store the secrets?

When using attributes you can’t rely much on retrieving your secrets via application settings or the like. Well, the team has you covered!

You can just use your regular application settings, as long as you stick to a naming convention where the values are uppercase and use underscores for separation. So instead of hardcoding the values “TablesDB” and “minified-urls” inside my earlier code snippet, one can also use the following.

[FunctionName("Create")]
public static HttpResponseMessage Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "create")]HttpRequestMessage req, 
    [DocumentDB("MY-DATABASE", MY-COLLECTION", ConnectionStringSetting = Minified_ConnectionString", CreateIfNotExists = true)] out MinifiedUrl minifiedUrl,
    TraceWriter log)

By convention, the actual values will now be retrieved via the application settings.
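
Locally, for example, these could live in the `Values` section of your `local.settings.json`, following the convention described above (the values shown are just the ones from my earlier snippet):

{
  "Values": {
    "MY_DATABASE": "TablesDB",
    "MY_COLLECTION": "minified-urls",
    "Minified_ConnectionString": "AccountEndpoint=https://[myCosmosDb].documents.azure.com:443/;AccountKey=[myAccountKey];"
  }
}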

Awesome!

Yeah, but Application Settings aren’t very secure

True!

I’ve already written about this in an earlier post. While using the application settings is fine for storing some configuration data, you don’t want to put secrets there. Secrets should be stored inside Azure Key Vault.

Of course, you can’t use Azure Key Vault in these attributes.
Lucky for us, the Azure Functions team still has us covered, with an awesome feature called imperative bindings! The sample code is enough to get us cracking on creating a binding where the connection secrets are still stored inside Azure Key Vault (or somewhere else, for that matter).

Because I’m using a Cosmos DB connection, I need to specify the `DocumentDBAttribute` inside the `Binder`. Something else to note: when you want an output binding, you can’t just create a binding to a `MinifiedUrl` object. If you only specify the object type, the `Binder` will assume it’s an input binding.
If you want an output binding, you need to specify it as an `IAsyncCollector<T>`. Check out the code below to see what you need to do in order to use the `DocumentDBAttribute` in combination with imperative bindings.

// Retrieving the secret from Azure Key Vault via a helper class
var connectionString = await secret.Get("CosmosConnectionStringSecret");
// Setting the AppSetting run-time with the secret value, because the Binder needs it
ConfigurationManager.AppSettings["CosmosConnectionString"] = connectionString;

// Creating an output binding
var output = await binder.BindAsync<IAsyncCollector<MinifiedUrl>>(new DocumentDBAttribute("TablesDB", "minified-urls")
{
    CreateIfNotExists = true,
    // Specify the AppSetting key which contains the actual connection string information
    ConnectionStringSetting = "CosmosConnectionString",
});

// Create the MinifiedUrl object
var create = new CreateUrlHandler();
var minifiedUrl = create.Execute(data);

// Adding the newly created object to Cosmos DB
await output.AddAsync(minifiedUrl);

As you can see, there’s a lot of extra code in the body of the function. We have to give up some of the simplicity in order to make the code and configuration a bit more secure, but that’s worth it in my opinion.
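
Note that the `secret` helper in this snippet isn’t part of the Functions SDK. Below is a minimal sketch of what such a helper could look like, assuming the `Microsoft.Azure.KeyVault` and `Microsoft.Azure.Services.AppAuthentication` packages and a Managed Service Identity on the Function App; the vault URL is hypothetical.

public class SecretReader
{
    // Hypothetical vault URL; replace with your own Key Vault
    private const string VaultUrl = "https://my-vault.vault.azure.net/";

    public async Task<string> Get(string secretName)
    {
        // Authenticate with the Function App's Managed Service Identity
        var tokenProvider = new AzureServiceTokenProvider();
        var keyVaultClient = new KeyVaultClient(
            new KeyVaultClient.AuthenticationCallback(tokenProvider.KeyVaultTokenCallback));

        var secret = await keyVaultClient.GetSecretAsync(VaultUrl, secretName);
        return secret.Value;
    }
}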

If you want to check out the complete codebase of this solution, please check out the GitHub repository, it contains all of the code.

I need more bindings!

Well, a couple of days ago there was an amazing announcement: you can now create your own bindings! Donna Malayeri has some sample code available on GitHub on how to create a Slack binding. There is also a documentation page in the making on how to create these types of bindings.

At this time the feature is still in preview, but if you need a binding which isn’t available at the moment, be sure to check it out. I can imagine this will become quite popular once it has been released. Just imagine creating bindings to existing CRM systems, databases, your own SMTP services, etc.

Awesome stuff in the making!