In my previous post, I showed you how you can use Azure Functions in your Microsoft Teams flow to handle errors in your environment. This works great in a couple of projects I’ve worked on, but what would be even more awesome is replying to a message in Teams when an action has completed after a button is pressed.

Well, replying & modifying the original message with a status update is quite possible and I’ll show you how in this post.

How do I send a reply to Microsoft Teams?

In the image below you can see a message that was posted to my Teams channel, along with a reply.

reply on teams message

This reply has been sent from my Azure Function. If you want to do this, you need to return an `HttpResponseMessage` with status code 200 and a specific header. This header is `CARD-ACTION-STATUS`, and its value is the message you will see in the reply.

The code for this will look something like the following.

public static async Task<HttpResponseMessage> Run(
	[HttpTrigger(AuthorizationLevel.Function, "post", Route = null)] HttpRequest req,
	ILogger log)
{
	// Do your stuff...
	// `request` is assumed to be a model deserialized from the request body,
	// containing the `Timeout` value posted by the card action.

	var result = new HttpResponseMessage
	{
		Headers =
		{
			{ "CARD-ACTION-STATUS", $"Timeout of `{request.Timeout}` milliseconds has expired." },
		},
		StatusCode = HttpStatusCode.OK
	};

	return result;
}

That’s all it takes to send a single reply to your message.

So you also mentioned updating the original message?

Yeah, I did!

From your Azure Function (or any other API) it’s possible to change the original message. Updating the message might make sense in a couple of scenarios. The scenario we’re using it for is removing the button(s) from the message, thereby limiting the ‘action’ to a single use.

While our services are set up to be idempotent, we don’t want to spam the API with useless requests, so removing the button makes sense in our case.

In order to do this, you need to add another header to your response message, named `CARD-UPDATE-IN-BODY` and set the value to `true`. This tells the client (Teams) there’s an update for the card in the body of the response message.

If you want to use this, it makes sense to create a new card with data that’s useful after an action has been executed. The eventual code will look pretty similar to the following snippet.

public static async Task<HttpResponseMessage> Run(
	[HttpTrigger(AuthorizationLevel.Function, "post", Route = null)] HttpRequest req,
	ILogger log)
{
	// Do all of your stuff...
	// `request` is assumed to be a model deserialized from the request body;
	// `GetContent` builds the replacement MessageCard JSON, shown below.

	var result = new HttpResponseMessage
	{
		Content = new StringContent(GetContent(request.Timeout)),
		Headers =
		{
			{ "CARD-ACTION-STATUS", $"Timeout of `{request.Timeout}` milliseconds has expired." },
			{ "CARD-UPDATE-IN-BODY", "true" }
		},
		StatusCode = HttpStatusCode.OK
	};
	result.Content.Headers.ContentType = new MediaTypeHeaderValue("text/html");

	return result;
}

Here I’m creating a new `Content` property with the updated card. I do have to make clear this is a full replacement of the original message; therefore, you have to create a completely new MessageCard. For me, the content of the new MessageCard looks pretty much like the following piece of JSON.

{
	"@type": "MessageCard",
	"@context": "https://schema.org/extensions",
	"summary": "Testing the timeout",
	"themeColor": "0078D7",
	"sections": [
		{
			"activityImage": "https://jan-v.nl/Media/logo.png",
			"activityTitle": "Timeout test",
			"activitySubtitle": "Testing, testing...",
			"facts": [
				{
					"name": "Timeout (milliseconds):",
					"value": "1000"
				}
			],
			"text": "The response has returned with a timeout of 1000 milliseconds."
		}
	]
}
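
The `GetContent` helper referenced in the snippet above isn’t shown in this post, but a hypothetical sketch of it could look like the following; it just builds the replacement card with the actual timeout value filled in (the `int` parameter type is an assumption).

private static string GetContent(int timeout)
{
	// Builds the complete replacement MessageCard; remember this replaces
	// the original message entirely, so all fields have to be present.
	return $@"{{
	""@type"": ""MessageCard"",
	""@context"": ""https://schema.org/extensions"",
	""summary"": ""Testing the timeout"",
	""themeColor"": ""0078D7"",
	""sections"": [
		{{
			""activityImage"": ""https://jan-v.nl/Media/logo.png"",
			""activityTitle"": ""Timeout test"",
			""activitySubtitle"": ""Testing, testing..."",
			""facts"": [
				{{
					""name"": ""Timeout (milliseconds):"",
					""value"": ""{timeout}""
				}}
			],
			""text"": ""The response has returned with a timeout of {timeout} milliseconds.""
		}}
	]
}}";
}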

In Microsoft Teams this will appear like the following screenshot.

updated message and response

The message gets an `Updated` status, which makes it clear to all users this isn’t the original message.

Erroneous statements on other sites / posts

So if you stumbled upon this post while searching for this functionality in a search engine, you probably know it’s hard to find anything useful on the matter. While doing research on this I also saw a post stating the response message has to be returned within 5 seconds in order for Microsoft Teams to process it and show the reply and/or updated message in the channel.
From my experience, I can tell you this isn’t true (at the moment). I’ve tested this timeout function with delays of up to 30 seconds and the functionality still works properly, as you can see in the image below.

response with 30 seconds

Closing up

If you want to evaluate the complete code there’s a GitHub repository called ServerlessDevOps where I’m doing all of the code updates and trying out new stuff & integrations with Microsoft Teams and Azure Functions.

So, is this something you might consider using in your own projects to keep your DevOps workplace happy? I’d love to hear about it, and about anything you’re missing which you’d like to see highlighted in future posts.

There’s a relatively new feature available in Azure called Managed Service Identity (MSI). What it does is create an identity for a service instance in the Azure AD tenant, which in turn can be used to access other resources within Azure. This is a great feature, because now you don’t have to create and maintain identities for your applications yourself anymore. All of this management is handled for you when using a System Assigned Identity. There’s also an option to use User Assigned Identities, which work a bit differently.

Because I’m an Azure Function fanboy and want to store my secrets within Azure Key Vault, I was wondering if I could configure MSI via an ARM template and access the Key Vault from an Azure Function without specifying an identity myself.

As with most things, setting this up is rather easy once you know what to do.

The ARM template

The documentation states you can add an `identity` property to your Azure App Service in order to enable MSI.

"identity": {
    "type": "SystemAssigned"
}

This setting is everything you need in order to create a new service principal (identity) within the Azure Active Directory. This new identity has the exact same name as your App Service, so it should be easy to identify.
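
For context, a minimal sketch of how this property fits into the App Service resource of the ARM template; the resource name parameter and API version here are just illustrative.

{
  "type": "Microsoft.Web/sites",
  "apiVersion": "2018-02-01",
  "name": "[parameters('webSiteName')]",
  "location": "[resourceGroup().location]",
  "identity": {
    "type": "SystemAssigned"
  },
  "properties": {}
}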

If you want to check for yourself whether everything worked, you can check the AAD Audit Log. It should have a couple of lines stating a new service principal has been created.

clip_image001

You can also check out the details of what has happened by clicking on the lines.

image

Not very interesting, until something is broken or needs debugging.

An easier method to check if your service principal has been created is by checking the Enterprise Applications within your AAD tenant. If your deployment has been successful, there’s an application with the same name as your App Service.

clip_image001[5]

Step two in your ARM template

After having added the identity to the App Service, you now have access to the `tenantId` and `principalId` which belong to this identity. These properties are necessary in order to give your App Service identity access to the Azure Key Vault. If you’re familiar with Key Vault, you probably know there are some Access Policies you have to define in order to get access to specific areas in the Key Vault.

Figuring out how to retrieve these new App Service properties was the hardest part of this entire post for me. Eventually I figured out how to access them, thanks to an answer on Stack Overflow. What I ended up doing is retrieving a reference to the App Service in the `accessPolicies` block of the Key Vault resource and using the `identity.tenantId` and `identity.principalId` properties.

"accessPolicies": [
{
  "tenantId": "[reference(concat('Microsoft.Web/sites/', parameters('webSiteName')), '2018-02-01', 'Full').identity.tenantId]",
  "objectId": "[reference(concat('Microsoft.Web/sites/', parameters('webSiteName')), '2018-02-01', 'Full').identity.principalId]",
  "permissions": {
    "keys": [],
    "secrets": [
      "get"
    ],
    "certificates": [],
    "storage": []
  }
}],

Easy, right? Well, if you’re an ARM-template guru probably.

Now deploy your template again and you should be able to see your service principal being added to the Key Vault access policies.

clip_image001[7]

Because we’ve specified the identity has access to retrieve (GET) secrets, in theory we are now able to use the Key Vault.

Retrieving data from the Key Vault

This is actually the easiest part. There’s a piece of code you can copy from the documentation pages, because it just works!

// The AzureServiceTokenProvider picks up the Managed Service Identity of the App Service.
var azureServiceTokenProvider = new AzureServiceTokenProvider();
var keyvaultClient = new KeyVaultClient(new KeyVaultClient.AuthenticationCallback(azureServiceTokenProvider.KeyVaultTokenCallback));

// `myVault` is assumed to contain the name of your Key Vault instance.
var secretValue = await keyvaultClient.GetSecretAsync($"https://{myVault}.vault.azure.net/", "MyFunctionSecret");

return req.CreateResponse(HttpStatusCode.OK, $"Hello World! This is my secret value: `{secretValue.Value}`.");

The above piece of code retrieves a secret from the Key Vault and shows it in the response of the Azure Function. The result should look something like the following response I saw in Firefox.

image

The `KeyVaultTokenCallback` is exclusively for use with the Key Vault (hence the name). If you want to use MSI with other Azure services, you need to use the `GetAccessTokenAsync` method to retrieve an access token for the other Azure service.
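
A minimal sketch of what that could look like; the Azure Resource Manager resource URI is just an example here.

var azureServiceTokenProvider = new AzureServiceTokenProvider();

// Pass the resource URI of the service you want a token for,
// for example the Azure Resource Manager API.
var accessToken = await azureServiceTokenProvider.GetAccessTokenAsync("https://management.azure.com/");

// The token can now be used as a Bearer token in requests to that service.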

So, that’s all there is to it to make your Azure Function or Azure environment a bit safer with these managed identities.
If you want to check out the complete source code, it’s available on GitHub.

I highly recommend using MSI, because it’ll make your code, software and templates much safer and more secure.

As I mentioned in my earlier post, there are two options available to you out of the box for logging. You can either use the `TraceWriter` or the `ILogger`. While this is fine when you are doing some small projects or Functions, it can become a problem if you want your Azure Functions to reuse logic or modules developed earlier for different projects, a Web API for example.

In these shared class libraries you are probably leveraging the power of a ‘full-blown’ logging library. While it is possible to wire up a secondary logging instance in your Azure Function, it’s better to use something which is already available to you, like the `ILogger` or the `TraceWriter`.

I’m a big fan of the log4net logging library, so this post is about using log4net with Azure Functions. That said, you can apply the same principle to any other logging framework; only the implementation will be a bit different.

Creating an appender

One way to extend the logging capabilities of log4net is by creating your own logging appender. You are probably already using some default file appender or console appender in your projects. Because there isn’t an out-of-the-box appender for the `ILogger` yet, you have to create one yourself.

Creating an appender isn’t very hard. Make sure you have log4net added to your project and create a new class which derives from `AppenderSkeleton`. Having done so, you are notified the `Append` method should be implemented, which makes sense. The most basic implementation of an appender which uses the `ILogger` looks pretty much like the following.

internal class FunctionLoggerAppender : AppenderSkeleton
{
    private readonly ILogger logger;

    public FunctionLoggerAppender(ILogger logger)
    {
        this.logger = logger;
    }
    protected override void Append(LoggingEvent loggingEvent)
    {
        switch (loggingEvent.Level.Name)
        {
            case "DEBUG":
                this.logger.LogDebug($"{loggingEvent.LoggerName} - {loggingEvent.RenderedMessage}");
                break;
            case "INFO":
                this.logger.LogInformation($"{loggingEvent.LoggerName} - {loggingEvent.RenderedMessage}");
                break;
            case "WARN":
                this.logger.LogWarning($"{loggingEvent.LoggerName} - {loggingEvent.RenderedMessage}");
                break;
            case "ERROR":
                this.logger.LogError($"{loggingEvent.LoggerName} - {loggingEvent.RenderedMessage}");
                break;
            case "FATAL":
                this.logger.LogCritical($"{loggingEvent.LoggerName} - {loggingEvent.RenderedMessage}");
                break;
            default:
                this.logger.LogTrace($"{loggingEvent.LoggerName} - {loggingEvent.RenderedMessage}");
                break;
        }
    }
}

Easy, right?

You probably noticed the injected `ILogger` in the constructor of this appender. That’s actually the ‘hardest’ part of setting up this thing, because it means you can only add this appender in a context where the `ILogger` has been instantiated!

Using the appender

Not only am I a big fan of log4net, but Autofac is also on my shortlist of favorite libraries.
In order to use Autofac and log4net together you can use the LoggingModule from the Autofac documentation page. I’m using this module all the time in my projects, with some changes if necessary.

Azure Functions doesn’t support the default `app.config` and `web.config` files, which means you can’t use the default XML configuration block you’d use in a ‘normal’ scenario. It is possible to load a configuration file yourself and provide it to log4net, but there are easier (& cleaner) implementations.

What I’ve done is pass along the Azure Functions `ILogger` to the module I mentioned earlier and configure log4net to use this newly created appender.

public class LoggingModule : Autofac.Module
{
    public LoggingModule(ILogger logger)
    {
        log4net.Config.BasicConfigurator.Configure(new FunctionLoggerAppender(logger));
    }
// All the other (default) LoggingModule stuff
}

// And for setting up the dependency container

internal class Dependency
{
    internal static IContainer Container { get; private set; }
    public static void CreateContainer(ILogger logger)
    {
        if (Container == null)
        {
            var builder = new ContainerBuilder();
            builder.RegisterType<Do>().As<IDo>();
            builder.RegisterModule(new LoggingModule(logger));
            Container = builder.Build();
        }
    }
}

I do find it a bit dirty to pass along the `ILogger` throughout the code. If you want to use this in a production system, please make the appropriate changes to make this a bit cleaner.

You probably noticed I’m storing the Autofac container in a static variable. This is to make sure the wiring of my dependencies is only done once per instance of my Azure Function. Azure Function instances are reused quite often, and it’s a waste of resources to spin up a complete dependency container per invocation (IMO).

Once you’re done setting up your IoC and logging, you can use any piece of code which is using the log4net `ILog` implementations and still see the results in your Azure Functions tooling!
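
To illustrate, here’s a sketch of how this could come together inside a function; the bodies of the `IDo` and `Do` types are assumptions based on the registration shown earlier.

internal interface IDo
{
    void Stuff();
}

// A class in the shared library which logs through log4net's ILog.
internal class Do : IDo
{
    private static readonly ILog Log = LogManager.GetLogger(typeof(Do));

    public void Stuff()
    {
        Log.Info("Doing stuff from the shared class library.");
    }
}

public static class MyFunction
{
    [FunctionName("MyFunction")]
    public static void Run(
        [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequest req,
        ILogger log)
    {
        // Wire up the container (once per instance) with the Azure Functions ILogger.
        Dependency.CreateContainer(log);

        // The log4net messages from `Do` show up in the Azure Functions logs.
        Dependency.Container.Resolve<IDo>().Stuff();
    }
}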

If you are running locally, you might not see anything being logged in your local Azure Functions emulator. This was a known issue in previous versions of the tooling; there’s a (now closed) issue on GitHub about it. Install the latest version of the tooling (1.0.12 at this time) and you’ll be able to see your log messages from the class library.

image

Of course, you can also check the logging in the Azure Portal if you want to. There are multiple ways to find the log messages, but the easiest option is probably the Log-window inside your Function.

image


Well, that’s all there is to it!

By using an easy-to-write appender you can reuse your class libraries between multiple projects and still have all the necessary logging available to you. I know this’ll help me in some of my projects!
If you want to see all of the source code of this demo project, it’s available on my GitHub page: https://github.com/Jandev/log4netfunction

Using certificates to secure, sign and validate information has become a common practice in the past couple of years. Therefore, it makes sense to use them in combination with Azure Functions as well.

As Azure Functions are hosted on top of an Azure App Service this is quite possible, but you do have to configure something before you can start using certificates.

Adding your certificate to the Function App

Let’s just start at the beginning, in case you are wondering how to add these certificates to your Function App. Adding certificates is ‘hidden’ on the SSL blade in the Azure portal. Here you can add SSL certificates, but also regular certificates.

image

Keep in mind though, if you are going to use certificates in your own project, please just add them to Azure Key Vault in order to keep them secure. Using the Key Vault is the preferred way to work with certificates (and secrets).

For the purpose of this post I’ve just pressed the Upload Certificate-link, which will prompt you with a new blade from which you can upload a private or public certificate.

clip_image001[4]

You will be able to see the certificate’s thumbprint, name and expiration date on the SSL blade if it has been added correctly.

image

There was a time when you couldn’t use certificates if your Azure Functions were located on a Consumption plan. Luckily this issue has been resolved, which means we can now use our uploaded certificates in both a Consumption and an App Service plan.

Configure the Function App

As I wrote before, in order to use certificates in your code there is one little configuration matter which has to be addressed. By default the Function App (read: App Service) is locked down quite nicely, which results in not being able to retrieve certificates from the certificate store.

The code I’m using to retrieve a certificate from the store is shown below.

private static X509Certificate2 GetCertificateByThumbprint()
{
    var store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
    store.Open(OpenFlags.ReadOnly | OpenFlags.OpenExistingOnly);
    var certificateCollection = store.Certificates.Find(X509FindType.FindByThumbprint, CertificateThumbprint, false);

    store.Close();

    foreach (var certificate in certificateCollection)
    {
        if (certificate.Thumbprint == CertificateThumbprint)
        {
            return certificate;
        }
    }
    throw new CryptographicException("No certificate found with thumbprint: " + CertificateThumbprint);
}

Note, if you upload a certificate to your App Service, Azure will place this certificate inside the `CurrentUser/My` store.

Running this code right now will result in an empty `certificateCollection`, and therefore a `CryptographicException` is thrown. In order to get access to the certificate store we need to add an Application Setting called `WEBSITE_LOAD_CERTIFICATES`. The value of this setting can be one or more certificate thumbprints (comma-separated), or just an asterisk (*) to allow any certificate to be loaded.
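
If you prefer to configure this setting through an ARM template instead of the portal, it could look something like this sketch (using the wildcard value):

"siteConfig": {
  "appSettings": [
    {
      "name": "WEBSITE_LOAD_CERTIFICATES",
      "value": "*"
    }
  ]
}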

After having added this single application setting the above code will run just fine and return the certificate matching the thumbprint.

Using the certificate

Using certificates to sign or validate values isn’t rocket science, but strange things can occur! This was also the case when I wanted to use my own self-signed certificate in a function.

I was loading my private key from the store and used it to sign some message, like in the code below.

private static string SignData(X509Certificate2 certificate, string message)
{
    using (var csp = (RSACryptoServiceProvider)certificate.PrivateKey)
    {
        var hashAlgorithm = CryptoConfig.MapNameToOID("SHA256");
        var signature = csp.SignData(Encoding.UTF8.GetBytes(message), hashAlgorithm);
        return Convert.ToBase64String(signature);
    }
}

This code works perfectly, until you run it inside an Azure Function (or any other App Service for that matter). When running this piece of code I was confronted with the following exception:

System.Security.Cryptography.CryptographicException: Invalid algorithm specified.
    at System.Security.Cryptography.CryptographicException.ThrowCryptographicException(Int32 hr)
    at System.Security.Cryptography.Utils.SignValue(SafeKeyHandle hKey, Int32 keyNumber, Int32 calgKey, Int32 calgHash, Byte[] hash, Int32 cbHash, ObjectHandleOnStack retSignature)
    at System.Security.Cryptography.Utils.SignValue(SafeKeyHandle hKey, Int32 keyNumber, Int32 calgKey, Int32 calgHash, Byte[] hash)
    at System.Security.Cryptography.RSACryptoServiceProvider.SignHash(Byte[] rgbHash, Int32 calgHash)
    at System.Security.Cryptography.RSACryptoServiceProvider.SignData(Byte[] buffer, Object halg)

So, an `Invalid algorithm specified`? Sounds strange, as this code runs perfectly fine on my local system and any other system I ran it on.

After having done some research on the matter, it appears the underlying Crypto API is choosing the wrong Cryptographic Service Provider. From what I’ve read, the framework is picking CSP number 1 instead of CSP 24, which is necessary for SHA-256. Apparently there have been some changes on this matter in the Windows XP SP3 era, so I don’t know why this is still a problem with our (new) certificates. Then again, I’m no expert on the matter.

If you are experiencing the above problem, the best solution is to request new certificates created with the `Microsoft Enhanced RSA and AES Cryptographic Provider` (CSP 24). If you aren’t in the position to request or use these new certificates, there is a way to overcome the issue.

You can still load and use the current certificate, but you need to export all of the properties and create a new `RSACryptoServiceProvider` with the contents of this certificate. This way you can specify which CSP you want to use along with your current certificate.
The necessary code is shown in the block below.

private static string SignData(X509Certificate2 certificate, string message)
{
    using (var csp = (RSACryptoServiceProvider)certificate.PrivateKey)
    {
        var hashAlgorithm = CryptoConfig.MapNameToOID("SHA256");

        // Export the full key material and import it into a new provider, explicitly
        // using CSP 24 (Microsoft Enhanced RSA and AES Cryptographic Provider).
        var privateKeyBlob = csp.ExportCspBlob(true);
        var cp = new CspParameters(24);
        var newCsp = new RSACryptoServiceProvider(cp);
        newCsp.ImportCspBlob(privateKeyBlob);

        var signature = newCsp.SignData(Encoding.UTF8.GetBytes(message), hashAlgorithm);
        return Convert.ToBase64String(signature);
    }
}

Do keep in mind, this is something you want to use with caution. Being able to export all properties of a certificate, including the private key, isn’t something you want to expose to your code very often. So if you are in need of such a solution, please consult with your security officer(s) before implementing!

As I mentioned, the code block above works fine inside an App Service and also when running inside an Azure Function on the App Service plan. If you are running your Azure Functions in the Consumption plan, you are out of luck!
Running this code will result in the following exception message.

Microsoft.Azure.WebJobs.Host.FunctionInvocationException: Exception while executing function: Sign ---> System.Security.Cryptography.CryptographicException: Key not valid for use in specified state.
   at System.Security.Cryptography.CryptographicException.ThrowCryptographicException(Int32 hr)
   at System.Security.Cryptography.Utils.ExportCspBlob(SafeKeyHandle hKey, Int32 blobType, ObjectHandleOnStack retBlob)
   at System.Security.Cryptography.Utils.ExportCspBlobHelper(Boolean includePrivateParameters, CspParameters parameters, SafeKeyHandle safeKeyHandle)
   at Certificates.Sign.SignData(X509Certificate2 certificate, String xmlString)
   at Certificates.Sign.Run(HttpRequestMessage req, String message, TraceWriter log)
   at lambda_method(Closure , Sign , Object[] )
   at Microsoft.Azure.WebJobs.Host.Executors.MethodInvokerWithReturnValue`2.InvokeAsync(TReflected instance, Object[] arguments)
   at Microsoft.Azure.WebJobs.Host.Executors.FunctionInvoker`2.d__9.MoveNext()

My guess is this has something to do with the nature of the Consumption plan and it being a ‘real’ serverless implementation. I haven’t looked into the specifics yet, but not having access to server resources makes sense.

It has taken me quite some time to figure this out, so I hope it helps you a bit!

You might remember me writing a post on how you can set up your site with SSL while using Let’s Encrypt and Azure App Services.

Well, as it goes, the same post applies to Azure Functions. You just have to do some extra work for it, but it’s not very hard.

Simon Pedersen, the author of the Azure Let’s Encrypt site extension, has done some work in explaining the steps on his GitHub wiki page. This page is based on some old screenshots, but it still applies.

The first thing you need to do is create a new function which will be able to do the ACME challenge. This function will look something like this.

public static class LetsEncrypt
{
    [FunctionName("letsencrypt")]
    public static HttpResponseMessage Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = "letsencrypt/{code}")]
        HttpRequestMessage req, 
        string code, 
        TraceWriter log)
    {
        log.Info($"C# HTTP trigger function processed a request. {code}");

        var content = File.ReadAllText(@"D:\home\site\wwwroot\.well-known\acme-challenge\" + code);
        var resp = new HttpResponseMessage(HttpStatusCode.OK);
        resp.Content = new StringContent(content, System.Text.Encoding.UTF8, "text/plain");
        return resp;
    }
}

As you can see, this function reads the ACME challenge file from the disk of the App Service it is running on and returns its content. Because Azure Functions run in an App Service (even the functions in a Consumption plan), this is very possible. The Principal (created in the earlier post) can create these types of files, so everything will work just perfectly.

This isn’t all we have to do, because the URL of this function is not the URL the ACME challenge will use to retrieve the appropriate response. In order to actually use this site extension you need to add a new proxy to your Function App. Proxies are still in preview, but very usable! The proxy you have to create redirects the URL `/.well-known/acme-challenge/[someCode]` to your Azure Function. The end result will look something like the following proxy.

"acmechallenge": {
  "matchCondition": {
    "methods": [ "GET", "POST" ],
    "route": "/.well-known/acme-challenge/{rest}"
  },
  "backendUri": "https://%WEBSITE_HOSTNAME%/api/letsencrypt/{rest}"
}
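
If you manage your proxies in a proxies.json file, the complete file could look something like the following sketch.

{
  "$schema": "http://json.schemastore.org/proxies",
  "proxies": {
    "acmechallenge": {
      "matchCondition": {
        "methods": [ "GET", "POST" ],
        "route": "/.well-known/acme-challenge/{rest}"
      },
      "backendUri": "https://%WEBSITE_HOSTNAME%/api/letsencrypt/{rest}"
    }
  }
}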

Publish your new function and proxy to the Function App and you are good to go!

If you haven’t done this before, be sure to follow all of the steps mentioned in the earlier post! Providing the appropriate application settings should be easy now and if you just follow each step of the wizard you’ll see a green bar when the certificate is successfully requested and installed!

image_thumb5

This makes my minifier service even more awesome, because now I can finally use HTTPS without getting messages that the certificate isn’t valid.