Lately, I’ve been busy learning more about building serverless solutions. Because my main interest lies in the Microsoft Azure stack, I obviously had to check out the Azure Functions offering.

Azure Functions enables you to create serverless solutions which are completely event-based. As it lives within the Azure space, you can easily integrate with all of the other Azure services, like Service Bus, Cosmos DB and Storage, but also with external services like SendGrid and GitHub!

All of these integrations are nice, but seeing Azure Functions in action is still easiest with regular HTTP triggers. You can just navigate with a browser (or Postman) to a URL and your function will be activated immediately. I guess most people will create this kind of function in order to learn to work with them; at least, that’s what I did.

Creating your Azure Functions App

In order to create Azure Functions, you first have to create a so-called Function App in the Azure Portal. Creating such an app is quite easy; the only thing you have to think about is which type of hosting plan you want to use. At this time there are two options: the Consumption plan and the App Service plan.


For your regular “Hello World” function it doesn’t really matter, but there are a few important differences between the two.

If you want to experience the full power of a serverless compute solution, you want the Consumption plan. This plan creates instances to host your Azure Functions on demand, depending on the number of incoming events. Even when there is a super-high load on your system, this plan scales automatically.
The other main advantage is that you only pay for your functions when they actually do something.
As you might remember, these two advantages are, in my opinion, the main reasons for people to move to serverless offerings.

However, using the App Service plan also has some advantages. The main one is that you utilize the full power of your virtual machines and don’t run into unexpected (high?) costs. With the App Service plan, your Function Apps run on the App Service virtual machines you might already have deployed in your subscription (Basic, Standard and Premium). This means your Azure Functions can share the same (underlying) virtual machines as your websites. Using this plan might save you some money in the end, because you are already paying for (unused) compute which you can now utilize. Running these functions won’t cost anything extra, aside from the extra bandwidth of course.
Another advantage is that your functions can run continuously, or nearly continuously. The App Service plan is useful in scenarios where you need a lot of long-running compute. Keep in mind that you need to enable the Always On setting in your App Service if you want your functions to run continuously.

There are some other small differences between the two plans, but the ones mentioned above are the most important, to me at least.

Do remember to enable Application Insights for your Function App. It’s already an awesome monitoring platform, but the integration with Azure Functions makes it even more amazing! I can’t think of a valid reason not to enable it, because it is also quite cheap.

After having created your Function App you can navigate to it in the portal. A Function App acts much like a container for one or more Azure Functions, so you can place multiple Azure Functions inside a single Function App. For monitoring purposes it can be useful to place the functions belonging to a single functional use case together in one Function App.
You can of course put all of your functions inside one app. It doesn’t really matter at the moment; it’s a matter of taste.

Your first Azure Function

If you are just starting with Azure Functions and serverless computing, I’d advise checking out the portal and creating a new function from there. Of course, this isn’t a recommended practice if you want to get serious about developing a serverless solution, but this way you can take baby steps into this technology space. A recommended practice would be using an ARM template or a CLI script.

From inside the Function App you have the possibility to create new functions.


Currently, the primary languages of choice are C#, JavaScript and F#. This is just to get you started, because more languages are supported already (Node.js, Python, PHP) and more are coming. There’s even an initiative to support R scripts in Azure Functions!

For now I’ll go with the C# function, because that’s my ‘native’ programming language.

After this function is created you are presented with an in-browser code editor from which you can start coding and running your function.


This function is placed in a file called run.csx. The csx extension belongs to C# scripts (check out scriptcs.net), much like ps1 belongs to PowerShell scripts.
It should now be clear that this Azure Function is ‘just’ a script file with an entry point. This entry point is much like the Main-method in the Program.cs file of a console application.

Because we have created an HTTP endpoint, you should return a valid HTTP response, like you can see in the script. If you want to test your function, the portal has you covered with the Test button. Even though testing in the portal is cool, it’s even cooler to try it out in your browser. If everything is set up correctly you will be able to navigate to https://[yourFunctionApp].azurewebsites.net/api/HttpTriggerCSharp1?code=[someCode] and receive the content, which should be `Please pass a name on the query string or in the request body`. You can copy the proper URL from the Get function URL link in the portal.
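For reference, the generated run.csx looks roughly like the snippet below (a sketch from memory of the C# HTTP trigger template; the exact code in the portal may differ slightly). It tries to read a name from the query string or the request body and returns the response mentioned above.

using System.Net;

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
{
    log.Info("C# HTTP trigger function processed a request.");

    // Parse the 'name' query string parameter.
    string name = req.GetQueryNameValuePairs()
        .FirstOrDefault(q => string.Compare(q.Key, "name", true) == 0)
        .Value;

    // Fall back to the request body.
    dynamic data = await req.Content.ReadAsAsync<object>();
    name = name ?? data?.name;

    return name == null
        ? req.CreateResponse(HttpStatusCode.BadRequest, "Please pass a name on the query string or in the request body")
        : req.CreateResponse(HttpStatusCode.OK, "Hello " + name);
}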

Management & settings

Azure Functions are really, really, really short-lived App Services. They are deployed in the same Azure App Service ecosystem, so you can leverage the same management capabilities which are available to your regular App Services.

On the Platform features tab you can navigate to the most useful management features of your Function App.


I really like this page; it’s much better and clearer compared to the configuration ‘blade’ of regular Azure services. Hopefully this design will be implemented for the other services as well!

Keep in mind to configure CORS properly if you want to use your HTTP function from within a JavaScript application!

All other features presented here are of course also important. I especially like the direct link to the Kudu site, from which you can do even more management!

Another feature, which is in preview at the moment, is deployment slots. Yes, deployment slots for your Azure Functions! This works exactly the same as you are used to with regular App Services. I’ve configured one of my Function Apps to use deployment slots. With deployment slots enabled you can, for example, deploy the `develop` branch to a development slot and the `master` branch to the production slot of the Function App.


If for some reason you want to disable a specific function, just navigate to the Functions leaf in the treeview and you can disable (and re-enable) each function individually.


Creating real functions

Creating functions from within the Azure Portal isn’t a very good idea in real life, especially since you don’t have any version control, quality gates, or continuous integration & deployment in place. That’s why it’s a good idea not to use the browser as your primary coding environment. For a professional development experience you have multiple options at hand.

The easiest option is to use Visual Studio 2017. You need version 15.3 (or higher) of Visual Studio, which is still in preview at this moment. When you are done installing this version you can install the Azure Functions Tools for Visual Studio 2017 from the Visual Studio Marketplace.
Doing so enables a new project template called Azure Functions. You can add multiple Azure Functions to this project. Currently, there is already an extensive list of events you can subscribe to. I’m sure the list will grow in the future, but for now it will suffice for a great deal of solutions.

After having chosen an event you can change some settings, like the Access Rights. If you want your HttpTrigger to be accessible to anonymous users from the web, you need to set it to Anonymous instead of Function. No worries if you forget to do this; it’s something you can also set from inside your code.

When comparing the created functions (the one from the portal and from Visual Studio) you will notice a couple of differences.

using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Azure.WebJobs.Host;

namespace FunctionApp1
{
    public static class Function1
    {
        [FunctionName("HttpTriggerCSharp")]
        public static async Task<HttpResponseMessage> Run([HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)]HttpRequestMessage req, TraceWriter log)
        {
            //do stuff

            // Return something so the sample compiles.
            return req.CreateResponse(HttpStatusCode.OK);
        }
    }
}

First of all, your function is now wrapped inside a namespace and a (static) class. This makes organizing and integrating the code with your current codebase much easier.

Another thing you might notice is the extra attributes added to the function.

The Run-method now has a FunctionName attribute, which tells the Function App what the name of the function will be. Do note that when multiple functions share the same FunctionName, one will override the other on deployment; the function.json file only creates an entry point for the last function it finds with a specific name.
Also, the first parameter, the HttpRequestMessage, now has an HttpTrigger attribute stating how the function can/should be triggered. In this case the function can be triggered by an HTTP GET or POST, and callers have to supply a function key (AuthorizationLevel.Function). Because of these attributes it is easier to change the behavior of the functions later on. You aren’t dependent on choices made in some wizard.
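For example, making the endpoint publicly accessible (the Anonymous setting mentioned earlier) is just a matter of changing the attribute. A small sketch, based on the generated function above:

[FunctionName("HttpTriggerCSharp")]
public static async Task<HttpResponseMessage> Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)]HttpRequestMessage req,
    TraceWriter log)
{
    // No function key is required anymore; anyone who knows the URL can call this endpoint.
    return req.CreateResponse(HttpStatusCode.OK, "Hello from an anonymous endpoint!");
}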

I already mentioned the function.json file briefly. This file is used to populate the Function App with your functions. If you’ve explored the portal a bit, you might have seen this file already after creating the initial function.


This file contains all configuration for the provided functions within the Function App. The function.json file from the first function script contains the following information.

{
  "disabled": false,
  "bindings": [
    {
      "authLevel": "function",
      "name": "req",
      "type": "httpTrigger",
      "direction": "in"
    },
    {
      "name": "$return",
      "type": "http",
      "direction": "out"
    }
  ]
}

Now compare it with the one generated by the function in Visual Studio.

{
  "bindings": [
    {
      "type": "httpTrigger",
      "methods": [
        "get",
        "post"
      ],
      "authLevel": "function",
      "direction": "in",
      "name": "req"
    },
    {
      "name": "$return",
      "type": "http",
      "direction": "out"
    }
  ],
  "disabled": false,
  "scriptFile": "..\\FunctionApp1.dll",
  "entryPoint": "FunctionApp1.Function1.Run"
}

As you can see, the file which is generated in Visual Studio contains information about the entry point for the function.

Note: You won’t see this file in your project; it’s generated into the assembly output folder when building the project.

In the above example I’ve used Visual Studio to create the functions. It is also possible to use any other IDE for this, but you’ll have to take into consideration that you’ll need the Azure Functions CLI tooling in those environments.

Debugging

When using a proper IDE, like Visual Studio, you are used to debugging your software from within the IDE.

This isn’t any different with Azure Functions. When pressing the F5 button a command-line application will start with your functions loaded inside.


This application starts a small web server which emulates the Function App. When working with HTTP triggers you can easily navigate to the provided endpoints. Of course, you can also work with any other event, as long as you are able to trigger it.

At BUILD 2017 it was also announced that you can do live production debugging of your Azure Functions from within Visual Studio. In other words: connecting to the production environment and setting breakpoints in the code.
The crowd went wild, because this is quite cool. However, do you really want to do this? It’s nice that there is a possibility to leverage such a feature, but in most cases I would frown quite a bit if someone suggested doing this.
I do have to note that live-debugging Azure Functions production code isn’t as dangerous as doing so in a ‘normal’ web application. Normally when you do this, the process is paused and no one is able to continue using the site. This is one of many reasons why you never want to do this. With a serverless model this isn’t the case. Each function invocation spins up a new instance/thread, so if you hit a breakpoint in one of the instances, all other instances can still continue their work.
Still, take caution when considering doing this!

Deployment

We are all professional developers, so we also want to leverage the latest and greatest continuous integration & deployment tools. When working in the Microsoft & Azure stack it’s quite common to use either TFS or Azure Release Management for building your assemblies.

Because your Azure Functions project still produces an assembly, which should be deployed along with your function.json file, it is also still possible to use the normal CI/CD solutions for your serverless solution.

If you don’t feel like setting up such a build environment you can still use a different continuous deployment feature which the App Services model brings to us.

On the Platform features tab click on the Deployment options setting.


This will navigate you to the blade from where you can set up your continuous deployment.


Using this feature you are able to deploy every commit of a specific branch to the specified application slot.

Setting up this feature is quite easy if you are using a common version control system which is hosted in the cloud, like VSTS, GitHub, Bitbucket or even Dropbox and OneDrive.

I’ve set up one of my applications with VSTS integration. Every time I push some changes to the `master` branch, the changes are being built and deployed to the specified slot.


When clicking on a specific deployment, you can even see the details of this deployment and redeploy if needed.


All in all quite a cool feature if you want to use continuous deployment, but don’t want to set up TFS or Azure Release Management. The underlying technology still uses Azure Release Management, but you don’t have to worry about it anymore.

If you are thinking of using Azure Functions in your professional environment I highly recommend using a proper CI/CD tool! The continuous deployment option is quite alright (and better than publishing your app from within Visual Studio), but one of the major downsides is that you can’t ‘promote’ a build to a different slot.
You can only push changes to a branch, and those will get built and deployed. This isn’t something you want in your company, but that’s a completely different blog post, unrelated to serverless or Azure Functions.

Hope this helps you out a bit starting with Azure Functions.

For years we (a lot of people I know, myself included) have been using the Unit of Work and Repository patterns combined with each other. This makes quite a lot of sense as, in most cases, they both have something to do with your database calls.

When searching for both of these patterns you’ll often be directed to a popular article on the Microsoft documentation site. The sample code over there shows in great detail how you can implement both of these patterns for accessing and working with your database. I kind of like this post as it goes to great lengths to describe both the unit of work and repository patterns and the advantages of using them. I see a lot of projects/companies having implemented the pattern combo as described in the Microsoft article. I can’t really blame them, as it’s one of the top hits when you search for it in any search engine.

There is a downside to this sample though: it violates the Open/Closed principle, which states that “software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification”. Whenever you need to add a new repository to your database context, you also need to add this repository to your unit of work, thereby violating the open/closed principle.

It also violates the Single Responsibility Principle, which states that “every module or class should have responsibility over a single part of the functionality provided by the software, and that responsibility should be entirely encapsulated by the class. All its services should be narrowly aligned with that responsibility”, or in short: “A class should have only one reason to change.” The sample implementation violates this principle because it handles multiple responsibilities. The unit of work’s purpose should be to encapsulate and commit or roll back transactions of atomic operations. However, it also creates and manages the various repository objects, which gives it more than one responsibility.
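To make this concrete, below is a condensed sketch of the kind of unit of work the article describes (class and entity names are made up for illustration). Every new repository forces a change to this class, and the class both manages repositories and persists changes.

public class UnitOfWork : IDisposable
{
    private readonly SchoolContext context = new SchoolContext();
    private GenericRepository<Student> studentRepository;
    private GenericRepository<Course> courseRepository;

    // Adding another repository means editing this class: the open/closed violation.
    public GenericRepository<Student> StudentRepository =>
        studentRepository ?? (studentRepository = new GenericRepository<Student>(context));

    public GenericRepository<Course> CourseRepository =>
        courseRepository ?? (courseRepository = new GenericRepository<Course>(context));

    // It also commits the changes, so it has more than one responsibility.
    public void Save() => context.SaveChanges();

    public void Dispose() => context.Dispose();
}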

Implementing the unit of work and repository patterns can be done in multiple ways. Derek Greer goes into this at great length in an old post of his. As always, there are several ways to improve the design. You might even want to keep the design from the Microsoft example, because ‘it just works’. For the sake of cleaner code I’ll describe one way, which I personally like very much, to improve the software design: by adding a decorator to the project, the functional code will become much cleaner.

The first thing you have to consider is implementing some form of CQRS in your software design. This will make your life much easier when splitting the command, unit of work and repository functionality. You can perfectly well implement the described solution without CQRS, but why would you want to?

I’ll just assume you have a command handler in your application. The interface will probably look similar to the following piece of code.

public interface IIncomingFileHandler<in TCommand>
	where TCommand : IncomingFileCommand
{
	void Handle(TCommand command);
}
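The IncomingFileCommand itself isn’t very interesting for this post; think of it as a simple data holder carrying the values used by the handler below. A hypothetical sketch (the property types are assumptions):

public class IncomingFileCommand
{
    // Property types are assumptions for this sketch.
    public int CustomerId { get; set; }
    public string Request { get; set; }
}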

The actual command handler can be implemented like the following piece of code.

public class IncomingFileHandler<TCommand> : IIncomingFileHandler<TCommand>
    where TCommand : IncomingFileCommand
{
    private readonly IRepository<Customer> customerRepository;
    private readonly IRepository<File> fileRepository;
    
    public IncomingFileHandler(IRepository<Customer> customerRepository, IRepository<File> fileRepository)
    {
        this.customerRepository = customerRepository;
        this.fileRepository = fileRepository;
    }

    public void Handle(TCommand command)
    {
        //Implement your logic over here.
        var customer = customerRepository.Get(command.CustomerId);
        customer.LatestUpdate = command.Request;
        customerRepository.Update(customer);
        var file = CreateNewIncomingFileDto(command);
        fileRepository.Add(file);

        return;
    }
}

All of the necessary repositories are injected here so we can implement the logic for this functional area. The implementation doesn’t make much sense, but keep in mind it’s just an example. This piece of code wants to write to the database multiple times. We could call SaveChanges() inside the Update- and Add-methods, but that’s a waste of database requests and you’d sacrifice transactional consistency.

At this point nothing is actually written back to the database, because SaveChanges isn’t called anywhere and we aren’t committing (or rolling back) any transaction either. The functionality for persisting the data will be implemented in a transaction handler, which will be added as a decorator. The transaction handler will begin a new transaction, invoke the Handle-method of the actual IIncomingFileHandler<TCommand> implementation (in our case the IncomingFileHandler<TCommand>), save the changes and commit the transaction (or roll it back).

A simple version of this transaction decorator is shown in the following code block.

public class IncomingFileHandlerTransactionDecorator<TCommand> : IIncomingFileHandler<TCommand> 
    where TCommand : IncomingFileCommand
{
    private readonly IIncomingFileHandler<TCommand> decorated;
    private readonly IDbContext context;

    public IncomingFileHandlerTransactionDecorator(IIncomingFileHandler<TCommand> decorated, IDbContext context)
    {
        this.decorated = decorated;
        this.context = context;
    }

    public void Handle(TCommand command)
    {
        using (var transaction = context.BeginTransaction())
        {
            try
            {
                decorated.Handle(command);

                context.SaveChanges();
                context.Commit(transaction);
            }
            catch
            {
                context.Rollback(transaction);
                throw;
            }
        }
    }
}

This piece of code is only responsible for creating a transaction and persisting the changes made to the database.
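For completeness, the IDbContext abstraction used by the decorator could look something like the interface below. This is a sketch inferred from the calls above; the transaction type only needs to be disposable, and System.Data.IDbTransaction is used here as an example.

using System.Data;

public interface IDbContext
{
    // Starts a new database transaction; the result is disposable so it fits in a using block.
    IDbTransaction BeginTransaction();

    void SaveChanges();

    void Commit(IDbTransaction transaction);

    void Rollback(IDbTransaction transaction);
}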

We are still using the repository pattern and making use of the unit of work, but each piece of code now has its own responsibility, which makes the code much cleaner. You also aren’t violating the open/closed principle, as you can still add dozens of repositories without affecting anything else in your codebase.

The setup for this separation is a bit more complex compared to just hacking everything together in one big file/class. Luckily Autofac has some awesome built-in functionality for adding decorators. The following two lines are all you need to make the magic happen.

builder.RegisterGeneric(typeof(IncomingFileHandler<>)).Named("commandHandler", typeof(IIncomingFileHandler<>));
builder.RegisterGenericDecorator(typeof(IncomingFileHandlerTransactionDecorator<>), typeof(IIncomingFileHandler<>), fromKey: "commandHandler");

This tells Autofac to use the IncomingFileHandlerTransactionDecorator as a decorator for the IncomingFileHandler.
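To see the effect, here is a hypothetical usage example (assuming the container also has registrations for IDbContext and the repositories). Resolving the handler interface now gives you the decorator, which wraps the actual handler.

var container = builder.Build();

using (var scope = container.BeginLifetimeScope())
{
    var handler = scope.Resolve<IIncomingFileHandler<IncomingFileCommand>>();

    // The transaction decorator runs first: it begins a transaction, delegates to
    // IncomingFileHandler<IncomingFileCommand>, saves the changes and commits.
    handler.Handle(new IncomingFileCommand { CustomerId = 42, Request = "..." });
}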

After having implemented this setup you are good to go. So, whenever you think of implementing the unit of work and repository patterns in your project, keep the suggestions in this post in mind.

On a recent project I had to implement the decorator pattern to add some functionality to the existing code flow.

Not a big problem of course. However, on this project we were using Autofac as our dependency injection framework, so I had to check how to implement this pattern using the framework’s built-in capabilities. One of the reasons I always resort to Autofac is the awesome and comprehensive documentation. It’s very complete and most of the time easy to understand. The advanced topics also include a chapter dedicated to the Adapter and Decorator patterns, which was very useful for implementing the decorator pattern in this project.

I wanted to use the decorator pattern to add logic that determines whether a command should be handled, and to persist database transactions for my commands and queries. You can also use it for things like security, additional logging, enriching the original command, etc.

As the documentation already states, you’ll have to register your original command handler as a named service. The Autofac extensions for registering a decorator will use this named instance to add the decorators onto. One thing to remember when you need to add several decorators to your command handler: you’ll have to register each decorator as a named service as well, except for the last one!

The command handlers we were using accept a generic argument, so we also had to use the open generic versions of the registration methods for the implementations and decorators.

The implementation of the actual command handler looks very much like the following code block.

public class ProcessedItemHandler<TCommand> : IProcessedMessageHandler<TCommand> 
		where TCommand : ProcessedMessageCommand
{
	public ProcessedItemHandler(
		IBackendSystemFormatter<TCommand> formatter, 
		IQueueItemWriter<TCommand> writer, 
		IRepository<ProcessQueue> processQueueRepository)
	{
	}
	
	public void Handle(TCommand command)
	{
		/* Implementation logic */
	}		
}

It implements the IProcessedMessageHandler<TCommand> interface and contains the logic to execute the command.

The decorator has to implement the same interface, and one of its injected dependencies is that same interface. This tells Autofac to inject an IProcessedMessageHandler<TCommand> which is ‘linked’ in the registration of our application.

public class ProcessedMessageTransactionDecorator<TCommand> : IProcessedMessageHandler<TCommand>
		where TCommand : ProcessedMessageCommand
{
	private readonly IProcessedMessageHandler<TCommand> decorated;
	private readonly ITransactionHandler transactionHandler;

	public ProcessedMessageTransactionDecorator(
		IProcessedMessageHandler<TCommand> decorated,
		ITransactionHandler transactionHandler)
	{
		this.decorated = decorated;
		this.transactionHandler = transactionHandler;
	}

	public void Handle(TCommand command)
	{
		/* Decorator logic */

		decorated.Handle(command);

		/* Decorator logic */
	}
}

As you can see, you will be able to do all kinds of stuff in the Handle-method before or after invoking the decorated object.

The registration in our application looks very much like the following code block.

var storeProcessedMessageCommandHandlers = GetAllStoreProcessedMessageCommandHandlerImplementationsFromAssemblies();

foreach (var commandHandler in storeProcessedMessageCommandHandlers)
{
	builder.RegisterGeneric(commandHandler).Named("storeProcessedMessageHandler", typeof(IProcessedMessageHandler<>));
}

builder.RegisterGenericDecorator(typeof(ProcessedMessageTransactionDecorator<>), typeof(IProcessedMessageHandler<>),
										fromKey: "storeProcessedMessageHandler");

First we need to collect all implementations of IProcessedMessageHandler<TCommand> and register them with the Autofac container. As you can see, all these implementations are registered as a named service with the key storeProcessedMessageHandler. If you only have one implementation of the command handler, you can of course just register that one implementation.

After having registered all of the command handlers, the decorator(s) can be registered. The helper method RegisterGenericDecorator helps with this. This method also works with open generics, and the registration looks very similar to registering a ‘normal’ class and interface. The main difference is the addition of the fromKey argument, which determines which named service the decorator should be applied to.

If you want to hook up multiple decorators you can also add the toKey argument to the RegisterGenericDecorator method. By adding the toKey argument, the decorator itself is also registered as a named service, and you can hook up another decorator on top of it by using the name from the toKey as the fromKey of the new decorator. This might sound a bit abstract, so let me just write up a small example.

builder.RegisterGeneric(typeof(IncomingHandler<>)).Named("commandHandler", typeof(ICommandHandler<>));
builder.RegisterGenericDecorator(typeof(TransactionRequestHandlerDecorator<>), typeof(ICommandHandler<>), fromKey: "commandHandler", toKey: "transactionHandler");
builder.RegisterGenericDecorator(typeof(ShouldHandleCommandHandlerDecorator<>), typeof(ICommandHandler<>), fromKey: "transactionHandler");

Makes more sense right?

Just remember not to add a toKey argument to the last decorator in your chain. Otherwise you will not be able to inject the interface, because every registration is named (ending up in the IIndex<T> collection) and there is no default registration left to act as the entry point. Ask me how I know…


Hope this helps you in future projects. Knowing about this functionality surely has helped me to keep the code clean.