So, one of my previous customers reached out to me a couple of weeks ago with a question about how to use dependency injection in their AutoMapper profiles. On this project, the profiles were loaded dynamically into the application using MEF, and Autofac was used for dependency injection.

The way you would normally load all of these profiles is by using the `AddProfiles` method when initializing AutoMapper. The code would look similar to the following excerpt.

private static void RegisterAutomapperDefault(IEnumerable<Assembly> assemblies)
{
    AutoMapper.Mapper.Initialize(cfg =>
    {
        cfg.AddProfiles(assemblies);
    });
}

This works fine in most cases and is, to my knowledge, the recommended approach.

When you start thinking about using dependency injection (constructor injection in this case), you might want to rethink your mapping profile first. If you need dependencies when mapping the properties of one object onto the properties of another, it probably means there’s too much logic going on in the mapping.

Of course, if you do need this, one thing you might want to consider is using custom type converters or custom value resolvers. You can use dependency injection (constructor injection) in these converters and resolvers by adding a single line to the `Initialize` method of AutoMapper.

private static void RegisterAutomapperDefault(IContainer container, IEnumerable<Assembly> assemblies)
{
    AutoMapper.Mapper.Initialize(cfg =>
    {
        cfg.ConstructServicesUsing(container.Resolve);

        cfg.AddProfiles(assemblies);
    });
}
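
For reference, a custom value resolver with a constructor dependency could look like the sketch below. This is a minimal, hypothetical example: the `Model`, `ViewModel` and `IConvertor` types are the ones used in the profile shown later in this post, and the exact wiring method differs a bit between AutoMapper versions.

using AutoMapper;

// Hypothetical value resolver which depends on a service registered in Autofac.
public class NameResolver : IValueResolver<Model, ViewModel, string>
{
    private readonly IConvertor convertor;

    // Resolved through cfg.ConstructServicesUsing(container.Resolve).
    public NameResolver(IConvertor convertor)
    {
        this.convertor = convertor;
    }

    public string Resolve(Model source, ViewModel destination, string destMember, ResolutionContext context)
    {
        return convertor.Execute(source.SomeText);
    }
}

In the profile you would wire it up with something like `.ForMember(dest => dest.Name, opt => opt.MapFrom<NameResolver>())` (or `opt.ResolveUsing<NameResolver>()` on older AutoMapper versions), and AutoMapper will ask Autofac for the `NameResolver` instance.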

Now, if you still feel like you need to do constructor injection inside your mapping `Profile` classes, that’s also quite possible, but please think it through before doing so.

In order to get this working, I first created a new `Profile` class which injects an `IConvertor`, like below.

public class MyProfile : Profile
{
    public MyProfile(IConvertor convertor)
    {
        CreateMap<Model, ViewModel>()
            .ForMember(dest => dest.Id, opt => opt.MapFrom(src => src.Identifier))
            .ForMember(dest => dest.Name, opt => opt.MapFrom(src => convertor.Execute(src.SomeText)))
            ;
    }
}

What you need to do now is register all of the `Profile` implementations with your IoC framework, Autofac in this case. To do this, you have to do some reflection magic. The code below retrieves all `Profile` implementations from the assemblies whose names start with “Some”.

public static IContainer Autofac()
{
    var containerBuilder = new ContainerBuilder();

    // Register the dependencies...
    containerBuilder.RegisterType<Convertor>().As<IConvertor>();


    var loadedProfiles = RetrieveProfiles();
    containerBuilder.RegisterTypes(loadedProfiles.ToArray());

    var container = containerBuilder.Build();

    RegisterAutoMapper(container, loadedProfiles);

    return container;
}

/// <summary>
/// Scan all referenced assemblies to retrieve all `Profile` types.
/// </summary>
/// <returns>A collection of <see cref="AutoMapper.Profile"/> types.</returns>
private static List<Type> RetrieveProfiles()
{
    var assemblyNames = Assembly.GetExecutingAssembly().GetReferencedAssemblies()
        .Where(a => a.Name.StartsWith("Some"));
    var assemblies = assemblyNames.Select(an => Assembly.Load(an));
    var loadedProfiles = ExtractProfiles(assemblies);
    return loadedProfiles;
}

private static List<Type> ExtractProfiles(IEnumerable<Assembly> assemblies)
{
    var profiles = new List<Type>();
    foreach (var assembly in assemblies)
    {
        var assemblyProfiles = assembly.ExportedTypes.Where(type => type.IsSubclassOf(typeof(Profile)));
        profiles.AddRange(assemblyProfiles);
    }
    return profiles;
}

All of this code is just to register your mapping profiles with Autofac, so Autofac can resolve them when initializing AutoMapper. To add your mapping profiles to AutoMapper you need to use the overload of the `AddProfile` method which takes a `Profile` instance instead of a type.

/// <summary>
/// Over here we iterate over all <see cref="Profile"/> types and resolve them via the <see cref="IContainer"/>.
/// This way the `AddProfile` method will receive an instance of the found <see cref="Profile"/> type, which means
/// all dependencies will be resolved via the <see cref="IContainer"/>.
/// </summary>
private static void RegisterAutoMapper(IContainer container, IEnumerable<Type> loadedProfiles)
{
    AutoMapper.Mapper.Initialize(cfg =>
    {
        cfg.ConstructServicesUsing(container.Resolve);
                
        foreach (var profile in loadedProfiles)
        {
            var resolvedProfile = container.Resolve(profile) as Profile;
            cfg.AddProfile(resolvedProfile);
        }
                
    });
}

As you can see, I’m resolving all of the loaded profiles via Autofac and adding each resolved instance to AutoMapper.

This takes quite a bit of effort, but resolving your profiles like this gives you the possibility to use any kind of dependency injection inside your AutoMapper code.
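
To illustrate, a minimal usage sketch could look like the snippet below, assuming the `Autofac()` bootstrap method shown above is called at startup and using the property names from the profile (the actual values are made up).

// Build the container; this also initializes AutoMapper with the resolved profiles.
var container = Autofac();

// The mapping for Name now runs through the injected IConvertor implementation.
var viewModel = AutoMapper.Mapper.Map<ViewModel>(new Model { Identifier = 42, SomeText = "hello" });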


Just remember, as I’ve written before: “Just because you can, doesn’t mean you should!”
Still, I wanted to show you how this can be done, as it’s kind of cool. If you want to see the complete solution, check out my GitHub repository for this project.

You’ve probably heard a lot of talk around a new buzzword `serverless`. It’s a pretty confusing name for an awesome technology/technique.

The main reason the word `serverless` isn’t a very good one is because it implies there aren’t any servers when using this technique. I found a fairly funny CommitStrip about this topic.

Source: http://www.commitstrip.com/en/2017/04/26/servers-there-are-no-servers-here

But what does the term mean then?
Well, it means you don’t have to worry about servers anymore. You just upload your software to the cloud provider of your choice and it runs on-demand/per-request. As Mark Russinovich said in an interview with InfoWorld: “I don’t have to worry about the servers. The platform gives me the resources as I need them.” Of course, the hardware, operating system, web server, firewall, etc. are all still there, but as a developer or operations person you don’t really have to care about them.

Isn’t this the same as PaaS from a couple of years ago?

The answer is: Yes and no!

Yes, there are a lot of similarities and the serverless offerings from each cloud provider are based upon their current PaaS offerings. Therefore, you could call it an evolution of PaaS.

No, because the ideology is a bit different.
Adrian Cockcroft (AWS) describes the difference rather well in a tweet:
”If your PaaS can efficiently start instances in 20ms that run for half a second, then call it serverless.”
https://twitter.com/adrianco/status/736553530689998848

These numbers aren’t set in stone and there are a couple of other ‘rules’ for a serverless solution, but it’s a good elevator pitch.

Does it scale?

Something which differs quite a lot is the scaling aspect of your solution. With the regular PaaS offerings you still have to think a bit about scaling. Most of the time you choose a server plan/pricing tier, whether you want auto-scaling enabled and how many servers should be spun up when certain criteria are met. That’s still quite a bit of administration.

For serverless solutions you don’t have to think about scaling. It scales out of the box! Each request is handled individually by a server and, if necessary, each request can be handled by a different server. Because you don’t have any knowledge of the underlying server architecture, you won’t get billed for it either. The only thing you’ll see on the bill is the number of executions of your software, and you get billed accordingly.
So if your cloud provider thinks it’s a good idea to deploy your software on 1000 expensive servers, no problem! You only get billed per execution cycle of your software. This is quite different from the original PaaS offerings as you get billed per server/plan with those solutions.

I’ve heard about containers, is this the same?

No, actually not.

Containers are something completely different. They are somewhat of a combination of IaaS and PaaS offerings. Of course, it is possible to create serverless solutions with containers, but it isn’t the main use case.

So, what is it exactly?

Saying ‘you don’t have to care about servers’ and ‘start within 20ms and run half a second’ doesn’t really explain what a serverless solution actually is.

A well-known synonym for serverless is Functions as a Service (FaaS). This term says it all. Your functions are deployed and run as a service, just like your websites, webservices, webhooks, webjobs, etc.

So instead of creating a fully operational application which has multiple services, some web logic, backend connections and maybe even an API layer, you will create dozens of super small services which are each able to do their own little thing.
If you’ve paid attention, you might see a lot of similarities with a microservices architecture. That’s because FaaS or serverless solutions are also known as nanoservices: even more fine-grained services compared to regular microservices.

When running these functions you, as a developer, only have to care about the logic of the function. Not the management of the servers or other application logic which might be running somewhere. This is one of the biggest advantages of serverless solutions, your code will be much easier to understand and test!

Most of the time these functions are triggered by an external event. This could be an HTTP request, but also a message being placed on a queue or a file being created on a storage location.
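
To make this a bit more concrete, an HTTP-triggered and a queue-triggered function in Azure Functions (C#, in-process model) could look like the sketch below. The function and queue names are made up for illustration.

using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class SampleFunctions
{
    // Triggered by an HTTP request.
    [FunctionName("UploadImage")]
    public static IActionResult Upload(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest request,
        ILogger log)
    {
        log.LogInformation("Received an upload request.");
        return new OkResult();
    }

    // Triggered by a message appearing on a storage queue.
    [FunctionName("ProcessUploadedImage")]
    public static void Process(
        [QueueTrigger("uploaded-images")] string message,
        ILogger log)
    {
        log.LogInformation("Processing queue message: {message}", message);
    }
}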

Sample

I found a nice image which represents a very simple design for uploading an image to, and retrieving it from, some backend system.

Over here you see a user making an API call (HTTP) to a function to upload an image. This function stores the uploaded image to some storage location, and another function is triggered (event) by this action to store a reference to the file in a storage table. When the user requests the image again via an API call (HTTP), a function checks the storage table and returns the requested image.
As you can see, each function is only responsible for doing one single, simple thing. This makes testing and maintaining each function much easier compared to a big service which is responsible for doing multiple things.
Of course, the functionality described above can also be created in a single small microservice. A microservice is also easier to test and maintain compared to a big monolithic solution. Whether you should choose a nanoservices or a microservices solution is up to you, but there are some additional benefits to a nanoservices/serverless solution, which I’ll cover a bit later. You can also use both nano- and microservices in your overall solution, so you can use the best technology for each use case. This will make your software very loosely coupled, performant and robust (if done well).

Some benefits

Each function should be stateless and immutable. Every time a function is invoked, a new instance is spun up and destroyed afterwards. This makes testing your function very easy, as every invocation should have the same result if the inputs are the same.
If you do need to save state, it should be stored somewhere else, like an external storage device or database.

Functions are highly scalable by design. Each function is very short-lived, for two reasons.
First, each cloud provider has some limit on how long a function can run. This limit will change a bit over time, but it’s good advice to do your work as fast as possible.
The second reason is that your functions are billed per invocation, but also on duration. For Microsoft Azure this is €0.000014 per GB-s (gigabyte-second). This doesn’t sound like much, and it isn’t, but it will add up in the end. So if you are able to change your function from running 1 second to 200 ms, you will immediately save 80% on next month’s bill (this is a bit simplified, of course).
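
As a rough back-of-the-envelope sketch (only the GB-s price comes from the text above; the other numbers are made up, and per-execution charges and free grants are ignored):

using System;

// Duration-based cost component for a month of invocations.
const double pricePerGbSecond = 0.000014; // EUR per GB-s
const double memoryGb = 0.5;              // average memory used per execution
const long executionsPerMonth = 10_000_000;

double CostFor(double durationSeconds) =>
    executionsPerMonth * durationSeconds * memoryGb * pricePerGbSecond;

Console.WriteLine(CostFor(1.0)); // ~70 EUR at 1 second per execution
Console.WriteLine(CostFor(0.2)); // ~14 EUR at 200 ms, i.e. 80% less duration cost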

So, in short, serverless is about very small, stateless and immutable services which are capable of doing one thing within a very small timeframe.

Are there any other benefits?

We already covered that these services should be small and do a single thing, which makes them easy to understand and maintain. We also covered that you will be billed per execution, which can be a benefit because you don’t have to pay for your services if no one is using them.

Another benefit might be the deployment of your software. Each function can be deployed individually and placed as a new version next to the existing functions. When your function is deployed (copy-pasted) to the cloud provider, you can just route all requests to the new function. If, for some reason, the new version doesn’t work properly, you just route the traffic back to the old function and everything should be working again.
From a cost perspective you don’t even have to remove the old versions of the functions/services, because they aren’t invoked anymore. And as you’ll remember, if a function isn’t invoked, it doesn’t cost any money! Of course, from an operational perspective you might want to remove old versions of a service after some period to keep a proper overview.

Something else you might benefit from is better performance of your code. Because you will be billed per second of usage, your company might benefit from optimizing code wherever possible. Every optimization will directly impact the bill at the end of the month.

What are the downsides?

As with every technology or technique, there are also a couple of (important) downsides for using a serverless solution. Not all of these downsides are exclusive to serverless solutions, but loosely coupled, distributed, solutions in general. I’ll cover a couple of them below.

Code duplication

Just as with a microservices solution, all functions should be able to do one thing and do it without relying on any other functions. This will result in a lot of services having similar logic, like communicating to a database, service bus, file system, etc.

This doesn’t have to be a bad thing overall, but we have learned to avoid code duplication where possible. Luckily there are a couple of ways to avoid it, but it is something to keep in mind. The serverless offerings of the major cloud providers are still maturing, so sharing code will get easier and better over time.

Multiple server calls

You can create serverless endpoints which react to HTTP methods like GET, POST, etc. Because of this, it’s possible to make your client application (possibly a single-page application) fully dependent on these small functions. This will result in an enormous number of HTTP calls to the backend, because each function should only do one single thing.

Eventually, this will result in a slow client application as it has to wait on all those calls to finish.

If you want to use a serverless architecture, it might be a good idea to use an API gateway solution. This gateway will act as a proxy between a client call and the multiple function calls in the backend. This way the client only has to do a single call and doesn’t have to bother itself with the internals of the backend system. Of course, this will add some additional complexity to the overall architecture. However, it will be necessary in order to create performant solutions.

State

As said, functions in a serverless solution are short-lived. This means they can’t hold state for longer than their execution time. Every time a function finishes, it is destroyed along with all its in-memory state. If a new call is made, a new object/function is created.

So if you need to manage state between multiple invocations, you will have to use some external state management. This will, by definition, be much slower compared to in-memory state. Therefore, using a serverless design will not be beneficial to all kinds of solutions.

Testing

Your functions only do a single thing, so it should be easy to create unit tests for them. Creating integration and regression tests is quite a different story. You are fully dependent on events being triggered (webhooks, HTTP calls, service bus messages, etc.). In order to test whether your overall solution is working properly, you will have to jump through quite a few hoops.

This isn’t a problem unique to a serverless design; it also applies to a microservices design or any other distributed system. As these solution designs have gained a lot of popularity in the past couple of years, and will keep gaining it in the years to come, I’m sure tooling will become available to make this kind of testing easier. For now, I’m not aware of any good tooling to facilitate this kind of integration or regression testing, so you will have to come up with something yourself.

Monitoring

Having a lot of moving parts takes its toll on your monitoring tools. Monitoring a couple of virtual machines, services or websites is quite easy these days. The tooling has matured a lot over time and most system administrators are quite proficient with it.

Having hundreds (or maybe even thousands) of small services and functions in your solution architecture will probably result in making some changes to your monitoring software. It will become quite cumbersome to manage all of these services in the same way. You also don’t care much for memory, CPU and I/O anymore and will probably be more interested in the overall throughput of messages and events in your system.

The monitoring tooling which is currently available hasn’t fully matured yet to facilitate your serverless (or microservices) design. This can be a major problem for your company and I think it is one of the most important things to think about when designing your system.

Development tooling

Creating serverless solutions (small functions) which do just a single thing isn’t very difficult. Most of the time this will be just one (or a couple of) classes/modules which you can develop like ‘regular’ software and deploy as a small function.

This sounds quite easy, but it would be nice to have some proper integration in your development environment. All major cloud providers are working on this and it has matured quite a bit in the past couple of months/years. Still, there is a long way to go.

One of the most important features which has been worked on a lot is the continuous integration and deployment of your functions. The major ALM software- and cloud providers have worked hard to get this working in order to deploy your serverless solution in a professional way.

Where to go next?

As I already mentioned, all major cloud providers have some kind of serverless offering.

Microsoft has Azure Functions and Azure Logic Apps, Amazon has their Lambda solution and Google has a Cloud Functions offering.

Each of these offerings provides similar ways of creating functions and a serverless design, so I’d advise checking out the documentation of the cloud provider of your choosing. I’ll be checking out Azure Functions and Logic Apps.

I’m quite a fan of using micro- and nanoservices in my solution designs and try to incorporate them whenever it makes sense.
The regular IaaS and PaaS solutions won’t disappear any time soon. They still have their place in your solution design. But as I’ve written before, use the right tool for the right job.

There are dozens of blog posts, articles and books talking about microservices. Some of them talk about the design, other on how to implement and even others talk about why and when to use them.

This post will be a combination of them all. I won’t claim to be the all-time-expert on the matter, but I have read quite a bit on the subject, attended some talks and have had the honor to design (and implement) such a solution a couple of years ago.

First and foremost, it’s important to understand a microservices design is just another standard architectural design pattern. This pattern can help you to create a high-performance, scalable software solution, but it can also bankrupt your company!

The short explanation

If you don’t have much time to read, or don’t really want to, here’s the elevator pitch for microservices:

It’s a set of small (independent) services, each of them able to carry out their own (functional/business) responsibility without having direct dependencies to other services.

The long explanation

Of course, such a short explanation is a bit…short, to say the least.

The general overview

The microservices pattern is a combination of several other, well-known, patterns which we probably have been using for a couple of years now. Take for example the Service-oriented Architecture, Event-driven architecture, Database per Service, API Gateway / Backend for Frontend and many more. All of them combined can form an architecture which has multiple small services, all operating individually.

There’s an awesome picture on microservices.io (copied over here for completeness) drawing lines to dozens of patterns which are related to a microservices architecture.

It’s recommended to have some experience with a couple of these patterns before trying to implement a full microservices architecture. Even if you and your team have lots of experience, developing and maintaining this kind of solution design can still prove to be complicated over time. Managing all these small services and keeping an overview requires quite some (software) management skills.

So, why would I want all this complexity?

An excellent question!

Keeping track of all those small, independent services, which may or may not have some loosely coupled dependency on each other, can become quite a burden. Why would you ever want to encumber yourself with such a design?

Well, let’s take a step back and look at how we (I’m guessing about 98% of the software industry) have been developing software solutions in the past couple of decades. Most of the solutions are very big software systems with multiple layers, tiers, thousands of classes (I’m coming from an OO-background over here), specific boundaries, tightly coupled services, etc.

Many of us have been developing and maintaining these kinds of solutions, and we all know it can be very hard for new team members to learn these systems, but also to add, change or migrate functionality.

This is called a monolithic architecture. A nice diagram is shown below.

All of the business logic, data access, I/O operations and other stuff is bundled into one big system. Most of the time the Presentation/View layer is also put inside this big block.

I’m not saying designing and developing a monolithic software solution architecture is a bad thing. Each architectural pattern has its own use. Nowadays, most developers understand this type of architecture style as it has been quite a common way to design your software in the past couple of years/decades.

The major downside of this monolithic software architecture is changing different parts of the system. As said, most of the time all of the components are tightly coupled to each other and it becomes harder and harder to change functionality.

Let us not forget the deployment and testing strategies of these massive software solutions. I’ve been at a couple of companies where deployment and testing were in such good shape that they could deploy their monolithic software multiple times per day without breaking a sweat. However, most of the companies I’ve consulted at aren’t near this level of professionalism, and deploying a new version stresses out the teams.

If you are experiencing the issues mentioned above, or want to steer your software development in a more ‘agile’ way, changing your solution design to a microservices architecture might be a way to solve this. The idea behind this architecture is that you can create dozens of small services which each focus on one specific thing. This means these services will probably be easy for the development team(s) to understand, but also easy to test and to deploy to the different environments.
Keep in mind though, migrating from a monolithic architecture to a microservices architecture isn’t without risk and requires a lot of discipline and experience from your development & operations teams. Most of the time I’d advise against such a migration!

When it’s easy to develop, test and deploy your solution, it should also be easier to change certain areas of it. This is the biggest win for a microservices architecture, in my opinion.

Some people say another big win is ‘you can write each service in the language/framework of your choice’. While this is true, I don’t think this should be the goal of your organization. If you develop each service in a different programming language or framework, you lose one of the biggest advantages: the ease of learning to develop and maintain the services.
Imagine you have 10 different services written in .NET Framework 4.6, .NET Core 1.0, .NET Core 2.0, Java, Ruby, Python, Node.js, PHP and PowerShell. This means all of the developers need to learn these languages and the different software designs of the services. The maintainability of such a solution will be worse compared to a large monolithic software solution written in a single language/framework. This is one of the reasons I’m a big believer in sticking to one or two languages per software solution. There might be a better fit for a specific problem, but if the overall solution becomes even more complicated to understand, I’ll pass and prefer the technique we are already using in other areas of the system.

So to conclude on the ‘why do you want all this complexity?’-question, it’s because this type of software design can make your software easier to develop, extend, test and deploy. All of these factors can cause a faster release-lifecycle, which means more business revenue. More business revenue means more profit!

So how do I get there?

Are you convinced creating a microservices solution might be a good idea for your next/current software solution?

If so, there are a couple of things to consider.

First and foremost, it’s important that your development team has a basic understanding of about 60% of the patterns mentioned in the earlier diagram. Preferably more!

Now there’s another thing to consider: your software & network infrastructure.
While it’s completely possible to develop and deploy a microservices solution on-premises, I’d advise you to go to a major cloud provider, like Microsoft Azure, Amazon AWS or the Google Cloud Platform. I’m a Microsoft fanboy, so I’ll recommend Microsoft Azure of course!
The main reasons to choose a major cloud provider are the 24/7 support and the fact that they offer quite a lot of services which can help you create a proper microservices solution.
As I mentioned, it’s possible to deploy all of these services on-premises, but this will add additional load to your system administrators, as they will have to learn and maintain all of the services and techniques you want to use in your software. It will also add some complexity to your deployment strategy, and maintaining the software will probably be harder as well.

Now that you are ready to create your microservices solution in the cloud you can start designing your first service. This first service will work like a charm and you will probably be able to deploy it within a couple of hours/days, maybe 2 weeks if you are unlucky! The development of the second service will probably go quite well also. Both of those services can operate individually without having any dependencies between them.
As the project moves along, you’ll notice it would be useful if the services could share information with each other. This thought is where a lot of microservice solution architectures go to die. The moment one of the team members thinks it’s a good idea to call a different service from the service they are working on, you are in trouble. Think about it for a moment…


If you choose to let the services work directly with each other, wouldn’t this be the same as a distributed monolithic architecture? In short: a monolith with latency?
Yes, it is!

That’s why it’s so important to understand the rules (or guidelines) of a microservices architecture. I’ll repeat the most important one again as it is fundamental to the cause: each service is able to carry out its own (functional/business) responsibility without having direct dependencies on other services.

So what does this mean?

This means each (micro)service is able to carry out whatever it needs to do, without needing any (external) service. By ‘service’ I mean some other API or software service which lives in a different context.

This doesn’t mean you can’t use a database, cache, queue, etc. in your service. These are all systems which are necessary in modern development environments and help you create a stable, robust, scalable and performant service.

How would you design such a solution?

A couple of years ago, I was lucky enough to be hired as a solution architect on a project which was a perfect candidate for a microservices architecture. I won’t go into details over here, because it’s a confidential project.
At that time the hype of this architecture style was at its highest and there weren’t a lot of best-practices publicly available yet. However, because the project was such a perfect fit, I decided to go this route.

To this day I’m still quite happy with the overall design, even though there are some details I would have done differently in hindsight, which I can’t share a lot of info on.

Creating one or two services which can operate individually isn’t much of a problem. In this case it makes sense that each service has some kind of service layer (REST API), business layer and probably even a data layer: your typical three-layer software architecture.
Because you want to access data quickly, a caching mechanism might be introduced (Redis Cache), and you need a ‘normal’ database of course. Your first option might be a SQL Server database, which we all know and love. However, when you think about it, why would you choose a relational database, like SQL Server, if all you are interested in is retrieving and storing data for your own little service?
It might be a better idea to store the data in some kind of NoSQL database (DocumentDB/CosmosDB). This way you can do a single get or insert statement whenever you are working with data, as sketched below. No more need for denormalizing & normalizing data, joins, strange foreign keys, etc. By moving to a NoSQL database you can improve performance quite a bit.
Note: I still love SQL Server, but it might not be the most logical database to store data for your microservice. Think about it and decide for yourself!
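
As a minimal sketch of what such a single get or insert could look like with the Azure Cosmos DB SDK (the database, container and document type are made up for illustration):

using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public class MessageStore
{
    private readonly Container container;

    public MessageStore(CosmosClient client)
    {
        // Hypothetical database and container names.
        container = client.GetContainer("messages-db", "messages");
    }

    // Single upsert, no joins or normalization involved.
    public Task SaveAsync(MessageDocument document) =>
        container.UpsertItemAsync(document, new PartitionKey(document.CustomerId));

    // Single point read by id and partition key.
    public Task<ItemResponse<MessageDocument>> GetAsync(string id, string customerId) =>
        container.ReadItemAsync<MessageDocument>(id, new PartitionKey(customerId));
}

public class MessageDocument
{
    public string id { get; set; }          // Cosmos DB expects a lowercase 'id' property
    public string CustomerId { get; set; }
    public string Body { get; set; }
}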

What I’ve described so far isn’t very special and is probably how you have been designing your software already, with the exception of the database, because it’s highly likely you have one (maybe two) databases in your solution design which are shared between all application logic.

The next part might be a bit new to you or your team: using some kind of service bus to communicate between services.
I already mentioned each service should be able to work independently of other services, but it might still be useful to share data between them. We don’t want any direct dependencies between these services, so this communication is abstracted via the service bus. If something happens in a service, that service sends a message to the service bus. If there happens to be a service which is interested in this message, it can pick it up and work with this information.


Example
Imagine you have a ProductManagement service and a Billing service. If a product manager changes the name of a product from DocumentDB to CosmosDB in the ProductManagement service, you also want to reflect this name change in the Billing service in order to create proper bills. This means the ProductManagement service will send an Update message to the service bus. The Billing service is able to pick up this message and will update its own data store with the necessary changes.
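
As a rough sketch of how this could look with the Azure Service Bus SDK (the topic, subscription and message shape are made up; the post itself doesn’t prescribe a specific SDK):

using System.Text.Json;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

// ProductManagement side: publish an event describing what changed.
public class ProductNameChangedPublisher
{
    private readonly ServiceBusSender sender;

    public ProductNameChangedPublisher(ServiceBusClient client) =>
        sender = client.CreateSender("product-events"); // hypothetical topic name

    public Task PublishAsync(int productId, string newName) =>
        sender.SendMessageAsync(new ServiceBusMessage(JsonSerializer.Serialize(new { productId, newName }))
        {
            Subject = "ProductNameChanged"
        });
}

// Billing side: subscribe to the topic and update its own data store.
public class BillingProductNameSubscriber
{
    public async Task StartAsync(ServiceBusClient client)
    {
        var processor = client.CreateProcessor("product-events", "billing"); // hypothetical subscription name
        processor.ProcessMessageAsync += async args =>
        {
            // Update the Billing service's own repository here.
            await args.CompleteMessageAsync(args.Message);
        };
        processor.ProcessErrorAsync += _ => Task.CompletedTask;
        await processor.StartProcessingAsync();
    }
}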


One thing you have to make sure of is that messages can be picked up by multiple subscribers (logging service, audit service, other business services, etc.). The service bus should support some kind of pub/sub mechanism, for example Azure Service Bus Topics.

A simplified diagram of the system I was designing a couple of years ago looks very similar to the following picture.


Over here I’ve highlighted the Messages Service API in green to make it stand out. This service handles everything related to messaging (sending, retrieving, marking as read, archiving, etc.).

Whenever an action is triggered within this service, it updates both its own persistent storage (the repository) and its cache. When both of these repositories are updated, a message is sent to the service bus. This message contains some details about the executed action, which might be useful to other services. Not all information sent will be useful to all interested services, but it’s not the responsibility of the Messages Service to keep track of this. The Messages Service only takes care of itself and passes along some information about the executed action in case some other service is interested in it.

In the diagram above you can see a Messages Service backend, which keeps track of all mutations of Messages and stores them in some kind of reporting repository. There are also a couple of other worker services and an API service, which are interested in Message mutations. Each of these services can subscribe to specific or all Message actions posted on the service bus.
None of the services depend on each other, but this way they can still share data. If Interested worker system #1 sends some kind of message to the service bus later on, any other service can act upon that message as well.
Most service bus systems allow you to filter on certain properties of the messages sent, which means your service will only act on messages it’s interested in.
Also note that the Messages Service API subscribes itself to any Message mutations posted on the service bus. This means it can update its own repository and cache, if necessary, when messages are retrieved from the service bus. This might be useful if any other service posts something which the Messages Service API has to return to connected clients.
You will understand why this might be necessary when implementing a Backend for Frontend system. I’d try to avoid this in the beginning, but when your solution grows and you want more specific APIs for different clients, this will become more important.

Wow, while typing I noticed how confusing this Message scenario might be, because I’m writing about Messages (with a capital M) as actual object types and messages (lowercase) as any type of object posted as a ‘message’ on the service bus. Excuse me if this strikes you as confusing.

What about latency?

This is one of the most common heard complaints when designing or speaking about a microservices architecture. Because there are so many services, each doing their own thing, people will think there will be a lot of latency and retrieving data will be quite slow.

If this is the case in your solution, you’ve probably made a few mistakes in the overall design. When following the design from the example above, you’ll see each service has a direct connection to a repository (and probably even a cache) nearby. This repository is designed specifically for this service, which means it should be blazing fast when querying or updating the data store. Overall, I think this design should be faster compared to your traditional (monolith?) design, because the data is near the service tier so it doesn’t have to ‘travel’ very far.

Remember, your service should be able to run independently of other services. So, effectively, there will be 0 latency in your system.

If your service is dependent on information from other services, it will retrieve this information via the service bus. Retrieving information via the service bus (or any other messaging system) introduces latency (most of the time a couple of milliseconds). We can’t do much about this, aside from keeping it in mind when designing the system. None of the services will have (true) real-time access to updated data from a different service, but all of the services will be eventually consistent. It’s up to you to determine if a latency of multiple milliseconds (maybe even seconds in the worst-case scenario) is something your services can afford for updating their repository data.

Managing the messages and system health

This is the hard part when designing, developing and maintaining a microservices architecture.

When you have hundreds of small services, each doing their own little thing, it’s important to have some proper management tooling in place. Most major cloud providers offer such services, like Azure Monitor and Azure Application Insights.

In order to properly use these services you have to implement some logging mechanism in your services. Logging is very important when developing this kind of solution architecture. No one wants to check each service individually in order to find out whether it has any errors.
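
As a minimal sketch of what that could look like with the Application Insights SDK (the event and property names are made up):

using System;
using System.Collections.Generic;
using Microsoft.ApplicationInsights;

public class MessageProcessor
{
    private readonly TelemetryClient telemetry;

    public MessageProcessor(TelemetryClient telemetry) => this.telemetry = telemetry;

    public void Process(string messageId)
    {
        try
        {
            // ... the actual processing logic of this service ...
            telemetry.TrackEvent("MessageProcessed",
                new Dictionary<string, string> { ["messageId"] = messageId });
        }
        catch (Exception ex)
        {
            telemetry.TrackException(ex);
            throw;
        }
    }
}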

With the logging in place you will be able to create awesome dashboards telling you if something is wrong or not. Even the latency and load of your services can be measured. Still, this requires some practice and you shouldn’t think too lightly on this matter. Keeping a good overview of your overall system performance and health is very, very important!

These operational insights are not something you should start considering near the end of the project. Logging and other insightful instrumentation should be implemented right from the beginning if you want useful and meaningful information.

To conclude

Implementing a microservices solution can be quite a pain in the beginning, but once you get the hang of it you will notice it will become easier. I’m a big fan of this kind of software design, if implemented properly!

There are dozens of other factors you have to consider which I haven’t mentioned in this post. If you like, I can dedicate some other posts on these matters, just drop me a comment and I’ll see what I can write up.

For years we (a lot of people I know and myself included) have been using the Unit of Work and Repository pattern combined with each other. This makes quite a lot of sense as, in most cases, they both have something to do with your database calls.

When searching for both of these patterns you’ll often be directed to a popular article on the Microsoft documentation site. The sample code over there is a very detailed implementation of how you can apply both of these patterns for accessing and working with your database. I kind of like this post, as it goes to great lengths to describe both the unit of work and repository pattern and the advantages of using them. I see a lot of projects/companies that have implemented the pattern combo as described in the Microsoft article. I can’t really blame them, as it’s one of the top hits when you search for it in any search engine.

There is a downside to this sample though. It violates the Open/Closed principle which states “software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification”. Whenever you need to add a new repository to your database context, you also need to add this repository to your unit of work, therefore violating the open/closed principle.

It also violates the Single Responsibility Principle, which states “every module or class should have responsibility over a single part of the functionality provided by the software, and that responsibility should be entirely encapsulated by the class. All its services should be narrowly aligned with that responsibility”, or in short “a class should have only one reason to change”. The reason the sample implementation violates this principle is that it handles multiple responsibilities. The unit of work’s purpose should be to encapsulate and commit or roll back transactions of atomic operations. However, it also creates and manages the several repository objects, therefore having multiple responsibilities.

Implementing the unit of work and repository pattern can be done in multiple ways. Derek Greer goes into this at great length in an old post of his. As always, there are several ways to improve the design. You might even want to keep the design from the Microsoft example, because ‘it just works’. For the sake of cleaner code I’ll describe one of the ways, which I personally like very much, to improve the software design. By adding a decorator to the project the functional code will become much cleaner.

The first thing you have to consider is implementing some form of CQRS in your software design. This will make your life much easier when splitting the command, unit of work and repository functionality. You can perfectly well implement the described solution without CQRS, but why would you want to?

I’ll just assume you have a command handler in your application. The interface will probably look similar to the following piece of code.

public interface IIncomingFileHandler<in TCommand>
	where TCommand : IncomingFileCommand
{
	void Handle(TCommand command);
}
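
For context, the `IncomingFileCommand` used as the generic constraint could be a simple DTO along these lines (a hypothetical sketch, with the properties inferred from the handler shown below):

public class IncomingFileCommand
{
	// Properties inferred from the handler implementation; adjust to your domain.
	public int CustomerId { get; set; }
	public string Request { get; set; }
}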

The actual command handler can be implemented like the following piece of code.

public class IncomingFileHandler<TCommand> : IIncomingFileHandler<TCommand>
    where TCommand : IncomingFileCommand
{
    private readonly IRepository<Customer> customerRepository;
    private readonly IRepository<File> fileRepository;
    
    public IncomingFileHandler(IRepository<Customer> customerRepository, IRepository<File> fileRepository)
    {
        this.customerRepository = customerRepository;
        this.fileRepository = fileRepository;
    }

    public void Handle(TCommand command)
    {
        //Implement your logic over here.
        var customer = customerRepository.Get(command.CustomerId);
        customer.LatestUpdate = command.Request;
        customerRepository.Update(customer);
        var file = CreateNewIncomingFileDto(command);
        fileRepository.Add(file);

        return;
    }
}

All of the necessary repositories are injected over here so we can implement the logic for this functional area. The implementation doesn’t make much sense, but keep in mind it’s just an example. This piece of code wants to write to the database multiple times. We could call the SaveChanges() method inside the Update and Add methods, but that’s a waste of database requests and you’d sacrifice transactional consistency.

At this time nothing is actually written back to the database, because SaveChanges isn’t called anywhere and we aren’t committing (or rolling back) any transaction either. The functionality for persisting the data will be implemented in a transaction handler, which is added as a decorator. The transaction handler begins a new transaction, invokes the Handle method of the actual IIncomingFileHandler<TCommand> implementation (in our case the IncomingFileHandler<TCommand>), saves the changes and commits the transaction (or rolls it back on failure).

A simple version of this transaction decorator is shown in the following code block.

public class IncomingFileHandlerTransactionDecorator<TCommand> : IIncomingFileHandler<TCommand> 
    where TCommand : IncomingFileCommand
{
    private readonly IIncomingFileHandler<TCommand> decorated;
    private readonly IDbContext context;

    public IncomingFileHandlerTransactionDecorator(IIncomingFileHandler<TCommand> decorated, IDbContext context)
    {
        this.decorated = decorated;
        this.context = context;
    }

    public void Handle(TCommand command)
    {
        using (var transaction = context.BeginTransaction())
        {
            try
            {
                decorated.Handle(command);

                context.SaveChanges();
                context.Commit(transaction);
            }
            catch (Exception ex)
            {
                context.Rollback(transaction);
                throw;
            }
        }
    }
}

This piece of code is only responsible for creating a transaction and persisting the changes made into the database.

We are still using the repository pattern and making use of the unit of work, but each piece of code now has its own responsibility, which makes the code much cleaner. You also aren’t violating the open/closed principle, as you can still add dozens of repositories without affecting anything else in your codebase.

The setup for this separation is a bit more complex compared to just hacking everything together in one big file/class. Luckily Autofac has some awesome built-in functionality to add decorators. The following two lines are all you need to make the magic happen.

builder.RegisterGeneric(typeof(IncomingFileHandler<>)).Named("commandHandler", typeof(IIncomingFileHandler<>));
builder.RegisterGenericDecorator(typeof(IncomingFileHandlerTransactionDecorator<>), typeof(IIncomingFileHandler<>), fromKey: "commandHandler");

This tells Autofac to use the IncomingFileHandlerTransactionDecorator as a decorator for the IncomingFileHandler.
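
When you then resolve the handler interface from the container, you get the decorated chain back. A minimal usage sketch (assuming `using Autofac;`, a built container and the hypothetical command DTO from earlier):

// Resolving the interface returns the transaction decorator, which wraps the actual IncomingFileHandler<TCommand>.
var handler = container.Resolve<IIncomingFileHandler<IncomingFileCommand>>();
handler.Handle(new IncomingFileCommand { CustomerId = 1, Request = "..." });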

After having implemented the setup you are good to go. So, whenever you think of implementing the unit-of-work and repository pattern in your project, keep in mind the suggestions in this post.

On a recent project I had to implement the decorator pattern to add some functionality to the existing code flow.

Not a big problem of course. However, on this project we were using Autofac as our dependency injection framework, so I had to check how to implement this pattern using the framework’s built-in capabilities. One of the reasons I always resort to Autofac is the awesome and comprehensive documentation. It’s very complete and most of the time easy to understand. The advanced topics also have a chapter dedicated to the Adapter and Decorator patterns, which was very useful for implementing the decorator pattern in this project.

I wanted to use the decorator pattern to add some logic to determine if a command should be handled and for persisting database transactions of my commands and queries. You can also use it for things like security, additional logging, enriching the original command, etc.

As the documentation already states, you’ll have to register your original command handler as a named service. The Autofac extensions for registering a decorator use this named registration to add the decorators onto. One thing to remember: when you need to add several decorators to your command handler, you’ll have to register each decorator as a named service as well, except for the last one!

The command handlers we were using accept a generic type argument. Therefore, we also had to use the open generic versions of the registration methods for the implementations and decorators.

The implementation of the actual command handler looks very much like the following code block.

public class ProcessedItemHandler<TCommand> : IProcessedMessageHandler<TCommand> 
		where TCommand : ProcessedMessageCommand
{
	public ProcessedItemHandler(
		IBackendSystemFormatter<TCommand> formatter, 
		IQueueItemWriter<TCommand> writer, 
		IRepository<ProcessQueue> processQueueRepository)
	{
	}
	
	public void Handle(TCommand command)
	{
		/* Implementation logic */
	}		
}

It implements the IProcessedMessageHandler<TCommand> interface and contains the logic to execute the command.

The decorator has to implement the same interface, and one of its injected dependencies is that same interface. This tells Autofac to inject the IProcessedMessageHandler<TCommand> which is ‘linked’ to it in the registration of our application.

public class ProcessedMessageTransactionDecorator<TCommand> : IProcessedMessageHandler<TCommand>
		where TCommand : ProcessedMessageCommand
{
	private readonly IProcessedMessageHandler<TCommand> decorated;
	private readonly ITransactionHandler transactionHandler;

	public ProcessedMessageTransactionDecorator(
		IProcessedMessageHandler<TCommand> decorated,
		ITransactionHandler transactionHandler)
	{
		this.decorated = decorated;
		this.transactionHandler = transactionHandler;
	}

	public void Handle(TCommand command)
	{
		/* Decorator logic */

		decorated.Handle(command);

		/* Decorator logic */
	}
}

As you can see, you will be able to do all kinds of stuff in the Handle-method before or after invoking the decorated object.

The registration in our application looks very much like the following code block.

var storeProcessedMessageCommandHandlers = GetAllStoreProcessedMessageCommandHandlerImplementationsFromAssemblies();

foreach (var commandHandler in storeProcessedMessageCommandHandlers)
{
	builder.RegisterGeneric(commandHandler).Named("storeProcessedMessageHandler", typeof(IProcessedMessageHandler<>));
}

builder.RegisterGenericDecorator(typeof(ProcessedMessageTransactionDecorator<>), typeof(IProcessedMessageHandler<>),
										fromKey: "storeProcessedMessageHandler");

First we need to collect all implementations of IProcessedMessageHandler<TCommand> and register them within the Autofac container. As you can see, all these implementations are registered as a named service with the key storeProcessedMessageHandler. If you only have one implementation of the command handler, you can of course just register that one.

After having registered all of the command handlers, the decorator(s) can be registered. The helper method RegisterGenericDecorator helps with this. This method also works with open generics, and the registration looks very similar to registering a ‘normal’ class and interface. The main difference is the addition of the fromKey argument. This argument is used to determine which named service the decorator should be added to.

If you want to hook up multiple decorators, you can also add the toKey argument to your RegisterGenericDecorator method. By adding the toKey argument, the decorator is itself added as a named service to Autofac, and you can hook up another decorator to this one by using the name defined in its toKey as the fromKey of the next decorator. This might be a bit abstract, so let me just write up a small example.

builder.RegisterGeneric(typeof(IncomingHandler<>)).Named("commandHandler", typeof(ICommandHandler<>));
builder.RegisterGenericDecorator(typeof(TransactionRequestHandlerDecorator<>), typeof(ICommandHandler<>), fromKey: "commandHandler", toKey: "transactionHandler");
builder.RegisterGenericDecorator(typeof(ShouldHandleCommandHandlerDecorator<>), typeof(ICommandHandler<>), fromKey: "transactionHandler");

Makes more sense, right?

Just remember not to add a toKey argument to the last decorator of your flow. Otherwise you will not be able to inject the interface, because every registration is added to the IIndex<T> collection and there isn’t a default entry point. Ask me how I know…

Hope this helps you in future projects. Knowing about this functionality surely has helped me to keep the code clean.