What’s up with this serverless talk?

You’ve probably heard a lot of talk around a new buzzword: serverless. It’s a pretty confusing name for an awesome technology/technique.

The main reason the word serverless isn’t a very good one is that it implies there aren’t any servers involved when using this technique. I found a fairly funny CommitStrip about this topic.

https://www.commitstrip.com/en/2017/04/26/servers-there-are-no-servers-here

But what does the term mean then?

Well, it means you don’t have to worry about servers anymore. You just upload your software to the cloud provider of your choice and it runs on-demand/per-request. As Mark Russinovich said in an interview with InfoWorld: _“I don’t have to worry about the servers. The platform gives me the resources as I need them.”_ Of course, the hardware, operating system, web server, firewall, etc. are all still there, but as a developer or operations person you don’t really have to care about them.

Isn’t this the same as PaaS from a couple of years ago?

The answer is: Yes and no!

Yes, there are a lot of similarities and the serverless offerings from each cloud provider are based upon their current PaaS offerings. Therefore, you could call it an evolution of PaaS.

No, because the ideology is a bit different.

Adrian Cockcroft (AWS) describes the difference rather well in a tweet:

“If your PaaS can efficiently start instances in 20ms that run for half a second, then call it serverless.”

https://twitter.com/adrianco/status/736553530689998848

These numbers aren’t set in stone and there are a couple of other ‘rules’ for a serverless solution, but it’s a good elevator pitch.

Does it scale?

Something which differs quite a lot is the scaling aspect of your solution. With the regular PaaS offerings you still have to think a bit about scaling. Most of the time you choose a server/pricing plan, whether you want auto-scaling enabled, and how many servers should be spun up when certain criteria are met. That’s still quite a bit of administration.

For serverless solutions you don’t have to think about scaling. It scales out-of-the-box! Each request is handled individually by a server and, if necessary, each request can be handled by a different server. Because you don’t have any knowledge of the underlying server architecture, you won’t get billed for it either. The only thing you’ll see on the bill is the number of executions of your software, and you get billed accordingly.

So if your cloud provider thinks it’s a good idea to deploy your software on 1000 expensive servers, no problem! You only get billed per execution cycle of your software. This is quite different from the original PaaS offerings as you get billed per server/plan with those solutions.

I’ve heard about containers, is this the same?

No, actually not.

Containers are something completely different. They are somewhat of a combination of IaaS and PaaS offerings. Of course, it is possible to create serverless solutions with containers, but it isn’t the main use case.

So, what is it exactly?

Saying ‘you don’t have to care about servers’ and ‘start within 20ms and run for half a second’ doesn’t really explain what a serverless solution actually is.

A well-known synonym for serverless is Functions as a Service (FaaS). This term says it all. Your functions are deployed and run as a service, just like your websites, webservices, webhooks, webjobs, etc.

So instead of creating a fully operational application which has multiple services, some web logic, backend connections and maybe even an API layer, you will create dozens of super small services which are each able to do their own little thing.

If you’ve paid attention, you might see a lot of similarities with a microservices architecture. That’s because FaaS or serverless solutions are also known as nanoservices: even more fine-grained services compared to regular microservices.

When running these functions you, as a developer, only have to care about the logic of the function, not the management of the servers or other application logic which might be running somewhere. This is one of the biggest advantages of serverless solutions: your code will be much easier to understand and test!

Most of the time these functions are triggered by an external event. This could be an HTTP request, but also a message being placed on a queue or a file being created on a storage location.
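
To make this a bit more tangible, here’s a minimal sketch of an HTTP-triggered function. I’m using the Azure Functions Python programming model here; the names are just illustrative and the function.json binding that wires up the trigger is omitted.

```python
# Minimal HTTP-triggered function, sketched against the Azure Functions
# Python programming model. Assumes the azure-functions package and an
# HTTP trigger binding defined in function.json (not shown here).
import json

import azure.functions as func


def main(req: func.HttpRequest) -> func.HttpResponse:
    # The platform spins up an instance per request; there is no server to manage.
    name = req.params.get("name", "world")
    payload = json.dumps({"message": f"Hello, {name}!"})
    return func.HttpResponse(payload, mimetype="application/json", status_code=200)
```

Conceptually, a queue or blob trigger works the same way: the binding changes, but the function itself stays a small piece of logic.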

Sample

I found a nice image which represents a very simple design for uploading an image to, and retrieving it from, some backend system.

[Image: a simple serverless design in which an upload function stores an image, an event-triggered function records it in a storage table, and a retrieve function returns it]

Here you see a user making an API call (HTTP) to a function to upload an image. This function stores the sent image in some storage location, and another function is triggered (event) by this action to record the file in a storage table. When the user requests the image again via an API call (HTTP), a third function checks the storage table and returns the requested image.

As you can see, each function is only responsible for doing one single, simple thing. This will make testing and maintaining each function much easier compared to a big service which is responsible for doing multiple things.
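
To make the picture a bit more concrete, here’s a plain-Python sketch of those three responsibilities. The ‘blob store’ and ‘storage table’ are just dictionaries standing in for whatever storage your cloud provider offers, and the function names are mine, not part of any SDK.

```python
# Plain-Python sketch of the three functions in the design above.
# BLOB_STORE and IMAGE_TABLE are in-memory stand-ins for a real blob
# container and storage table; in a real deployment each function would
# use the provider's SDK and be deployed and triggered independently.
import uuid

BLOB_STORE: dict[str, bytes] = {}   # stand-in for blob storage
IMAGE_TABLE: dict[str, str] = {}    # stand-in for a storage table


def upload_image(image_name: str, content: bytes) -> str:
    """HTTP-triggered: store the uploaded image and return its blob id."""
    blob_id = str(uuid.uuid4())
    BLOB_STORE[blob_id] = content
    on_image_created(blob_id, image_name)  # in the cloud this is an event, not a direct call
    return blob_id


def on_image_created(blob_id: str, image_name: str) -> None:
    """Event-triggered: record where the image lives in the storage table."""
    IMAGE_TABLE[image_name] = blob_id


def get_image(image_name: str) -> bytes | None:
    """HTTP-triggered: look up the image in the table and return it."""
    blob_id = IMAGE_TABLE.get(image_name)
    return BLOB_STORE.get(blob_id) if blob_id else None
```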

Of course, the functionality described above can also be created in a single small microservice. A microservice is also easier to test and maintain compared to a big monolithic solution. Whether you should choose a nanoservices or microservices solution is up to you, but there are some additional benefits when you choose a nanoservices/serverless solution, which I’ll cover a bit later. You can also use both nano- and microservices in your overall solution, so you can use the best technology for each use case. This will make your software very loosely coupled, performant and robust (if done well).

Some benefits

Each function should be stateless and immutable. Every time a function is invoked, a new instance is spun up and destroyed afterwards. This makes testing your function very easy, as every invocation should have the same result if the inputs are the same.

If you do need to save state, it should be stored somewhere else, like an external storage device or database.

Functions are highly scalable by design. Each function is very short-lived, for two reasons.

First, each cloud provider has some limit on how long a function can run. This limit will differ a bit over time, but it’s good practice to finish your work as fast as possible.

The second reason is that your functions are billed per invocation, but also by duration. For Microsoft Azure this is 0.000014 per GB-s (gigabyte-second). This doesn’t sound like much, and it isn’t, but it adds up in the end. So if you are able to change your function from running 1 second to 200ms, you will immediately save 80% on next month’s bill (this is a bit simplified, of course).
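
As a quick sanity check on that 80% figure, here’s the arithmetic. I’m assuming a 1 GB memory allocation and one million executions per month (both numbers are made up for the example) and I’m ignoring the separate per-execution charge.

```python
# Back-of-the-envelope check of the 80% claim, using the GB-s rate quoted
# above. Memory size and execution count are assumptions for the example.
RATE_PER_GB_SECOND = 0.000014
MEMORY_GB = 1.0          # assumed memory allocation
EXECUTIONS = 1_000_000   # assumed monthly volume


def monthly_cost(duration_seconds: float) -> float:
    return EXECUTIONS * duration_seconds * MEMORY_GB * RATE_PER_GB_SECOND


slow = monthly_cost(1.0)   # 14.00
fast = monthly_cost(0.2)   #  2.80
print(f"1s:     {slow:.2f}")
print(f"200ms:  {fast:.2f}")
print(f"saving: {(1 - fast / slow):.0%}")  # 80%
```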

So, in short, serverless is about very small, stateless and immutable services which are capable of doing one thing within a very small timeframe.

Are there any other benefits?

We already covered that these services should be small and do a single thing, which makes them easy to understand and maintain. We also covered that you will be billed per execution, which can be a benefit because you don’t have to pay for your services if no one is using them.

Another benefit might be the deployment of your software. Each function can be deployed individually and placed as a new version next to the existing one. When your function is deployed (copy-pasted) to the cloud provider, you can simply route all requests to the new function. If, for some reason, the new version doesn’t work properly, you just route the traffic back to the old function and everything should be working again.

From a cost perspective you don’t even have to remove the old versions of the functions/services, because they aren’t invoked anymore. And as you remember: if a function isn’t invoked, it doesn’t cost any money! Of course, from an operational perspective you might want to remove old versions of a service after some period, to keep a proper overview.
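
One way to picture this is a tiny routing table that maps a public route to a specific function version; rolling back is a one-line change. In reality your cloud provider’s proxies, gateways or deployment slots handle this for you, so the sketch below only shows the idea.

```python
# Conceptual sketch of version-based routing; real platforms handle this
# through proxies, gateways or deployment slots rather than hand-written code.
def process_order_v1(order: dict) -> dict:
    return {"status": "processed", "version": 1, "order": order}


def process_order_v2(order: dict) -> dict:
    return {"status": "processed", "version": 2, "order": order}


# The old version stays deployed; it simply no longer receives traffic
# (and an idle function costs nothing on a pay-per-execution plan).
ROUTES = {"/api/process-order": process_order_v2}


def handle(route: str, payload: dict) -> dict:
    return ROUTES[route](payload)


# Rolling back is just pointing the route at the previous version again:
# ROUTES["/api/process-order"] = process_order_v1
```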

Something else you might benefit from is better performance of your code. Because you will be billed per second of usage, your company might benefit from optimizing code wherever possible. Every optimization will directly impact the bill at the end of the month.

What are the downsides?

As with every technology or technique, there are also a couple of (important) downsides to using a serverless solution. Not all of these downsides are exclusive to serverless solutions; they apply to loosely coupled, distributed solutions in general. I’ll cover a couple of them below.

Code duplication

Just as with a microservices solution, all functions should be able to do one thing, and do it without relying on any other functions. This will result in a lot of services having similar logic, like communicating with a database, service bus, file system, etc.

This doesn’t have to be a bad thing overall, but we have learned to avoid code duplication if possible. Luckily there are a couple of ways to avoid code duplication, but it is something to keep in mind. The serverless offerings of the major cloud providers are still maturing, so sharing code will get easier and better over time.
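
One common way to keep the duplication manageable is to pull the shared plumbing into a module which every function packages alongside its own handler. A rough sketch (the module layout and names are just an example):

```python
# shared/storage.py -- hypothetical shared helper packaged with each function
def save_record(table: str, key: str, value: dict) -> None:
    """Write a record to the storage table (the provider SDK call is omitted here)."""
    print(f"saving {key} to {table}: {value}")


# functions/save_order.py -- one function reusing the helper
# (in a real layout this would start with: from shared.storage import save_record)
def save_order(order: dict) -> None:
    save_record("orders", order["id"], order)


# functions/save_customer.py -- another function reusing the same helper
def save_customer(customer: dict) -> None:
    save_record("customers", customer["id"], customer)
```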

Multiple server calls

You can create serverless endpoints which react to HTTP methods like GET, POST, etc. Because of this, it’s possible to let your client application (possibly a single-page application) be fully dependent on these small functions. This will result in an enormous number of HTTP calls to the backend, because each function should only do one single thing.

Eventually, this will result in a slow client application, as it has to wait for all those calls to finish.

If you want to use a serverless architecture, it might be a good idea to use an API gateway solution. This gateway acts as a proxy between a client call and the multiple function calls in the backend. This way the client only has to do a single call and doesn’t have to bother itself with the internals of the backend system. Of course, this adds some additional complexity to the overall architecture, but it will be necessary in order to create performant solutions.
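
A rough sketch of what such an aggregation looks like is shown below. The backend URLs are placeholders, and in practice a managed API gateway product would play this role instead of hand-rolled code.

```python
# One gateway-style call fans out to several fine-grained backend functions
# and returns a single combined payload to the client.
import json
from urllib.request import urlopen

# Placeholder URLs for illustrative backend functions.
BACKENDS = {
    "profile": "https://example.com/api/get-profile",
    "orders": "https://example.com/api/get-orders",
    "basket": "https://example.com/api/get-basket",
}


def get_dashboard(user_id: str) -> dict:
    """One client call in, several function calls out."""
    result = {}
    for name, url in BACKENDS.items():
        with urlopen(f"{url}?userId={user_id}") as resp:
            result[name] = json.load(resp)
    return result
```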

State

As said, functions in a serverless solution are short-lived. This means they can’t hold state for longer than their execution time. Every time a function is finished it will be destroyed, along with all its in-memory state. If a new call is made, a new object/function will be created.

So if you need to manage state between multiple invocations, you will have to use some external state management. This will, by definition, be much slower compared to in-memory state. Therefore, a serverless design will not be beneficial for every kind of solution.
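
A small sketch of what this means in code: the module-level variable looks like it holds state, but the platform may recycle the instance between calls, so anything that has to survive goes to an external store (stubbed here with a dictionary).

```python
# In-memory state like this is lost whenever the platform recycles the instance:
request_count = 0  # do NOT rely on this surviving between invocations

# Anything that must survive goes to external storage. EXTERNAL_TABLE is a
# dictionary standing in for a real table/NoSQL client from a provider SDK.
EXTERNAL_TABLE: dict[str, int] = {}


def count_request(user_id: str) -> int:
    """Read-modify-write the per-user counter in the external store."""
    current = EXTERNAL_TABLE.get(user_id, 0) + 1
    EXTERNAL_TABLE[user_id] = current
    return current
```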

Testing

Your functions only do a single thing, so it should be easy to create unit tests for them. Creating integration and regression tests is quite a different story. You are fully dependent on events being triggered (webhooks, HTTP calls, service bus messages, etc.). In order to test whether your overall solution is working properly, you will have to jump through a couple of hoops to get this working.

This isn’t a problem unique to a serverless design; it also goes for a microservices design or any other distributed system. As these solution designs have gained a lot of popularity over the past couple of years, and will keep doing so in the years to come, I’m sure tooling will become available to make this kind of testing easier. For now, I’m not aware of any good tooling to facilitate this kind of integration or regression testing, so you have to come up with something yourself.
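
Unit tests at least stay simple, precisely because each function is small and stateless. A pytest-style example for a hypothetical helper a thumbnail function might use:

```python
# A tiny, stateless helper: same input, same output, so unit tests are trivial.
def thumbnail_size(width: int, height: int, max_edge: int = 128) -> tuple[int, int]:
    scale = max_edge / max(width, height)
    return (round(width * scale), round(height * scale)) if scale < 1 else (width, height)


def test_landscape_is_scaled_down():
    assert thumbnail_size(1600, 800) == (128, 64)


def test_small_image_is_left_alone():
    assert thumbnail_size(100, 50) == (100, 50)
```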

Monitoring

Having a lot of moving parts takes its toll on your monitoring tools. Monitoring a couple of virtual machines, services or websites is quite easy these days. The tooling has matured a lot over time and most system administrators are quite proficient with it.

Having hundreds (or maybe even thousands) of small services and functions in your solution architecture will probably require some changes to your monitoring software. It will become quite cumbersome to manage all of these services in the same way. You also don’t care much about memory, CPU and I/O anymore and will probably be more interested in the overall throughput of messages and events in your system.

The monitoring tooling which is currently available hasn’t matured enough yet to facilitate a serverless (or microservices) design. This can be a major problem for your company, and I think it is one of the most important things to think about when designing your system.

Development tooling

Creating serverless solutions (small functions) which do just a single thing isn’t very difficult. Most of the time this will just be a single class or module (or a couple of them) which you can develop like ‘regular’ software and deploy as a small function.

This sounds quite easy, but it would be nice to have some proper integration in your development environment. All major cloud providers are working on this and it has matured quite a bit in the past couple of months/years. Still, there is a long way to go.

One of the most important features which has been worked on a lot is the continuous integration and deployment of your functions. The major ALM software and cloud providers have worked hard to get this working, in order to deploy your serverless solution in a professional way.

Where to go next?

As I already mentioned, all major cloud providers have some kind of serverless offering.

Microsoft has Azure Functions and Azure Logic Apps, Amazon has their Lambda solution and Google has a Cloud Functions offering.

Each of these offerings provides similar ways of creating functions and a serverless design, so I’d advise checking out the documentation of the cloud provider of your choosing. I’ll be checking out Azure Functions and Logic Apps myself.

I’m quite a fan of using micro- and nanoservices in my solution designs and try to incorporate them whenever it makes sense.

The regular IaaS and PaaS solutions won’t disappear any time soon. They still have their place in your solution design. But as I’ve written before, use the right tool for the right job.

