In my previous article I discussed both Azure Functions and Azure Container Apps from the perspective of effective software development and deployment.

This programming and deployment model, “serverless functions”, emerged some years ago with the promise of letting developers focus solely on developing functionality, without having to care about the server infrastructure any longer.

Short History of “Serverless” Functions

“Serverless functions” came to compete with the existing de-facto ruling deployment model, Kubernetes containers. In the Kubernetes deployment model, developers build web services into containers on their desktops using containerization tools such as Docker and then deploy them to the cloud with Kubernetes. Kubernetes makes it possible to scale these containers up and down and to configure how they interact.

Kubernetes containers are like miniature operating systems in themselves and allow any kind of software to run in them, from web services to background jobs or databases. This is a flexible, proven and established model. However, learning to package software into containers, run Docker on localhost and understand Kubernetes properly has a certain learning curve, which often makes developers wish for a somewhat simpler way to deploy their applications. Another problem with “dumb” Kubernetes containers is that they don’t flexibly adjust to changes in network traffic. You might want to keep fewer pods running at night when there are fewer visitors. Of course, some sites might have more visitors in the dark hours of the night!
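To make that workflow concrete, here is a minimal sketch of the kind of web service you would package into a container and hand over to Kubernetes. It assumes Flask, and the image and deployment names are hypothetical placeholders; the comments at the end outline the typical build, deploy and scale steps.

```python
# app.py - a minimal web service of the kind you would package into a container.
# Assumes Flask (pip install flask); image and deployment names below are
# hypothetical placeholders.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    # Flask serializes a returned dict into a JSON response.
    return {"message": "Hello from a container"}

if __name__ == "__main__":
    # Bind to all interfaces so the container can expose the port.
    app.run(host="0.0.0.0", port=8080)

# Typical workflow around this file (run in a shell, not in Python):
#   docker build -t my-service:latest .                # package into an image
#   kubectl apply -f deployment.yaml                   # deploy to the cluster
#   kubectl scale deployment my-service --replicas=2   # adjust the pod count by hand
```

Note how the scaling in this model is something you (or your cluster configuration) have to arrange explicitly; nothing here reacts to traffic on its own.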

AWS Lambda

Serverless functions, such as AWS Lambda, came to the rescue with a promise to let developers focus only on developing the functionality itself, without having to bang their heads against a wall over the actual infrastructure. The service provider would simply scale this functionality automatically.
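In practice, “only functionality” means writing little more than a handler. The sketch below shows the shape of a typical AWS Lambda function in Python; the event fields used are illustrative and not taken from any particular project.

```python
# handler.py - the shape of a typical AWS Lambda function in Python.
# The event fields used here are illustrative only.
import json

def lambda_handler(event, context):
    """Entry point invoked by AWS Lambda; scaling is left to the platform."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```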

While this sounds wonderful, the reality turned out to be bumpier. In the first years of serverless functions, developing them was rather awkward because you could not really test them on your localhost. Development would crawl at a snail’s pace when you had to deploy your function to the cloud every time only to see whether and how your little change worked. The situation improved somewhat later with the introduction of the AWS SAM CLI, which lets developers test their functions locally on their own machines. The crazy thing is that the AWS SAM CLI itself actually relies on Docker! So in that sense we didn’t get very far from the Docker/Kubernetes model anyway. The serverless function model promised to liberate you from Docker containers, yet you still ended up using Docker containers. Even today, it isn’t clear to me how exactly the SAM CLI is easier to use than just using Docker directly.
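To be fair, since a function handler is ultimately just a function, one way to shorten that feedback loop without SAM or Docker is to call it directly from a plain unit test. A minimal sketch, assuming the hypothetical handler module from the previous example and pytest:

```python
# test_handler.py - exercising the handler locally by calling it directly,
# with no SAM CLI or Docker involved. Assumes the handler.py sketch above.
import json

from handler import lambda_handler

def test_returns_greeting():
    response = lambda_handler({"name": "Ada"}, context=None)
    assert response["statusCode"] == 200
    assert json.loads(response["body"])["message"] == "Hello, Ada!"
```

This only covers your own logic, of course; anything involving the surrounding cloud services still needs the heavier tooling.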

Graver problems with these “serverless functions” are cold starts, timeouts and memory limits. Not to forget that by going “serverless” you lock yourself in to a vendor, since serverless implementations are very vendor-specific. Once you implement your software on AWS, you can no longer move over to Google, Azure, IBM etc. without rewriting your system from scratch. What is more, timeouts make serverless functions quite unsuitable for background processes, and trying to engineer around them is a mess!

Re-introducing Containerized Apps

In the meantime, major cloud providers have come up with fantastic new ways to provide automatic scaling for Kubernetes containers. As a result, building and deploying containerized apps is easier than ever. These offerings are called (serverless) container apps, in distinction to serverless functions. These new advances in container deployment might remove the need for “serverless functions” altogether… Who knows!

Coming Soon…

Let’s make your business great and deploy scalable, containerized, next-generation user interfaces in no time! Talk to you soon!