Every year there are dozens of new tools and trends that pop up in the software engineering industry. Now that I’ve been around for a while I think I’m starting to develop a decent radar for which trends are going to have a lasting impact and which ones are going to fizzle out. To be sure, I have made a few embarrassing predictions, such as betting one of my friends that Git would lose out to Mercurial because Git’s user ergonomics were so horrific. We all know how that turned out.
But overall I think I have a really good sense of what technologies will be winners, and what technologies will be losers. And when it comes to Serverless computing, I don’t need to use any of that expertise.
Wait, what?
Yeah, you read that right. It doesn’t take a prognosticator to see that Serverless computing is the future. Nobody wants to manage servers. Managing servers is a nasty side effect of wanting to execute code. I need a secure environment to run code, and because of von Neumann architecture I need some memory, some disk space, and a processor. It doesn’t really matter what form those items take, as long as I have enough. My code needs some amount of all three, and I need an environment that provides them. Simple.
My desire to accomplish that doesn’t mean I want to manage a server, or the environment in which it runs. In an ideal scenario I would just go up to a cloud provider and say, “Here is my app, run it for me.”
What is Serverless?
Before we get started, let’s get on the same page about what exactly Serverless is. You’ll see some definitions of Serverless that say it provides computing resources on an as-needed basis. While that is the purist’s definition, the broader and more commonly used definition is any approach to providing computing resources that doesn’t require you to think about managing servers.
Serverless comes in a few flavors
Serverless Containers
Serverless container services such as Heroku, Netlify, AWS ECS/EKS Fargate, Google Kubernetes Engine, and Azure Kubernetes Service provide an environment where you can build a container and push it up to a service that manages its deployment and execution. You don’t have to worry about running the cluster that hosts your control plane, worker nodes, etc.; you just push up a container with some metadata and the service handles the rest.
Serverless Functions
Serverless functions such as AWS Lambda, Google Cloud Functions, or Azure Functions are services that provide an environment where you can push up a chunk of code with a specific interface and later invoke that code.
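To make that concrete, here’s a minimal sketch of what that interface looks like as an AWS Lambda handler written in Python; the “name” field in the event is a made-up payload for illustration:

```python
# A minimal AWS Lambda handler in Python. The service invokes this
# function with an event payload and a context object; everything
# about the server underneath is the provider's problem.
import json

def handler(event, context):
    # 'event' carries the invocation payload (an HTTP body, an S3
    # notification, etc.); 'context' carries runtime metadata.
    name = event.get("name", "world")  # 'name' is a made-up field
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

You then wire the function up to a trigger (an HTTP endpoint, a queue, a file upload) in the provider’s configuration.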
Serverless vs Virtual Machines
Many people don’t consider Serverless containers to be true Serverless, because when you build and push a container you are essentially bundling a whole server into a nice package. While I tend to agree that they aren’t *true* Serverless, they absolutely have enormous benefits over running a full Virtual Machine, and distinct advantages in some situations over Serverless functions.
Pros of Serverless Containers
Serverless containers have a ton of advantages over traditional servers. Here are a few of them:
- Very little server management – No servers to manage, patch, or troubleshoot. You still have an operating system inside of the container, but that can be an incredibly minimal install, and the surface area of management is much smaller.
- Generally stateless – When building applications designed for containers, you’re usually building a 12-factor app or following a similar pattern. Your containers are cattle, not pets. If your container crashes, a new one is automatically fired up.
- Easy horizontal scalability – Nothing about a Virtual Machine is inherently limited in terms of scalability, but containers push you in a direction that allows Serverless Container services to easily scale your software as needed. Based on factors such as load, timing, and request count, your Serverless Container service can run one instance of your container or 10,000 instances of it, all while transparently handling storage allocation, load balancing, routing, etc.
- Security – The operating system installed in a container is usually short-lived, very minimal, and sometimes read-only. It therefore provides a much smaller attack surface than a typical general purpose and long-lived server environment.
- Source controlled environment – Your container definition is described in a file that can be put into source control. While this is best practice in almost any situation these days, it is still a distinct advantage when compared with a traditional server environment where someone can get in and change things that make your server configurations drift.
- Application and environment bundling – You’re combining your application with the environment it runs in, and deploying that as a single unit. This way, if the new version of your software uses updated libraries, a new operating system version, or a new language version, it can all be deployed, and rolled back, as a single unit.
- Cost – You can easily scale your workloads up and down. While running a Serverless container might cost a bit more per unit of compute, with some providers you make up for it in flexibility: Serverless containers usually let you slice resources into smaller units than traditional Virtual Machine options. For instance, an EC2 t3.nano instance provides 2 vCPUs, but you can request a container with only 0.25 vCPUs (see the sketch just after this list).
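Here’s the promised sketch of requesting that quarter-vCPU slice on AWS Fargate with boto3. The family name and image are hypothetical placeholders, and real task definitions also need execution roles and networking details that are omitted here:

```python
# A sketch of registering a fractional-vCPU Fargate task with boto3.
# Names and the image URI are placeholders; roles/networking omitted.
import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="my-app",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",     # 0.25 vCPU; no EC2 instance type offers a slice this small
    memory="512",  # 512 MiB
    containerDefinitions=[
        {
            "name": "my-app",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest",
            "essential": True,
        }
    ],
)
```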
Pros of Serverless Functions
Serverless functions have all of the advantages of Serverless containers, but take it to another level.
- Virtually zero management – You don’t need to think about the OS at all in most instances. You can just push your code up, and then run it. There’s nothing at the OS level to patch, and nothing to maintain—just push it and forget it.
- Stateless by default – Serverless functions force you to write your code in a stateless way, since you can’t depend on anything being left around between invocations. This allows them to easily scale, since your function can be fired up on any server without depending on local state.
- Almost perfect horizontal scalability – Something invokes your function, and it runs. If it is invoked once, it runs once. If it is invoked 100,000 times, it runs 100,000 times. Sure, there are some platform limits that might come into play, but those are generally safeguards to keep you from accidentally spending $10,000, rather than hard technical ceilings.
- Cost – Serverless functions only cost money while they are executing. So if you have functions that execute rarely, or have very bursty traffic, you can see significant savings (see the back-of-the-envelope sketch after this list).
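To put some rough numbers on that last point, here’s the back-of-the-envelope sketch. The rates are illustrative assumptions based on Lambda’s pay-per-request and pay-per-GB-second pricing dimensions; check current pricing before trusting the output:

```python
# Back-of-the-envelope monthly Lambda cost for a bursty workload.
# These rates are illustrative assumptions, not quoted prices.
PRICE_PER_REQUEST = 0.20 / 1_000_000  # assume $0.20 per million requests
PRICE_PER_GB_SECOND = 0.0000166667    # assume this rate per GB-second

invocations_per_month = 500_000
avg_duration_seconds = 0.3
memory_gb = 0.125                     # a 128 MB function

compute_cost = (invocations_per_month * avg_duration_seconds
                * memory_gb * PRICE_PER_GB_SECOND)
request_cost = invocations_per_month * PRICE_PER_REQUEST

print(f"~${compute_cost + request_cost:.2f}/month")  # well under a dollar
```

Half a million invocations for roughly the price of a gumball; an always-on server handling the same bursty traffic would sit mostly idle and cost far more.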
Serverless Containers vs Serverless Functions
Advantages of Serverless Containers
- Easy migration – If you have an existing application, it might take a bit of work, but you can get it running inside of a container.
- Cheaper for stable workloads – If you have a consistent workload, then it is likely that Serverless containers will be cheaper than equivalent invocations of a Serverless function.
- Flexibility – There are no limits on your OS, binaries, languages, versions, etc.; you control the entire container. Serverless function services limit you to particular runtimes and versions. Some allow custom runtimes, but you are still locked into the OS.
- Troubleshooting – Containers make it easy to jump in and troubleshoot what is happening in your live environment. They also allow you to run a chunk of your environment locally, which makes it easier to debug what is happening.
- Long running tasks – A Serverless container can run continuously, which suits long-running tasks. Most Serverless functions have limits on how long a single invocation can execute; as of this writing, AWS Lambda’s limit is 15 minutes.
Advantages of Serverless Functions
- Lower cost for bursty workloads – Serverless functions are pay-per-invocation, meaning that you only pay when your code is actually executing. This means that for workloads that don’t run very often, they can be much cheaper in comparison to typical servers or containers.
- Fast scaling – Serverless function services can create a new instance of your function and have it ready to serve traffic within a few seconds (sometimes within a fraction of a second). There are limits to this, discussed in the “Scaling Serverless Functions” section below.
- Fine-grained scalability – Let’s say you have an application that consists of a few dozen Serverless functions, and one of them is called 1,000 times more often than the others. That one function scales independently of the rest, and you don’t even have to think about it.
Disadvantages of Serverless Containers
- Heavier deployments – Serverless containers usually require a substantial build step, after which you push a container image (often hundreds of megabytes) to a registry. Then you have to roll the new containers out across your cluster, which can take a while for large deployments. This turnaround time is significantly longer than pushing up a single cloud function and having it picked up and serving requests within a few seconds.
- Coarse scalability – When you deploy a Serverless function, you are really just deploying a single function. That function could perform multiple tasks, but generally you are deploying a single-purpose function that can scale independently of all your other functions. When you deploy a Serverless container, you’re generally deploying an entire application or Microservice. All of that functionality lives in a single container, so to scale it you have to spin up more instances of that container; the whole thing scales as a single unit. If one chunk of your application is getting hammered, you have to scale the entire application to serve the extra traffic.
Disadvantages of Serverless Functions
- Lack of control – Someone is managing the servers your code is running on. Your code is running in an operating system, just not one you have any control over.
- Proprietary – There aren’t any real standards around Serverless functions, so you usually end up writing your Serverless applications against a particular provider’s tooling and interfaces. Using a tool such as AWS Step Functions makes for especially strong vendor tie-in, because orchestration across Serverless functions is not standardized at all right now. This can pull you deeper into a vendor’s ecosystem and make it harder to switch.
- Rewrite – Taking an existing application and making it work within a Serverless function is generally not a possibility. You almost always have to write your application from scratch to take advantage of Serverless functions.
- Traceability – Serverless functions have the same challenges as Microservices, taken to an extreme. Tracing a single request across your system could involve dozens of Serverless functions, so you need to make sure you’re leveraging tools like AWS X-Ray, Google Cloud Trace, or Azure’s distributed tracing.
- Debugging/Testing – You can run a cloud function fairly easily on your local machine using tools like the Serverless Framework, Google’s Functions Framework, or AWS SAM, but generating realistic invocations can be a challenge because cloud functions often integrate with the surrounding cloud ecosystem in automated and proprietary ways (a small workaround sketch follows this list). Services such as AWS Step Functions, which introduce an orchestration layer between Lambdas, can make it even harder to debug what is happening in a live environment.
- Deployment – Deploying Serverless functions can be a challenge, mostly because providers offer tools (like in-console code editors) that encourage bad habits. A framework such as the Serverless Framework can make your deployments automated and manageable, but you need to put in the effort to set it up and keep it organized; otherwise, versioning and maintaining dozens, or hundreds, of functions becomes a real pain.
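Here’s the workaround sketch promised above: skip the emulators and invoke the handler directly in a unit test with a hand-built event. It assumes the hello-world handler from earlier lives in a hypothetical my_function.py, and it deliberately illustrates the limitation, because none of the proprietary cloud wiring gets exercised:

```python
# Invoke the handler directly with a hand-built event. 'my_function'
# is a hypothetical module holding the hello-world handler from earlier.
import json
from my_function import handler

def test_handler_says_hello():
    event = {"name": "serverless"}           # hand-built invocation payload
    response = handler(event, context=None)  # handler ignores context, so None works
    assert response["statusCode"] == 200
    assert json.loads(response["body"])["message"] == "Hello, serverless!"
```

This catches logic bugs cheaply, but it tells you nothing about the IAM permissions, event sources, or orchestration that only exist in the cloud.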
Scaling Serverless Functions
Scaling Serverless functions deserves some extra attention here, because people often assume that tools like AWS Lambda or GCP Cloud Functions are a panacea for scalability: just push up your cloud function and get virtually instant, unbounded scale. This is far from the truth. Serverless function services make scaling incredibly easy, but the platforms have limits that affect how fast, and how high, your functions can scale.
As an example, AWS Lambda has an initial per-region limit of 1,000 concurrent function invocations (across all functions in the region). This is a safety measure to prevent accidental runaway resource usage, and it can be raised by contacting AWS support.
Based on that you might think you can just call up AWS support and request an increase to 20,000 concurrent invocations, and then you can push a bunch of requests to your Lambda function and have it rapidly scale up to that level to meet the demands of your service. Unfortunately, this is not the case.
Even after getting AWS support to increase your limit to 20,000 concurrent invocations, AWS Lambda will still limit you to 500 additional concurrent invocations per minute, meaning that it will take nearly 40 minutes to scale up to 20,000 concurrent invocations if you’re starting from zero traffic. In the meantime, all of the requests hitting your service that can’t be routed to an active function will receive 429 errors.
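The arithmetic behind that delay is worth spelling out. This sketch assumes the initial burst covers the first 1,000 concurrent invocations; actual burst behavior varies by region:

```python
# The arithmetic behind the scaling delay described above.
target = 20_000          # the raised concurrency limit
initial_burst = 1_000    # assume the burst covers the first 1,000
added_per_minute = 500   # additional concurrency unlocked per minute

minutes = (target - initial_burst) / added_per_minute
print(f"~{minutes:.0f} minutes to reach {target:,} concurrent invocations")  # ~38
```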
If you know that your traffic will need to burst more than this, you can purchase what Amazon calls “provisioned concurrency.” This will keep a certain number of Lambda functions warm and ready to go, but then you’ll be giving up some of the benefits of Serverless functions because you’re paying to keep them running all of the time. But, in some cases, this is worth the tradeoff.
There’s also the concern that a single function could eat up all the concurrency available in a region. You can configure “reserved concurrency” for specific functions to ensure their concurrency can’t be consumed by other functions, but it carves up the pool: if you have a total concurrency of 5,000 and set one function’s reserved concurrency to 1,000, you’re left with only 4,000 for the rest of the functions in that region.
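Both knobs are ordinary API calls. Here’s a sketch using boto3; the function names and counts are hypothetical, though both calls are real Lambda operations:

```python
# Both concurrency knobs via boto3. Function names and counts here are
# hypothetical; the two API calls themselves are real Lambda operations.
import boto3

lam = boto3.client("lambda")

# Keep 200 instances of a latency-sensitive function warm. You pay for
# these whether or not they serve traffic.
lam.put_provisioned_concurrency_config(
    FunctionName="checkout-handler",
    Qualifier="live",  # provisioned concurrency attaches to a version or alias
    ProvisionedConcurrentExecutions=200,
)

# Reserve (and cap) concurrency for a hot function so it can't starve,
# or be starved by, the other functions in the region.
lam.put_function_concurrency(
    FunctionName="image-resizer",
    ReservedConcurrentExecutions=1000,
)
```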
While many of these settings are necessary to keep the environment both safe and usable, they can produce a lot of surprises for folks new to working with Serverless functions.
Vendor Lock-In
Almost all cloud platforms take every opportunity to lock you in, and Serverless is no exception. However, vendor lock-in is more of a concern with Serverless functions than with Serverless containers. The ways functions are invoked, deployed, orchestrated, and allocated all depend on the cloud provider you’re using.
There are projects like Knative making progress toward a standard environment for deploying Serverless workloads, but in general you have to deploy and manage the platform itself to get the benefits, which quashes many of the upsides of running your code in a Serverless manner. The goal was to avoid running infrastructure, right? I should mention that Google Cloud Run supports Knative natively, and with some effort you can run Knative on AWS Fargate.
What Do You Have Against Serverless Functions?
It might sound like we don’t like Serverless functions, but that isn’t true at all. We just think their uses are more limited than Serverless containers’. There are certain use cases where Serverless functions are the perfect solution. Surprisingly, it’s often when you need a strong integration with the underlying cloud platform. Say you want to upload an image to S3 and have it automatically trigger a cloud function that processes it in some way, or you have logs coming off a service like CloudWatch and you want a piece of code to easily analyze the log stream. That’s when Serverless functions truly show their worth. They also work well where you have a handful of hot endpoints that you want to scale differently than the rest of your application.
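As a sketch of that S3 scenario, here’s what such a handler tends to look like; process_image is a hypothetical stand-in for whatever work you’d actually do:

```python
# A sketch of the S3-triggered flavor: the handler receives the standard
# S3 event notification shape and hands each new object to a hypothetical
# helper. Object keys arrive URL-encoded, hence the unquote_plus.
import urllib.parse

def process_image(bucket, key):
    print(f"would process s3://{bucket}/{key}")  # placeholder for real work

def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        process_image(bucket, key)
```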
As you can tell, in most cases we are still recommending Serverless containers. But you won’t see us leaping for joy, because Serverless containers still don’t deliver the true holy grail of Serverless computing. What is the holy grail of Serverless computing? I’m so glad you asked.
The Holy Grail of Serverless Computing
The holy grail of Serverless computing is true utility computing: all of my resources available when I need them, as I need them. To be able to upload a chunk of code (whether a single function or a whole application) along with a bit of metadata and have it run in a way that lets it scale infinitely (with some safety limits). To not have to think at all about how much memory, storage, or compute it needs, because the platform figures that out automatically. Serverless functions are actually closer to this than Serverless containers, but for the reasons noted above, they still miss the mark.
Serverless, All Grown Up
Please don’t interpret this post as saying that I don’t think Serverless functions or containers are ready for real-world adoption. For most organizations, running their own servers should make about as much sense as generating their own power (and yes, some very large orgs do need to do both!). With Serverless computing, your code still runs on a server somewhere, just not a server you have to care about. And that really should be the long-term goal for most organizations: to push a chunk of code into a service and have it run. Serverless computing isn’t quite able to achieve the dream of “push code up and forget about it,” but we’re getting close.
Serverless computing is here, and here to stay. We’ll continue to see Serverless services getting closer and closer to the ideal described here. But while Serverless has definitely grown up, we still have a ways to go. To truly reach the ideal of Serverless computing we need to significantly rethink current computing and security models. While the challenge is huge, the payoffs are even bigger, so we’ll probably get there more quickly than we think.