March 28, 2022

A Reckoning for Serverless

Matt Butcher

serverless faas functions software as a service webassembly


Serverless is long overdue for a reckoning. For a few years, companies have waved the “serverless” banner. Serverless was supposed to be the next big thing, promising freedom from one of the core concepts of computing: the server. But in important ways, the solution never fully materialized. Serverless held a great deal of promise, and only a small part of it was realized.

There are two reasons serverless did not seem to deliver.

  • The concept of “serverless” was never clearly articulated
  • The core technologies simply were not there

What serverless needed wasn’t bigger signs and more laptop stickers. It needed conceptual clarity. It needed grounding in a more adaptable compute layer. VMs and containers were not enough of a basis to build serverless. It needed a third wave of cloud compute. And this is why WebAssembly revives the ideas behind serverless and makes the vision not only compelling, but achievable.

The Hazy Definition of Serverless

What does the term “serverless” mean? I’ve asked many people this over the last few years. I’ve heard many different answers. For the most part, though, they can be rounded up into three general definitions:

  1. Serverless is a way of writing applications so that they do not run as daemon processes, but as event/request handlers
  2. Serverless means developers do not need to think about or know about the infrastructure layer
  3. Serverless means there is no infrastructure to manage

It should immediately stand out that while there is a sense that one platform could have all three of these attributes, these three different definitions are clearly not describing the same thing. The first is a developer model. The second is an operational stance. And the third appears to be a statement about topology. Advocates of serverless were quick to declare victory if they even partially accomplished any one of the above. One serverless advocate triumphantly declared to me, “Once you set up the cluster and get the servers running, it’s a completely serverless environment and nobody has to worry about the infrastructure.” The haziness of these definitions has, if nothing else, made it difficult to even understand when we are achieving serverless success.

Taking a closer look at each of these three definitions, we can get clarity on what is important in each. And we’ll see which of them WebAssembly satisfies.

A Way of Writing Applications

The first definition of serverless focuses on the software developer. It says that Serverless is all about writing minimal code, largely without having to worry about boilerplate. Usually, the development experience is articulated like this:

As a developer, I write an event handler that takes a request, does a small amount of processing, and then returns a response.

Initially “small amount of processing” literally meant “few lines of code.” This was the initial headline of Functions as a Service (FaaS) and AWS Lambda. Literally, early expectations were that you write one function. Of course, this vision of computing was unrealistic to begin with, and advocates quickly adopted a pragmatic refinement: “small amount of code” really meant that you only write code specific to answering a single request. Gone were the days of writing server listeners, bootstrapping SSL, and interacting with low-level operating system concepts like processes, pipes, and sockets.
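That development experience can be sketched in a few lines. The `Request` and `Response` types here are hypothetical stand-ins for whatever a real FaaS platform supplies (for instance, an HTTP trigger’s event object):

```rust
// Hypothetical request/response types standing in for a real FaaS
// platform's event and result objects.
struct Request {
    path: String,
}

struct Response {
    status: u16,
    body: String,
}

// The entire "application" is one function from request to response.
// There is no listener to set up, no TLS to bootstrap, and no process,
// pipe, or socket handling: only the code specific to one request.
fn handle(req: Request) -> Response {
    Response {
        status: 200,
        body: format!("Hello, {}", req.path),
    }
}

fn main() {
    let resp = handle(Request { path: "/readme".to_string() });
    println!("{} {}", resp.status, resp.body);
}
```

Everything outside `handle` is the platform’s job, which is exactly the division of labor this first definition promises.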

At Fermyon, we are huge advocates of this model. Not only does this approach make for faster development, but it absolves the developer of a whole class of operational tasks and security obligations that shouldn’t have been foisted on the developer in the first place.

It is not entirely clear that the FaaS platforms available today have provided enough to make this model generally successful. The reason, as we at Fermyon hear it, is that most FaaS platforms are just too restricted in what they can do. About Amazon, Azure, and Google, we hear developers claim that while it is nice to whip out a few functions here or there for utility purposes, developers do not enjoy being constrained to only the services offered by these companies. The term “lock-in” inevitably arises. But a deeper listen to these developers suggests that it’s not just having cloud-provider-specific code that rubs folks the wrong way; it’s that they feel like there are genuinely things they want to do but cannot. They cannot add new listeners, for example. Or they cannot break down, compose, or organize functions into the configuration they desire. Or, most troublesome, they cannot set up an environment that makes it easy to isolate and debug problems.

For example, when chatting with an engineer about his frustrations with FaaS, I could not understand why he felt the system had failed him. He grabbed his laptop, opened it, and showed me a topological map of his application, which consisted of dozens of individual “functions” (FaaS programs) organized in a tree shape. In order to work around the vast array of limitations on runtime, space, and services, he had written this enormous conglomeration that took, according to him, over eight hours to run a single batch process (of a truly large dataset). He had spent months building this system. “When it works, it’s great,” he said, “But the problem is, when it fails there is no way to figure out what went wrong.” Bad data could originate early in the system, but make it two or three additional process steps before that data surfaced in a way that caused a problem. And it was nearly impossible to work from a one-line error message in a single step function back to whatever the origin of the problem was.

This is only a single illustration of how the function model might have expedited development, but at a heavy operational cost.

Another strain of developer complaints also cropped up from these conversations: Developers bought into serverless not simply because of the programming model, but because they thought it would absolve them from needing to know about the infrastructure.

The Infrastructure Layer

The second definition of serverless suggests that developers (and to some extent operators) do not need to know about infrastructure. It is worth repeating that this definition can be accomplished separate from the first definition. It is possible to achieve this without anything even approximating Functions-as-a-Service. In fact, the majority of larger platforms that claim to be serverless are relying on this definition. A Platform-as-a-Service developer once told me that they had achieved serverless long before the term existed. How so? Because their users were free from having to think about the infrastructure when they deployed their applications to the PaaS. If this is a definition of serverless, then Heroku might have been the first commercially successful serverless company several years before the term was even coined.

I don’t think this definition of serverless is bad, per se. But I do think that it is unhelpful. Because at the end of the day, this definition of serverless is merely the statement that someone else is operating your infrastructure for you. Software-as-a-Service vendors of all sorts, from the mega-clouds to startups, are keen on hitching their wagon to this definition of serverless. And that’s fine. But saying, “someone else runs it for you” is hardly the sort of ground-breaking sea change that serverless advocates once promised. In fact, by this definition it is not even clear that serverless is anything more than a rebranding of Software-as-a-Service.

If that is what we, collectively, decide that serverless is, I am fine with that. Frankly, there were far too many “as-a-service” terms anyway. But there has been a peculiar tendency to blur serverless from this definition into the third definition. And at first reading, the third definition makes a very bold claim.

There Is No Infrastructure

I once heard Kelsey Hightower, one of Kubernetes’ earliest evangelists, say that “serverless” was an utterly nonsensical term because of course there were servers. He was rightly annoyed with an early (and persisting) myth that what “serverless” really means is that there simply are no servers behind the services they provide.

Part of this myth comes from a misunderstanding about how FaaS works. In a FaaS architecture, the developer writes a function and loads it “to the cloud.” Then the function gets executed to handle an appropriate event like an HTTP request. That’s the user experience of the FaaS. And it is, according to most of the developers we have talked to, a delightful experience.

But it doesn’t mean there are no servers there.

Public cloud FaaS platforms are backed by a robust system that involves pre-warming a gigantic pool of virtual machines so that as a request comes in, a single virtual machine can be allocated to run a single invocation of a function, after which the entire stack is reset or destroyed. That one function the developer wrote is not run by itself. Nor is it run on a single server. Over its lifetime, it may run on tens of thousands of different VMs, each provisioned just to handle a request. In this way, FaaS is not at all an efficient system that somehow magically floats above regular server usage. It is, for all intents and purposes, a system that creates a virtual machine, installs a program, runs the program once, and then destroys the whole thing, only to repeat the process again for the next request. (Cloudflare’s FaaS works differently than this, but discussion of it is outside of the present scope.)
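The per-invocation lifecycle described above can be captured in a toy model. The names here (`Vm`, `allocate_prewarmed`, `destroy`) are illustrative only and do not correspond to any real cloud provider’s API:

```rust
// Toy model of the VM-per-invocation lifecycle: one request means one
// VM, allocated from a pre-warmed pool, used exactly once, then
// discarded. Purely illustrative, not a real provider's API.
struct Vm;

impl Vm {
    fn allocate_prewarmed() -> Vm {
        Vm // taken from a pre-warmed pool, not booted on demand
    }
    fn invoke(&self, f: fn(&str) -> String, req: &str) -> String {
        f(req) // the function runs exactly once on this VM
    }
    fn destroy(self) {
        // the entire stack is reset or destroyed after one invocation
    }
}

fn serve_once(f: fn(&str) -> String, req: &str) -> String {
    let vm = Vm::allocate_prewarmed();
    let resp = vm.invoke(f, req);
    vm.destroy();
    resp
}

fn main() {
    let out = serve_once(|r| format!("processed: {}", r), "req-1");
    println!("{}", out);
}
```

Every request repeats the whole allocate-invoke-destroy cycle, which is why this model consumes so much infrastructure behind the curtain.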

So Kelsey is right, very right. It’s silly to pretend that serverless means there are no servers. But when I phrased this definition, I was careful not to say “there is no infrastructure,” but rather that “there is no infrastructure to manage.” As infrastructure-heavy as a public cloud FaaS is, the fact of the matter is that for better or worse, there is nothing for the user to manage.

When considering this third definition, the magic of tools like AWS Lambda and Azure Functions is that it seems to us (the users) that the only compute unit in play is the function itself. We only have to think in terms of the function, its resources, its behavior, and its links to other services like storage. Even with most SaaS/PaaS systems, this is not the case. For example, Heroku might have been the first developer-oriented PaaS to massively succeed, but their Dyno model still requires the developer and operator to think in terms of resource allocation at the server level. “How many requests am I receiving?” is the first question of a calculus to arrive at “How many Dynos do I need?”

At the end of the day, the third definition may really reflect tireless devotion to user experience. The software developer can remain blissfully ignorant of how the system works beneath the hood. Moreover, understanding how it works won’t even help the developer write better code. While Amazon and others have achieved this definition in their FaaS platforms, FaaS is certainly not the only model that can work this way. Even Fitbit’s watch development platform several years ago felt this way. This definition of serverless captures a laudable goal.

Ironic as it is, this third definition initially looked like a statement of infrastructure topology, but in the end is merely a story about user interface design: If a designer hides enough of the infrastructure, the sleight of hand can make it appear that there is no infrastructure. Or perhaps it’s only that there is no reason to understand the infrastructure because doing so provides no advantage.

How Does WebAssembly Fit In?

A few months ago, my friend Mark observed that it felt like Fermyon had completely ignored serverless. We never talked about it. The term was entirely absent from every piece of Fermyon writing, from documentation to tweets (and, though he didn’t know it, investor pitch decks, too). He was right: We simply proceeded as if there was no such term.

Mark was also right to call us out on this.

The fact was, we didn’t (and perhaps still don’t) like the term serverless. The term is too ambiguous. If we used “serverless” to mean that first definition of developer experience, we would still have to listen to Kelsey Hightower-style arguments that “of course there are servers!” (or worse, deal with the accusatory questions like, “then why does Fermyon build server technologies?”). It is not worth anyone’s energy to try to disambiguate the term every single time we use it.

I believe that is why the term has largely fallen out of the everyday parlance of most developers. It’s tiring to have to clarify the definition each and every time.

But what Fermyon is actually building is a “serverless platform” in the sense of the first definition, and in a way that we hope will feel to most developers like the third definition. More specifically, we are creating a platform that allows the developer to focus on building just the important part (the business logic) of an application. Just like with FaaS, the Fermyon Platform uses a model in which the developer writes code to handle a request and return a response.

Moreover, we have written a platform that absolves the developer of having to think hard about scaling, resource allocation, distribution, and runtime services. Just like the large cloud providers, the Fermyon Platform runs a new WebAssembly app per request. But we can do so with a tiny fraction of the compute power.

WebAssembly has such fast startup and shutdown times (as well as an appropriate security model) that we don’t need several seconds or more to start a WebAssembly handler. It’s nearly instantaneous. This virtue extends behind the veil, too: Unlike large cloud providers who use a fresh VM to execute each function, the Fermyon Platform can take advantage of WebAssembly’s security model and run innumerable WebAssembly invocations on the same Spin instance. One Spin runtime is long-lived, and runs many modules concurrently. As each module exits, it is destroyed and cleaned up. And a new request starts a new module. But there is no need to tear down the entire environment to the VM level. We just tear down the WebAssembly module.
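A minimal sketch of this long-lived-runtime model, assuming a stand-in `ModuleInstance` type rather than the actual Spin API:

```rust
// Toy contrast with the VM-per-invocation model: one long-lived runtime
// instantiates a fresh, isolated module per request and tears down only
// the module afterward. ModuleInstance is a stand-in for a freshly
// instantiated Wasm module, not the real Spin or Wasmtime API.
struct ModuleInstance;

impl ModuleInstance {
    fn run(&self, req: &str) -> String {
        format!("handled: {}", req)
    }
}

struct Runtime {
    handled: u64, // the host outlives every module it runs
}

impl Runtime {
    fn new() -> Runtime {
        Runtime { handled: 0 }
    }
    fn handle(&mut self, req: &str) -> String {
        let module = ModuleInstance; // near-instant instantiation
        let resp = module.run(req);
        drop(module);                // only the module is torn down
        self.handled += 1;           // the runtime itself keeps serving
        resp
    }
}

fn main() {
    let mut rt = Runtime::new();
    for i in 0..3 {
        println!("{}", rt.handle(&format!("req-{}", i)));
    }
    println!("requests handled: {}", rt.handled);
}
```

The key difference from the VM-per-request sketch is that nothing outside the module is destroyed between requests, which is where the efficiency comes from.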

The efficiency of WebAssembly is what makes this possible. We like to talk about compute technology using boxing terminology (which is rendered ironic by the fact that none of us at Fermyon actually knows anything about boxing). VMs represent the heavyweight class. They’re big heavy hitters. But they are slow and cumbersome. Containers are the middleweight class. Though they may be faster and lighter than their heavyweight companion, their startup time is still measured in seconds and their resource footprint is hefty. WebAssembly represents a third class, a lightweight class. Startup time is in milliseconds (or even microseconds), and resource footprint is almost 1:1 with the weight of the actual code to be executed.

The Third Wave

Viewed this way, WebAssembly is the third wave of cloud compute. It provides a new model for executing code, and for running it in the cloud. And we believe that it provides the necessary undergirding to realize the promises of serverless.

Serverless is not dead. It was not a bad idea. And it wasn’t a “silly nonsensical term.” But the idea might have been a step ahead of the technology required to realize it. We think the Fermyon Platform can realize the developer experience promised by serverless – the event-driven development model focused on developing concise business logic. We also believe that the Fermyon Platform goes a long way toward removing the requirement that the developer needs to think about or manage infrastructure.

But we also think we can accomplish all of this while still granting DevOps and SREs the ability to do their job, carefully and economically allocating resources while still maintaining the ability to robustly troubleshoot. And that, we think, is what will make this style of serverless computing a pleasure to use.

