June 06, 2024

Why Serverless Is Trending Again

Matt Butcher


The first time I heard the term “serverless” in 2016, I groaned. I feared that like Cloud, it was just another word that marketing departments would throw around, but which meant nothing substantive. And true to form, the term spiked in usage in 2020, and then went into decline.

But something happened: Early in 2022, interest began to climb again. And right now, the term is as popular as it’s ever been. Why?

Source: Google Trends (Serverless)

The resurgence of serverless seems to be a combination of four things:

  1. A solid definition has emerged.
  2. High productivity has been realized.
  3. Huge success stories have been told.
  4. A new compute layer, WebAssembly, has fixed the flaws of earlier serverless.

In this post, we’ll go through these four in more detail.

1. Getting Rigorous About the Definition

Part of the problem with the early use of the term “serverless” is that it meant different things to different people, and that led to some misunderstandings. I once had to explain to a cloud engineer that there were still hardware servers running somewhere; he had mistakenly believed that serverless solutions literally ran without a server anywhere in the stack. The same sorts of misunderstandings were common when the term “cloud” became popular several years prior. Now, though, we have developed a common understanding of both serverless and the cloud.

To me, the best way to understand serverless is to begin with a question: what is the “server” we are doing without? The answer is that serverless has more to do with the developer design pattern than with the hardware sitting beneath:

A serverless app is one where the developer does not need to write a software server (a socket listener or daemon), but instead writes a function that handles an individual event, like a request, and returns a response. (Paraphrased from Serverless Applications)

This function is not long-lived. It is invoked once per request, and handles only one request. Another way to think about serverless is as an event-based service architecture. An event triggers a handler, which responds and then shuts down.
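The pattern above can be sketched in plain JavaScript. This is a toy illustration, not any platform’s real API: `invoke` is a stand-in for the platform’s dispatcher, which calls the handler once per event and then discards the instance.

```javascript
// A toy illustration of the serverless pattern (not a real platform API).
// The developer writes only a handler: one event in, one response out.
function handleRequest(request) {
  return { status: 200, body: `Hello, ${request.path}` };
}

// The platform, not your code, owns the socket listener and event loop.
// A stand-in "platform" dispatcher invokes the handler once per event:
function invoke(handler, event) {
  const response = handler(event);
  // ...in a real platform, the instance is torn down after this returns.
  return response;
}

console.log(invoke(handleRequest, { path: "/greet" }));
// → { status: 200, body: 'Hello, /greet' }
```

Note what is missing: there is no `listen()`, no port, and no long-lived process. That absence is the “server” that serverless does without.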

2. Undeniable Productivity

Adrian Cockcroft, ex-VP of Cloud Architecture Strategy at AWS, tells this story about how he encountered serverless (via AWS Lambda) when he first joined Amazon:

[A]t the end of 2016, I joined AWS and was judging a one-day AWS re:Invent hackathon. I was surprised to see every team choosing to build serverless architectures using Lambda, and I was also amazed to see what they were able to build from scratch in one day…. This was eye opening, but the problem was that serverless seemed like a fairytale. When it worked, the results were better by an order of magnitude or more—a ludicrous improvement that most people discounted as fantasy or something that only “unicorns” could use. (From the Foreword to The Value Flywheel Effect)

In the quote above, Cockcroft captures that magical feeling of disbelief when confronted with a technological leap.

3. Big Success Stories

Then came the success stories. Codebases cut to a fraction of their original size. Small teams suddenly able to maintain multiple serverless apps where before they could manage only a single server-based app. Amazon reported at re:Invent a few years ago that they handle 10 trillion function invocations a month.

Here’s one such story from a large insurance organization:

[A] single web application at Liberty Mutual was rewritten as serverless and resulted in reduced maintenance costs of 99.98%, from $50,000 a year to $10 a year…. Thanks to serverless…, we were also able to release applications quicker, which meant getting feedback from users and customers sooner. This in turn gave us an advantage in the market… Serverless also opened new possibilities that seemed too costly or difficult before, such as integration with AI and data services or event streaming services. (From The Value Flywheel Effect)

These were big claims made not by the marketing folks trying to bolster their company’s product, but by end users claiming victory over their very real business problems.

Two years ago, I tested the hypothesis that serverless could substantially cut code weight. Here is what I found: installing ExpressJS added 100 upstream dependencies to my project before I had even written “Hello World!”.

Sum up the dependencies, and the starting point before we have begun coding is 54,000 lines of code. And that’s for a minimalist framework. If we add in an MVC layer like Locomotive, our starting code weight jumps to almost 220,000 lines of code. (From Rethinking Microservices)

In contrast, the basic “Hello World” in JavaScript in Spin literally imports no dependencies. It’s just a simple event handler. This is the source code for a Spin JS app in its entirety:

export async function handleRequest(request) {
    return {
        status: 200,
        body: "Hello World!"
    }
}
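Alongside that handler, a Spin app is described by a small manifest. The following is a hedged sketch of a `spin.toml` for a JS app; the names and paths are illustrative, and the exact fields may differ across Spin versions:

```toml
# Illustrative Spin application manifest (field names per Spin's v2 format).
spin_manifest_version = 2

[application]
name = "hello"
version = "0.1.0"

# Route all HTTP requests to the "hello" component.
[[trigger.http]]
route = "/..."
component = "hello"

[component.hello]
# The JS source is compiled to a Wasm module by `spin build`.
source = "target/hello.wasm"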

Code weight may seem like a nice-to-have until you start thinking about day 2 operations. Every one of those 100 dependencies in the NodeJS app is a maintenance task. Much of the time, keeping them updated is automatic. But breaking changes, security vulnerabilities, package renames, and all kinds of other things can suddenly turn some low-level dependency into a headache.

But then there were the drawbacks….

Serverless seemed to be on the path to success. As a software development pattern, it seemed, as Cockcroft suggested, almost too good to be true.

However, a few limitations hampered many development teams from building front-line (user-facing) serverless applications:

  1. Performance: Lambda’s cold start takes 200-500 milliseconds, and competitors like Azure Functions are even slower. For some self-hosted serverless frameworks, we’ve heard of cold starts as long as 37 seconds. The stopgap has been to “pre-warm” instances, which turns your short-running serverless function into a long-running process.
  2. Cloud Lock-in: A function written for Lambda cannot be run on Azure Functions. An Azure Function cannot be run on Google Cloud Functions. And so on. Buying into a hosted serverless platform comes at the cost of lock-in.
  3. Open Source Solutions were Container Based: The number one issue with the open source implementations of serverless frameworks is that they were based on the wrong compute platform. Packaging serverless functions in containers introduced all of the server-ish runtime constraints that serverless was supposed to obviate.
  4. Not Portable: Surprisingly, for something that is “serverless,” developers must know a lot about the servers on which their serverless functions will execute. A function is tied, at the API level, to an operating system (Windows and Linux being the only real choices) and architecture (Intel or Arm).
  5. Local Runtime: Many of the serverless architectures either can’t be run locally or run in a local “compatibility” mode that merely simulates what a production environment would look like. This makes the development process hard.

One frustrated operator once blurted out to me (and I’m paraphrasing):

I have this nice Kubernetes cluster that does most of our work, and then I’ve got this Frankenstein’s monster of serverless functions that are on Lambda. They have their own security policies, deployment strategies, testing methods… they just don’t fit into the way we do platform engineering.

What he was expressing was the culmination of the shortcomings of serverless.

Fortunately, the problem is now solved.

WebAssembly is the Future of Serverless

In 2022, Fermyon released Spin, a serverless developer tool that gives you all of the benefits of the first generation of serverless but fixes its problems. Earlier this year, we released SpinKube, making it possible to run mind-bogglingly high-performance serverless functions inside any Kubernetes distribution:

  • Performance: Spin apps cold start in under one millisecond. That’s orders of magnitude faster than Lambda and other serverless frameworks.
  • Runs Anywhere: From devices smaller than a Raspberry Pi to servers with hundreds of CPUs, Spin applications can run anywhere.
  • Open Source WebAssembly Runtime: With its compact binary format optimized for speed, portability, and safe execution, WebAssembly is a much better runtime for serverless functions. And since we’ve integrated it into the container stack, you can run your serverless applications side by side with containers in Kubernetes or other Dockerized environments. For example, as of this week, Rancher Desktop includes Spin and SpinKube; the integration with SpinKube is documented here.
  • Portable: The WebAssembly binary format is portable across a wide variety of operating systems (including Linux, macOS, and Windows) and architectures (including Intel and Arm).
  • Local Runtime: As a result of all of the above, you can run your applications locally or just about anywhere else.
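On the Kubernetes side, SpinKube represents each app as a custom resource. The following is a sketch assuming the `SpinApp` resource shape from the SpinKube docs; the image reference is a placeholder, and field names may vary by SpinKube version:

```yaml
# Illustrative SpinApp custom resource for SpinKube (image is a placeholder).
apiVersion: core.spinoperator.dev/v1alpha1
kind: SpinApp
metadata:
  name: hello
spec:
  # OCI-packaged Spin app, pushed with `spin registry push`.
  image: "ghcr.io/example/hello:latest"
  replicas: 2
  # The containerd shim that runs Wasm workloads alongside containers.
  executor: containerd-shim-spin
```

Applying this with `kubectl apply` lets the Spin operator schedule the app next to ordinary container workloads in the same cluster.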

This is why the term “WebAssembly” is showing up so frequently in conversations about serverless, and why Fermyon is so excited about building a runtime that addresses the limitations of the first generation.

That’s Why Serverless is Hot Again

I started with a graph showing how serverless is on the rise again. And we’ve seen why:

  • We are clearer about what serverless is.
  • We’ve seen productivity go up (and maintenance go down).
  • And we’ve read the success stories from organizations large and small.
  • Most importantly, WebAssembly is the serverless runtime that fixes the problems of the first generation.

Your journey begins here! Start optimizing your Kubernetes environment with SpinKube, book a demo with us to try out Fermyon Platform for Kubernetes, build high-performance WebAssembly-powered applications with Spin, and explore our comprehensive Cloud offerings.
