The MonkCast: WASM and Edge Compute
Fermyon Staff
serverless
edge
webassembly
When Half a Millisecond Meets 300 Milliseconds of Network Latency
Our CEO Matt Butcher recently joined the MonkCast, RedMonk’s podcast series exploring the intersection of developer experience and emerging technologies. Host Rachel Stephens brought together Matt and Allen Duet, Director of Product Management at Akamai, for a conversation about WebAssembly, serverless performance, and what “edge native” computing actually means in practice.
Here are some of the highlights from the episode, unpacked.
The physics problem nobody talks about
Matt described a pivotal moment for the team: they’d gotten WebAssembly cold starts down to half a millisecond—an incredible achievement. Then they realized that if you still have 300 milliseconds of network latency, you haven’t actually solved the problem.
It’s one of those realizations that seems obvious in hindsight, but it reframes the whole conversation about serverless performance.
The cold start tax of Gen1 serverless
Matt brought up this stat that anyone doing Core Web Vitals work knows cold: it takes about 100 milliseconds to blink your eye, and research shows users start losing interest if they don’t see progress in that time. Meanwhile, AWS Lambda takes 200-500 milliseconds just to cold start.
The real kicker is what this forces teams to do. You either accept that your users are waiting, or you pay to keep containers warm—which means you’re paying a performance tax that defeats the whole “pay for what you use” promise of serverless. As Allen put it: “You realize you can only really do this at scale.”
We’ve heard the same thing from developers over and over: they love writing serverless functions because it’s a small code base focused on business logic. But the performance tradeoffs have been rough.
Why we ended up looking at WebAssembly
The team evaluated a lot of options: hyper-optimized containers, VMs, and other approaches. What we kept coming back to was that WebAssembly was already solving these problems in browsers. Sub-millisecond cold starts. Real sandboxing that lets you run user code side-by-side without worrying about one tenant attacking another. And crucially, those instant startups mean you can actually do true pay-per-use without the performance penalty.
That’s what led to building Spin (now a CNCF project along with SpinKube). The goal was to make it feel natural—not “here’s a completely foreign way of doing things, rewrite all your stuff,” but more like “you’re removing complexity and making your code smaller.” The APIs mirror what developers already know in their language of choice.
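To make that concrete, here is a minimal sketch of a Spin HTTP component in Rust. The handler shape and builder methods follow recent versions of the Spin Rust SDK and vary slightly between releases, so treat the details as illustrative rather than canonical.

```rust
use spin_sdk::http::{IntoResponse, Request, Response};
use spin_sdk::http_component;

// A complete Spin HTTP component: no server setup or framework scaffolding,
// just a function that handles the request.
#[http_component]
fn handle(_req: Request) -> impl IntoResponse {
    Response::builder()
        .status(200)
        .header("content-type", "text/plain")
        .body("Hello from a Spin component")
        .build()
}
```

Locally you iterate with `spin build` and `spin up`, and the same component can then be deployed to a Spin-compatible host.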
What Akamai’s Edge Network delivers
Here’s what makes the conversation with Allen interesting. He’s coming at this from the Akamai side, where they’ve got 4,100 locations, 1,000 terabits per second of capacity, and presence in 130 countries. They acquired Linode a few years back and have been thinking about what “edge native” actually means.
Allen made a point about how early serverless adoption was challenging not just because of the code changes, but because “the infrastructure necessary to help support that, operate it and secure that was still in flight.” Now that those pieces have matured, the developer experience has become the real differentiator.
What’s interesting is how the two sides of this fit together. We can get compute time down to half a millisecond. Akamai can get network latency down into the 7-10 ms range because of that distributed presence. Combined, you’re actually delivering on what serverless was supposed to be.
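As a rough, back-of-envelope comparison using the round numbers quoted in the episode (illustrative figures, not benchmarks):

```rust
// Illustrative latency budget, using round numbers cited in the episode.
fn main() {
    let blink_ms: f64 = 100.0;        // ~100 ms to blink; rough attention budget
    let gen1_ms: f64 = 350.0 + 300.0; // mid-range Lambda cold start + long-haul network
    let edge_ms: f64 = 0.5 + 8.5;     // Wasm cold start + nearby edge point of presence

    println!("Gen1 serverless: {gen1_ms} ms ({} ms over the blink budget)", gen1_ms - blink_ms);
    println!("Wasm at the edge: {edge_ms} ms ({} ms under the blink budget)", blink_ms - edge_ms);
}
```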
The part that surprised us
Allen mentioned something we’ve been noticing too: “We’re seeing customers who aren’t there at all for Wasm — they’re there to learn how to more easily deploy containerized solutions. And the feedback is: boy, Spin is a great experience. I want that, irrespective of the fact that it’s fast and secure.”
People are finding the developer experience compelling independent of the performance story. Which means they’re starting to use it in hybrid architectures we didn’t necessarily design for—mixing WebAssembly functions with containerized workloads, splitting up what used to be a single service and pushing just the parts that benefit from edge execution out there.
Early on, people came to WebAssembly because they were interested in the technology and then found a use case. Now it’s flipping—they’re solving a problem and finding that Spin fits, almost incidentally discovering it’s also fast and runs at the edge.
What this actually looks like
The applications people are building have evolved. It started with personal blogs and API servers—things that were hard to do well on Lambda. Then we saw experimental AI apps.
Now we’re seeing more sophisticated patterns: authentication and authorization pushed to the edge to stop bad actors before they hit your infrastructure, geographic content decisions happening at the point of request, digital rights management for streaming platforms. Allen mentioned a pattern with media services where you’ve got a control plane centrally, LKE clusters in various locations handling streams, and WebAssembly functions doing ad injection into those streams.
It’s composite architectures: WebAssembly handling fast, stateless edge logic, and traditional containers managing stateful services regionally.
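As a sketch of what pushing authorization to the edge can look like in that split, here is a hypothetical Spin component that turns away unauthenticated requests before they ever reach the regional backend. The header-access calls follow recent Spin Rust SDK conventions and the token check is a placeholder, so treat the specifics as assumptions rather than a recipe.

```rust
use spin_sdk::http::{IntoResponse, Request, Response};
use spin_sdk::http_component;

// Hypothetical edge gate: reject unauthenticated traffic at the point of
// presence so it never consumes capacity in the stateful regional services.
#[http_component]
fn gate(req: Request) -> impl IntoResponse {
    // Placeholder check: a real deployment would verify a signed JWT or
    // session token, not just look for a bearer prefix.
    let authorized = req
        .header("authorization")
        .and_then(|value| value.as_str())
        .map(|value| value.starts_with("Bearer "))
        .unwrap_or(false);

    if !authorized {
        return Response::builder()
            .status(401)
            .body("unauthorized")
            .build();
    }

    // An authorized request would be passed through (or proxied) to the
    // containerized backend from here.
    Response::builder()
        .status(200)
        .body("request allowed")
        .build()
}
```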
Where we are now
Both Spin and SpinKube are CNCF projects now, which feels like the right home for this work. As Matt said in the interview: “Having worked on OpenStack and then worked on Kubernetes, to me it was the obvious truth that we needed to make sure we built a good, solid open source developer experience and runtime.”
The interesting question going forward is what applications emerge as people discover they can push more logic to the edge without thinking too hard about it. Allen mentioned he expects the next time they talk, they’ll be discussing “a whole new set of use cases that are coming to us because people are finding this amazing developer experience and then just applying it in creative ways.”
There’s a lot more in the full MonkCast episode, including deeper dives into specific use cases, how Akamai’s App Platform integrates with Spin, and what both teams are seeing at KubeCon. Rachel does a great job pulling out the technical details while keeping the conversation grounded in real-world developer experience.
Watch the full MonkCast episode for more on WebAssembly, edge computing, and the future of serverless. Learn more about Spin at spinframework.dev.