The Spin and Kubernetes Story

Server-side WebAssembly (Wasm) applications built with Spin can run anywhere. Because Spin uses WASI through Wasmtime as its runtime, Spin applications can run on any processor architecture and operating system, which is a big benefit.

Today, we rarely run processes directly on a server, e.g., from a shell or via a service daemon like systemd. Instead, the platforms we run on typically involve highly sophisticated schedulers that ensure availability across multiple servers and make it easy to scale more dynamic workloads.

Implementations of the suite of Open Container Initiative (OCI) specifications, together with Kubernetes (K8s), are widely adopted across public cloud providers and private data centers to solve exactly this. So for Spin to fulfill its promise of being portable across any platform, it is important that Spin also has a story for being part of the ecosystem around OCI and K8s.

Packaging and Distributing Spin Applications Using OCI

For a while now, the Spin Command Line Interface (CLI) has had an option to package Spin applications as OCI images. By running the spin registry push command, you can push your Spin application to any OCI-compliant registry, and spin up -f <oci-reference> is the command you’d use to run an application directly from a registry. This solves an important piece of the puzzle: distributing Spin applications and pulling them onto servers when needed (deployments, scaling, etc.), using well-known and mature standards.
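In practice, the push/run round trip looks roughly like this; note that the image reference below is a hypothetical example, not a published image:

```shell
# Build the application, then push it to an OCI-compliant registry.
# (ghcr.io/acme/hello-spin is a hypothetical image reference.)
spin build
spin registry push ghcr.io/acme/hello-spin:v1.0.0

# Later, on any machine with Spin installed, run it straight from the registry:
spin up -f ghcr.io/acme/hello-spin:v1.0.0
```

Authentication against the registry (e.g., with spin registry login) is assumed to have happened beforehand.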

Spin OCI images are not containers as we normally think of them. The OCI image specification allows different media types to be defined as part of an image. This is, for instance, how multi-arch images work, where the manifest of the image can contain multiple media types and architectures of the same image.

Running Spin Applications using an OCI-Compliant Runtime

Having solved the distribution problem, we also need a runtime that understands how to run Spin OCI images. As said before, because these are not containers, the usual runtimes, like runc (used by, e.g., containerd and CRI-O), youki (Rust-based), and crun (C-based), do not know how to “execute” a Spin OCI image, as there is no filesystem and no process to call on start (typically the ENTRYPOINT of a Dockerfile) - it’s not a container!

Microsoft’s Deis Labs team has helped solve this problem by building runwasi, a project that enables OCI-compliant runtimes for server-side Wasm through containerd. The implementation that enables Spin applications is the containerd-wasm-shims project. This containerd shim uses runwasi and Spin to run Spin applications that are bundled as OCI images.
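Once the shim binary is installed on a node, containerd learns about it through its configuration file. A minimal sketch of that registration is below; the exact plugin path and runtime_type string vary between containerd and shim versions, so treat this as illustrative and check the containerd-wasm-shims documentation for your release:

```toml
# /etc/containerd/config.toml (fragment) - registers a "spin" runtime handler.
# The runtime_type string is version-dependent; verify it against your shim release.
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.spin]
  runtime_type = "io.containerd.spin.v1"
```

containerd resolves the runtime_type to a shim binary (e.g., containerd-shim-spin-v1) that must be available on the node’s PATH.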

Putting Things to Work With K8s

Now that we have all three OCI specifications covered for Spin (packaging, distribution, and runtime), let’s have a look at the tooling available to use this in practice.

As mentioned earlier, the Spin CLI is all you need to create images and push them to a registry (e.g., Docker Hub, GitHub Packages, etc.). It’s easy to use in CI/CD systems like GitHub Actions: simply download the Spin binary and run those commands as part of your workflow. For example, spin registry push --build <registry>/<image-name>:<tag> will both build and push your Spin application.
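A CI job could be as simple as the following sketch; the image reference is a hypothetical example, and the install script URL is the one documented on developer.fermyon.com:

```shell
# Install the Spin CLI on the CI runner.
curl -fsSL https://developer.fermyon.com/downloads/install.sh | bash
sudo mv ./spin /usr/local/bin/spin

# Build the application and push it in one step.
# (Registry authentication, e.g., via spin registry login, is assumed.)
spin registry push --build ghcr.io/acme/hello-spin:${GITHUB_SHA}
```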

As also mentioned earlier, you can now run your Spin application on any system simply by using spin up -f <registry>/<image-name>:<tag>. But how do we do this with K8s?

We’ve recently documented the steps for various scenarios on https://developer.fermyon.com; for the latest and most up-to-date information, I’d recommend you go there and get all the details.

At a high level, we need the following:

  1. A K8s cluster using containerd as its runtime
  2. A way to install containerd-wasm-shim, and configure containerd with the new shim
  3. Runtime classes defined in the cluster
  4. Deployment specification for our Spin application

And all of this is available today, using the following tools:

  1. Locally, you can use k3d or Minikube. But any K8s cluster supporting containerd will work.
  2. Liquid Reply has been working on a project called Kwasm, which lets you install the shim and configure containerd across your cluster. Kwasm supports a really long list of K8s distributions, providing a lot of great options. However, be aware that, at the time of writing this article, the project “is meant to be used for development or evaluation purposes.” Alternatively, if you work with k3d, the containerd-wasm-shims project publishes an image for the cluster nodes that already contains the shims and the containerd configuration.
  3. The runtime classes can easily be applied to your cluster using the following spec:
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: wasmtime-spin
handler: spin
  4. Finally, the Spin application deployment specification will look like this:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wasm-spin
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wasm-spin
  template:
    metadata:
      labels:
        app: wasm-spin
    spec:
      runtimeClassName: wasmtime-spin
      containers:
        - name: spin-hello
          image: ghcr.io/deislabs/containerd-wasm-shims/examples/spin-rust-hello:v0.10.0
          command: ["/"]
          resources: # limit the resources to 128Mi of memory and 100m of CPU
            limits:
              cpu: 100m
              memory: 128Mi
            requests:
              cpu: 100m
              memory: 128Mi
---
apiVersion: v1
kind: Service
metadata:
  name: wasm-spin
spec:
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  selector:
    app: wasm-spin
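Putting the pieces together, a local end-to-end run might look like the following sketch. The k3d node image tag, port mapping, file names, and the /hello route are assumptions here - check the containerd-wasm-shims releases and examples for the current values:

```shell
# Create a local k3d cluster from the node image published by the
# containerd-wasm-shims project (tag is an example; verify against releases).
k3d cluster create wasm-cluster \
  --image ghcr.io/deislabs/containerd-wasm-shims/examples/k3d:v0.10.0 \
  -p "8081:80@loadbalancer" --agents 2

# Apply the RuntimeClass, Deployment, and Service shown above,
# assumed to be saved as runtime-class.yaml and spin-app.yaml.
kubectl apply -f runtime-class.yaml
kubectl apply -f spin-app.yaml
kubectl rollout status deployment/wasm-spin

# Expose the Service locally and call the app
# (the /hello route is an assumption based on the example image).
kubectl port-forward svc/wasm-spin 8080:80 &
curl http://localhost:8080/hello
```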

What’s really nice about this solution is that it closely aligns the experience of working with Spin Wasm applications in K8s with how it looks for containers, so we can easily attach a Service (as seen above) or an ingress network policy. Not all concepts in K8s are compatible with Spin Wasm applications yet, but we’re in the early days, and feedback on what users will need is highly appreciated at this point.

Containers and Spin side-by-side

One scenario this opens up is using a Redis installation to back the Spin wasi-keyvalue implementation, enabling developers to use the straightforward key-value API in the Spin SDKs.
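Wiring Spin’s key-value store to Redis happens through Spin’s runtime configuration file. Below is a minimal sketch; the Redis URL is a hypothetical in-cluster Service address:

```toml
# runtime-config.toml - backs the "default" key-value store with Redis.
# The URL is a hypothetical in-cluster Service address.
[key_value_store.default]
type = "redis"
url = "redis://redis.default.svc.cluster.local:6379"
```

The file is passed to Spin with spin up --runtime-config-file runtime-config.toml.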

For an end-to-end walkthrough of configuring this, check out this article: Runtime Configuration on Kubernetes. The “Using K8s secrets” section of that article also shows how to create an application that consumes a K8s secret. To achieve this, the secret is provided as a volume in the pod manifest and then referenced as an environment variable once the application is running.

I hope the above gave you a good idea of how we’re working with partners like Docker, Microsoft, Liquid Reply, and others to make the promise of portable Wasm also hold true for the higher-level infrastructure and platforms many people use today.

Resources

Finally, a list of resources to help you dive further into this topic:

Interested in learning more?

Talk to us