Kubernetes

How Rancher Labs’ K3s Makes It Easy to Run Kubernetes at the Edge

An introduction to Rancher's K3s, a stripped-down version of Kubernetes.
Aug 13th, 2020 10:59am by
Feature image by Angela Compagnone on Unsplash.

Kubernetes is everywhere: on a developer’s laptop, a Raspberry Pi, the cloud, the data center, hybrid cloud, and even multicloud. It has become the foundation of modern infrastructure, abstracting the underlying compute, storage, and network services. Kubernetes provides a level playing field that hides the differences between infrastructure environments, turning multicloud into a reality.

Kubernetes has also become the universal control plane for orchestrating, not just containers but also a variety of resources, including virtual machines, databases, and even SAP Hana instances.

Despite its rapid growth and evolution, Kubernetes poses many challenges for developers and operators. One of the key challenges is running Kubernetes at the edge. Compared to the cloud or data center, the edge is a very different environment: it runs in remote locations under tight constraints. Edge devices have a fraction of the compute, storage, and networking resources of their data center counterparts, have intermittent connectivity to the cloud, and operate mostly offline. These factors make it hard to deploy and manage Kubernetes clusters at the edge.

Rancher Labs, soon to be part of SUSE, created K3s, a flavor of Kubernetes that is highly optimized for the edge. Though K3s is a simplified, miniature version of Kubernetes, it doesn’t compromise on API conformance or functionality. From kubectl to Helm to Kustomize, almost all the tools of the cloud native ecosystem work seamlessly with K3s. In fact, K3s is a CNCF-certified, conformant Kubernetes distribution ready to be deployed in production environments. Almost all the workloads that run on a full-blown Kubernetes cluster are guaranteed to work on a K3s cluster.

Kubernetes, a 10-letter word, is often shortened to K8s by the community. Since K3s is roughly half the size of Kubernetes in terms of memory footprint, Rancher Labs coined the five-letter word “K3s” for its new distribution.

A Closer Look at K3s Architecture

The beauty of K3s lies in its simplicity. Packaged and deployed as a single binary (~100MB), you get a fully-fledged Kubernetes cluster running in just a few seconds. The installation experience is as simple as running a script on each node of the cluster.
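That installation script is published at get.k3s.io; a minimal single-node install looks like this (flags and behavior may vary by K3s release, so check the docs for your version):

```shell
# Install K3s on the current node; the script downloads the binary,
# registers a systemd (or openrc) service, and starts the server.
curl -sfL https://get.k3s.io | sh -

# The bundled kubectl talks to the local cluster out of the box.
sudo k3s kubectl get nodes
```

Within a few seconds, `k3s kubectl get nodes` should report the local node as `Ready`.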

The K3s binary is a self-sufficient, encapsulated entity that runs almost all the components of a Kubernetes cluster, including the API server, scheduler, and controller manager. By default, every installation of K3s includes the control plane, kubelet, and containerd runtime, which are sufficient to run a Kubernetes workload. Of course, it is possible to add dedicated worker nodes that only run the kubelet agent and containerd runtime to schedule and manage the pod lifecycle.

Compared to a traditional Kubernetes cluster, there is no clear distinction between master nodes and worker nodes in K3s. Pods can be scheduled and managed on any node, irrespective of the role it plays. So the nomenclature of master node and worker node does not apply to a K3s cluster.

In a K3s cluster, a node that runs the control plane components along with the kubelet is called a server, while a node that only runs the kubelet is called an agent. Both the server and the agent have the container runtime and a kube-proxy equivalent that manages tunneling and network traffic across the cluster.

In a typical K3s environment, you run one server and multiple agents. During installation, if you pass the URL of an existing server, the node becomes an agent; otherwise, you end up running another standalone K3s cluster with its own control plane.
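Joining an agent to an existing server can be sketched as follows; the hostname and token below are placeholders you would replace with your own values:

```shell
# On the server node, read the join token (path per the K3s docs):
sudo cat /var/lib/rancher/k3s/server/node-token

# On each agent node, pass the server URL and token so the node joins
# the existing cluster instead of starting its own control plane.
curl -sfL https://get.k3s.io | \
  K3S_URL=https://my-k3s-server:6443 \
  K3S_TOKEN=<token-from-server> sh -
```

Omitting `K3S_URL` is what makes a node come up as a standalone server.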

So, how did Rancher Labs manage to bring down the footprint of K3s? First, they got rid of many optional components of Kubernetes that are not critical to running a bare-minimum cluster. Then they added the essential elements, including containerd, Flannel, CoreDNS, CNI, the Traefik ingress controller, a local storage provisioner, an embedded service load balancer, and an integrated network policy controller. All of these elements are packaged as a single binary and run within the same process. Apart from these, the distribution also supports Helm charts out of the box.

The upstream Kubernetes distribution is bloated with a lot of code that can be easily excluded. For example, the storage volume plugins and cloud provider APIs contribute significantly to the distribution’s size. K3s conveniently omits all of this to minimize the size of the binary.

The other key difference is the way cluster state is managed. Kubernetes relies on etcd, a distributed key-value database, to store the entire state of the cluster. K3s replaces etcd with SQLite, a lightweight database proven in embedded scenarios; many mobile applications bundle SQLite to store state.

By running etcd on at least three nodes, the Kubernetes control plane becomes highly available. SQLite, on the other hand, is not a distributed database, which makes it the weakest link in the chain. To enable high availability of the control plane, K3s servers can instead be pointed to an external database endpoint. The supported databases include etcd, MySQL, and PostgreSQL. By delegating state to an external database, K3s supports multiple control plane instances, making the cluster highly available.
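A sketch of this HA setup, using K3s’s `--datastore-endpoint` flag with a placeholder MySQL endpoint and credentials:

```shell
# Start a K3s server backed by an external MySQL datastore
# (hostname, user, and password here are placeholders).
k3s server \
  --datastore-endpoint="mysql://user:pass@tcp(db.example.com:3306)/k3s"
```

Additional servers pointed at the same endpoint join the same control plane, which is what makes the setup highly available.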

Rancher Labs is experimenting with a distributed version of SQLite called DQLite which may eventually become the default data store for K3s.

The best thing about K3s is its “batteries included but replaceable” approach. For example, you can replace the containerd runtime with Docker CE, Flannel with Calico, local storage with Longhorn, and more.
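A couple of examples of swapping those batteries at install time (flag names per the K3s docs; verify them against your release):

```shell
# Use the host's Docker daemon instead of the embedded containerd,
# and skip the bundled Traefik ingress controller.
curl -sfL https://get.k3s.io | sh -s - --docker --disable traefik

# Or disable the default Flannel CNI so an alternative such as
# Calico can be installed in its place.
curl -sfL https://get.k3s.io | sh -s - --flannel-backend=none
```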

For a detailed discussion of the K3s architecture, I highly recommend watching this KubeCon 2019 session by Darren Shepherd, the architect of K3s.

K3s Deployment Scenarios and Topologies

The K3s distribution supports a variety of architectures, including AMD64, ARM64, and ARMv7. With a consistent install experience, K3s can run on a Raspberry Pi Zero, NVIDIA Jetson Nano, Intel NUC, or an Amazon EC2 a1.4xlarge instance.

If you need a single-node Kubernetes cluster that preserves your existing workflow of deploying manifests, install K3s on a server or edge device. This gives you the flexibility of using your existing CI/CD pipelines and container images along with Helm charts or YAML files.
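For instance, a standard kubectl workflow carries over unchanged to a single-node K3s cluster (the deployment name and image below are illustrative):

```shell
# Create and expose a workload with the bundled kubectl, exactly
# as you would on a full Kubernetes cluster.
k3s kubectl create deployment web --image=nginx
k3s kubectl expose deployment web --port=80 --type=ClusterIP
k3s kubectl get pods -l app=web
```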

If you need a highly available cluster at the edge running on an AMD64 or ARM64 architecture, install a three-node etcd cluster followed by three K3s servers and one or more agents. This gives you a production-grade environment with HA for the control plane.

When running the K3s cluster in the cloud, point the servers to a managed database such as Amazon RDS or Google Cloud SQL to run a highly available control plane with multiple agents. Each K3s server can run in a different availability zone to get the maximum uptime.

If you are running K3s in an edge computing environment with reliable, always-on connectivity, run the servers in the cloud with the agents at the edge. This gives you the flexibility of running a highly available and manageable control plane in the cloud while running the agents in remote environments.

Finally, you can deploy the K3s HA control plane in a 5G edge location such as AWS Wavelength and Azure Edge Zones environments with the agents running in the devices. This topology reflects the scenario of smart buildings, smart factories, and smart healthcare.

In the next part of this series on K3s, I will walk you through the steps of deploying an HA cluster in an edge environment. Stay tuned.

Janakiram MSV’s Webinar series, “Machine Intelligence and Modern Infrastructure (MI2)” offers informative and insightful sessions covering cutting-edge technologies. Sign up for the upcoming MI2 webinar at http://mi2.live.

TNS owner Insight Partners is an investor in: Docker, Kubernetes.