Top Kubernetes Distros

Managed Kubernetes as a Service

Kubernetes (k8s) is the front-runner in market share for orchestration solutions, and arguably the most robust solution for scaling up to the needs of large enterprises. Kubernetes is based on the application-container infrastructure that Google has used internally for years to scale its web services, such as Search, Gmail and Maps. When Google open-sourced the technology by donating the code to the Cloud Native Computing Foundation (CNCF), infrastructure experts everywhere gained a powerful new tool to deploy their own applications at web scale.

Just look at the list of brands that use Kubernetes to orchestrate their container infrastructure and you’ll understand why it has become the de facto standard. Comcast, eBay, Goldman Sachs, SAP – you’re certainly in good company.

The most common barrier new users face when setting up Kubernetes is the number of components that must be configured, including a container runtime such as containerd, a Container Network Interface (CNI) plugin and the etcd key-value store. From standing up your own Certificate Authority (CA) to configuring networking, it can quickly get overwhelming.
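
To give a sense of the manual work involved, here is a rough sketch of just the certificate step, assuming you bootstrap the cluster CA with openssl and sign the API server’s certificate by hand; the file names and subject fields are illustrative, not a prescribed layout.

```sh
# Illustrative only: create a self-signed CA, then sign a certificate for the API server.
# Every other component (etcd, kubelets, controller manager) needs similar treatment.
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -subj "/CN=kubernetes-ca" -days 3650 -out ca.crt

openssl genrsa -out apiserver.key 2048
openssl req -new -key apiserver.key -subj "/CN=kube-apiserver" -out apiserver.csr
openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -days 365 -out apiserver.crt
```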

There are automation scripts such as minikube and get.k8s.io that can install upstream Kubernetes for local development or on your cloud provider of choice, but there are drawbacks. You have to manually install SDKs on your machine, and the scripts make assumptions that would likely need to be tweaked for a production environment. When a new version of Kubernetes comes out, you also have to update manually and ensure the new version doesn’t break compatibility with your existing containers.
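
To illustrate the local-development path, spinning up (and later re-creating) a single-node cluster with minikube typically looks like the following; the version numbers are placeholders for whichever releases you are moving between.

```sh
# Start a local single-node cluster pinned to a specific Kubernetes version
minikube start --kubernetes-version=v1.10.0

# Verify the node and system pods came up
kubectl get nodes
kubectl get pods -n kube-system

# Upgrading means starting against a newer version and re-testing your workloads yourself
minikube start --kubernetes-version=v1.11.0
```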

As a result, a sizeable industry has sprung up around maintaining commercial distros of Kubernetes. Key players include Rancher, CoreOS Tectonic and OpenShift Origin. These vendors test and validate upstream changes before pushing them out to their user base, in addition to baking in features like automated deployment, their own compose file formats and an enhanced management console. The outcome is best described as a managed Kubernetes experience, where your Rancher, Tectonic or Origin server automatically performs health checks on the components of your Kubernetes clusters and keeps them up to date.

Whichever way you decide to achieve enterprise, production-grade Kubernetes in your organization, our cloud architects are here to help you succeed. Contact us for more information about any of these solutions.

Rancher

Free for unlimited nodes, with commercial support

Prior to version 2.0, Rancher supported Kubernetes alongside Docker Swarm, Mesos and its own Cattle container clusters. Beginning with version 2.0, Rancher standardized its orchestration layer on Kubernetes, a big vote of confidence for the ecosystem. Its own tools, like Cattle and rancher-compose, are not going away, but any new clusters you launch in Rancher 2.0+ are based on Kubernetes behind the scenes.

The Rancher server can be self-hosted on your own metal or in the cloud. Being itself a Docker container, Rancher can be launched in minutes on any Docker host. By default, Rancher uses GitHub as a third-party identity provider to authenticate users, but you can also use a local password store, LDAP or Active Directory.
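
As a minimal sketch, launching the Rancher 2.x server on an existing Docker host is a single command; the published ports follow Rancher’s documented defaults, but check the install notes for the exact image tag you intend to run.

```sh
# Run the Rancher server container, exposing its UI and API over HTTP and HTTPS
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  rancher/rancher:latest
```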

With a few clicks from the dashboard, you can deploy Rancher’s distro of Kubernetes onto the public or private cloud, using the AWS, Azure, Google Cloud, DigitalOcean, VMware, OpenStack, CloudStack or bare-metal drivers. Once the cluster is launched, you can graphically visualize the infrastructure containers living on each of your nodes, then point and click to add services/pods from the Rancher Catalog or supply your own YAML files.

If you’re a Kubernetes maven who wants to manage your cluster using the kubectl client and Kubernetes’ own UI, Rancher makes that easy too. From the Rancher dashboard, you can use a web-based terminal, or download a config file for your local client.
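
For example, once you’ve downloaded a cluster’s kubeconfig from the Rancher dashboard, pointing your local kubectl at it is all that’s needed to work with the cluster directly; the file name below is just a placeholder.

```sh
# Use the kubeconfig exported from Rancher instead of the default ~/.kube/config
export KUBECONFIG=$PWD/rancher-cluster.yaml

kubectl get nodes
kubectl get pods --all-namespaces
```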

Not unlike CoreOS, Rancher has its own container OS, dubbed RancherOS, designed as a lightweight solution for running container workloads. RancherOS goes one step further than Container Linux in that PID 1 is a system Docker daemon, which in turn runs a user Docker engine housing all the user containers. In place of a traditional init system, the system processes themselves have also been containerized.

Naturally, you can choose between RancherOS and a traditional Linux OS, such as Ubuntu, when you launch a new Kubernetes cluster from within Rancher. If you decide to install RancherOS as a standalone container OS (without the Rancher server), you can pass in the containers to run using a cloud-config file, much like Container Linux’s Ignition file format.
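
As a minimal sketch, assuming the rancher.services section of RancherOS’s cloud-config (which accepts docker-compose-style service definitions), a container started at boot might be declared like this; the service name and image are purely illustrative.

```yaml
#cloud-config
ssh_authorized_keys:
  - ssh-rsa AAAA... user@example.com
rancher:
  services:
    # An illustrative user-defined service that RancherOS starts at boot
    web:
      image: nginx:alpine
      restart: always
      ports:
        - "80:80"
```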

Rancher is a beginner-friendly way for smaller teams to get started with Kubernetes, abstracting away almost all of the complexity of setting up the individual components. By extending docker-compose files to support multi-host deployments through rancher-compose (sketched below), Rancher keeps the learning curve as gentle as possible for existing Docker Swarm and Stack users. The Rancher CLI and Catalog are somewhat vendor-specific and don’t fit within the paradigm of upstream Kubernetes, but they provide additional ways to interact with your Kubernetes cluster through Rancher.
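
To illustrate that extension, here is a hypothetical pair of files: a standard docker-compose.yml plus a rancher-compose.yml overlay adding Rancher-specific settings such as how many instances to run; the service name and values are made up.

```yaml
# docker-compose.yml -- the standard Compose definition of the service
version: '2'
services:
  web:
    image: nginx:alpine
    ports:
      - "80:80"
```

```yaml
# rancher-compose.yml -- the Rancher-specific overlay, e.g. the desired scale
version: '2'
services:
  web:
    scale: 3
```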

CoreOS Tectonic

Free for up to 10 nodes, starting at $995 for a 10-server license

From the start, CoreOS has hitched its wagon to the Kubernetes community, envisioning it as a vendor-agnostic orchestrator that can run containers based on Docker, or its own rkt (Rocket) runtime. CoreOS has been instrumental in contributing many open source components that power Kubernetes today, including etcd and flannel.

Tectonic is one of their flagship products, along with Container Linux and the Quay.io container registry. With a step-by-step install wizard, Tectonic deploys a cluster that is designed to diverge minimally from upstream Kubernetes on your choice of AWS, Azure or bare metal.

One of the design principles of CoreOS’ products is enabling “automated operations”, which represents the belief that automating component updates is the best way to keep systems secure. Like Container Linux, which automatically downloads and applies updates as they’re released, Tectonic has an Operator that goes out to your cluster and keeps every aspect up to date – more promptly than a human operator could.

The Tectonic Console is designed with enterprise-ready features, including full role-based access control (RBAC) to delegate access to the specific resources your team members need across the virtual and physical environments in your infrastructure. Tectonic Identity supports LDAP and SAML, as well as Dex, an open-source identity server developed by CoreOS based on OpenID Connect.
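
As an illustration of the plain Kubernetes RBAC objects this kind of delegation maps onto, here is a minimal Role and RoleBinding granting a hypothetical team read-only access to pods in one namespace; all of the names are placeholders.

```yaml
# Read-only access to pods in the "staging" namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: staging
  name: pod-reader
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
# Bind that role to a hypothetical group of team members
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: staging
  name: read-pods
subjects:
  - kind: Group
    name: frontend-team
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```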

For day-to-day operations, you can easily manage every aspect of your Kubernetes cluster, including pods, services and ingresses, from the Tectonic Console. Depending on how you prefer to work, you can upload a YAML file through the console or use kubectl commands to launch your services.
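
Either way, what you launch boils down to ordinary Kubernetes manifests; the deployment below is a generic placeholder you could paste into the console or apply from the command line with kubectl apply -f nginx-deployment.yaml.

```yaml
# nginx-deployment.yaml -- a generic example workload
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:alpine
          ports:
            - containerPort: 80
```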

For testing purposes, there is a free Tectonic Sandbox that can be launched on your local machine using Vagrant and VirtualBox. In production scenarios, Tectonic should be installed with either the Tectonic Installer or a Terraform configuration, and activated with a license.
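
Assuming Vagrant and VirtualBox are already installed, bringing the sandbox up is the usual Vagrant workflow, run from the directory containing the sandbox’s Vagrantfile; the download and unpacking steps are omitted here.

```sh
vagrant up        # provision the sandbox VMs
vagrant status    # confirm the machines are running
vagrant destroy   # tear everything down when you’re finished
```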

Tectonic is an enterprise Kubernetes solution aimed at teams that want to hit the ground running with container orchestration, with the peace of mind that sane defaults are being used. For enterprises concerned about lock-in, Tectonic keeps its distro as close to upstream Kubernetes as possible without hampering usability. This is excellent for organizations that want to make sure they’re training their employees on the “Kubernetes way of doing things” from the start.