Docker Swarm Consulting and Support

CaaS vs Self-Managed Container Clusters

Choosing between a containers-as-a-service (CaaS) platform and building your own container cluster is one of the most common decisions facing IT organizations today. Although we might be biased, since Autoize helps enterprises build & operate container clusters on commodity cloud providers at ⅕-⅓ of the cost of Google Kubernetes Engine (GKE), Azure Kubernetes Service (AKS), Amazon Elastic Kubernetes Service (EKS), and Amazon Fargate, hear us out.

Using a pure CaaS platform has many advantages, not least that the service provider delivers a tailored distribution of Kubernetes and handles load balancing, high availability, and backup and recovery. The complexity of setting up Kubernetes master and worker nodes, and networking them together, is abstracted away by a CaaS. Fargate goes one step further, abstracting away the underlying instances altogether for stateless workloads (no persistent storage) and charging instead for RAM/CPU consumption per hour. Although you won’t find a management fee listed in Google’s or Amazon’s pricing for deploying a “managed Kubernetes” cluster, that cost is very much embedded in the 3-5x higher price of compute resources compared to VPS/dedicated providers.

Furthermore, none of the major CaaS providers offer “Docker Swarm as a Service”, only “Kubernetes as a Service” or, in the case of Amazon’s ECS, their own proprietary container orchestration platform. Kubernetes is well known for being overly complicated to maintain for leaner organizations without a dedicated ops team, so Docker Swarm and Mesos remain the only other open source container orchestration alternatives. Moreover, no CaaS out there uses pure upstream Kubernetes from the open source project. Although the primitives such as pods and services remain the same, it’s non-trivial to move a Kubernetes infrastructure between GKE, AKS, and EKS.

Many agencies and managed service providers tell us that the learning curve for Docker Swarm is much friendlier than that of Kubernetes, for deploying both dev/test and production workloads. That’s why we are pleased to provide support for building & operating Docker Swarm clusters on any cloud (or even a hybrid cloud).
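
As a rough illustration of that learning curve, the sketch below shows how few commands it takes to turn a single Docker host into a one-node Swarm and deploy a first stack. It is only a sketch: the stack name, service name, and image are placeholders rather than anything from a specific customer environment.

# Turn the current Docker host into a one-node Swarm (it becomes a manager).
docker swarm init

# Describe the workload in a Compose-format stack file (placeholder service and image).
cat > stack.yml <<'EOF'
version: "3.8"
services:
  web:
    image: nginx:alpine          # any container image
    ports:
      - "80:80"
    deploy:
      replicas: 2                # Swarm schedules and heals these replicas
EOF

# Deploy the stack and inspect the resulting services.
docker stack deploy -c stack.yml demo
docker service ls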

If using one of the major cloud providers (AWS, Azure, or Google) is not an option due to cost, or because your organization requires an EU-owned and operated hosting provider (for instance due to GDPR), we can help you deploy a container orchestrator and management UI on any Linux servers.
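
For example, bootstrapping a Swarm across a handful of generic Linux servers might look roughly like the following sketch; the advertise address is a placeholder from the documentation IP range, and the worker join token is printed by the manager rather than shown here.

# On the first Linux server (which becomes a Swarm manager), advertise a reachable IP.
docker swarm init --advertise-addr 203.0.113.10      # placeholder address

# Print the join command (including the token) for adding worker nodes.
docker swarm join-token worker

# On each additional Linux server, run the printed command, e.g.:
docker swarm join --token <worker-token> 203.0.113.10:2377

# Back on the manager, confirm all nodes have joined.
docker node ls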

Custom Docker Swarm Deployment & Management

All the infrastructure resides in our customers’ own hosting accounts or on-premises, giving you complete ownership of (but shared responsibility for) your cluster. Our Docker Swarm consulting, deployment, and management services can address:

  • Planning server capacity for your expected container workload(s)
  • Deploying, patching, and maintaining the Swarm manager and worker nodes
  • Implementing open source metrics and monitoring (Prometheus and Grafana)
  • Implementing centralized logging (Elasticsearch, Logstash, Kibana)
  • Adding a graphical dashboard (Portainer) to deploy & manage your stacks (sketched below)
  • Handling persistent storage for stateful containers (Ceph or GlusterFS)
  • Configuring Swarm overlay networking and a reverse proxy / load balancer (see the stack file sketch after this list)
  • Writing Dockerfiles and Docker Stack files for your open source or custom apps
  • Architectural considerations for deploying applications as containers
  • Deploying a self-hosted container image registry (Portus, VMware Harbor)
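
As an example of the graphical dashboard item above, Portainer can run as a Swarm service pinned to a manager node. This is a sketch only: the network name, published port, and volume name are illustrative choices, not fixed requirements of our deployments.

# Overlay network for the dashboard (and agents, if added later).
docker network create --driver overlay portainer_net

# Run the Portainer CE UI on a manager node, giving it the local Docker socket
# so it can manage the Swarm; its data persists in a named volume.
docker service create \
  --name portainer \
  --network portainer_net \
  --publish 9443:9443 \
  --constraint 'node.role == manager' \
  --mount type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock \
  --mount type=volume,src=portainer_data,dst=/data \
  portainer/portainer-ce:latest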
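
For the overlay networking and reverse proxy item, no particular proxy is prescribed above; the sketch below assumes Traefik v2 purely as an example, with a placeholder application image and hostname, and doubles as a small Docker Stack file of the kind we write for clients.

# Write an illustrative stack file, using Traefik v2 as the reverse proxy / load balancer.
cat > proxy-stack.yml <<'EOF'
version: "3.8"

networks:
  proxy:
    driver: overlay                      # overlay network shared by the proxy and apps

services:
  traefik:
    image: traefik:v2.11
    command:
      - --providers.docker.swarmMode=true          # read services from the Swarm API
      - --providers.docker.exposedByDefault=false
      - --entrypoints.web.address=:80
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro   # needed to read service labels
    networks: [proxy]
    deploy:
      placement:
        constraints: [node.role == manager]

  app:
    image: nginx:alpine                  # placeholder application image
    networks: [proxy]
    deploy:
      replicas: 2
      labels:                            # routing rules live on the Swarm service labels
        - traefik.enable=true
        - traefik.http.routers.app.rule=Host(`app.example.com`)    # placeholder hostname
        - traefik.http.services.app.loadbalancer.server.port=80
EOF

# Deploy the proxy and the placeholder app as one stack.
docker stack deploy -c proxy-stack.yml edge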