Hosting Customer Applications Made Easy
Are you a web agency or managed service provider looking to build or renew your infrastructure for hosting customer applications? If you’re still using one virtual machine for each service (e.g. application, database, memory caching, etc.), here’s why you should consider the Docker container platform along with an orchestrator such as Docker Swarm or Kubernetes to construct a custom hosting environment based on a multi-tenant container cluster.
What is Docker?
Contrary to popular belief, a Docker container is not simply a “lightweight VM.” In a conventional hosting environment, a single VM typically runs multiple services managed by systemd or SysV init. A Docker container, by contrast, is designed to run a single process responsible for one microservice, comprising one component of the overall application architecture.
Why are Containers Better than VMs?
Like their predecessor, LXC containers, Docker containers running on a single host share the kernel of the host operating system. Compared to hypervisor-based virtualization, the overhead of a container is much smaller: unlike a VM, a container does not need to run its own instance of a kernel.
Containers also spin up much faster than VMs: seconds, compared to the minutes it might take to boot a VMware image or an Amazon Machine Image (AMI). In many cases, the performance of a containerized workload is virtually indistinguishable from one running directly on the host OS.
Benefits of Containers for Agencies & Managed Service Providers
When you define services in a YAML-based Docker stack file or Kubernetes manifest, the orchestrator automatically schedules containers on the available worker nodes (with the required capabilities) to maintain the desired state. If a node crashes for any reason, the orchestrator detects the deviation from the desired state and brings up the affected containers on a healthy worker node.
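As a sketch, a minimal Docker stack file for a hypothetical two-service application might look like the following (the image names, replica counts, and node label are illustrative, not a prescribed configuration):

```yaml
# stack.yml -- illustrative sketch; images, counts, and labels are assumptions
version: "3.8"
services:
  app:
    image: nginx:alpine          # stands in for the customer application image
    deploy:
      replicas: 2                # desired state: two running copies
      restart_policy:
        condition: on-failure    # reschedule the container if it crashes
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD_FILE: /run/secrets/db_password
    secrets:
      - db_password
    deploy:
      placement:
        constraints:
          - node.labels.storage == ssd   # only nodes with the required capability
secrets:
  db_password:
    external: true               # created beforehand with `docker secret create`
```

Deploying is then a single command (`docker stack deploy -c stack.yml customer-app`), after which Swarm continuously reconciles the running containers against this declared state.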
Containers are designed to be as stateless as possible, with any persistent data stored outside of the container in an external volume or persistent volume claim. As long as user data isn’t persisted locally on an individual node, a container carries on exactly where it left off when it’s recreated on a different node. Services such as Amazon EBS (block storage) or Amazon EFS (network-attached storage) are commonly used as the backing storage for external volumes, although open source storage such as OpenStack Cinder, Ceph, or GlusterFS is also viable.
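To illustrate, a service can mount a named volume backed by a pluggable storage driver, so the data survives the container being rescheduled to another node. This is only a sketch: the driver name below is one example of a third-party volume plugin, and the right choice depends on your storage backend.

```yaml
# volume-example.yml -- illustrative; the volume driver is an assumption
version: "3.8"
services:
  db:
    image: postgres:15
    volumes:
      - db-data:/var/lib/postgresql/data   # data lives outside the container
volumes:
  db-data:
    driver: rexray/ebs   # example plugin; substitute your backend's driver
```

If the volume driver can attach the underlying storage to whichever node the container lands on, the database resumes with its data intact after a reschedule.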
With a modern container orchestration solution, it becomes easier to deploy self-healing applications, reducing the workload on your ops team when something goes wrong. Deployments using stack files are repeatable, taking the pain out of configuration management and manually installing dependencies. Working with containers makes troubleshooting easier, by making your dev/test environments and production environments identical (or as close to identical as possible).
Generating Monthly Recurring Revenue for Agencies & Managed Service Providers
Reselling application hosting services can be a valuable source of MRR for your agency or MSP. The wholesale cost of renting infrastructure from a cloud provider such as AWS or Google Cloud, or from a VPS/dedicated server provider, falls every year, so the operating costs of hosting customer applications are in decline. Let us build and operate your Docker Swarm-based custom hosting environment, which you can use to deploy end-user applications with a few clicks (and sell application hosting as a value-added service).
While you leverage your relationship with the end customer, our infrastructure architects do the heavy lifting: providing customized Docker stack files for each open source or custom application you wish to deploy, and monitoring and scaling your Swarm cluster.
Get in touch with our container orchestration experts for help with:
- Determining the right size for your multi-tenant container cluster
- Choosing a public cloud or VPS/Dedicated server provider
- Building a business plan to sell application hosting as a value-added service
- Using EBS vs EFS vs open source storage for persistent volumes
- Using Traefik as an edge router for HTTP/HTTPS requests and service discovery
- Setting resource limits & quotas for each container to avoid “noisy neighbor” effects
- Operating a production-ready container cluster, optimized for agencies and MSPs
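Two of the points above, Traefik routing and per-container resource limits, can both be expressed directly in the stack file. The snippet below is a sketch; the hostname, network name, and limit values are placeholders to adapt to your environment.

```yaml
# traefik-limits.yml -- illustrative; hostname, network, and limits are assumptions
version: "3.8"
services:
  web:
    image: nginx:alpine
    networks:
      - traefik-public
    deploy:
      labels:                    # in Swarm mode, Traefik reads labels under deploy
        - "traefik.enable=true"
        - "traefik.http.routers.web.rule=Host(`customer1.example.com`)"
        - "traefik.http.services.web.loadbalancer.server.port=80"
      resources:
        limits:
          cpus: "0.50"           # cap CPU at half a core to curb noisy neighbors
          memory: 256M           # hard memory ceiling for the container
        reservations:
          cpus: "0.25"           # guaranteed minimum share
          memory: 128M
networks:
  traefik-public:
    external: true               # shared overlay network that Traefik also joins
```

With limits and reservations set per service, one tenant’s traffic spike cannot starve the other containers scheduled on the same node.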