Kubernetes is getting a lot of attention these days, and the container industry has largely coalesced around it. For this reason, many people are giving eulogies for Docker Swarm. But while Kubernetes is incredibly popular, its dominance does not imply the death of Swarm. These two systems do overlap, but they approach a similar problem from two different angles and two different sets of assumptions. Docker as a company doesn’t really take sides in the “Kubernetes vs. Swarm” decision; in fact, it has enabled both as part of Docker’s commercial offering. The choice between the two isn’t necessarily an either/or decision for a given app, although it can be. More fundamentally, the choice to use Swarm or Kubernetes comes down to what one wants to do with the containers that will be running in the environment and how one wants to set up that environment. There are numerous feature-by-feature comparisons that look at these two side by side. While that is important, it doesn’t take into consideration how these two differ and how each can be used for more particular container environments, which aren’t always the textbook case for containers.

Kubernetes grew out of Google as a platform to manage containerized applications, and it was heavily influenced by Google’s Borg orchestration system. Kubernetes is a highly scalable cluster management system that is application focused. It is more of an out-of-the-box experience that provides everything needed to support highly available applications: networking, proxying, scaling, and load balancing. An application in the context of Kubernetes is defined by a pod, which is one or more containers that run in a local context and are attached to the Kubernetes cluster. Kubernetes also supplies a set of tools for managing the cluster, including monitoring and Web UI tools.
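To make the pod concept concrete, here is a minimal sketch of a pod definition with two containers sharing the same local context; the names and images are illustrative, not from any particular deployment:

```yaml
# A minimal Pod: one or more containers that share a network
# namespace and local context. Names/images are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: example-app
spec:
  containers:
    - name: web
      image: nginx:1.25
      ports:
        - containerPort: 80
    - name: log-sidecar            # second container in the same pod,
      image: busybox:1.36          # sharing localhost with "web"
      command: ["sh", "-c", "tail -f /dev/null"]
```

Both containers can reach each other over localhost, which is what "run in a local context" means in practice.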

Swarm grew out of Docker as a more Docker-integrated orchestration experience. Swarm clusters initially required manual setup after Docker itself was set up, but Swarm was later integrated into the Docker daemon. Docker Swarm exposes a cluster that is compatible with existing Docker tools, so the same tooling can be used, and the integration into the daemon made setting up Docker clusters a cinch. Docker Swarm was a later addition to the orchestration game, arriving well after projects such as DC/OS and Kubernetes had a big head start. Swarm has nevertheless gained enough traction to warrant the attention of anyone considering container orchestration.
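Because Swarm speaks the same language as existing Docker tooling, a replicated service can be described with an ordinary Compose-format stack file and deployed with `docker stack deploy` after `docker swarm init`. A minimal sketch (service name and image are illustrative):

```yaml
# docker-stack.yml -- a Compose-format stack file usable with
# `docker stack deploy`. Service name and image are illustrative.
version: "3.8"
services:
  web:
    image: nginx:1.25
    ports:
      - "80:80"
    deploy:
      replicas: 3
      restart_policy:
        condition: on-failure
```

The `deploy` section is what Swarm reads; the rest is the same Compose syntax used for single-host Docker development.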

The biggest architectural difference between Kubernetes and Swarm stems from the most basic building block of each orchestrator. For Kubernetes, the most basic unit is not the container per se but the pod that hosts containers and gives them their context. Orchestration in Kubernetes is designed to serve the pod rather than the containers themselves, which are more along for the ride. For Swarm, the most basic unit is the container, so containers are what the orchestration serves.

The pod/container distinction has downstream implications for how one might go about building solutions. In many cases this might not matter at all; in other cases it might matter a lot. Containers in the context of pods have more limited uses for things like proxies, access controls, or deeper systems integration, whereas in the context of Swarm, proxying and access control can be delegated to a container. Kubernetes does provide the necessary network plumbing to achieve similar results with kube-proxy, which is integrated into the Kubernetes platform. In short, Kubernetes is squarely aimed at providing a platform for containerized applications, so the platform provides a more integrated solution to support these applications. Docker Swarm, while it can support applications, is more general purpose than Kubernetes, so it leaves to the user much of the orchestration infrastructure that Kubernetes supplies out of the box.
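The integrated plumbing mentioned above is usually expressed as a Service: a stable virtual address that kube-proxy translates into load-balanced traffic across matching pods. A sketch, with illustrative names and labels:

```yaml
# A Service gives a set of pods one stable address; kube-proxy
# programs node networking so traffic to it is load-balanced
# across the pods whose labels match the selector.
# Names, labels, and ports are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: example-api
spec:
  selector:
    app: example-api      # matches pods labeled app=example-api
  ports:
    - port: 80            # port clients connect to
      targetPort: 8080    # port the pod's container listens on
```

This is the platform-provided equivalent of the proxy container one might run by hand under Swarm.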

So, how does this impact decision making when it comes to choosing which orchestrator to use? The questions that architects and engineers ask when selecting orchestration should not start with comparing orchestration tools but with evaluating the applications they intend to deploy into these environments. What kinds of components are going into containers? How do the components of the application talk to one another? How is availability handled? How is access control handled? What sort of scalability is needed?

Example Cases:

Container level orchestration for edge services — In this case, an “edge” environment — that is, an environment that lives on the edge of a private network and provides connectivity between some back-end system and some cloud system — has a solution that can automatically install modules into the edge environment. The solution itself is a container that installs other containers that provide functionality within the edge environment. These containers receive settings from the orchestration container and communicate through the orchestration container.

In this case, the orchestration is not something defined by Kubernetes or Swarm; rather, it’s custom. In this context, Docker Swarm is likely to be the better option because the containers themselves have deeper integration with the container environment. While this may be possible in Kubernetes, it would not be as simple.
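One common way to get that deeper integration under Swarm is to mount the Docker socket into the orchestrating container, so it can call the Docker API to install and manage its sibling containers. A sketch, assuming a hypothetical controller image:

```yaml
# Stack-file sketch of a "controller" container that drives the
# container environment itself. Mounting the Docker socket lets it
# call the Docker API to create/remove other containers.
# The image name is hypothetical.
version: "3.8"
services:
  edge-orchestrator:
    image: example/edge-orchestrator:latest   # hypothetical image
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    deploy:
      placement:
        constraints:
          - node.role == manager   # run where the Swarm API lives
```

Note that exposing the Docker socket grants full control of the host's Docker daemon, so this pattern is appropriate only for trusted orchestration components.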

A massively scalable API — One of the key uses for container technology is to deploy massively scalable APIs where each container handles some unit of the workload (i.e., a microservice). In this case, the API is stateless and connects to a database backend service and to other microservices in the same API suite. Each service needs independent scaling under load, health monitoring, high availability, and self-healing.

For this case, both Swarm and Kubernetes can work, but Kubernetes is probably the better choice. Kubernetes provides much of the plumbing for a microservice architecture out of the box, and it rationalizes that architecture in a way that makes deploying these services to Kubernetes straightforward.
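Most of the requirements listed above map directly onto a single Kubernetes Deployment per microservice; a sketch with illustrative names, image, and probe paths:

```yaml
# Deployment sketch for one stateless microservice. Replicas,
# rolling updates, and probe-driven self-healing come with the
# platform. Names, image, and probe paths are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-api
spec:
  replicas: 3                      # scaled independently per service
  selector:
    matchLabels:
      app: orders-api
  template:
    metadata:
      labels:
        app: orders-api
    spec:
      containers:
        - name: orders-api
          image: example/orders-api:1.0   # hypothetical image
          ports:
            - containerPort: 8080
          livenessProbe:                  # restart on failure
            httpGet:
              path: /healthz
              port: 8080
          readinessProbe:                 # gate traffic on readiness
            httpGet:
              path: /ready
              port: 8080
```

Each service in the suite gets its own Deployment, so replica counts can be tuned (manually or with a HorizontalPodAutoscaler) without touching the others.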

A legacy N-Tier application — In this case, a legacy N-tier web app with a UI layer, a service layer, and a database layer needs to be containerized. Like most traditional N-tier applications, the app is monolithic and stateful. Scalability has traditionally been manual and vertical rather than horizontal, and the state of the application is maintained in memory by the application.

To provide scalability and availability to such an application, an environment would need a way to provide cookie-based session affinity as part of the proxy configuration. Out of the box, Kubernetes provides this level of functionality while Swarm does not; to achieve it in Swarm, one would need some sort of proxy in addition to what Swarm itself supplies. For this reason, Kubernetes is likely the better choice, because it provides better support for these types of applications.
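As one concrete illustration, the widely used ingress-nginx controller for Kubernetes exposes cookie-based affinity through annotations; a sketch, with hypothetical host, service, and cookie names:

```yaml
# Ingress sketch using ingress-nginx annotations for cookie-based
# session affinity, so a user's requests stick to one backend pod.
# Host, service, and cookie names are illustrative.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: legacy-app
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: legacy-app
                port:
                  number: 80
```

Under Swarm, the equivalent behavior would have to come from a separately deployed and configured proxy container.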

An email server — In this case, an email server needs to be containerized. The email server uses a database backend to maintain records and logging, and it stores messages on a file system. The system uses standard protocols and standard, directory-integrated authentication for the email server itself.

For such a case, Swarm provides sufficient resources out of the box to handle the load balancing and availability of such an application, given the statefulness of the email protocols. This simpler system leaves more resources for the containers and avoids the unnecessary complexity that Kubernetes would introduce into the mix.
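A Swarm stack for this case can stay small: the routing mesh publishes the standard mail ports, and a named volume holds the message store. A sketch, assuming a hypothetical mail-server image:

```yaml
# Swarm stack sketch for a containerized mail server. Swarm's
# routing mesh publishes the standard ports cluster-wide, and a
# named volume holds the message store. Image name is hypothetical.
version: "3.8"
services:
  mail:
    image: example/mailserver:latest   # hypothetical image
    ports:
      - "25:25"      # SMTP
      - "143:143"    # IMAP
    volumes:
      - mail-data:/var/mail
    deploy:
      replicas: 1    # stateful message store; scale with care
volumes:
  mail-data:
```

This is essentially the whole deployment description, which is the point: no extra platform machinery is required for the protocols involved.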


Undoubtedly, Kubernetes gets a lot of attention because it easily rationalizes applications, particularly web-centric applications. But it may not always be the best choice for a given context. “Kubernetes vs. Swarm” is therefore not always the right question; rather, the questions asked when considering container orchestration need to consider the kinds of applications that will actually run there.