When Not to Use Docker: Understanding the Limitations of Containers
Docker is a great tool. But Docker containers are not a cure-all. If you really want to understand how Docker is impacting the channel, you have to understand its limitations.
Docker containers have become massively popular over the past several years because they start faster, scale more easily and consume fewer resources than virtual machines.
Things Docker Can’t Do Well
But that doesn’t mean that Docker containers are the perfect solution for every type of workload. Here are examples of things Docker can’t do or can’t do well:
- Run applications as fast as a bare-metal server. Docker containers have less overhead than virtual machines. But Docker does not have zero overhead. The only way to get true bare-metal speed from an application is to run it directly on a bare-metal server, without using containers or virtual machines.
- Provide cross-platform compatibility. An application designed to run in a Docker container on Windows can’t run on Linux, and vice versa. Virtual machines are not subject to this limitation. In highly heterogeneous environments composed of both Windows and Linux servers, this makes Docker less attractive.
- Run applications with graphical interfaces. Docker was designed as a solution for hosting applications that run on the command line. There are some tricks you can use (such as X11 forwarding) to make it possible to run a graphical interface inside a Docker container, but this is clunky. (You could also run a Web interface, which is easier to do, but then you have to run a Web server and your interface options will still be limited.) Practically speaking, Docker is not a good solution for applications that require rich interfaces.
- Solve all your security problems. Docker can improve security in some ways by isolating applications from the host system and from each other. Containers also make it easy to break your application into small parts, so that if one part is compromised, the rest is not necessarily affected. Yet Docker also creates new security challenges — such as the difficulty of monitoring so many moving pieces within a dynamic, large-scale Docker environment. Before moving workloads to Docker, you need to evaluate the Docker-specific security risks and make sure you can handle them.
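To make the X11 trick mentioned above concrete, here is a minimal sketch. It assumes a Linux host running an X server with Docker installed; the image name and application (`x11-app-image`, `xclock`) are hypothetical placeholders. The command is wrapped in a function so nothing runs until you call it:

```shell
#!/bin/sh
# Sketch: run a graphical app inside a container by sharing the host's
# X11 socket and DISPLAY variable. Assumes a Linux host with an X server;
# "x11-app-image" and "xclock" are hypothetical placeholders.
run_gui_container() {
    docker run --rm \
        -e DISPLAY="$DISPLAY" \
        -v /tmp/.X11-unix:/tmp/.X11-unix \
        x11-app-image xclock
}
```

Even when this works, clipboard integration, GPU acceleration and window management remain awkward, which is exactly why rich interfaces are a poor fit.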
Like cloud computing before it, Docker is a game-changer, for good reason. In many situations, containers offer enormous advantages over older forms of application deployment technology.
But just as the cloud is not the right fit for every type of situation, Docker can’t handle all of your needs.
I often see people make the mistake of thinking Docker helps with portability. Docker came about by leveraging specific Linux features. Those features, even where equivalents exist on other systems, are difficult to port because they live close to the metal.
Put simply, Docker does not run on other operating systems. The ecosystem, on the other hand, is littered with attempts to make it work on Windows and macOS. All these do is wrap Docker in a VM running Linux. That means Docker does not run on other systems or solve the portability problem. It is the same mistake as thinking the pot (Docker) creates the heat rather than the oven (virtualization).
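You can see the VM wrapping for yourself with a quick check (a sketch, assuming Docker Desktop or similar is installed): ask the container for its kernel name. Even on a Windows or macOS host, a Linux kernel answers, because the container is really running inside a hidden Linux VM. Wrapped in a function so nothing executes without Docker present:

```shell
#!/bin/sh
# Sketch: show that containers always see a Linux kernel, even when the
# host is macOS or Windows (Docker Desktop runs them in a Linux VM).
# Wrapped in a function so nothing executes until called.
check_container_kernel() {
    echo "Host kernel:      $(uname -s)"
    echo "Container kernel: $(docker run --rm alpine uname -s)"  # always Linux
}
```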
There are already many virtualization options available to solve this problem; Docker isn’t one of them. I do, however, hear over and over again, across multiple companies, the suggestion of using Docker so that people can develop on Windows or, more commonly, on a Mac, which is particularly fashionable right now though not always suited to development (including Linux development, because at the end of the day it’s not Linux).
POSIX containers can make better use of a single virtualized instance, and Docker is one way to exploit that, though realistically few people need more than one system image in a single instance. Things should not be made more complex until they need to be.
There’s something else Docker doesn’t do so well: assembling and managing images. For non-trivial cases you’ll very quickly find that, simply to keep things DRY, you end up assembling your image build instructions and container layouts with scripts. You’ll probably find yourself resorting to the same kinds of tools you might use to manage bare-metal servers or virtualized instances. This is another area where Docker gives an impression of being manageable, but you quickly bump into the limitations of Dockerfiles, docker-compose and so on.
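As a sketch of what that scripting often looks like (the base images and paths here are illustrative assumptions), a small shell script can generate near-identical Dockerfiles for several base images, because Dockerfiles themselves have no loop or include mechanism to stay DRY:

```shell
#!/bin/sh
# Sketch: generate one Dockerfile per base image from a shared template,
# because Dockerfiles have no native include/loop mechanism.
# The base images and paths are illustrative assumptions.
set -e

for base in debian:12 alpine:3.19; do
    # Turn "debian:12" into a safe directory name like "debian-12".
    dir="build/$(echo "$base" | tr ':' '-')"
    mkdir -p "$dir"
    cat > "$dir/Dockerfile" <<EOF
FROM $base
COPY app /usr/local/bin/app
ENTRYPOINT ["/usr/local/bin/app"]
EOF
done
```

At this scale a script suffices, but it tends to grow until it resembles the configuration-management tooling you were trying to avoid.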
Docker is also not necessarily a great packaging system. You can make something that will run on many Linux distributions, but it won’t be fast or efficient, nor guaranteed to be entirely compatible. Traditionally, where that is a must, chroot would be used, which tends to be simpler and more efficient. None of this solves the architecture problem, which still mandates multiple builds or virtualization for optimal results.
Otherwise you can end up shipping something that would be 10KB as an RPM but becomes 100MB as a Docker image.
This is an interesting point. When I was first introduced to Docker at a meetup demonstration, it was advertised as having solved, or at least minimized, this problem. In reality, there’s no free lunch.
Another limitation is maturity. Docker lacks stability and a complete feature set, including the ability to fully containerize applications. In certain cases you’ll find yourself having to add firewall rules and other settings to truly containerize an application. These are less security issues, in the sense of being unavoidable, and more a case of parts that simply haven’t been implemented yet. You’ll find panels of your container are absent.
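As an example of the kind of extra settings meant here (a sketch; the subnet is a placeholder, and it assumes a Linux host where Docker manages iptables): Docker provides a DOCKER-USER iptables chain precisely because its own rules don’t cover every policy you may need, so restricting who can reach a published container port is left to you:

```shell
#!/bin/sh
# Sketch: block container-exposed services from a given subnet using the
# DOCKER-USER iptables chain, which Docker reserves for administrator rules.
# Requires root on a Linux Docker host; 203.0.113.0/24 is a placeholder.
block_subnet() {
    iptables -I DOCKER-USER -s 203.0.113.0/24 -j DROP
}
```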