What’s new in Docker, and how can it affect your code?
Docker version 1.12 is out, along with some updates that shipped over the past few weeks. The version number might imply that this update includes just a few new features, but the release notes tell a whole different story.
In the following post we’ve gathered the most interesting new features, so you’ll know what you should definitely check out. Ships ahoy!
Bring in the features
Everyone is talking about Docker. No wonder, since the company took on the challenging task of shipping code and, using containers, made it faster and easier. However, if we had to pick one word identified with Docker, “Features” would probably be it.
Each Docker version comes with a huge variety of new features. It doesn’t matter if we’re talking about version 1.12, 1.11 or even version 1.8 – the release notes are always filled with improved, better or brand new features.
But not all of them are exciting to every user, as Docker has a wide variety of use cases. Some developers use it just to ship their code more easily, while others don’t care about shipping but do care about containing applications and code.
This can create clutter, especially since many of these new features are optional and can easily go unnoticed. That’s why we decided to do the hard work for you and picked the top 5 new features we came across.
1. Orchestration
Instead of deploying containers individually on a single host, orchestration lets you deploy complex multi-container apps across many machines. This option was once available only to those who had the time, resources and money to maintain a distributed platform, but not anymore.
Docker 1.12 has added features to make multi-host and multi-container orchestration available for everyone who wants or needs it.
You’ll be able to manage your applications on a group of Docker Engines called a Swarm. What’s a Swarm? Glad you asked, since it’s the next feature you should learn about:
2. Swarm
Our last sentence might have ruined the surprise, but in case you missed it: Swarm lets you control a cluster of Docker hosts as a single virtual host. It’s a self-organizing group of engines that aims to enable pluggable backends. What was once a separate tool is now part of Docker itself, making it a whole lot easier to use.
What does this mean? Swarm uses the Docker API as its front end, which lets you control it with various tools, such as Jenkins, Shipyard and others.
The first node in this group will be a manager, accepting commands and scheduling tasks. The other nodes you add will be workers that execute its orders, but you can add additional managers in case you need them.
How do you create it? With this simple line:

```
docker swarm init
```
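For completeness, here’s a sketch of how worker nodes might join the Swarm once the manager is initialized. The IP address and token below are placeholders for illustration, not values from the release notes; `docker swarm init` prints the actual join command to use:

```shell
# On the manager: initialize the Swarm
# (prints a ready-made "docker swarm join" command with a token)
docker swarm init --advertise-addr 192.168.0.10

# On each worker: join the Swarm using that token
# (token and manager address are placeholders)
docker swarm join --token SWMTKN-1-<token> 192.168.0.10:2377

# Back on the manager: list the nodes in the Swarm
docker node ls
```

After that, any `docker service` command you run against the manager is scheduled across the whole cluster.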
A few highlights this feature gives us:
- Build an entire Swarm from a single disk image, and use the engines to deploy both manager and worker nodes at deployment time
- Monitor cluster state, and reschedule replicas in case a machine crashes
- Specify an overlay network for your services, and automatically assign addresses to containers on that network
- Expose service ports to an external load balancer, and specify how service containers are distributed between nodes
And these are just a few of the options available in Swarm. If this got your attention, you can view the full feature list here.
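As a concrete illustration of the overlay-network option above, here is a hedged sketch; the network name, service name and image are made up for the example:

```shell
# Create an overlay network that spans the whole Swarm
docker network create -d overlay my-network

# Run a replicated service attached to that network;
# its containers get addresses on my-network automatically
docker service create --name web --network my-network --replicas 3 nginx
```

Containers of the service can then reach each other over `my-network` regardless of which physical node they landed on.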
3. Routing Mesh
Just as the name suggests, the Routing Mesh routes incoming requests for published ports on any available node to an active container. In fact, it enables these connections even if no task for the service is running on that node.
If we take a closer look, the Routing Mesh enables each node to accept connections on the published ports of any service running inside the Swarm. A request can therefore reach a service through the same port on every node in the Swarm, even on nodes where the service isn’t deployed.
Along with routing the requests, the Routing Mesh also balances them across all available service tasks inside the Swarm, and detects node failures. This leads us right to the next new feature in 1.12:
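To make this concrete, here’s a small sketch; the service name, image and port numbers are illustrative assumptions:

```shell
# Publish port 8080 on every Swarm node,
# forwarding to port 80 inside the service's containers
docker service create --name web -p 8080:80 --replicas 2 nginx

# A request to <any-node-ip>:8080 is now routed to a running task,
# even if that particular node isn't hosting one
```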
4. Healthcheck
A healthcheck instruction tells Docker how to test whether a container is still working, and version 1.12 adds a new HEALTHCHECK Dockerfile instruction to support user-defined healthchecks. So now we’re able to define what exactly a healthy container means for us, and check containers accordingly.
Using this feature allows us to discover cases we might have overlooked before, such as a server process that’s still running but unable to handle new connections.
This feature works together with Swarm: in case one of our containers is marked as unhealthy, the problem is handled for us and a replacement container is spun up.
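A minimal sketch of such a user-defined healthcheck; the base image and the assumption that `curl` is available inside it are ours, not from the release notes:

```dockerfile
FROM nginx

# Poll the web server every 30 seconds; after 3 consecutive
# failures the container is marked unhealthy
# (assumes curl is installed in the image)
HEALTHCHECK --interval=30s --timeout=3s --retries=3 \
  CMD curl -f http://localhost/ || exit 1
```

`docker ps` then shows the container’s health status (starting, healthy or unhealthy) alongside its state.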
5. Services
A service is a list of tasks that lets you specify the desired state of containers inside a cluster. Each task represents one instance of a container that should be running, and Swarm schedules these tasks across nodes.
Once we create services, we can choose one of two modes: replicated services or global services.
Replicated services let you have any number of containers spread across available hosts, while global services schedule just one instance for the same container on every host in the Swarm.
In the background, Docker monitors the cluster and reconciles it to match the state we declared.
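The two modes above might look like this on the command line; the service names and image are illustrative assumptions:

```shell
# Replicated service: run exactly 5 tasks, spread across available nodes
docker service create --name api --replicas 5 my-image

# Global service: run exactly one task on every node in the Swarm
docker service create --name monitor --mode global my-image

# Scale a replicated service up or down later
docker service scale api=8
```

If a node dies, Docker reschedules its replicated tasks elsewhere to keep the declared count.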
Also worth mentioning
In case you missed it, one of the biggest updates Docker introduced in version 1.11 was RunC, the universal container runtime.
It’s a lightweight, portable open source CLI tool, built on the libcontainer code that powers millions of Docker Engine installations. That means it’s “Docker ready” and includes the code Docker itself uses to interact with container-related system features.
This tool offers full support for Linux namespaces and native support for live migration. And of course, it aims to make standard containers available everywhere. You can use it in production, or as part of a Docker deployment in your platform.
Every Docker version comes with a backpack full of features for you to choose from. This is part of the reason there’s such big hype around the project (perhaps too big?), but it also draws some negative feedback.
While the company focuses on new releases, some might claim that it neglects older bugs and issues, or even gives important tasks such as stability a back seat.
By the way, if you’re interested in monitoring JVMs in Docker, you should also check out the OverOps Docker integration.