
What is Docker Swarm?

Docker Swarm is native clustering for Docker. It allows you to create and access a pool of Docker hosts using the full suite of Docker tools.

With the increased attention on containerization and microservices, Docker is an obvious choice for development and perhaps production. How can an Infrastructure team leverage shared machine resources and build something self-service for their awesome Engineering teams?

The answer may be Docker Swarm, which will allow you to build a cluster of Docker hosts that can each run many Docker containers, and can scale with your needs. In this post I will walk you through setting up a test environment for you to play with Docker Swarm.

If you have not already, then please download and install the Docker Toolbox.

A good run-through is the Get started with Docker Swarm page; I will mostly be following that, but with the addition of showing you how to set up service discovery at the same time.

What is Service Discovery?

Service discovery just means a central location for programs to set and get information about other programs in your environment. For example, if my program provides a service called foo on port 1337, then I can announce that to my discovery system. Any other program looking for the service foo will learn that it is located at my IP on port 1337.

For our purposes we will use for the service discovery part. From your command prompt issue the following commands:

With the first command we create a new virtual machine named consul1 on VirtualBox. This machine runs a docker daemon. The second command allows all the future docker commands to use our new consul1 machine. Next we pull a Docker image of the consul program from the Docker Hub, and run it. The options to our run command name the container, do some port mapping, set the container hostname, and pass options to the consul program, one of which is to start the web UI. (For more details on this image, see its page on Docker Hub.)

The last command tells us what IP address our new consul service will be listening on. We will need this for the rest of the setup.

You now should be able to point your browser here and see the web interface for consul: http://<Your Consul IP>:8500/ui/#/dc1/services

Creating the swarm

Next let’s create our swarm, which will consist of one swarm-master and two swarm-agents. Each member of the swarm will use the consul service we set up to find the others.
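The original commands aren’t reproduced here; a sketch, again assuming the VirtualBox driver and that consul1 is the machine created earlier, might be:

```shell
# Discovery URL pointing at our consul service
CONSUL_IP=$(docker-machine ip consul1)

# One swarm master...
docker-machine create -d virtualbox \
    --swarm --swarm-master \
    --swarm-discovery "consul://${CONSUL_IP}:8500" \
    swarm-master

# ...and two swarm agents
docker-machine create -d virtualbox \
    --swarm \
    --swarm-discovery "consul://${CONSUL_IP}:8500" \
    swarm1

docker-machine create -d virtualbox \
    --swarm \
    --swarm-discovery "consul://${CONSUL_IP}:8500" \
    swarm2
```

The `--swarm-discovery` flag is what ties each member to consul for discovery.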

Now you should have at least four docker machines. Note when you installed Docker Toolbox you probably created some other machines as well, so you should have something like this now:
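Listing the machines shows the whole fleet (the names and IP addresses below are illustrative):

```shell
docker-machine ls
# NAME           ACTIVE   DRIVER       STATE     URL                         SWARM
# consul1        *        virtualbox   Running   tcp://192.168.99.100:2376
# default        -        virtualbox   Running   tcp://192.168.99.101:2376
# swarm-master   -        virtualbox   Running   tcp://192.168.99.102:2376   swarm-master (master)
# swarm1         -        virtualbox   Running   tcp://192.168.99.103:2376   swarm-master
# swarm2         -        virtualbox   Running   tcp://192.168.99.104:2376   swarm-master
```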

You can now see what consul has stored for these nodes by pointing your browser here:

[Screenshot: the Consul web UI, showing the registered nodes]

or via curl on the command line like so:
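Consul’s HTTP API exposes the node catalog at `/v1/catalog/nodes`; for example:

```shell
# Ask consul for every node it knows about; the response is JSON
curl http://$(docker-machine ip consul1):8500/v1/catalog/nodes
```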

You can see we received JSON back. Consul also has a DNS interface for doing the same; see the Consul Getting Started documentation for more on how you can store and retrieve values from consul.

Using the Swarm

Let’s check our Swarm by communicating with the Swarm Master:
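The command in question is most likely the swarm-flavored form of `docker-machine env`; a sketch:

```shell
# The --swarm flag points the client at the swarm master's
# swarm port rather than the plain Docker daemon
eval "$(docker-machine env --swarm swarm-master)"
```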

This command sets our terminal environment so our docker commands are issued against the swarm-master. To see what your Swarm looks like, try the following:
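`docker info` is the usual way to inspect a classic Swarm; run against the master it reports every node (output abbreviated and illustrative):

```shell
docker info
# Containers: 4
# Nodes: 3
#  swarm-master: 192.168.99.102:2376
#  swarm1: 192.168.99.103:2376
#  swarm2: 192.168.99.104:2376
```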

You can see we have three members in our Swarm: swarm1, swarm2, and the swarm-master.

To see the docker images running in your swarm already:
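With the environment still pointed at the swarm master, a plain `docker ps` lists containers across every node:

```shell
# Container names are prefixed with the node they run on,
# e.g. swarm-master/swarm-agent-master
docker ps
```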

Deploying to the Swarm

To deploy a container to the Swarm all we do is issue a docker run:
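A sketch, using the stock nginx image (the container name here is arbitrary):

```shell
docker run -d --name nginx1 nginx

# The NAMES column shows which node the container landed on,
# e.g. swarm1/nginx1
docker ps
```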

You can see our nginx container ended up on swarm1. If we do this a few times then you will see your containers spread around:
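For example (container names and placements illustrative):

```shell
docker run -d --name nginx2 nginx
docker run -d --name nginx3 nginx

# Swarm's scheduler spreads the containers, e.g. swarm2/nginx2
# and swarm-master/nginx3 alongside swarm1/nginx1
docker ps
```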

You can use constraint filters to assign containers to certain swarm nodes. Try running the following three times:
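Classic Swarm reads constraints from environment variables on the container; a sketch that pins every container to swarm1:

```shell
# The constraint:node filter tells the swarm scheduler
# to place each container on the swarm1 node
docker run -d -e constraint:node==swarm1 nginx
docker run -d -e constraint:node==swarm1 nginx
docker run -d -e constraint:node==swarm1 nginx

docker ps
```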

As you can see all three containers ended up on the same machine:

Putting registrator on the Swarm nodes

Registrator automatically registers and deregisters services for any Docker container by inspecting containers as they come online.

Registrator works by running on our Swarm nodes, listening to the Docker daemon socket, and reporting back to consul with the service information of containers that are running in the swarm. This will allow us to have a central location where we can view all the services available from our Swarm. We will start by running the registrator container on each of the members in the Swarm:
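A sketch using the `gliderlabs/registrator` image, run directly on each node (not through the swarm master) and pointed at our consul service:

```shell
CONSUL_IP=$(docker-machine ip consul1)

for node in swarm-master swarm1 swarm2; do
    # Talk to this node's own Docker daemon
    eval "$(docker-machine env $node)"

    # Mount the Docker socket so registrator can watch
    # containers start and stop on this node
    docker run -d --name registrator \
        -v /var/run/docker.sock:/tmp/docker.sock \
        gliderlabs/registrator \
        consul://${CONSUL_IP}:8500
done

# Point back at the swarm when done
eval "$(docker-machine env --swarm swarm-master)"
```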

Now if we deploy our nginx container again and expose an external port, we will see the service appear in consul:
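A sketch (the container name is arbitrary; the published port is what registrator reports to consul):

```shell
docker run -d --name nginx-web -p 80:80 nginx
```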

Using curl we can check for the catalog of services in consul:
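The catalog of services lives at `/v1/catalog/services`:

```shell
curl http://$(docker-machine ip consul1):8500/v1/catalog/services
# Illustrative response; registrator derives the service name
# from the image and exposed port, e.g. "nginx-80"
```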

If we remove the container, then we see the service is de-registered in consul:
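For example, assuming the container was named nginx-web as above:

```shell
docker rm -f nginx-web

# The nginx service no longer appears in the catalog
curl http://$(docker-machine ip consul1):8500/v1/catalog/services
```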

Conclusion and suggested reading

We can use Docker Swarm and service discovery tools to build our own Infrastructure services, and getting started is as easy as launching a few virtual machines on your laptop. In addition to the links in this post, here are some good reads to expand your knowledge of Docker:

Docker Cookbook

Using Docker

Docker: Up & Running

Tags: containerization, containers, devops, docker, swarm
