6 Ways Ansible Makes Docker-Compose Better

May 25, 2016 by Greg DeKoenigsberg


Containers are popular for many reasons. One key reason: container images are easy to build and, once built, don't change.  When Developer A says, "Hey, check out this new application, just download this container image and run it," Developer B doesn't have to ask the question, "How do I configure it?"  Developer B can just download the image and run the container, and enjoy a high likelihood that it will run exactly as Developer A intended.

Until Developer A announces the need for a second, third and fourth container, that is.  A microservices approach advocates for simple containers, sure -- but that also means more of them, all doing different things, and all connecting together... somehow.  So now Developer A needs to tell Developer B "be sure to run all of these containers together, and make sure these two containers share a data volume, and make sure these other two containers have a network link between them, and make sure the REST API for this one is exposed on these ports. Oh, also! Make sure you've got your DNS set up right, because it's all a hilarious dumpster fire if you don't."

Complexity doesn't go away in the container world; it just moves to different places.

Docker provides a tool that helps to simplify this problem for some Docker users. It's called docker-compose, and it provides an easy way to configure and launch multiple containers.  It's a good tool for users who can safely rely upon a completely Docker-centric view of the world. But most users don't live in that world -- and docker-compose is not designed to solve non-Docker orchestration problems. That’s where Ansible comes in.

Here are six ways that Ansible and docker-compose are better together.

1. If you know docker-compose, you know Ansible (almost).

Here's a simple docker-compose file for launching two containers:

wordpress:
  image: wordpress
  links:
    - db:mysql
  ports:
    - 8080:80

db:
  image: mariadb
  environment:
    MYSQL_ROOT_PASSWORD: example

And here's an Ansible playbook that does exactly the same thing:

---
# tasks file for ansible-dockerized-wordpress

- name: "Launch database container"
  docker:
    name: db
    image: mariadb
    env:
      MYSQL_ROOT_PASSWORD: example

- name: "Launch wordpress container"
  docker:
    name: wordpress
    image: wordpress
    links:
      - db:mysql
    ports:
      - 8080:80

Both the docker-compose file and the Ansible playbook are YAML files, and the syntax is nearly identical. This is no accident: the docker-compose tool is written in Python, and it uses the docker-py API client.  Ansible is also written in Python, and the Docker module uses the exact same docker-py API client that docker-compose uses.

The key difference is that docker-compose can only do one thing: manage Docker containers. Ansible can do that too, and it can also do everything else that Ansible does, all in the same playbook.
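For example, a single playbook can prepare the Docker host itself and then launch the containers on it. Here's a minimal sketch, assuming a yum-based host and a hypothetical docker-hosts inventory group:

---
- hosts: docker-hosts
  become: yes
  tasks:
    - name: "Make sure Docker is installed on the host"
      yum:
        name: docker
        state: present

    - name: "Make sure the Docker daemon is running"
      service:
        name: docker
        state: started

    - name: "Launch database container"
      docker:
        name: db
        image: mariadb
        env:
          MYSQL_ROOT_PASSWORD: example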

2. Because you need to configure the system that your containers are running on.

Every Docker container is ultimately running on a Linux system somewhere. When you download the suite of Docker tools to your Windows or Mac system, the key piece is docker-machine, which quietly spins up a virtual machine running a tiny Linux distro to host your containers. But people choose different Linux distros for lots of different reasons, and there are perfectly valid reasons to run your containers on a distro other than the one Docker provides by default.

For example: let's say that you want the strong security features provided to you by SELinux -- a reasonable desire if you're going to be running your containers in production. That means setting SELinux contexts on your host machines properly. There's no concept of such a thing in docker-compose -- but Ansible makes it simple:

- name: "Set SELinux context properly"

 command: chcon -R system_u:object_r:admin_home_t:s0 your-app

(Of course, it's only simple once you know how to write SELinux policy. Fortunately, there's a coloring book to help you with that.) 
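If you'd rather not shell out to chcon, the same idea can be expressed with Ansible's file module, which can set SELinux attributes directly and idempotently. A minimal sketch, assuming the application lives under a hypothetical /opt/your-app directory:

- name: "Set SELinux context on the application directory"
  file:
    path: /opt/your-app        # hypothetical application path
    state: directory
    seuser: system_u
    setype: admin_home_t
    recurse: yes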

3. Because you want to call out to other systems to configure things.

Not everything is in a container, and not everything is easily containerized. Sure, in development you can hook your application up to a mocked-up database, but what if you've got an actual production database that requires authorization for your application to talk to it? Or what if you need to change Route 53 settings because you're doing a blue-green deployment of containers in AWS? What if you need to set up networking between two containers on different hosts, but the two hosts can't even talk to one another?

The truth is that the laptop, where containers are all easily set up and linked together by developers, is always different than the production environment -- and Ansible is an ideal tool for managing complex production environments. With hundreds of modules to handle cloud tasks or networking tasks or plain ol' boring UNIX-y tasks, Ansible can provide the glue that can make it easier to deploy your containers into any environment.
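To make that concrete: flipping a DNS record during a blue-green deployment is a single task with Ansible's route53 module. A rough sketch, with placeholder zone, record and address values:

- name: "Point the application record at the green stack"
  route53:
    command: create
    zone: example.com            # placeholder hosted zone
    record: app.example.com      # placeholder record name
    type: A
    ttl: 60
    value: 203.0.113.10          # placeholder address of the green stack
    overwrite: yes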

4. Because you want to build testing directly into your container deployment process.

Containers are immutable -- but only if you're always referring to the same container. If you tell docker-compose that you want to use the "latest" version of some container from Docker Hub, that "latest" version can and generally will change! Which means that deploying with docker-compose could leave you with an application stack that no longer works. It's always possible to pin the versions of the containers that you're using, but then you have no way of consuming necessary changes, like key functionality additions or security updates. Immutability is a double-edged sword: you don't consume bad changes, but you don't get to consume good changes either.

One of the less obvious advantages of Ansible is the ability to roll testing directly into your deployment process. Using the assert module, you can easily build functional checks directly into your deployment playbook, giving you the ability to know immediately whether a container has changed in a way that breaks your application.

Take a look at the following snippet, in which Ansible runs a simple test to see if the playbook worked as expected, and throws an assert failure if not:

- name: "Wait for services to start"

  pause: minutes=1

- name: "Check the status of kibana UI"

  command: curl localhost:5601/app/kibana

  register: curl_result

- name: "Ensure that proper data is present in stdout"

  assert:

    that:

      - "'kibana.bundle.js' in curl_result.stdout"

The ability to know instantly when a deployment breaks is a fundamental requirement for continuous deployment. Ansible can give you that knowledge.

5. Because you want to integrate containers into your Ansible Tower workflow.

For organizations with more complex orchestration needs, Ansible Tower is a leading choice for managing that complexity. It provides the safeguards to help ensure that the right people are in charge of deploying the right things to the right places, and it tells you exactly who deployed what, when, where and how.

By using Ansible as the control mechanism for container deployment, you can get the benefits of Tower when container orchestration eventually becomes a key piece of your overall IT deployment strategy.

6. Because Ansible now speaks docker-compose!

Maybe you've already written hundreds of lines of docker-compose, and you don't want to be bothered to rewrite it, but you also want all of the advantages afforded by Ansible.

That's fine! In Ansible 2.1, we are introducing the docker_service module, which allows Ansible users to consume docker-compose files directly. Just call the docker_service module from any Ansible playbook, and specify either an external docker-compose file, or put the docker-compose syntax directly into the Ansible playbook itself.
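Here's a rough sketch of both styles -- the project path is a placeholder, and the inline example uses compose v2 syntax:

- name: "Launch the stack from an existing docker-compose.yml"
  docker_service:
    project_src: /opt/wordpress    # placeholder directory containing docker-compose.yml
    state: present

- name: "Or embed the same definition directly in the playbook"
  docker_service:
    project_name: wordpress
    definition:
      version: '2'
      services:
        wordpress:
          image: wordpress
          links:
            - db:mysql
          ports:
            - "8080:80"
        db:
          image: mariadb
          environment:
            MYSQL_ROOT_PASSWORD: example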

Learning new tools can be challenging and time-consuming. From the very beginning, the Ansible team has worked hard to build tools that are easy to learn, and that meet users where they are. Just because you have a shiny new tool, that doesn’t mean you throw all your old tools away; tools should work together to help you accomplish the things that matter to you. That’s the approach that has made Ansible successful, and that’s the approach that Ansible is working to bring to an increasingly containerized world.


Greg DeKoenigsberg

Greg is the Director of Ansible Community with Red Hat, where he leads the project's relationship with the broader open source community. Greg has over a decade of open source advocacy and community leadership with projects like Fedora and One Laptop Per Child, and on the executive teams of Red Hat Ansible and of open source cloud pioneers Eucalyptus Systems. Greg lives in Durham, NC and is on Twitter at @gregdek.

