Control with Ansible Tower, Part 2

April 25, 2016 by Bill Nottingham


This is the second in a series of posts about how Ansible and Ansible Tower enable you to manage your infrastructure simply, securely, and efficiently.

When we talk about Tower, we often talk in terms of control, knowledge, and delegation. But what does that mean?  In this series of blog posts, we'll describe some of the ways you can use Ansible and Ansible Tower to manage your infrastructure.

In our first blog post, we described how Ansible Tower makes it easy to control the way your infrastructure is configured via configuration definition and continuous remediation.

But controlling the configuration of your infrastructure is just one step. You also need control of the components of your infrastructure - your inventory. You need to do day-to-day management tasks on demand. And Ansible Tower makes those easy as well.


If you’ve used Ansible, you know about the basics of inventory. A static Ansible inventory is just an INI-style file that describes your hosts and groups, and optionally some variables that apply to your hosts and groups. Here's an example from the Ansible documentation:

[atlanta]
host1
host2

[raleigh]
host2
host3

[southeast:children]
atlanta
raleigh

[southeast:vars]
halon_system_timeout=30
self_destruct_countdown=60
escape_pods=2

[usa:children]
southeast

You can easily enter the same sort of inventory into Ansible Tower as well. Here, we enter the inventory via Tower’s web UI:


Tower’s REST API makes entering an inventory easy as well. Here’s the same example, using the API. For this, I’m using the tower-cli wrapper (with one patch) for Tower’s REST API, but you can make direct calls as well.

echo "Creating inventory..."
tower-cli inventory create --name "Project Inventory" --organization "Default"
tower-cli group create --name "usa" --inventory "Project Inventory" --source manual
tower-cli group create --name "southeast" --inventory "Project Inventory" --source manual --variables vars
tower-cli group create --name "atlanta" --inventory "Project Inventory" --source manual
tower-cli group create --name "raleigh" --inventory "Project Inventory" --source manual
tower-cli group associate --group "southeast" --parent "usa"
tower-cli group associate --group "atlanta" --parent "southeast"
tower-cli group associate --group "raleigh" --parent "southeast"
tower-cli host create --name "host1" --inventory "Project Inventory" 
tower-cli host create --name "host2" --inventory "Project Inventory"
tower-cli host create --name "host3" --inventory "Project Inventory"
tower-cli host associate --host "host1" --group "atlanta"
tower-cli host associate --host "host2" --group "atlanta"
tower-cli host associate --host "host2" --group "raleigh"
tower-cli host associate --host "host3" --group "raleigh"

Tower, of course, supports multiple inventories. So if you want to create similar dev, test, and production inventories, it’s not a problem. In this example, we create three inventories ('dev', 'test', and 'prod'), each with an identical set of servers, but with custom variables for its environment.

echo "Creating inventories..."

for inv in dev test prod ; do
    tower-cli inventory create --name "${inv}" --organization "Default"
    tower-cli group create --name "servers" --inventory "${inv}" --source manual --variables "${inv}-vars"
    tower-cli group create --name "frontend" --inventory "${inv}" --source manual
    tower-cli group create --name "backend" --inventory "${inv}" --source manual
    tower-cli group associate --group "frontend" --parent "servers"
    tower-cli group associate --group "backend" --parent "servers"
    tower-cli host create --name "host1" --inventory "${inv}" 
    tower-cli host create --name "host2" --inventory "${inv}"
    tower-cli host create --name "host3" --inventory "${inv}"
    tower-cli host associate --host "host1" --group "frontend"
    tower-cli host associate --host "host2" --group "frontend"
    tower-cli host associate --host "host2" --group "backend"
    tower-cli host associate --host "host3" --group "backend"
done


As seen above, Tower can be a source of truth for your inventory. However, most environments have a highly dynamic inventory as machines are provisioned and retired, and complex sets of groups, facts, and variables for those machines that can come from a variety of sources - a cloud provider, a provisioning system, or a CMDB.

Ansible and Tower work with these sources through the concept of dynamic inventory.

Here’s an example of using AWS as an inventory source: just create a group for your AWS hosts, and configure the inventory to use Amazon EC2 as an inventory source. This inventory can be filtered in a variety of ways - by region, image tags, and almost any other piece of Amazon metadata.

Once this inventory group is created, you can update this inventory on demand, on a schedule, or even automatically whenever you run a playbook that references the inventory. 


And, as always, setting up dynamic inventory is available via the API as well.

echo "Creating inventory..."
tower-cli inventory create --name "Overthruster Project" --organization "Banzai Institute"
tower-cli group create --name "Cloud servers" --inventory "Overthruster Project" \
  --credential "Amazon keys" --source ec2 --source-regions "us-east-1,us-west-1" \
  --overwrite true --overwrite-vars true --update-on-launch true


Not only does Tower come with inventory scripts for all the major public and private cloud providers, such as Amazon, Microsoft Azure, OpenStack, and more, but it’s easy to add your own dynamic inventory as well. Under Tower's Setup menu, there is an item for Inventory Scripts, which allows you to upload custom inventory scripts. For a trivial example, here's a script that defines localhost:
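A minimal sketch of such a script, in Python (the group name 'local' here is arbitrary):

```python
#!/usr/bin/env python
# A trivial dynamic inventory: one 'local' group containing only localhost,
# with a host variable telling Ansible to use a local connection.
import json
import sys

inventory = {
    'local': { 'hosts': [ 'localhost' ] },
    '_meta': { 'hostvars': { 'localhost': { 'ansible_connection': 'local' } } }
}

if len(sys.argv) > 1 and sys.argv[1] == '--list':
    print(json.dumps(inventory))
```

Tower invokes inventory scripts with --list when it syncs the source, and reads the JSON the script prints.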


These custom scripts can be scripts from Ansible, such as the Cobbler or Digital Ocean inventory scripts, or they can be a custom script you write in any language. For example, a dynamic inventory script, written in Python, that just reproduces our earlier static inventory might look like this:

import sys
import json

atl = {
    'hosts': [ 'host1', 'host2' ]
}

rdu = {
    'hosts': [ 'host2', 'host3' ]
}

se = {
    'children': [ 'atlanta', 'raleigh' ],
    'vars': {
        'nameserver': '',
        'halon_system_timeout': 30,
        'self_destruct_countdown': 60,
        'escape_pods': 2
    }
}

usa = {
    'children': [ 'southeast' ]
}

inv = { 'atlanta': atl, 'raleigh': rdu, 'southeast': se, 'usa': usa, '_meta': { 'hostvars': {} } }

if len(sys.argv) > 1 and sys.argv[1] == '--list':
    print(json.dumps(inv))

This is a contrived example - but you can write a script that returns any dynamic data, as long as it's in the proper JSON format. See the Ansible documentation on developing dynamic inventory sources for details.
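As a quick way to check that a script's output matches the expected shape before wiring it into Tower, you can validate it with a few lines of Python (a sketch, not part of Tower or Ansible):

```python
import json

def check_inventory(output):
    """Sanity-check a dynamic inventory script's --list output."""
    data = json.loads(output)
    assert isinstance(data, dict)
    # _meta, when present, holds per-host variables under 'hostvars'
    assert isinstance(data.get('_meta', {}).get('hostvars', {}), dict)
    for name, group in data.items():
        if name == '_meta':
            continue
        if isinstance(group, list):
            continue  # a bare list of hostnames is also accepted
        assert isinstance(group, dict), "group %s must be a dict or list" % name
        assert set(group) <= {'hosts', 'children', 'vars'}, name
    return True

# The 'atlanta' group from the static example above, as JSON
sample = '{"atlanta": {"hosts": ["host1", "host2"]}, "_meta": {"hostvars": {}}}'
```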

Once you've added a custom inventory script, you can select it as a dynamic inventory source for an inventory group just as we did for Amazon in the example above.


Taking that example script a bit further - maybe you have a similar inventory, but it's stored alongside your Playbooks in source control. Ideally, you'd want to update it in lockstep with your Playbooks, without having to manually sync it to Tower. Ansible Tower makes this possible as well. Here’s an example script - it uses the tower-cli code to communicate with Tower, so you'll need that installed and appropriately configured on your Tower instance.

Note: This is an example script to show how custom dynamic inventory scripts can be written. Due to how this sample script uses tower-cli and reads out of a pre-existing project, you will need to disable 'Job Isolation' on your Tower instance when using it as-is.

#!/usr/bin/env python
# -*- coding: utf8 -*-
# Ansible dynamic inventory script for reading from a Tower SCM project
# Requires: ansible, ansible-tower-cli
#    Copyright © 2016 Red Hat, Inc.
#    This program is free software: you can redistribute it and/or modify
#    it under the terms of the GNU General Public License as published by
#    the Free Software Foundation, either version 3 of the License, or
#    (at your option) any later version.
#    This program is distributed in the hope that it will be useful,
#    but WITHOUT ANY WARRANTY; without even the implied warranty of
#    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
#    GNU General Public License for more details.

import os
import sys
import time

import json

from ansible.inventory import Group
from ansible.inventory.ini import InventoryParser as InventoryINIParser
from tower_cli import api

# Standard Tower project base path
BASE_PATH = "/var/lib/awx/projects"

def rest_get(request):
    c = api.Client()
    response = c.get(request)
    if response.ok:
        j = response.json()
        if 'results' in j:
            return j['results'][0]
        return j
    return None

# Get ID from project name
def get_project_id(project):
    result = rest_get("projects/?name=%s" % (project,))
    if result:
        return result['id']
    return None

# If a project update is running, wait up to two minutes for it to finish
def wait_for_project_update(project_id):
    retries = 120
    while retries > 0:
        result = rest_get("projects/%d" % (project_id,))
        if not result:
            return
        if 'current_update' not in result['related']:
            return
        time.sleep(1)
        retries = retries - 1

# Find the toplevel path to the synced project's on-disk location
def get_file_path(project_id):
    result = rest_get("projects/%d" % (project_id,))
    if not result:
        return None
    return '%s/%s' % (BASE_PATH, result['local_path'])

# Read and parse inventory
def read_file(project_id, inv_file):
    file_path = get_file_path(project_id)
    if not file_path:
        return ""
    group = Group(name='all')
    groups = { 'all': group }
    parser = InventoryINIParser([], groups, filename="%s/%s" % (file_path, inv_file))
    return groups

# Convert inventory structure to JSON
def dump_json(inventory):
    ret = {}
    for group in inventory.values():
        if == 'all':
            continue
        g_obj = {}
        g_obj['children'] = []
        for child in group.child_groups:
            g_obj['children'].append(
        g_obj['hosts'] = []
        for host in group.hosts:
            g_obj['hosts'].append(
        g_obj['vars'] = group.vars
        ret[] = g_obj
    meta = { 'hostvars': {} }
    for host in inventory['all'].get_hosts():
        if not in meta['hostvars']:
            meta['hostvars'][] = host.vars
    ret['_meta'] = meta
    return json.dumps(ret)

# Project name and inventory file are passed via the inventory source's
# environment variables
project_name = os.environ.get('PROJECT_NAME', 'Test project')
file_name = os.environ.get('INVENTORY_FILE', '')

if len(sys.argv) > 1 and sys.argv[1] == '--list':
    project_id = get_project_id(project_name)
    if not project_id:
        sys.stderr.write("Could not find project '%s'\n" % (project_name,))
        sys.exit(1)

    wait_for_project_update(project_id)

    inv_contents = read_file(project_id, file_name)
    if not inv_contents:
        sys.stderr.write("Parse of inventory file '%s' in project '%s' failed\n" % (file_name, project_name))
        sys.exit(1)

    json_inv = dump_json(inv_contents)
    print(json_inv)

To use this example script, set it as a custom inventory source for a group, and set PROJECT_NAME and INVENTORY_FILE appropriately in the Environment Variables section of the inventory source configuration. Now you have an inventory group that will read inventory from your source control, and if you set Update on Launch on the source, this will happen automatically as needed when Playbooks are run.


Now that you've created your configuration management and continuous remediation, and set up your inventory source of truth, you might think you're done controlling your systems. But there can be more to the day-to-day management of your systems. Sometimes you need to restart a service, reboot a machine, or perform a one-off patch. With Ansible Tower, it’s easy to do just-in-time management via Tower’s Remote Commands feature.

Remote commands are accessed via Tower’s inventory view. Just select the hosts you need to manage, and click the ‘Remote Commands’ icon to bring up the Remote Command launcher.


The list of Ansible modules available for use in a remote command is administrator-configurable - by default, Tower allows command, shell, yum, service, apt, and many other common modules. Just pick the module, set the arguments, and launch your command.


Like any other automation launched in Tower, this is gated via Tower’s role-based access control. There is a special permission to run ad-hoc commands, as letting a user run arbitrary commands is tantamount to administrator access. And of course, all remote commands are logged for audit purposes.

If you’re paying attention, you’ll notice how the hosts are specified above - as an Ansible limit. You can edit this to be any valid Ansible limit for the selected inventory, and you can use those limits when launching remote commands via Tower’s API. In this example, we shut down Apache on all of our hosts that are in both the ‘webservers’ and ‘staging’ groups, using Ansible’s limit syntax.

[banzai@institute: ~]$ tower-cli ad_hoc launch --inventory "project servers" --machine-credential "ssh key" --module-name "service" --module-args "name=httpd state=stopped" --limit="webservers:&staging"


Between managing your inventory itself, and running on-demand remote commands for day-to-day management, Ansible Tower allows you to manage your infrastructure in ways beyond just the running of automation Playbooks. We'll be back with more examples of how you can use Ansible and Tower to manage your infrastructure in the near future. In the meantime, you can find some of the examples from this and other blog posts on GitHub.




Bill Nottingham

Bill Nottingham is a product manager for Ansible at Red Hat. After 15+ years building and architecting Red Hat’s Linux products, he joined Ansible, which then became part of Red Hat a year and a half later. His days are spent chatting with users and customers about Ansible and Red Hat Ansible Tower. He can be found on Twitter at @bill_nottingham, and occasionally doing a very poor impersonation of a soccer player.
