
As we continue to improve Red Hat Ansible Tower, we’ve focused on letting you automate in more flexible ways, no matter your deployment scenario. As part of this, we’ve introduced two new features: Instance Groups and Isolated Nodes. These features let you run Ansible Tower automation in ways that match both the structure of your organization and the layout of your infrastructure.

Instance Groups

What is an instance group?

Ansible introduced Clusters in Ansible Tower 3.1. Tower Clusters allow you to add capacity to your Ansible Tower environment - the more nodes in your Tower Cluster, the more job execution capacity you have. If you have to run many jobs simultaneously, adding more nodes to the cluster lets you run them all without queueing.

However, this just gives you one undifferentiated pool of capacity. If only one group uses your Tower instance, that may be enough. But we know that many Ansible Tower instances are shared among teams, groups, and organizations that may have different uses for their automation.

That’s why, in Ansible Tower 3.2, we created Instance Groups.

An Ansible Tower Instance Group is a set of cluster nodes dedicated to a particular purpose. You can organize your Ansible Tower Cluster into any number of instance groups, and cluster nodes can exist in multiple instance groups. Each instance group has its own job queue, and any node in the group can take jobs off of that queue. Jobs can be assigned to an instance group in three ways - by the organization, by the inventory, or by the individual job template.

How to configure Ansible Tower Instance Groups

Instance Groups are set up in the inventory file used by the Ansible Tower setup playbook.

To define an instance group named Grouper, just create an [instance_group_Grouper] group in the inventory, and define which Tower nodes should be in that group.

[instance_group_Grouper]
one.fish
two.fish
red.fish
blue.fish

You can create as many instance groups as you need, and put nodes in as many different groups as needed, as long as at least one node exists in the base [tower] group. Note that all job events from running jobs are processed by the [tower] group, so the number of nodes in the [tower] group does need to scale with your job load even if it is not being used for direct job execution.
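For instance, combining the Grouper group above with a single base node - this layout matches the API output shown below - the cluster section of the inventory could look like this:

[tower]
big.fish

[instance_group_Grouper]
one.fish
two.fish
red.fish
blue.fish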

Once you’ve run the setup playbook with your instance groups configured, you can view their status under Ansible Tower Configuration.

Viewing Instance Groups in the Tower UI

The status of instance groups can also be retrieved from the Ansible Tower REST API at /api/v2/ping:

{
    "instance_groups": [
        {
            "instances": [
                "one.fish",
                "two.fish",
                "red.fish",
                "blue.fish"
            ],
            "capacity": 1700,
            "name": "Grouper"
        },
        {
            "instances": [
                "big.fish"
            ],
            "capacity": 425,
            "name": "tower"
        }
    ],
    "instances": [
        {
            "node": "blue.fish",
            "heartbeat": "2018-02-02T04:34:50.308Z",
            "version": "3.2.2",
            "capacity": 425
        },
        {
            "node": "two.fish",
            "heartbeat": "2018-02-02T04:34:52.197Z",
            "version": "3.2.2",
            "capacity": 425
        },
        {
            "node": "big.fish",
            "heartbeat": "2018-02-02T04:34:52.392Z",
            "version": "3.2.2",
            "capacity": 425
        },
        {
            "node": "red.fish",
            "heartbeat": "2018-02-02T04:34:52.895Z",
            "version": "3.2.2",
            "capacity": 425
        },
        {
            "node": "one.fish",
            "heartbeat": "2018-02-02T04:34:52.979Z",
            "version": "3.2.2",
            "capacity": 425
        }
    ],
    "ha": true,
    "version": "3.2.2",
    "active_node": "big.fish"
}
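If you’d rather check from a shell, any HTTP client will do - for example, a quick sketch with curl (substitute your own Tower hostname; -k skips certificate verification for self-signed certs):

[carol@tower1: ~]$ curl -sk https://tower1.happy.company/api/v2/ping/ | python -m json.tool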

To have an organization, inventory, or job template use a particular instance group, just select that instance group in the settings of the respective organization, inventory, or job template.

Assigning an Instance Group to an Organization

If not specified, an org/inventory/job template will use the [tower] instance group.

Let’s look at how this can work in practice.

Example use case: Reserved capacity for different organizations

You’ve got multiple teams that need to run automation - Alice’s development team, Bob’s testers, and Carol’s production ops team. You don’t want your QA team’s tests to prevent your developers from releasing new code.

In this case, you can set up an instance group for each of the three teams. These instance groups can share some nodes with the main [tower] group, so the teams have shared global capacity while each also keeps some dedicated capacity.

[tower]
tower1.happy.company
tower2.happy.company
tower3.happy.company

[instance_group_alice]
devtower.happy.company
tower1.happy.company
tower2.happy.company
tower3.happy.company

[instance_group_bob]
testtower.happy.company
tower1.happy.company
tower2.happy.company
tower3.happy.company

[instance_group_carol]
prodtower.happy.company
tower1.happy.company
tower2.happy.company
tower3.happy.company

We can then associate each instance group with the appropriate organizations - in this case, using the REST API via tower-cli:

[carol@tower1: ~]$ tower-cli organization associate_ig --organization development --instance-group alice
OK. (changed: true)
[carol@tower1: ~]$ tower-cli organization associate_ig --organization testing --instance-group bob
OK. (changed: true)
[carol@tower1: ~]$ tower-cli organization associate_ig --organization prodops --instance-group carol
OK. (changed: true)

Example use case: Ensure emergency patching always runs

You may also have a situation where you don’t care as much about capacity separation across your groups…except for that emergency security patching you need to do when new CVEs appear. The following inventory defines a dedicated emergency patching server.

[tower]
tower1.ohoulihan.gym
tower2.ohoulihan.gym
tower3.ohoulihan.gym

[instance_group_security]
patches.ohoulihan.gym

We then associate this directly with our job template that we use for emergency patching:

[wrench@dodgeball: ~]$ tower-cli job_template associate_ig --instance-group security --job-template "Security Patching"
OK. (changed: true) 

Isolated Nodes

What is an Isolated Node?

Many people use Ansible to manage far-flung, complex infrastructures. They can have machines and networks in multiple datacenters, servers behind firewalls or in VPCs, or remote devices where unstable links may not survive the length of the job. In all these cases, it can be simpler to run automation local to the nodes.

To solve this, we created Isolated Nodes.

An Isolated Node is an Ansible Tower node that contains a small piece of software for running playbooks locally to manage a set of infrastructure. It can be deployed behind a firewall/VPC or in a remote datacenter, with only SSH access available. When a job is run that targets things managed by the isolated node, the job and its environment will be pushed to the isolated node over SSH, where it will run as normal. Periodically, the master Ansible Tower cluster will poll the isolated node for job status, updating it in as close to real time as possible. When the job finishes, the remote execution environment on the isolated node will be cleaned up, and the job status will be updated in Ansible Tower.

How to Configure Ansible Tower Isolated Nodes

Isolated nodes are also set up in the inventory file used by the Ansible Tower setup playbook. Isolated nodes make up their own instance group. For example, if we've got a remote fortress that we want to reach through isolated nodes, create the following:

[isolated_group_fortress]
solitude1.fortress
solitude2.fortress

[isolated_group_fortress:vars]
controller=tower

Note the [isolated_group_fortress:vars] section - each isolated group must have a controller variable set. This variable describes the instance group that manages tasks that are sent to the isolated node. That instance group will be responsible for starting and monitoring jobs on the isolated node. In this case, we're using the main Ansible Tower cluster to manage this isolated group. 
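The controller doesn't have to be the main cluster, either. If you'd rather have dedicated Tower nodes manage the isolated traffic, you can point the isolated group at any instance group you've defined - a hypothetical sketch, with made-up names:

[instance_group_fortress_mgmt]
watchtower.fortress

[isolated_group_fortress:vars]
controller=fortress_mgmt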

Isolated groups can be seen in the same screens that show instance groups and Ansible Tower cluster configuration.

Isolated Groups in the Tower UI

Like other instance groups, isolated node groups can be assigned at the level of an organization, an inventory, or an individual job template.

Let’s see how you might use these in practice.

Example use case: managing remote offices

Say you’ve got a central office in Chicago and a large number of remote offices. Managing this with Ansible Tower Isolated Nodes is easy. Just put an isolated node in each remote location and use it for inventories located there.

[tower]
chicago1.home.office
chicago2.home.office
chicago3.home.office

[isolated_group_nc]
cary.remote.office controller=tower

[isolated_group_il]
bridgeview.remote.office controller=tower

[isolated_group_nj]
piscataway.remote.office controller=tower

[isolated_group_ut]
sandy.remote.office controller=tower

...

In this case, we're managing inventory in these remote offices, so we'll assign the isolated groups at the inventory level.

[nobody@chicago-hq: ~]$ for location in nc il nj ut fl tx dc wa or ; do tower-cli inventory associate_ig --inventory $location --instance-group $location ; done
OK. (changed: true)
OK. (changed: true)
OK. (changed: true)
...

Example use case: multiple VPCs

Again, you’ve got multiple teams with multiple environments. But each one is secure in its own VPC, and you don’t want to open all the infrastructure in your VPCs to your Ansible Tower cluster. With isolated nodes, you can put a node in each VPC to handle automation for those environments. That way you only need to allow SSH to the isolated node, while the rest of the VPC remains isolated.
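A minimal sketch of what that inventory might look like, with a single isolated node per VPC (hostnames are hypothetical):

[isolated_group_vpc_alpha]
gateway.vpc-alpha.internal controller=tower

[isolated_group_vpc_beta]
gateway.vpc-beta.internal controller=tower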

Tying It All Together: A Global Ansible Tower Deployment Example

With the combination of instance groups and isolated nodes, you can build a flexible Ansible Tower deployment for availability and redundancy.

For this example, let’s assume an enterprise with three locations - a local datacenter, Amazon Web Services, and Microsoft Azure. In each of these, there are development, test, and production environments. In this setup, we want to run all automation through isolated nodes.

Here’s the inventory:

[tower]
tower1.nublar.mega.corp
tower2.nublar.mega.corp
tower3.nublar.mega.corp

[isolated_group_datacenter_dev]
dev-gw1.datacenter.mega.corp controller=tower
dev-gw2.datacenter.mega.corp controller=tower

[isolated_group_datacenter_test]
test-gw1.datacenter.mega.corp controller=tower
test-gw2.datacenter.mega.corp controller=tower

[isolated_group_datacenter_prod]
prod-gw1.datacenter.mega.corp controller=tower
prod-gw2.datacenter.mega.corp controller=tower

[isolated_group_aws_dev]
dev-gw1.aws.mega.corp controller=tower
dev-gw2.aws.mega.corp controller=tower

[isolated_group_aws_test]
test-gw1.aws.mega.corp controller=tower
test-gw2.aws.mega.corp controller=tower

[isolated_group_aws_prod]
prod-gw1.aws.mega.corp controller=tower
prod-gw2.aws.mega.corp controller=tower

[isolated_group_azure_dev]
dev-gw1.azure.mega.corp controller=tower
dev-gw2.azure.mega.corp controller=tower

[isolated_group_azure_test]
test-gw1.azure.mega.corp controller=tower
test-gw2.azure.mega.corp controller=tower

[isolated_group_azure_prod]
prod-gw1.azure.mega.corp controller=tower
prod-gw2.azure.mega.corp controller=tower
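Each isolated group can then be associated with the matching environment just as in the earlier examples - here at the inventory level, with hypothetical inventory names:

[admin@tower1: ~]$ tower-cli inventory associate_ig --inventory aws-prod --instance-group aws_prod
OK. (changed: true)
[admin@tower1: ~]$ tower-cli inventory associate_ig --inventory azure-dev --instance-group azure_dev
OK. (changed: true)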

If we were to visualize how this deployment looks, it would look something like this:

Sample global deployment with isolated nodes

You’ll notice that the Ansible Tower cluster is not shown as being in one of the datacenters or clouds. That’s because with this sort of deployment, it doesn’t actually matter where Ansible Tower is located - it can be in any of these locations. As long as it can reach the isolated nodes, Ansible Tower can be hosted wherever makes the most sense. It can even be migrated between locations without any need to change how playbooks are run - if it needs to fail over to a secondary site, nothing changes in how automation is defined and managed.

Wrap Up

We hope this helps explain that no matter how your infrastructure is laid out, and no matter where you need automation capacity, Ansible Tower Instance Groups and Ansible Tower Isolated Nodes give you the tools you need to manage your infrastructure. 

For more information on these topics, please see the Ansible Tower documentation.

To get started with Ansible Tower, go to ansible.com/tower-trial.

 


About the author

Bill Nottingham is a Product Manager for Ansible at Red Hat. After 15+ years building and architecting Red Hat’s Linux products, he joined Ansible ... which then became part of Red Hat a year and a half later. His days are spent chatting with users and customers about Ansible and Red Hat Ansible Tower. He can be found on Twitter at @bill_nottingham, and occasionally doing a very poor impersonation of a soccer player.
