Getting Started: Installing a Tower Cluster

June 22, 2017 by Jake Jackson


In this Getting Started blog post, we cover how to install Ansible Tower by Red Hat as a clustered environment. If you haven’t already, check out our previous post that outlines the steps on how to install Tower on a single node.

What’s Different with Clusters?

With the addition of clustering in Tower 3.1, users now have the ability to install Tower as a cluster rather than just an all-in-one instance. Clustering shares load between hosts, and each node can act as an entry point for UI and API access. This enables Tower administrators to use load balancers in front of as many nodes as they wish while maintaining good data visibility.

Installing Tower as a cluster differs from a standard all-in-one Tower install in only two ways:

  • A separate physical or virtual machine to house an external database
  • A different method of editing your inventory file

If you are preparing to install Tower, consider what function Tower will serve for you. If you are deploying Tower in a production environment, you should use a clustered installation to provide highly available Tower instances, backed by an external database, either a managed database service such as Amazon Relational Database Service (RDS) or a fully deployed PostgreSQL cluster.

This Getting Started post covers deploying PostgreSQL as a single external instance or using an already-deployed service. For information on deploying PostgreSQL as a highly available cluster, see the PostgreSQL documentation.

You may wonder what the optimal size of each Tower cluster member should be, and how many Tower instances you need in total. Keep in mind that any given Tower job template or workload is assigned, by election, to whichever cluster member has enough resources to run the entire job on that host. Workloads are not broken up among cluster members. This means you need individual hosts with enough resources to run any given job on their own, and you scale out horizontally to accommodate parallel workloads.

Now that you have a basic understanding of setting up a clustered environment in Tower, it is time to learn the steps for standing up your clustered Tower installation using the provided installer.

Step 1: Download the Tarball

If you don't already have an Ansible Tower tarball, download a trial here.

The inventory file is found within the downloaded tarball once you have unpacked it.

Unlike prior releases, Ansible Tower 3.1.3 does not come with a separate cluster inventory file; the changes can be applied directly to the inventory file that comes with the installer. (Earlier releases of Ansible Tower 3.1 came with an inventory_cluster file, but it has since been removed.)
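For example, assuming the 3.1.3 trial tarball (adjust the filename to match the version you actually downloaded):

```shell
# Unpack the installer and confirm the inventory file is present.
tar xvzf ansible-tower-setup-3.1.3.tar.gz
cd ansible-tower-setup-3.1.3
ls inventory setup.sh
```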

Step 2: Set up your external database

With a clustered Tower install, you will need an external database for Tower to use. This option is available for all flavors of Tower installs but is required for a clustered install.

When configuring your external database for use with Tower, there are two paths that you can take. You can either let Tower configure the instance during the installation of Tower or you can configure the instance yourself. I will cover both setup methods below.

Method 1: Letting Tower Configure Your Instance

If you have chosen to let the Tower installer configure your instance, there are only a few steps you need to take.

When filling out the inventory file, you will want to place the domain name of your selected server under the [database] header, just like you would in an all-in-one install.

This is a simple and quick way to configure your instance if you are in a time crunch.

Method 2: Self Configuration

Configuring your own instance is perfectly acceptable, but a few requirements must be met so that Tower can connect to and work with your database. (Note: these steps assume that PostgreSQL is already installed on your instance.)

To start:

  • Connect to the database host via SSH.
  • Become the postgres user: `su - postgres`
  • Invoke `psql` so that you can make changes to the database instance.
  • Create the awx user that Tower will use to interact with the database, and set its password: `create user awx with password 'passwordhere';`
  • Create the awx database: `create database awx;`
  • Grant all rights on the database to the awx user: `grant all privileges on database awx to awx;`
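Put together, the steps above look something like this (a sketch that assumes PostgreSQL is already installed and running on the host; replace 'passwordhere' with a real password):

```shell
# Run on the database host; creates the awx user and database for Tower.
sudo -u postgres psql <<'SQL'
CREATE USER awx WITH PASSWORD 'passwordhere';
CREATE DATABASE awx;
GRANT ALL PRIVILEGES ON DATABASE awx TO awx;
SQL
```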

If you elect to configure your own instance, note that your inventory file will look a little different. With this type of configuration you do not enter your instance's domain name under the [database] header; you still enter the domain name and port number (5432) in `pg_host` and `pg_port`, but that is it.

The Inventory File

When you edit the inventory file, a few changes need to be made for a clustered install to proceed properly.

1. Determine how and where you will enter the servers that will be clustered. As with an all-in-one install, you enter your server information under the [tower] header; with a clustered install, you list all of your clustered server domain names under that same header.

Additional Ansible variables that may be required for the install can be added under the [all:vars] header inside the inventory file. These may include connection variables or privilege escalation variables. Documentation on what those are and when you might need them can be found here.
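For example, here is a sketch of connection and privilege escalation variables under the [all:vars] header (the user name and key path are illustrative assumptions, not required values):

```
[all:vars]
# Connection variables: how the installer reaches each node over SSH
ansible_user=centos
ansible_ssh_private_key_file=~/.ssh/id_rsa

# Privilege escalation: the installer needs root on each node
ansible_become=true
```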

2. Enter the database information. Depending on how you chose to set up your PostgreSQL instance, you will need to enter the information in one of two ways.

If you are electing to let Tower configure it, you will need to place the information under the [database] header. If you have pre-configured your instance before installing Tower, then nothing needs to be placed under that header.

Note: ALL installs must have `pg_host` and `pg_port` filled out with the respective information.

3. Change the last line of the inventory file.

`rabbitmq_use_long_name=false`

You will have to change this boolean value to `true`. This is required for clustered RabbitMQ nodes; as of 3.1.x, Ansible Tower uses RabbitMQ.

Once all that information is entered correctly or changed, save the inventory file.

Here is an example of an inventory file in which the Tower installer configures your external database.

[tower]
clusternode1.example.com
clusternode2.example.com
clusternode3.example.com

[database]
dbnode.example.com

[all:vars]
ansible_become=true

admin_password='password'

pg_host='dbnode.example.com'
pg_port='5432'

pg_database='tower'
pg_username='tower'
pg_password='password'

rabbitmq_port=5672
rabbitmq_vhost=tower
rabbitmq_username=tower
rabbitmq_password=tower
rabbitmq_cookie=rabbitmqcookie

# Needs to be true for fqdns and ip addresses
rabbitmq_use_long_name=true

And here is an example of what your inventory file will look like if you configured your own external database.

[tower]
clusternode1.example.com
clusternode2.example.com
clusternode3.example.com

[database]


[all:vars]
ansible_become=true

admin_password='password'

pg_host='dbnode.example.com'
pg_port='5432'

pg_database='tower'
pg_username='tower'
pg_password='password'

rabbitmq_port=5672
rabbitmq_vhost=tower
rabbitmq_username=tower
rabbitmq_password=tower
rabbitmq_cookie=rabbitmqcookie

# Needs to be true for fqdns and ip addresses
rabbitmq_use_long_name=true

Running the Setup Script

You have now downloaded the tarball, configured your PostgreSQL instance, edited your inventory file to reflect your new servers, and saved it. Congratulations! You can finally run the setup script by invoking `./setup.sh` from the command line.

Maintaining the Cluster

You have now seen that a few changes to how you install Tower can have a Tower cluster up and running in no time! But now that it is up, is there any way that you can monitor that cluster with Tower?

Yes! Tower reports as much status as it can via the Browsable API at `/api/v1/ping` in order to provide validation of the health of the cluster. This includes:

  • The node servicing the HTTP request
  • The timestamps of the last heartbeat of all other nodes in the cluster
  • The state of the Job Queue, any jobs each node is running
  • The RabbitMQ cluster status
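You can check this endpoint with any HTTP client. For example (tower.example.com is a placeholder for one of your cluster nodes; `-k` skips certificate verification if you are still using a self-signed certificate):

```shell
# Query cluster health from any node in the cluster.
curl -k -s https://tower.example.com/api/v1/ping
```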

What's next?

Clustering with Ansible Tower by Red Hat is quick: with a few simple changes to your inventory file, you can have a cluster up in no time. Whether you are adding one extra server to make a cluster or adding three, clustering with Tower helps your team scale Ansible capacity in your organization.

Have any questions about this post or previous posts in our Getting Started series? Ask them in one of our monthly "Getting Started Q&A" webinars.


Topics:
Ansible Tower, Getting Started


 

Jake Jackson

Jake is a Product Field Engineer at Ansible by Red Hat. Jake started out as a Systems Analyst, supporting and maintaining production-level application environments. At Ansible by Red Hat, he helps pre-sales customers stand up and get started with Ansible Tower. In his spare time, he can be found watching soccer or somewhere on the internet. You can find him on Twitter and GitHub as @thedoubl3j.


