Provisioning an Autoscaling Infrastructure using Ansible

September 11, 2014 by James Martin

The concepts behind Amazon's Auto Scaling Groups (ASGs) are very promising. Who wouldn't want their infrastructure to scale automatically as demand rises and falls? Plenty of folks are using ASGs to do exactly that today. But ASGs bring challenges of their own, and this series of blog posts will show how to address them using features of Ansible and Ansible Tower:

  • How do I manage my ASGs and the components that often go along with them (Elastic Load Balancers, Launch Configurations, AutoScale Policies and Alarms)?

  • How do I configure the newly spun up instances?

  • Should I pre-bake my AMIs or configure instances at run time?

  • How do I provide rolling updates to an ASG?

This is the first of a series of articles designed to address these points.

Ansible and Autoscale Groups

Ansible includes modules for the common components of an EC2 AutoScale infrastructure:

EC2 Term                 Ansible Module
Auto Scale Group         ec2_asg
Auto Scaling Policy      ec2_scaling_policy
CloudWatch Alarms        ec2_metric_alarm
Elastic Load Balancer    ec2_elb_lb
Launch Configuration     ec2_lc


These modules are part of Ansible core today, and we'll use them extensively in our examples to manage each of these components.
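To give a feel for how these pieces fit together, here's a minimal sketch of a launch configuration feeding an autoscale group. The task and resource names (demo-lc, demo-asg) are hypothetical; the IDs and sizes are the example values used later in this post:

# A minimal sketch, not the repo's exact tasks: a launch configuration
# plus an autoscale group that boots instances from it.
- name: create a launch configuration
  ec2_lc:
    name: demo-lc
    image_id: ami-8d756fe4            # Ubuntu 12.04 in us-east-1
    key_name: jmartin-autoscale-blog
    instance_type: m1.small
    security_groups: [sg-c6246ea3]
    region: us-east-1

- name: create an autoscale group that launches from it
  ec2_asg:
    name: demo-asg
    launch_config_name: demo-lc
    min_size: 2
    max_size: 4
    desired_capacity: 2
    vpc_zone_identifier: [subnet-33e4ec1b]
    load_balancers: [autoscale-blog]  # ELB name, as created by ec2_elb_lb
    region: us-east-1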

Stock AMI provisioning

The idea of provisioning a stock AMI into your ASG is very appealing. You don't have to manage an ever-growing list of "Golden AMIs" or the environment used to build them; you simply rely on Ansible Tower to configure each instance from a stock AMI at launch time. Let's examine how that works with Ansible.

To demonstrate this concept, we're going to use Ansible Tower. Ansible Tower features a provisioning callback mechanism -- this allows newly created servers to request an Ansible configuration run. Here's a simple workflow describing how this works in an AWS environment with ASGs:

  1. An instance launch in an Auto Scaling Group is triggered (due to ASG creation, an event notification, a parameter change, etc.).

  2. When the instance boots, it executes a script that makes an authenticated request to the Tower server, asking for the node to be put into a queue for configuration (a sketch of such a script follows this list).

  3. Ansible Tower kicks off a job that configures the server.
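Here's a minimal sketch of what that phone-home script might look like when injected through the launch configuration's user_data (ec2_lc accepts a user_data parameter). The Tower address, template ID, and host config key below are the example values that appear later in this post, and we're assuming Tower answers over HTTPS:

    user_data: |
      #!/bin/bash
      # Hypothetical phone-home script: log everything, then retry the
      # Tower provisioning callback until Tower accepts the request.
      exec >> /var/log/tower_callback.log 2>&1
      until curl -sfk --data "host_config_key=f1d8ab1d45b51be67afe372360f6c85c" \
          https://10.0.1.131/api/v1/job_templates/3/callback/; do
        sleep 10   # Tower may not have synced this host into inventory yet
      done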

Rather than just telling you what Ansible Tower can do with autoscaling, we'd much rather show you. In the autoscaling-blog GitHub project (https://github.com/ansible/autoscaling-blog), you'll find Ansible code to test out this feature.

Setting Up Your AWS Environment for Ansible Tower

For those of you who are already running Tower in your environment, you can probably skip ahead to the next step.

If not, please follow these instructions on how to set up Tower in your AWS environment.

Setting up Tower to pull playbooks from GitHub

We’ll need to add some demo Ansible playbooks to Tower, which is an easy process.  To do so:

Click the Projects tab, and click the + icon to add a new project. Fill in the form as described below, using the following SCM URL:

https://github.com/ansible/autoscaling-blog


This will configure Tower to automatically pull down code changes from the specified repository.  

Creating the Job Templates

Job Templates in Tower represent the configuration around the “launch button”: they say “run this project against this inventory with these parameters”. We are going to use two separate job templates: one for provisioning the requisite EC2 infrastructure, and one for configuring the instances in the autoscale group.

Creating the “Configuration” Job Template

This template will be used to configure the applications -- it runs the config.yml playbook.

[Image: configuration_template.png]

By enabling the provisioning callback, we're allowing newly created autoscaling instances to phone home and request configuration from Ansible Tower.

Click on the Allow Callbacks check box. After doing so, two additional text boxes will appear: Callback URL and Host Config Key.

Click on the Magic Wand icon to the right of the Host Config Key text box.

After hitting Save, a dialog will appear with the callback URL and host configuration key. Make note of the host configuration key and the job template ID (highlighted below). These are unique to your Tower install, so don't reuse the example values shown below.

[Image: callback_info.png]

Host callbacks are enabled for this template.
The callback URL is: /api/v1/job_templates/3/callback/
The host configuration key is: f1d8ab1d45b51be67afe372360f6c85c

When a node phones in for configuration, not only must these values match, but the node must also be in the Tower inventory -- so it's not possible for other nodes to request configuration they do not deserve.

Creating the “Infrastructure Provisioning” Job Template

The next step is to define the autoscaling configuration, which we also do with Ansible.

This job template will be used to spin up the EC2 infrastructure using the infra.yml playbook.

[Image: provisioning_template.png]


In the Extra Variables section, paste in the following, making sure you substitute the key_name, vpc_id, tower_group, tower_address, template_id, and host_config_key values with those we discovered earlier.

If you're using a region other than us-east-1, you'll also have to substitute the proper AMI for an Ubuntu 12.04 instance.

region: us-east-1
app_name: autoscale-blog
subnets:
- subnet-33e4ec1b
tower_callback_client_group_id: sg-c6246ea3
tower_client_group_id: sg-c6246ea3
vpc_id: vpc-b95cf2dc
ami: ami-8d756fe4
max_size: 4
min_size: 2
desired_capacity: 2
key_name: jmartin-autoscale-blog
tower_address: 10.0.1.131
template_id: 3
host_config_key: f1d8ab1d45b51be67afe372360f6c85c
instance_size: m1.small

The settings above allow Ansible Tower to automatically synchronize its inventory with your cloud configuration, so it knows exactly what servers you have in AWS.

Assign the AWS credentials we created earlier using the Cloud Credentials dialog. When launched, this template will create the following resources in your Amazon cloud:

EC2 Infrastructure

  • App Security Group: a self-referencing group that allows members to reach all ports on other members.

  • Load Balancer: a load balancer for the app servers.

  • Launch Configuration: the configuration the AutoScale group uses to launch instances.

  • AutoScale Group: defines the number of instances and the load balancer membership.

  • Scale Up & Down Policies: policies that, when triggered by an alarm, resize the AutoScale group.

  • Scale Up & Down Alarms: CloudWatch alarms that trigger their respective scale up and down policies.
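The policy-to-alarm wiring is worth seeing in code: the playbook registers each scaling policy and hands its ARN to the matching CloudWatch alarm. Here's a hedged sketch of the scale-up half (names and thresholds are illustrative, not necessarily the repo's exact values):

- name: create scale-up policy
  ec2_scaling_policy:
    state: present
    region: us-east-1
    name: scale-up-policy
    asg_name: autoscale-blog
    adjustment_type: ChangeInCapacity
    scaling_adjustment: 1
    min_adjustment_step: 1
    cooldown: 300
  register: scale_up_policy

- name: create scale-up alarm
  ec2_metric_alarm:
    state: present
    region: us-east-1
    name: cpu-high
    metric: CPUUtilization
    namespace: AWS/EC2
    statistic: Average
    comparison: ">="
    threshold: 80.0
    period: 300
    evaluation_periods: 3
    unit: Percent
    description: Alarm when average CPU stays above 80% for 15 minutes
    dimensions:
      AutoScalingGroupName: autoscale-blog
    alarm_actions:
      - "{{ scale_up_policy.arn }}"   # wire the alarm to the policy above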

Running the Playbooks

The first step is to configure the AWS infrastructure, and we'll kick that off via the GUI. Click the rocket icon next to the Provision Infrastructure job template.

[Image: select_infrastructure_project.png]

Tower features a real-time output system, so the status of each task will appear as the job runs.

You'll be taken to a new screen that will show you the status of your job run. When that job run is complete, it should look something like this:

[Image: provisioning_launch_status.png]

Let's verify that this worked. To do this, let's get the CNAME of the ELB that was launched; we'll use it later to verify that the instances are actually serving web pages.

To do so, click on the task that says "launch load balancer". Then, under Host Events, click the localhost item.

[Image: select_provisioning_lb_task.png]

A new window pops up showing the result of that particular task. Click the Results tab and find the dns_name. Make note of it; we'll use it in our browser later to make sure everything is working.

[Image: view_lb_task_results.png]

While you've been performing these actions, the following has happened behind the scenes, as configured by the Ansible playbooks:

  1. The EC2 infrastructure playbook has been run and all EC2 components described in the EC2 infrastructure table have been created.

  2. The AutoScale Group has been created and the initial two instances have been launched.

  3. The boot script on each ASG instance phoned home to Tower, requesting configuration via the job template ID in the callback URL (in this case, the Configure Application template).

  4. Tower checked to make sure that the instances are part of the inventory specified in the job template and also validated the host configuration key.

  5. Tower then configured the instances with the Configure OS job template.

  6. Instances have been made reachable via the load balancer URL.

You can verify the callback happened by taking a look back at the Jobs tab. You should see two new jobs with the name of Configure OS. Those were the jobs that were launched via the callback mechanism.

[Image: instances_configured.png]

Now to top it all off and show this configuration is operational, let's curl the ELB CNAME we gathered earlier and verify that we're actually serving content:

myshell:~ curl autoscale-blog-78233852.us-east-1.elb.amazonaws.com
This is a test - Ubuntu 12.04  <br>
Current Host: ip-10-11-3-49 <br>

Additional Testing

To test further, you can tweak the ASG to increase the number of instances, or generate enough traffic to trip an alarm and trigger an AutoScale event.
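Because ec2_asg is idempotent, re-running it with new sizes adjusts the existing group in place. A hypothetical one-off resize might look like the task below; as the new instances boot, you should see additional Configure OS jobs arrive via the callback:

- name: grow the autoscale group (hypothetical resize)
  ec2_asg:
    name: autoscale-blog
    launch_config_name: demo-lc   # hypothetical, from the earlier sketch
    min_size: 2
    max_size: 4
    desired_capacity: 4
    region: us-east-1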

Once again, you can find the playbooks at https://github.com/ansible/autoscaling-blog

That's all, folks!


James Martin

James Martin, Red Hat practice lead for Ansible, has always had a passion for automation in the 22 years he's worked in IT. He has worked in various roles, from systems administrator to systems engineer to consultant. James came to Red Hat through its acquisition of Ansible in 2015 and now leads a team of consultants focused on automation.

