Terraforming Clouds with Ansible

diagram one

The wheel was invented in the 4th millennium BC. Back then, I am sure the wheel was the hottest thing on the block, and only the most popular Neolithic cool cats had wheels. Fast forward to the present day, and we can all agree that the wheel is nothing really to write home about. It is part of our daily lives. The wheel is not sexy. If we want the wheel to become sexy again, we just need to slap together a sports car with all the latest gadgets and flux capacitors in a nice Ansible red, and voilà! We have something we want to talk about.

Like the sports car, Red Hat Ansible Automation Platform has the same ability to turn existing resources into something a bit more intriguing. It can enhance toolsets and extend them further into an automation workflow. 

Let's take Terraform. Terraform is a tool often used for infrastructure-as-code. It is a great tool for provisioning infrastructure in a repeatable way across large public cloud providers like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). Many organizations use Terraform for quick infrastructure provisioning every day, but if we combine it with the power of Ansible, it becomes part of an even more efficient workflow.

Don't replace tooling - reuse, enhance and master it

As I said, Ansible has a way of enhancing existing tools and giving them an overhaul. If an organization already uses Terraform, it would be a shame to waste all of the hours already spent building its manifests and configurations. Instead, we can use what we have to create a workflow that builds more Terraform manifests, automates the provisioning, and provides a scalable method of triggering post-provisioning tasks. With Ansible taking the lead, we can extend infrastructure provisioning with Terraform and allow for things like configuration-as-code, application deployment, and compliance automation. The list of possibilities is as endless as the accessories on the latest German car.

The first thing to consider is the automation execution environment we will need when using Terraform as part of our automation. Our execution environment needs to be able to perform Terraform tasks, so we need to make sure the Terraform binary is actually available inside the execution environment.

I did this by downloading the binaries and simply copying them into a basic execution environment. 

I also embedded a keep_secrets file, which we will use with Ansible Vault.

---
version: 1

build_arg_defaults:
    EE_BASE_IMAGE: < BASE EE >

dependencies:
  galaxy: requirements.yml
  python: requirements.txt
  system: bindep.txt

additional_build_steps:
  prepend: |
    ADD terraform /sbin
    ADD keep_secrets /opt
  append:
    - RUN echo This is a post-install command!
    - RUN ls -la /etc
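
With that definition in place, building and publishing the image is a short exercise with ansible-builder and Podman. Here is a minimal sketch, assuming the definition above is saved as execution-environment.yml; the image name and the private automation hub hostname are placeholders you would replace with your own:

$ ansible-builder build --file execution-environment.yml --tag terraform-ee:latest
$ podman tag terraform-ee:latest <automation hub host>/terraform-ee:latest
$ podman push <automation hub host>/terraform-ee:latest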

Once I have pushed my execution environment to my private automation hub, we are ready to get building! 

I'm going to work through provisioning with Terraform using a simple use case built on three files:

  • main.tf - This holds all the configuration information I need for my infrastructure
  • variables.tf - This will hold all the variables I use and reference in my main.tf file
  • cloud-init.conf - I use cloud-init to inject configuration information, such as users to create and SSH keys to add to authorized_keys, so my automation controller can connect and do its magic.

All the components we need to deploy cloud infrastructure are part of these manifests - this is our infrastructure-as-code. Using Terraform to deploy them allows us to also destroy all the provisioned infrastructure quickly and easily. This is beneficial because it leaves no configuration artifacts behind on your cloud platform and speeds up the whole life cycle.

To create these manifests, we can use Jinja templates together with surveys in our automation workflows. A survey in Ansible Automation Platform gives consumers of automation the opportunity to input data that we can then use inside our automation.

diagram two

This turns creating all the infrastructure-as-code components into a dynamic mechanism for our teams, making the process even easier. With the Jinja templates, I create the variables manifest, and the main.tf will then use all of those components to build and plan the deployment.

main.j2 > Summarized Example


resource "aws_instance" "ioc_basic" {
  for_each      = data.aws_subnet_ids.production.ids
  ami           = "${var.ami_number}"
  instance_type = "${var.instance_type}"
  subnet_id     = each.value
  key_name   = "${var.terraform_prov}"
  user_data = file("./cloud-init.conf")
  tags = {
      Name = "${var.instance_names}"

variables.j2 > Summarized Example


variable "ami_number" {
  default = "{{ ami_number }}"
}
variable "secret_key" {
  default = "{{ secret_key }}"
}
variable "instance_names" {
  default = "{{ instance_names }}"
}
variable "instance_type" {
  default = "{{ instance_type }}"
}
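
To give an idea of how the rendering step can look in a playbook, here is a minimal sketch of a templating task. The survey answers arrive as extra variables, and working_dir and my_terraform_build are the same variables used by the provisioning tasks further down; the loop and file layout are illustrative rather than a definitive implementation:

- name: Render Terraform manifests from the survey answers
  ansible.builtin.template:
    src: "{{ item }}.j2"
    dest: "/{{ working_dir }}/{{ my_terraform_build }}/{{ item }}.tf"
    mode: "0644"
  loop:
    - main
    - variables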

Provision Infrastructure

diagram three

With the survey data provided, we can get Ansible to create a project folder for Terraform to work with. This should be stored in a source of truth; for my example, I am using a Git repository. Once we have our project folder, we create all the manifests and configurations we need for Terraform to build and deploy the infrastructure. Ansible Automation Platform has modules we can use to trigger all the Terraform actions from our playbooks, and it will trigger Terraform to initialize this project folder during its build process to make sure it installs the correct provider.

I am currently working on AWS; however, if you wanted to provide access to multiple providers for Terraform to use, this would be as simple as creating a Jinja2 template for it and giving your users the option in a workflow survey. In our playbook, we can now just use a Terraform module to trigger the initialization, planning, and deployment of the IaC manifests.

- name: Creating Terraform IaC
  block:
    - name: Initialize Terraform Provider
      community.general.terraform:
        project_path: /{{ working_dir }}/{{ my_terraform_build }}
        state: absent
        force_init: true

    - name: Deploy Terraform Instance
      community.general.terraform:
        project_path: /{{ working_dir }}/{{ my_terraform_build }}
        state: present
      register: deployed_tf

Once Terraform deploys the infrastructure, it creates a state file that stores your managed infrastructure's configuration and maps your resources. If we want to modify the infrastructure later, we reuse the state file. It can also serve as a source of information about the instance for post-provisioning tasks; if we need to make a change to a load balancer, for example, this file is a simple source of information we can harness. Since our execution environments are ephemeral, we will push these state files to our build repository once we have encrypted them.
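
Because the deploy task registers its result as deployed_tf, any outputs declared in the Terraform manifest are also available to the rest of the play. As a small sketch (assuming main.tf declares output blocks, which the summarized example above omits):

- name: Show Terraform outputs for post-provisioning tasks
  ansible.builtin.debug:
    var: deployed_tf.outputs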

Now, Terraform is great at creating infrastructure as well as destroying it. It simplifies the whole process and does a good job of cleaning things up. We will need the variables manifest to de-provision our infrastructure, so it is best to put it and our state files in our build repository, not only to be able to destroy the instance later, but also to be able to reuse this configuration or modify the infrastructure. Since these files contain sensitive information, we use Ansible to encrypt them before we push them to our source of truth, using the secrets file we embedded in our execution environment.
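
As a rough sketch of that encrypt step, the keep_secrets file we baked into the execution environment can act as the vault password file; the exact file names and repository layout here are illustrative:

- name: Encrypt the Terraform state and variables before pushing
  ansible.builtin.command:
    # keep_secrets was copied to /opt when the execution environment was built
    cmd: "ansible-vault encrypt {{ item }} --vault-password-file /opt/keep_secrets"
    chdir: "/{{ working_dir }}/{{ my_terraform_build }}"
  loop:
    - terraform.tfstate
    - variables.tf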

The wheels are turning, but now what?

Ansible Automation Platform allows us to use dynamic inventory plugins, so we will use the relevant plugin to allow us to update the inventory to accommodate our newly provisioned host. One of the really cool things here is that we can provide the tags we want in our Terraform manifest files, and in Ansible we can narrow our inventory hosts with filters looking specifically at these tags. 

Example

regions:
  - "eu-west-2"

keyed_groups:
  - key: tags.Environment
    prefix: tag

filters:
  tag:Environment: terraform_dev
  instance-state-name: running

These filters in our dynamic inventory source allow the automation controller to harvest just the instances that match our criteria, simplifying further post-provisioning tasks. The last part of the provisioning process is to create and update a survey for the termination of the instance we created. To do this, we use Ansible to create a listing of all the projects in our Terraform repository, which we can pass on to create a survey specification that we update whenever we run a create or destroy job.
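
One way to wire that up is with the awx.awx (or ansible.controller) collection, which can push a survey specification onto a template. The sketch below is illustrative: the template name and the terraform_project_dirs variable (the listing of project folders gathered from the repository) are assumptions, not the exact implementation:

- name: Update the destroy survey with the current Terraform builds
  awx.awx.workflow_job_template:
    name: "Destroy Terraform Infrastructure"   # illustrative template name
    organization: Default
    survey_enabled: true
    survey_spec:
      name: Terraform builds
      description: Select the Terraform build to destroy
      spec:
        - question_name: Which build should be destroyed?
          variable: my_terraform_build
          type: multiplechoice
          required: true
          choices: "{{ terraform_project_dirs }}"   # assumed list gathered from the repository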

Destroy Infrastructure

diagram four

Since we used Terraform to provision our infrastructure, de-provisioning it is pretty straightforward. As I mentioned before, when Terraform creates the infrastructure, it establishes a source of truth that can be used as an easy way to de-provision it. We can use our automation workflow to grab the correct Terraform build details from our repository, make changes to any external systems that might be affected, like load balancers, and then trigger Terraform from our playbook to destroy the instance it created.

- name: Destroy Terraform Instance
  community.general.terraform:
    project_path: /{{ working_dir }}/{{ my_terraform_build }}
    state: absent

Start your engines! Post-provisioning

We have created a repeatable method of building and destroying infrastructure using Ansible and Terraform. To extend the automation further and do the important work of deploying workloads, system hardening, and compliance, we only need to rely on Ansible. Ansible Automation Platform allows us to create automation workflows that show a visual, logical progression of the steps in our automation and let us combine tasks into an end-to-end process. Not only is this a great way to view and inspect your automation process, but I also find it beneficial for pinpointing possible improvements or adding rollback features should a step fail or encounter issues.

process diagram

Time to Terraform your clouds, bringing infrastructure-as-code and configuration-as-code together with a centralized Ansible Automation Platform!







Using Ansible and GitOps to Manage the Lifecycle of a Containerized Application

One of the great advantages of combining GitOps with Ansible is that you get to streamline the automation delivery and the lifecycle of a containerized application.

With the abilities of GitOps we get to:

  • Standardize configurations of our applications.
  • Inherit the benefits of version control of our configurations.
  • Easily track changes to configuration settings, making issues easier to fix.
  • Have one source of truth for our applications.

Combine the above with Ansible and you have everything you need to accomplish configuration consistency for a containerized app anywhere that you automate. 

That leads us to the question: how do we combine Ansible and GitOps to manage the lifecycle of a containerized application?

Simple. By creating an Ansible workflow that is associated with a Git webhook that is part of my application's repository.

What is a Git webhook you ask?

Git webhooks are defined as a method to deliver notifications to an external web server whenever certain actions occur on a repository.

For example, when a repository is updated, this could trigger an event that kicks off CI builds, deploys an environment, or, in our case, modifies the configuration of our containerized application.

A webhook provides the ability to execute specified commands between apps over the web. Automation controller provides webhook integration with GitHub and GitLab, but for the purposes of this blog we will be integrating with GitHub. 

In the following sections of this blog, I'm going to provide the step-by-step process to:

  • Set up your Git webhook (using GitHub).
  • Set up an Ansible workflow that triggers via push events from your GitHub repository.

Create a GitHub personal access token

The GitHub personal access token (PAT) is one of the credentials needed to associate the Ansible workflow with your Git repository.

Generate a personal access token (PAT) for use with automation controller.

  1. In the profile settings of your GitHub account, click Settings.
  2. Below the Personal settings, click Developer Settings.
  3. In the Developer settings, click Personal access tokens.
  4. From the Personal access tokens screen, click the Generate new token button.
  5. When prompted, enter your GitHub account password to continue.
  6. In the Note field, enter a brief description about what this PAT will be used for.
  7. In the Expiration drop down, select No expiration.
  8. In the Scope fields, automation controller webhook only needs repo scope access, with the exception of invites. For information about other scopes, click the link right above the table to access the docs.

example personal access token

Click the Generate Token button at the bottom of the page.

Once we have our PAT in place, the next step is to create a Git repository that will be triggered by our GitHub webhooks when changes are made to the repository.

For the purposes of this blog, I'll be using my App Demo Repository. Feel free to use your own or fork this repository to follow along. 

Familiarizing ourselves with the App Demo Repository

The App Demo Repository is fairly simple, as it contains:

  • container_playbook.yml
  • group_vars/all.yml
  • requirements.yml

The container_playbook.yml is a simple playbook that creates a color container, starts it on a specific port, and sets two environment variables, APP_COLOR and tree.

A sample of that container_playbook.yml:

---
- name: Playbook to setup prereqs
  hosts: all
  become: true
  tasks:
    - name: Create a color container
      containers.podman.podman_container:
        name: colors
        image: docker.io/mmumshad/simple-webapp-color:latest
        state: started
        network: host
        ports:
            - "{{ host_port }}:{{ container_port }}"
        env:
            APP_COLOR: "{{ color }}"
            tree: "{{ tree }}"

The group_vars/all.yml is where I'll be making the modifications that will trigger changes to my Podman container.

A sample of that group_vars/all.yml file:

color: "BLUE"
tree: "trunk"
host_port: 8080
container_port: 8080

Finally, we have the requirements.yml file that ensures we have the containers.podman collection available to use within the playbook. 

A sample of the requirements.yml:

collections:
- name: containers.podman

With our repository in place and our GitHub PAT set, the next steps involve creating our Red Hat Ansible Automation Platform resources that will be triggered when GitHub push events happen in the App Demo Repository.

Creating our Ansible Automation Platform Resources

Within my automation controller dashboard, I first need to create my credential resources to ensure that when I create my new project, workflow, and job template, they can all easily attach my App Demo PAT credential.

Within the automation controller dashboard: 

  1. Under Resources > Credentials click the blue Add button.
  2. Provide a Name, e.g. App Demo PAT.
  3. Select GitHub Personal Access Token as the Credential Type.
  4. Within Type Details, add the secret using the previously generated token from GitHub.
  5. Click Save.

Once my App Demo PAT credential is in place, I need an additional credential to access my host that will be running the Podman container. In my case, this is an AWS instance.

In order to access this host, I will create a new credential that stores my AWS private key.

  1. Under Resources > Credentials click the blue Add button.
  2. Provide a Name, e.g. My AWS Private Key.
  3. Select Machine as the Credential Type.
  4. Within Type Details, add the SSH Private Key in the text area.
  5. Click Save.

Once the credentials are in place, I need to create an inventory that stores the details of my AWS instance.

To do this, I will create an inventory and add my AWS instance to it as a host.

  1. Under Resources > Inventories click the blue Add > Add inventory button.
  2. Provide a Name, e.g. App Demo Inventory.
  3. Click Save.
  4. Under Resources > Inventories click App Demo Inventory.
  5. Click the tab labeled Hosts and click the Add button.
  6. Provide a Name, e.g. App Demo Host.

Within Variables, provide the following YAML:

---
ansible_host:
ansible_user: ec2-user

With the credentials and inventory resources set, I will create my App Demo project. The purpose of this project is to create a workflow that contains a job template that automatically runs every time an update to the App Demo repository takes place. 

This ensures that as I make changes to my Podman container settings within my Git repository, the container_playbook.yml runs to make the appropriate changes. 

Within the automation controller dashboard:

  1. Under Resources > Projects click the blue Add button.
  2. Provide a Name, e.g. App Demo Project.
  3. Select Default as the Organization.
  4. Select Default execution environment as the Execution Environment.
  5. Select Git as the Source Control Credential Type.
  6. Within Type Details, add the Source Control URL (your GitHub repository).
  7. Within Options, select Clean, Delete, Update Revision on Launch.
  8. Click Save.

Next, create a workflow template.

  1. Under Resources > Templates click the blue Add > Add workflow template.
  2. Provide a Name, e.g. App Demo Workflow.
  3. Within Options, select Enable Webhook.
  4. Within Webhook details, select GitHub as the Webhook Service.
  5. Within Webhook details, select your GitHub PAT token previously created as the Webhook Credential, e.g. App Demo PAT.
  6. Click Save.
  7. When the visualizer opens with the Please click the Start button to begin prompt, click Save at the top right corner.
  8. Copy the Webhook URL and the Webhook Key as they will be used later.

Enabling GitHub Webhooks for the App Demo Repository

With the Ansible Automation Platform workflow template created and the GitHub repository with the required files in place, the next step is to enable webhooks for our repository, e.g. app_demo.

  1. At the homepage of your GitHub repository, select the Settings tab.
  2. Within the Settings tab, select Webhooks.
  3. Within the Webhooks section, select the Add webhook button.
  4. Enter the Payload URL (Webhook URL of the workflow).
  5. Change the Content type drop down to application/json.
  6. Enter the Secret (Webhook key of the workflow).
  7. Leave the defaults to use push events, and click the button Add webhook.

By default, GitHub verifies SSL certificates when delivering payloads. If your automation controller SSL certificates are not signed, be sure to disable SSL verification.
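
If you would rather script this step than click through the UI, the same webhook can be created with the GitHub REST API. A rough sketch, where the owner, repository, token, and the URL and key copied from the workflow template are all placeholders:

$ curl -X POST \
    -H "Accept: application/vnd.github+json" \
    -H "Authorization: Bearer <YOUR_GITHUB_PAT>" \
    https://api.github.com/repos/<OWNER>/<REPO>/hooks \
    -d '{"name": "web", "config": {"url": "<Webhook URL>", "content_type": "json", "secret": "<Webhook Key>"}, "events": ["push"]}'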

Creating the App Demo job template

The App Demo job template runs the container_playbook.yml file automatically every time an update to the Git repository takes place. 

To create the job template within your automation controller dashboard:

  1. Under Resources > Templates click the blue Add > Add job template.
  2. Provide a Name, e.g. App Demo Job.
  3. Select Run as the Job Type.
  4. Select App Demo Inventory as the Inventory.
  5. Select App Demo Project as the Project.
  6. Select Default execution environment as the Execution Environment.
  7. Select container_playbook.yml as the Playbook.
  8. Select Credentials and select My AWS Private Key.
  9. Within Options, select Enable webhook.
  10. Select GitHub as the Webhook Service.
  11. Select your GitHub PAT token previously created as the Webhook Credential, e.g. App Demo PAT.
  12. Click Save.

Updating the created App Demo Workflow

Previously, the App Demo workflow was created. The purpose of this workflow is to ensure that the App Demo Project is always in sync and that the App Demo Job runs the container playbook whenever changes are made to the App Demo repository.

  1. Under Resources > Templates, select your template. e.g App Demo Workflow.
  2. Within the Details section, select the Visualizer tab and click the green Start button.
  3. For Node Type, select Project Sync, select the appropriate project, e.g. App Demo Project, and click Save.
  4. Hover over the App Demo Project and select the plus "+" symbol.
  5. Within the Add Node window, select On Success as to when this node should be executed and click Next.
  6. Select the App Demo Job as the Node Type and click Save.
  7. Once brought back to the Visualizer, select the Save button at the top right corner.

Verify App Demo Setup

To test if all is working correctly, head to your host that is running the Podman container. Once there, the following podman ps command can be run:

$ sudo podman ps
CONTAINER ID  IMAGE  COMMAND     CREATED   STATUS    PORTS      NAMES

NOTE: The first time you run podman ps, you should have no containers running as you haven't run the App Demo workflow.

Head over to your App Demo GitHub repository and modify the app_demo/group_vars/all.yml file, changing color: "BLUE" to color: "YELLOW", and git push your changes.
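
For reference, that change is just an ordinary edit and push. A small sketch, assuming the repository is already cloned locally and its default branch is main:

$ cd app_demo
$ sed -i 's/color: "BLUE"/color: "YELLOW"/' group_vars/all.yml   # flip the color variable
$ git add group_vars/all.yml
$ git commit -m "Change app color to YELLOW"
$ git push origin main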

Head over to your automation controller dashboard and you should see the App Demo workflow running. Once complete, within your host, verify the container has the changes made:

$ ssh -i </path/to/private-key.pem> ec2-user@<IP>


$ sudo podman exec -it colors env

PATH=/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
TERM=xterm
container=podman
PYTHON_VERSION=3.7.0
PYTHON_PIP_VERSION=18.0
LANG=C.UTF-8
GPG_KEY=0D96DF4D4110E5C43FBFB17A2A347FA6AA65421D
APP_COLOR=YELLOW
tree=trunk
HOME=/root

Notice how the Podman container is now running and has the color YELLOW.

Going back to the App Demo repository, change the color from YELLOW to GREEN and git push your changes.

The automation controller dashboard will run the App Demo workflow and once complete, you can re-run the same exec command from your host and see the color has now changed to GREEN.

$ ssh -i </path/to/private-key.pem> ec2-user@<IP>

$ sudo podman exec -it colors env

PATH=/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
TERM=xterm
container=podman
PYTHON_VERSION=3.7.0
PYTHON_PIP_VERSION=18.0
LANG=C.UTF-8
GPG_KEY=0D96DF4D4110E5C43FBFB17A2A347FA6AA65421D
APP_COLOR=GREEN
tree=trunk
HOME=/root

Conclusion

The goal of this exercise was to show the power of Ansible and GitOps. Together, they can provide key automation to your containerized applications.

In the demo we made a simple color value change to our application, but imagine applying this to:

  • patching our application because of a security threat.
  • updating our application to a newer version.
  • managing containerized applications at the edge. 

And all this doesn't even mention the inherited benefits of:

  • Standardizing configurations of our applications.
  • Inheriting the benefits of version control of our configurations.
  • Easily tracking changes of the configuration settings making fixing issues easier.
  • Having one source of truth for our applications.

The use cases and abilities that both tools provide together are endless.