Learn about Event-Driven Ansible at Red Hat Summit and AnsibleFest 2023

As you may have heard, AnsibleFest will be taking place at Red Hat Summit in Boston, May 23-25. This change allows you to harness everything Red Hat technology has to offer in a single place and gives you even more tools to address your automation needs. Join Ansible and automation-focused audiences to hear from Red Hat and Ansible leaders, customers, and partners while getting the latest on future Ansible product updates, community projects, and what's coming in IT automation.

Event-Driven Ansible is a key component to address the complexities of managing varying assets at scale. We announced this product feature as a developer preview last October at AnsibleFest 2022, and we are excited to talk even more about it.  So what can you expect to see about Event-Driven Ansible at AnsibleFest and Red Hat Summit this year? 

  • Red Hat Summit keynote with a customer story around their use of Event-Driven automation
  • AnsibleFest keynote about why the next wave of automation will be event-driven 
  • Breakout sessions from Ansible experts and customers
  • Hands-on labs
  • Discovery Theater mini sessions in the expo hall

Do you have questions about Event-Driven Ansible? Bring them to AnsibleFest and take advantage of the experts at the Ansible booth as well as the Ask the Expert area. We will also be running three different labs focused on Event-Driven Ansible, so this is the perfect opportunity to get hands-on experience while being able to ask questions in real time to Ansible experts. These labs include: 

  • Event-Driven Ansible and Red Hat OpenShift
  • Event-Driven Ansible and Red Hat OpenShift GitOps
  • Event-Driven Ansible and NetOps

Please refer to the session catalog for the most up-to-date room assignments; sessions are subject to change.

Still hungry for more Event-Driven Ansible content? We have you covered. Check out these resources to learn more about how Event-Driven Ansible can help you:

  • Register for AnsibleFest at Red Hat Summit
  • See what the analysts are saying about Event-Driven Ansible in this research paper









A Deeper Look: Red Hat Named a Leader in the Forrester Wave

This week, we announced that Red Hat has been named a leader in The Forrester Wave Infrastructure Automation, Q1 2023. In an effort to help explain this result from our point of view, the following blog answers some of the most frequently asked questions.  

What is The Forrester Wave?

"The Forrester Wave™ is a guide for buyers considering their purchasing options in a technology marketplace and is based on our analysis and opinion. To offer an equitable process for all participants, Forrester follows a publicly available methodology, which we apply consistently across all participating vendors." (source)

Forrester has been a mainstay throughout many organizations' automation journeys, and Red Hat is proud to be recognized as a leader in this Q1 2023 report.

What were the results?

Red Hat, specifically for Ansible Automation Platform, has been named a leader in The Forrester Wave™: Infrastructure Automation, Q1 2023 report.

Refer to the graphic, which can be viewed in the final report.

Why is this significant to us?

We believe Forrester is one of the most recognized technology analyst firms in the IT space, and that having received a leader ranking furthers the narrative that Red Hat Ansible Automation Platform is driving value for organizations looking to make automation a strategic part of their IT and OT estates. 

How was The Forrester Wave judged? What criteria were used?

Forrester used a comprehensive method for administering the Wave, with a deliberate, phased approach that took place over many months. Using the Forrester Wave Methodology, the research team invites vendors to participate, hosts a preliminary kickoff and planning meeting, provides evaluation criteria, collects vendor input in written and demo formats, and finally provides scoring and feedback back to each vendor.

Why were some popular automation solutions not included in the Wave?

Based on The Forrester Wave Vendor Participation Policy, a missing vendor either did not qualify or did not meet the inclusion criteria to be considered. You'll also notice that some vendors did qualify and were invited but declined to participate (noted with a gray circle on the graphic).







Providing Terraform with that Ansible Magic


Late last year, we introduced a Red Hat Ansible Certified Content Collection for Terraform. This was an important step in automation: these two tools really are great together, and leveraging Ansible's ability to orchestrate other tools in the enterprise made this a no-brainer. Terraform, with its infrastructure as code (IaC) provisioning, and Ansible, with its strength in configuration as code, are a synergy that cannot be ignored. We are better together! Organizations are now in a position to utilize their existing infrastructure as code manifests and extend their automation with Terraform and Ansible together.

Now we are back, with help from our partners at Kyndryl and XLAB, to add more value and magic to infrastructure as code. This time we have some extra muscle with an addition to the Red Hat Ansible Certified Content Collection: the Ansible provider for Terraform.

So what does the provider help us with?

Without a provider, we would need to rely on inventory plugins for the different cloud platforms and use filters to grab instance information from our freshly "Terraformed" infrastructure. This lets us update our inventory so we can run automated tasks against these hosts. That approach is pretty smooth, especially if you are using automation controller with a workflow. However, it is not without complexity, and what about the Terraform users who are not working with automation controller? How can we leverage Ansible and bring these two tools together? The Ansible provider for Terraform is here to help!
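For context, the inventory-plugin approach described above might look something like this minimal sketch using the amazon.aws.aws_ec2 dynamic inventory plugin; the tag names and values are assumptions for illustration, not part of the provider workflow:

```yaml
# aws_ec2.yml - dynamic inventory without the Ansible provider
# (hypothetical tag names; adjust the filters to your own tagging scheme)
plugin: amazon.aws.aws_ec2
regions:
  - eu-west-2
filters:
  # Only pick up instances that Terraform tagged for us
  tag:managed_by: terraform
keyed_groups:
  # Group hosts by their "role" tag, e.g. tag_role_nginx
  - prefix: tag_role
    key: tags.role
```

This works, but it pushes the mapping between Terraform resources and Ansible inventory out into cloud tags and filters, which is exactly the indirection the provider removes.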

With the Ansible provider in the Collection, we are able to define the use of an Ansible inventory in the main.tf file and once the project is initialized and built by Terraform, we can gather Terraform resource information from the state file and push it into an inventory.

Let's look a bit closer:

main.tf

terraform {
  required_providers {                     #### ansible provider
    ansible = {
      version = "~> 0.0.1"
      source  = "terraform-ansible.com/ansibleprovider/ansible"
    }
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}


resource "ansible_host" "my_ec2" {          #### ansible host details
  name   = aws_instance.my_ec2.public_dns
  groups = ["nginx"]
  variables = {
    ansible_user                 = "ansible",
    ansible_ssh_private_key_file = "~/.ssh/id_rsa",
    ansible_python_interpreter   = "/usr/bin/python3"
  }
}

Using the provider in the main.tf allows us to indicate that we want to use an Ansible inventory and allows us to specify Ansible host details for the inventory. Terraform can then initialize and plan the project and embed the details. If we look at the resulting Terraform state file we can see host details defined:

terraform.tfstate                      #### Inside terraform.tfstate


"mode": "managed",
      "type": "ansible_host",
      "name": "my_ec2",
      "provider": "provider[\"terraform-ansible.com/ansibleprovider/ansible\"]",
      "instances": [
        {
          "schema_version": 0,
          "attributes": {
            "groups": [
              "nginx"
            ],
            "id": "ec2-18-130-240-228.eu-west-2.compute.amazonaws.com",
            "name": "ec2-18-130-240-228.eu-west-2.compute.amazonaws.com",
            "variables": {
              "ansible_python_interpreter": "/usr/bin/python3",
              "ansible_ssh_private_key_file": "~/.ssh/id_rsa",
              "ansible_user": "ansible"
            }
          },

Taking a deeper look at the inventory, we can see that the plugin has populated instance data from the defined resource in the Terraform state file.

…inventory.yml

---
plugin: cloud.terraform.terraform_provider

Running ansible-inventory against this file displays the generated structure:

ansible-inventory -i inventory.yml --graph --vars

@all:
  |--@nginx:
  |  |--ec2-18-130-240-228.eu-west-2.compute.amazonaws.com
  |  |  |--{ansible_python_interpreter = /usr/bin/python3}
  |  |  |--{ansible_ssh_private_key_file = ~/.ssh/id_rsa}
  |  |  |--{ansible_user = ansible}
  |--@ungrouped:

We are now able to run playbooks against this inventory and automate the configuration or additional post-provisioning tasks on our hosts without any hassle.
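Before deploying, it helps to see what such a playbook could contain. The original playbook.yml is not shown here, so the following is a minimal sketch reconstructed from the task names in the run output further below; the module choices (package, service) are assumptions:

```yaml
# playbook.yml - illustrative sketch; module choices are assumptions
- name: Install nginx on remote host
  hosts: nginx
  become: true
  tasks:
    - name: Wait for the freshly provisioned host to accept connections
      ansible.builtin.wait_for_connection:
        timeout: 300

    - name: Gather facts once the host is reachable
      ansible.builtin.setup:

    - name: Install nginx
      ansible.builtin.package:
        name: nginx
        state: present

    - name: Start nginx
      ansible.builtin.service:
        name: nginx
        state: started
```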

Deploying with Terraform is then the familiar two-step process:

Step 1: terraform plan
Step 2: terraform apply


Apply complete! Resources: 5 added, 0 changed, 0 destroyed.
++ ansible-playbook -i inventory.yml playbook.yml

PLAY [Install nginx on remote host] *****************************************************************************************

TASK [wait_for_connection] **************************************************************************************************
The authenticity of host 'ec2-18-130-240-228.eu-west-2.compute.amazonaws.com (18.130.240.228)' can't be established.
ECDSA key fingerprint is SHA256:jRqiAGPDzuYGe+l7jNsmQays2qb/C/SJqtnH6pc42ns.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
ok: [ec2-18-130-240-228.eu-west-2.compute.amazonaws.com]

TASK [setup] ****************************************************************************************************************
ok: [ec2-18-130-240-228.eu-west-2.compute.amazonaws.com]

TASK [Install nginx] ********************************************************************************************************
changed: [ec2-18-130-240-228.eu-west-2.compute.amazonaws.com]

TASK [Start nginx] **********************************************************************************************************
ok: [ec2-18-130-240-228.eu-west-2.compute.amazonaws.com]

PLAY RECAP ******************************************************************************************************************
ec2-18-130-240-228.eu-west-2.compute.amazonaws.com : ok=4    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

This new provider is extremely useful when you are using Terraform for deployments while leveraging Ansible for cloud operations like application deployments, CI/CD pipelines, lifecycle management and enforcement, and OS patching and maintenance. With this provider being part of the Red Hat Ansible Certified Content Collection, we also have ongoing maintenance and support available!




Kubernetes Meets Event-Driven Ansible


In today's fast-paced world, every second counts, and the ability to react to activities in a timely fashion can mean the difference between satisfying the needs of consumers and meeting Service-Level Agreements. These are the goals of Event-Driven Ansible, which seeks to extend the reach of Ansible-based automation by responding to events that meet certain criteria. These events can originate from a variety of sources, such as an HTTP endpoint, messages on a queue or topic, or public cloud resources.

Kubernetes has become synonymous with managing infrastructure and applications in cloud native architectures, and many organizations rely on these systems to run their business-critical workloads. Automation and Kubernetes go hand in hand, and Ansible already plays a role within this ecosystem. A new capability leveraging the Event-Driven Ansible framework is now available that extends the integration between Ansible and Kubernetes, so that Ansible automation can be triggered based on events and actions occurring within a Kubernetes cluster.

Event-Driven Ansible is designed around a concept called Rulebooks, which consist of three main components:

  • Actions - Triggering the execution of assets including an Ansible Playbook or module
  • Rules - Determination of whether received events match certain conditions
  • Sources - Origination of events from external entities that are consumed and processed within the Ansible eventing framework

There is a wide ecosystem of solutions available to manage Kubernetes from Ansible, provided primarily through the kubernetes.core collection. This Collection contains everything from mechanisms to manage resources within a Kubernetes cluster and support for the Helm package manager to leveraging a Kubernetes cluster as an inventory source. New capabilities are now available through the integration of Kubernetes and the Event-Driven Ansible framework: event sources enable the consumption of changes originating from a Kubernetes cluster, which can be used to trigger automation that responds and acts based on the received content and the configured rules. Let's explore how to take advantage of this newly created capability to further the integration between Kubernetes and Ansible.

The assets related to Event-Driven Ansible and Kubernetes are located within the sabre1041.eda collection within Ansible Galaxy. Ensure that the control node where the Ansible automation will be executed has the necessary tooling installed and configured. This includes Ansible Core, the tooling associated with Event-Driven Ansible, and the Collection containing the Event-Driven Ansible Kubernetes integration. Consult the associated documentation for both Ansible Core and Event-Driven Ansible for the target Operating System and installation method.

Once both Ansible Core and Event-Driven Ansible have been installed and configured, install the sabre1041.eda collection by executing the following command:

ansible-galaxy collection install sabre1041.eda

This Collection also requires that the Python requests package be installed which can be facilitated by executing the following command:

pip install requests

Now that the prerequisites are in place, attention can turn to how a rulebook is configured to take advantage of the Kubernetes integration. Events in the Event-Driven Ansible architecture are configured within the sources section of a rulebook. One or more sources can be specified within a rulebook, enabling a robust set of conditions and actions to be configured.

A basic rulebook that takes advantage of the k8s event source plugin from the Collection is shown below:

- name: Listen for newly added ConfigMap resources
  hosts: all
  sources:
    - sabre1041.eda.k8s:
        api_version: v1
        kind: ConfigMap
        namespace: default
  rules:
    - name: Notify
      condition: event.type == "ADDED"
      action:
        debug:

The k8s plugin is modeled in a similar manner to that of the k8s module from the kubernetes.core collection, so anyone with familiarity working with this module will feel at home when working with this k8s source plugin.
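To illustrate the similarity, here is a task using the kubernetes.core.k8s module with the same kind of api_version / kind / namespace parameters the source plugin accepts; the resource name is illustrative, and this sketch simply ensures an (empty) ConfigMap exists:

```yaml
# A kubernetes.core.k8s task for comparison with the k8s source plugin:
# the api_version, kind, and namespace parameters mirror one another
# (resource name is illustrative)
- name: Ensure an example ConfigMap exists
  kubernetes.core.k8s:
    state: present
    api_version: v1
    kind: ConfigMap
    name: eda-example
    namespace: default
```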

The logic of this rulebook is as follows:

  1. Connect to a remote Kubernetes cluster and consume changes to ConfigMap resources that occur within the default namespace
  2. The k8s source plugin attaches the type of change and the resource content to the event object whenever a change occurs within the Kubernetes cluster
  3. Execute the debug action which will print out the event and any other associated variables only when the type property on the event is equal to "ADDED" (whenever a ConfigMap is added to the cluster).

While this rulebook monitors for changes in a specific namespace, support is available for monitoring changes across an entire Kubernetes cluster by omitting the use of the namespace parameter in the source plugin. If access to the default namespace is forbidden, feel free to select another namespace where access is granted.
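For example, a cluster-wide variant of the earlier rulebook simply omits the namespace parameter (this sketch assumes the authenticated account has cluster-wide read access):

```yaml
# Cluster-wide variant: the namespace parameter is omitted, so
# ConfigMap events from every namespace are consumed
- name: Listen for ConfigMap changes across the cluster
  hosts: all
  sources:
    - sabre1041.eda.k8s:
        api_version: v1
        kind: ConfigMap
  rules:
    - name: Notify
      condition: event.type == "ADDED"
      action:
        debug:
```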

To demonstrate the use of this rulebook, add the previously provided content to a new rulebook file called k8s-eda-demo.yaml. In addition, ensure that the local machine is authenticated to a Kubernetes cluster or customize the plugin parameters to specify the location of the Kubernetes cluster that should be used. Consult the plugin documentation for the available options.

Create a simple inventory in a file called inventory with the following content:

localhost

Run the rulebook to begin consuming events by executing the following command:

ansible-rulebook -i inventory --rulebook k8s-eda-demo.yaml --verbose

With the rulebook monitoring for ConfigMap changes in the default namespace, create a new ConfigMap in the default namespace to demonstrate events are being captured appropriately. This task can be accomplished by using the Kubernetes CLI (kubectl) by executing the following command:

kubectl create configmap -n default eda-example --from-literal=message="Kubernetes Meets Event-Driven Ansible"

Observe the following has been captured and displayed in the window where the ansible-rulebook command is being executed.

kwargs:
{'facts': {},
 'hosts': ['all'],
 'inventory': 'localhost',
 'project_data_file': None,
 'ruleset': 'Listen for newly added ConfigMap resources',
 'source_rule_name': 'Notify',
 'source_ruleset_name': 'Listen for newly added ConfigMap resources',
 'variables': {'event': {'resource': {'apiVersion': 'v1',
                                      'data': {'message': 'Kubernetes Meets '
                                                          'Event-Driven '
                                                          'Ansible'},
                                      'kind': 'ConfigMap',
                                      'metadata': {'creationTimestamp': '2022-12-25T17:40:43Z',
                                                   'managedFields': [{'apiVersion': 'v1',
                                                                      'fieldsType': 'FieldsV1',
                                                                      'fieldsV1': {'f:data': {'.': {},
                                                                                              'f:message': {}}},
                                                                      'manager': 'kubectl-create',
                                                                      'operation': 'Update',
                                                                      'time': '2022-12-25T17:40:43Z'}],
                                                   'name': 'eda-example',
                                                   'namespace': 'default',
                                                   'resourceVersion': '119407',
                                                   'uid': '2862db59-8990-4a37-9433-50dcfbaa6d71'}},
                         'type': 'ADDED'},
               'fact': {'resource': {'apiVersion': 'v1',
                                     'data': {'message': 'Kubernetes Meets '
                                                         'Event-Driven '
                                                         'Ansible'},
                                     'kind': 'ConfigMap',
                                     'metadata': {'creationTimestamp': '2022-12-25T17:40:43Z',
                                                  'managedFields': [{'apiVersion': 'v1',
                                                                     'fieldsType': 'FieldsV1',
                                                                     'fieldsV1': {'f:data': {'.': {},
                                                                                             'f:message': {}}},
                                                                     'manager': 'kubectl-create',
                                                                     'operation': 'Update',
                                                                     'time': '2022-12-25T17:40:43Z'}],
                                                  'name': 'eda-example',
                                                  'namespace': 'default',
                                                  'resourceVersion': '119407',
                                                  'uid': '2862db59-8990-4a37-9433-50dcfbaa6d71'}},
                        'type': 'ADDED'}}}

As shown in the output above, the details associated with the newly created ConfigMap within the Kubernetes cluster and the event have been captured.

After confirming that the Kubernetes source plugin is capturing events successfully, let’s demonstrate how one could make use of these events within an Ansible Playbook. Create a new file called k8s-eda-demo-playbook.yaml with the following content.

- hosts: localhost
  connection: local
  tasks:
    - ansible.builtin.debug:
        msg: "ConfigMap in namespace '{{ event.resource.metadata.namespace }}' with name '{{ event.resource.metadata.name }}' {{ event.type | capitalize }} with the message '{{ event.resource.data.message }}'"

This playbook demonstrates how to obtain properties that are included on the captured event. The type property will display "Added", as the playbook only executes when a ConfigMap has been created. The ConfigMap object itself can be accessed by referencing the resource property on the event. The standard Kubernetes manifest for a ConfigMap can then be traversed, including the namespace, name, and specific data values.

Update the contents of the rulebook in the k8s-eda-demo.yaml file to invoke the newly created playbook instead of simply printing out the contents by using the run_playbook action as shown below:

- name: Listen for newly added ConfigMap resources
  hosts: all
  sources:
    - sabre1041.eda.k8s:
        api_version: v1
        kind: ConfigMap
        namespace: default
  rules:
    - name: Execute Playbook
      condition: event.type == "ADDED"
      action:
        run_playbook:
          name: k8s-eda-demo-playbook.yaml

Once again, execute the k8s-eda-demo.yaml rulebook to begin listening for ConfigMaps added to the default namespace.

ansible-rulebook -i inventory --rulebook k8s-eda-demo.yaml --verbose

Delete and recreate the ConfigMap to trigger the playbook.

kubectl delete configmap -n default eda-example

kubectl create configmap -n default eda-example --from-literal=message="Kubernetes Meets Event-Driven Ansible"

Observe that the playbook has been triggered and produces output similar to the following:

TASK [ansible.builtin.debug] ***************************************************
ok: [localhost] => {
    "msg": "ConfigMap in namespace 'default' with name 'eda-example' Added with the message 'Kubernetes Meets Event-Driven Ansible'"
}

While this example only prints out a simple message related to the content of the event received, it demonstrates how to make use of the capabilities enabled by the Kubernetes integration. By adding Kubernetes as an event source in the Event-Driven Ansible ecosystem, this integration can help organizations that rely on Kubernetes for crucial components of their business trigger automation as desired.