Introducing the Ansible API for ServiceNow ITSM

One of the most popular platform integrations available to Ansible Automation Platform subscribers in Ansible automation hub is the Red Hat Ansible Certified Content Collection for ServiceNow ITSM. This collection helps you build new ServiceNow ITSM-based automation workflows faster while establishing a single source of truth in the ServiceNow configuration management database (CMDB). It can free teams from hours of manual effort and improve data integrity within your ServiceNow ITSM instance.

For ServiceNow users, we've launched a new native ServiceNow application, the API for Red Hat Ansible Automation Platform Certified Content Collection, available exclusively through the ServiceNow store to enhance and support the integration between the two platforms.   

What is the Ansible API for ServiceNow ITSM?

The API for Red Hat Ansible Automation Platform Certified Content Collection integrates Ansible's certified content with your ServiceNow instance. Prior to ServiceNow's Rome release, Ansible users could download the Red Hat Ansible Certified Content Collection for ServiceNow ITSM from Ansible automation hub and directly manage ServiceNow resources using the ServiceNow REST API.

With the release of Rome, the REST API no longer provided all of the support needed to automate ServiceNow using Ansible. To remedy this problem, Red Hat and our partner, XLAB, developed this new API to enhance and restore that functionality. 

While the need for the Ansible API for ServiceNow ITSM arose from the Rome release, the API is also compatible with ServiceNow ITSM San Diego and Tokyo.

What can you automate in ServiceNow ITSM?

Using both the API and the Certified Content Collection for ServiceNow ITSM, you can:

  • Automate change requests. Use Ansible Playbooks to automate ServiceNow ITSM service requests, including reporting change results and all information related to those changes. Your service representatives can simply kick off an Ansible Playbook to resolve common requests and reduce rote, repetitive tasks (see the sketch after this list).
  • Automate incident response. Assets in the ServiceNow Certified Collection support automatic updates to incident tickets to provide a consistent audit trail. Your team can also streamline the required steps for issue remediation and apply them at scale.
  • Enable full "closed loop" automation. Simplify the opening, advancement, and resolution of IT service management workflow items while keeping relevant and accurate information flowing into the CMDB across disparate users, teams, and assets. Ensure that infrastructure information is always up to date, actionable, and auditable while work is completed by cross-domain teams that may or may not have access to ServiceNow.
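
As a rough illustration of the first two bullets, here is a minimal playbook sketch that opens an incident with the servicenow.itsm collection. The short description, impact, and urgency values are placeholders, and it assumes instance credentials are supplied via the SN_HOST, SN_USERNAME, and SN_PASSWORD environment variables.

---
- name: Open a ServiceNow incident (illustrative sketch)
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Create an incident for a failed service check
      servicenow.itsm.incident:
        state: new
        short_description: "Web service health check failed on web01"  # placeholder
        impact: medium
        urgency: medium
      register: incident

    - name: Show the number of the new incident
      ansible.builtin.debug:
        msg: "Opened {{ incident.record.number }}"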

Getting started with the Ansible API for ServiceNow ITSM

To get started:

  • Install the API for Red Hat Ansible Certified Content Collection for free from the ServiceNow store and consult the "Application Installation and Configuration Guide" for additional instructions. 
  • Download the Ansible Content Collection for ServiceNow ITSM from Ansible automation hub on the Red Hat Hybrid Cloud Console, or install it from the command line as shown below.
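
If you prefer the command line, the collection can be installed with the standard ansible-galaxy command:

ansible-galaxy collection install servicenow.itsm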

Additional resources




Let Ansible keep an eye on your AWS environment

In a cloud model, the security and compliance of the environment become the responsibility of both the end user and the cloud provider. This is what we call the shared responsibility model, in which every part of the cloud, including the hardware, data, configurations, access rights, and operating system, is protected. Depending on local legislation and the origin of the data being handled (for instance, laws like HIPAA, the GDPR in Europe, or California's CCPA), you may have to enforce strict rules on your environment and log events for audit purposes. AWS CloudTrail helps you achieve this goal. The service can collect and record information coming from your environment and store or send the events to a destination for audit. In addition to security and compliance, it also helps keep track of resource consumption.

Ansible's cloudtrail module lets you leverage the features of the CloudTrail service to monitor and audit user activities and API calls in your AWS environment. A trail is a configuration that describes an event filter and where matching entries should be sent. The recent 5.0.0 release of the amazon.aws collection comes with a new cloudtrail module, which helps create, configure, and delete a trail. The final destination of a trail can be an S3 bucket or a CloudWatch Logs log group. We have also paired the cloudtrail module with a cloudtrail_info module, which collects information about all trails or about a specific one.

In this blog post, we are going to walk through a few configuration use cases and show how Ansible's cloudtrail module can be used to automate them.

You can download the amazon.aws collection (version 5.0.0 or later, which includes the cloudtrail modules) from Ansible automation hub.
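
It can also be installed from the command line with ansible-galaxy:

ansible-galaxy collection install amazon.aws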

Use Case 1 - Get maximum visibility

Unless a trail is used for a specific activity in a specific region, it is a best practice to enable CloudTrail for all regions. By doing so, we maximize visibility into the AWS environment so there is no weak spot (an unmonitored region) that an attacker can exploit. This also ensures that we receive the event history for any new region that AWS launches in the future.

- name: create multi-region trail
  amazon.aws.cloudtrail:
    state: present
    name: myCloudTrail
    s3_bucket_name: mylogbucket
    region: us-east-1
    is_multi_region_trail: true
    tags:
      environment: dev

The cloudtrail_info module can be used to get information about a particular trail or about all trails. If a trail name is not provided as input, the module returns information about all trails, including shadow trails, by default. Shadow trails can be skipped by setting include_shadow_trails to false.

# Gather information about the multi-region trail
- amazon.aws.cloudtrail_info:
    trail_names:
      - arn:aws:cloudtrail:us-east-1:123456789012:trail/myCloudTrail
    include_shadow_trails: false
  register: trail_info

trail_info:
"trail_list": [
            {
                "has_custom_event_selectors": false,
                "has_insight_selectors": false,
                "home_region": "us-east-1",
                "include_global_service_events": true,
                "is_logging": true,
                "is_multi_region_trail": true,
                "is_organization_trail": false,
                "latest_delivery_attempt_succeeded": "",
                "latest_delivery_attempt_time": "",
                "latest_notification_attempt_succeeded": "",
                "latest_notification_attempt_time": "",
                "log_file_validation_enabled": false,
                "name": "myCloudTrail",
                "resource_id": "arn:aws:cloudtrail:us-east-1:123456789012:trail/myCloudTrail",
                "s3_bucket_name": "mylogbucket",
                "start_logging_time": "2022-09-29T11:41:41.752000-04:00",
                "tags": {"environment": "dev"},
                "time_logging_started": "2022-09-29T15:41:41Z",
                "time_logging_stopped": "",
                "trail_arn": "arn:aws:cloudtrail:us-east-1:123456789012:trail/myCloudTrail"
            }
        ]

Use Case 2 - Manage access to S3 buckets

For this use case, we will manage the access given to the S3 buckets where the trail logs are stored. As mentioned earlier, shared responsibility includes the security of the resources as well. S3 buckets are prone to misconfiguration and are a major source of data leaks: a bucket configured with public access allows anyone on the internet to read its data. Ansible's s3_bucket module can be used to set the permissions and policies of CloudTrail's S3 bucket. This bucket can then be passed to the cloudtrail module as the destination for the trail-generated logs.

- name: Create an S3 bucket that blocks public ACLs
  amazon.aws.s3_bucket:
    name: mys3bucket
    state: present
    public_access:
      block_public_acls: true
      ignore_public_acls: true
      block_public_policy: false
      restrict_public_buckets: false

- name: Create trail with secured s3 bucket
  amazon.aws.cloudtrail:
    state: present
    name: myCloudTrail
    s3_bucket_name: mys3bucket
    region: us-east-1
    tags:
      environment: dev

Use Case 3 - Maintain CloudTrail logs integrity

CloudTrail logs are collected to verify the compliance and security of the AWS environment. It is always possible that an attacker gains access and tampers with these logs to obscure their presence. When log file validation is enabled, a digital signature of each log file is generated, which is used to check that the log files are valid and have not been tampered with.

- name: create a trail with log file validation
  amazon.aws.cloudtrail:
    state: present
    name: myCloudTrail
    s3_bucket_name: mylogbucket
    region: us-east-1
    log_file_validation_enabled: true
    tags:
      environment: dev

# Gather information about the trail
- amazon.aws.cloudtrail_info:
    trail_names:
      - arn:aws:cloudtrail:us-east-1:123456789012:trail/myCloudTrail
    include_shadow_trails: false
  register: trail_info

trail_info:
"trail_list": [
            {
                "has_custom_event_selectors": false,
                "has_insight_selectors": false,
                "home_region": "us-east-1",
                "include_global_service_events": true,
                "is_logging": true,
                "is_multi_region_trail": fail,
                "is_organization_trail": false,
                "latest_delivery_attempt_succeeded": "",
                "latest_delivery_attempt_time": "",
                "latest_notification_attempt_succeeded": "",
                "latest_notification_attempt_time": "",
                "log_file_validation_enabled": true,
                "name": "myCloudTrail",
                "resource_id": "arn:aws:cloudtrail:us-east-1:123456789012:trail/myCloudTrail",
                "s3_bucket_name": "mylogbucket",
                "start_logging_time": "2022-09-29T11:41:41.752000-04:00",
                "tags": {"environment": "dev"},
                "time_logging_started": "2022-09-29T15:41:41Z",
                "time_logging_stopped": "",
                "trail_arn": "arn:aws:cloudtrail:us-east-1:123456789012:trail/myCloudTrail"
            }
        ]

Use Case 4 - Encrypt the logs

By default, the S3 buckets are protected by Amazon server-side encryption with Amazon S3-managed encryption keys. To add an extra layer of security, you can use the AWS Key Management Service with keys that you manage directly, which helps protect the log files from an attacker surveying the environment.

- name: Create a KMS key using a lookup for the policy JSON
  amazon.aws.kms_key:
    alias: my-kms-key
    policy: "{{ lookup('template', 'kms_iam_policy_template.json.j2') }}"
    state: present
  register: kms_key_for_logs

- name: Create a CloudTrail trail with a KMS key for encryption
  amazon.aws.cloudtrail:
    state: present
    name: myCloudTrail
    s3_bucket_name: mylogbucket
    kms_key_id: "{{ kms_key_for_logs.key_id }}"

Similar to the use cases above, many other parameters help keep CloudTrail logs secure, compliant, and manageable. For more information on how to configure CloudTrail and retrieve the configuration of an existing trail, please refer to the amazon.aws.cloudtrail and amazon.aws.cloudtrail_info documentation.
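
As one more example, removing a trail uses the same cloudtrail module with state: absent; a minimal sketch, reusing the trail name from the examples above:

- name: Delete the trail when it is no longer needed
  amazon.aws.cloudtrail:
    state: absent
    name: myCloudTrail
    region: us-east-1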

Now you have seen four use cases for Red Hat Ansible Automation Platform and CloudTrail, and how they can easily and seamlessly work together to accomplish cloud automation tasks. If you want more blogs on Ansible and AWS, please let us know!







Best of Fest 2022

At AnsibleFest 2022, the power of automation was on full display. Through sessions, workshops, labs and more, we explored how to transform enterprise and industry through automation. There were a lot of exciting announcements made on both days, and in case you missed them, we are going to dive into what's new!

Ansible and AWS

We are also thrilled to announce a new AWS Marketplace offering: Red Hat Ansible Automation Platform. By offering Ansible Automation Platform as a pre-integrated service that can be quickly deployed from cloud marketplaces, we are meeting our customers where they are, while giving them the flexibility to deliver any application, anywhere, without additional overhead or complexity. Whether you are automating your hybrid cloud or multi-cloud environments, Ansible Automation Platform acts as a single platform that provides consistency, visibility, and control to help you manage these environments at scale. Ansible is the IT automation "glue" for bringing your cloud, network, bare-metal and cloud-native infrastructure together, providing the functionality to coordinate and manage across hybrid cloud environments in a simple and efficient way. Interested in learning more? Check out the press release.

Automation at the Edge

Ansible Automation Platform provides a framework for building and operating IT automation at scale. What this means for the edge, much like the data center, is that users across an entire organization can create, share, and manage automation. They can develop and apply guidelines for using automation within individual groups. They can write tasks that capture existing knowledge so that non-IT staff can leverage them, allowing end-to-end automation to be deployed. Ansible Automation Platform uses containerization to package, distribute, and execute automation across environments securely via automation execution environments. This enables organizations to rapidly and consistently extend IT services to the edge while maintaining a focus on security, helping them simplify capacity scaling, increase resiliency, and improve consistency. Learn more about automation at the edge here.

Event-Driven Ansible

Event-Driven Ansible is a new capability that we're making available to the entire Ansible open source community in developer preview. With Event-Driven Ansible, you can eliminate low-level tasks from the day-to-day routine so you have more time to focus on innovations. This means a happier, more productive, and more engaged team. It is fast, accurate, and will free you (and your teams) to work on the things you WANT to be doing, without being dragged down by all the things you HAVE to do. Event-Driven Ansible will support a range of use cases, and here are a few good ones to get started with:

  • Automating remediation of common problems, like resetting a network router that's down.
  • Gathering information to solve problems faster, like information about a server configuration or buffer pool size so when you get the service ticket, the information you need is already there.
  • Administering user requests, like "I can't log in" or "I cannot access the application".

We are excited about the future of automation and what is possible with Event-Driven Ansible.

Project Wisdom

Project Wisdom is a Red Hat initiative, developed in close collaboration with IBM Research, to give Ansible artificial intelligence superpowers. The first goal is to bring together automation novices and Ansible experts while enabling new automators to drastically reduce the challenge of learning and mastering Ansible. The first capability we are using AI for is content generation. The AI models underneath Project Wisdom are able to generate Ansible Playbooks or roles that are both syntactically correct and functional. Head to redhat.com/wisdom for more information on how to get involved.

Ansible Automation Platform 2

Ansible Automation Platform 2 is built to enable a trusted automation supply chain. In the upcoming Ansible Automation Platform 2.3 release, digital signing will be supported for containers, playbooks and collections. We're also excited to introduce Ansible validated content, which complements the existing ecosystem of Red Hat Ansible Certified Content Collections. Ansible validated content helps your teams to start automating faster by following a trusted, expert-led, opinionated path for performing operations and tasks on both Red Hat and third party platforms. Initially, Ansible validated content will be pre-loaded into private automation hub.

Community

We are so fortunate that we are a part of one of the largest, most vibrant open source project communities in the world. So while the landscape may be shifting around us, Ansible continues to push forward and evolve with the times. Ansible is celebrating its 10th anniversary this year! Within our expansive community, the new Working Groups focus on expanding the Ansible ecosystem with the development of Ansible Content Collections. First spun up by our team last year, Matrix has made a huge difference in our ability to connect and engage with the Ansible community. So far we've spun up 32 unique chat rooms, with 4200+ members and nearly 80k messages sent in the past 6 months. Matrix's ability to bridge with IRC gave us a strong foundation upon which to build. Join the Working Groups and become a part of the conversation: https://matrix.to/#/#social:ansible.com




Getting Started with Event-Driven Ansible

As one technology advances, it expands the possibilities for other technologies and offers the solutions of tomorrow for the challenges we face today. AnsibleFest 2022 brings us new advances in Ansible automation that are as bright as they are innovative. I am talking about the Event-Driven Ansible developer preview.

Automation allows us to give our systems and technology speed and agility while minimizing human error. However, when it comes to trouble tickets and issues, we are often left to traditional and manual methods of troubleshooting and information gathering. We inherently slow things down and interrupt our businesses. We have to gather information, try our common troubleshooting steps, confirm with different teams, and eventually, we need to sleep.

The following image illustrates a support lifecycle with many manual steps and hand-offs:

support lifecycle diagram

One application of Event-Driven Ansible is to remediate technology issues in near real-time, or at least trigger troubleshooting and information collection in an attempt to find the root cause of an outage while your support teams handle other issues.

The following image illustrates how event-driven automation is used in the support lifecycle: fewer steps, faster Mean-Time-To-Resolution.

Event-Driven Ansible in the support lifecycle

Event-Driven Ansible has the potential to change the way we respond to issues and illuminates many new automation possibilities. So, how do you take the next step with Event-Driven Ansible?

Let’s get started!

Event-Driven Ansible is currently in developer preview; however, there is nothing stopping us from installing ansible-rulebook, the CLI component of Event-Driven Ansible, and building our first rulebook. Event-Driven Ansible contains a decision framework built on Drools. We need a rulebook to tell the system which events to flag and how to respond to them. Rulebooks are written in YAML and are used much like traditional Ansible Playbooks, which makes them easier to understand and build. One key difference between playbooks and rulebooks is the if-this-then-that logic a rulebook needs to make an event-driven automation approach work.

A rulebook is comprised of three main components:

  • Sources define which event source we will use. These sources come from source plugins which have been built to accommodate common use cases. With time, more and more sources will be available. There are some source plugins that are available already, including: webhooks, Kafka, Azure service bus, file changes, and alertmanager.

  • Rules define conditionals we will try to match from the event source. Should the condition be met, then we can trigger an action.

  • Actions trigger what you need to happen should a condition be met. Some of the current actions are: run_playbook, run_module, set_fact, post_event, and debug.


Now, let's install ansible-rulebook and start with our very first event.

To install ansible-rulebook, we can install the ansible.eda Collection from Ansible Galaxy, which includes a playbook that installs everything we need.

ansible-galaxy collection install ansible.eda

Once the collection is installed, you can run the install-rulebook-cli.yml playbook. This installs everything you need to get started with ansible-rulebook on the command line. It is currently supported on Mac and Fedora.

Note: You can also skip the method above and install ansible-rulebook with pip, followed by installing the ansible.eda collection. Java 11+ is required if you use this method, and we suggest using openjdk.

pip install ansible-rulebook

ansible-galaxy collection install ansible.eda

If you want to contribute to ansible-rulebook, you can also fork the following GitHub repository. This repository also contains instructions for setting up your development environment and how to build a test container.

Let's build an example rulebook that will trigger an action from a webhook. We will be looking for a specific payload from the webhook, and if that condition is met from the webhook event, then ansible-rulebook will trigger the desired action. Below is our example rulebook:

---
- name: Listen for events on a webhook
  hosts: all

  ## Define our source for events
  sources:
    - ansible.eda.webhook:
        host: 0.0.0.0
        port: 5000

  ## Define the conditions we are looking for
  rules:
    - name: Say Hello
      condition: event.payload.message == "Ansible is super cool"

      ## Define the action we should take should the condition be met
      action:
        run_playbook:
          name: say-what.yml

If we look at this example, we can see the structure of the rulebook: our sources, rules, and actions are defined. We are using the webhook source plugin from the ansible.eda collection, and we are looking for a message payload from our webhook that contains "Ansible is super cool". Once this condition has been met, our defined action will trigger, which in this case is to run a playbook.
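
The rulebook calls say-what.yml. For completeness, here is a minimal playbook consistent with the run output shown further down; the play name and debug message match that output, while the rest is a plausible sketch:

---
- name: say thanks
  hosts: localhost
  gather_facts: false
  tasks:
    # An unnamed debug task, matching the "TASK [debug]" line in the output below
    - ansible.builtin.debug:
        msg: "Thank you, my friend!"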

One important thing to note about ansible-rulebook is that it is not like ansible-playbook, which runs a playbook and exits once the playbook has completed. ansible-rulebook continues to run, waiting for events and matching those events. It only exits upon a shutdown action or if there is an issue with the event source itself, for example, if a website you are watching with the url_check plugin stops responding.

With our rulebook built, we will simply tell ansible-rulebook to use it as a ruleset and wait for events:

root@ansible-rulebook:/root# ansible-rulebook --rules webhook-example.yml -i inventory.yml --verbose

INFO:ansible_events:Starting sources
INFO:ansible_events:Starting sources
INFO:ansible_events:Starting rules
INFO:root:run_ruleset
INFO:root:{'all': [{'m': {'payload.message': 'Ansible is super cool!'}}], 'run': <function make_fn.<locals>.fn at 0x7ff962418040>}
INFO:root:Waiting for event
INFO:root:load source
INFO:root:load source filters
INFO:root:Calling main in ansible.eda.webhook

Now, ansible-rulebook is ready and it's waiting for an event to match. If a webhook is triggered but the payload does not match our condition in our rule, we can see it in the ansible-rulebook verbose output:

…
INFO:root:Calling main in ansible.eda.webhook
INFO:aiohttp.access:127.0.0.1 [14/Oct/2022:09:49:32 +0000] "POST /endpoint HTTP/1.1" 200 158 "-" "curl/7.61.1"
INFO:root:Waiting for event

But once our payload matches what we are looking for, that’s when the magic happens, so we will simulate a webhook with the correct payload:

curl -H 'Content-Type: application/json' -d "{\"message\": \"Ansible is super cool\"}" 127.0.0.1:5000/endpoint

INFO:root:Calling main in ansible.eda.webhook
INFO:aiohttp.access:127.0.0.1 [14/Oct/2022:09:50:28 +0000] "POST /endpoint HTTP/1.1" 200 158 "-" "curl/7.61.1"
INFO:root:calling Say Hello
INFO:root:call_action run_playbook
INFO:root:substitute_variables [{'name': 'say-what.yml'}] [{'event': {'payload': {'message': 'Ansible is super cool'}, 'meta': {'endpoint': 'endpoint', 'headers': {'Host': '127.0.0.1:5000', 'User-Agent': 'curl/7.61.1', 'Accept': '*/*', 'Content-Type': 'application/json', 'Content-Length': '36'}}}, 'fact': {'payload': {'message': 'Ansible is super cool'}, 'meta': {'endpoint': 'endpoint', 'headers': {'Host': '127.0.0.1:5000', 'User-Agent': 'curl/7.61.1', 'Accept': '*/*', 'Content-Type': 'application/json', 'Content-Length': '36'}}}}]
INFO:root:action args: {'name': 'say-what.yml'}
INFO:root:running Ansible playbook: say-what.yml
INFO:root:Calling Ansible runner

PLAY [say thanks] **************************************************************

TASK [debug] *******************************************************************
ok: [localhost] => {
    "msg": "Thank you, my friend!"
}

PLAY RECAP *********************************************************************
localhost                  : ok=1    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

INFO:root:Waiting for event

We can see from the output above that the condition was met from the webhook, and ansible-rulebook then triggered our action, which was run_playbook. The playbook we defined is triggered, and once it completes, we revert back to "Waiting for event".

Event-Driven Ansible opens up the possibilities of faster resolution and greater automated observation of our environments. It has the possibility of simplifying the lives of many technical and sleep-deprived engineers. The current ansible-rulebook is easy to learn and work with, and the graphical user interface EDA-Server will simplify this further.

What can you do next?

Whether you are beginning your automation journey or a seasoned veteran, there are a variety of resources to enhance your automation knowledge:




Introducing the Event-Driven Ansible developer preview

Today at AnsibleFest 2022, Red Hat announced an exciting new developer preview for Event-Driven Ansible. Most customers are on a journey toward full end-to-end automation, and there are many paths you can take along the way. Event-Driven Ansible is a new way to enhance and expand automation. It improves IT speed and agility, while enabling consistency and resilience.

By fully automating necessary but routine tasks, you and your team will have more time to focus on interesting engineering challenges and new innovations. For example, what if you no longer needed to pause critical work to manually add technical detail to a service ticket? Or address a user password reset request? Or reset a router as a first troubleshooting step? With Event-Driven Ansible, the friction in your day can be dramatically reduced, leaving more time to work on important projects, with some added work-life balance.

Why a developer preview?

The Event-Driven Ansible technology was developed by Red Hat and is available on GitHub as a developer preview. Community input is essential. Since we are building a solution to best meet your needs, we're providing an opportunity for you to advocate for those needs. We ask that both technology providers and end users give it a try and tell us what you think. There are several ways you can give feedback: via comments on GitHub, during our office hours on November 16, 2022, or via email at event-driven-automation@redhat.com.

Event-driven automation is part of an ecosystem

Any event-driven solution must be able to work within multi-vendor environments. So, we ask technology partners to not only try the Event-Driven Ansible developer preview, but also begin building Ansible Content Collections so that our solutions complement each other and make it faster and easier for joint customers to use them.

Designed for simplicity

Event-Driven Ansible is designed for simplicity and flexibility, much like we offer today in Red Hat Ansible Automation Platform.  What do we mean by this for Event-Driven Ansible? 

Until now, most event-driven and "self-healing" automation projects have been complex and time-consuming to deliver because much of the solution is custom developed to meet a singular need. For example: automatically shut down network firewalls when certain activity patterns occur, then notify the responsible teams. This is a great and essential solution, but only for this one need.

Event-Driven Ansible is designed to be more flexible, with faster and more cost-effective ways to stand up new automation projects across any use case. By writing an Ansible Rulebook (similar to Ansible Playbooks, but more oriented to "if-then" scenarios) and allowing Event-Driven Ansible to subscribe to an event listening source, your teams can more quickly and easily automate a variety of tasks across the organization.

Think of it like a crescent wrench: a single tool that is easy to adjust to different size bolts.  Same idea here - a single automation tool that addresses a broad variety of IT automation needs. 

What is Event-Driven Ansible?

Event-Driven Ansible is a highly scalable, flexible automation capability that works with event sources such as other software vendors' monitoring tools. In an automatic remediation use case, these vendor tools watch your IT solutions and identify "events," such as an outage. Event-Driven Ansible captures your team's technical knowledge of how to act on an identified event (an outage, in our example) as rules in Ansible Rulebooks. When the event occurs, Event-Driven Ansible matches the rule to the event and automatically implements the documented changes or response in the rulebook to handle it. In our outage example, this may be an action such as resetting or rebooting the non-responding asset.

EDA diagram

There are three major building blocks in the Event-Driven Ansible model: sources, rules, and actions. Each plays a key role in completing the workflow described above (a sketch follows the list):

  • Sources are third party vendor tools that provide the events. They define and identify where events occur, then pass them to Event-Driven Ansible.  Current source support includes Prometheus, Sensu, Red Hat solutions, webhooks and Kafka, as well as custom "bring your own" sources.
  • Rules document your desired handling of the event via Ansible Rulebooks. They use familiar YAML-like structures and follow an "if this then that" model.  Ansible Rulebooks may call Ansible Playbooks or have direct module execution functions.
  • Actions are the result of executing the Ansible Rulebook's instructions when the event occurs.
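
To make the three building blocks concrete, here is a minimal, hypothetical rulebook skeleton; the source, condition, and playbook name are placeholders rather than a prescribed layout:

- name: Handle monitoring events
  hosts: all
  sources:                # where events come from
    - ansible.eda.webhook:
        host: 0.0.0.0
        port: 5000
  rules:                  # "if this..."
    - name: Remediate a reported outage
      condition: event.payload.status == "down"   # placeholder condition
      action:             # "...then that"
        run_playbook:
          name: remediate.yml                     # placeholder playbook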

More about integrations

Event-Driven Ansible allows you to subscribe to sources, listen for events, and then act on those events. A number of source plugins have already been created and can be used.

We are enabling events from partner technologies by providing event source plugins for webhooks and for Kafka. Many partner tools can use Kafka and webhooks for integration into the Event-Driven Ansible ecosystem. Once Event-Driven Ansible receives events from these sources, it can match rules against them based on the instructions you have specified in Ansible Rulebooks. Technology providers can also develop event source plugins, which integrate their tools with Event-Driven Ansible more directly, and distribute them via Content Collections.
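
As an illustration, subscribing to a Kafka topic is just a different entry in the sources section of a rulebook; in this sketch the broker host, port, and topic are placeholders:

sources:
  - ansible.eda.kafka:
      host: broker.example.com   # placeholder Kafka broker
      port: 9092
      topic: monitoring-alerts   # placeholder topic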

Open source plugins are also supported. These plugins enable Event-Driven Ansible to process a number of different events. They include: 

  • Kafka for event streams
  • webhooks
  • watchdog, a file system watcher
  • url_check, to check the status of a URL
  • range, an event generation plugin
  • file, which loads facts from YAML
  • On the roadmap: integrations that support processing events from cloud service providers

In addition to all these integrations that enable events to prompt action, it is important to note that Red Hat Ansible Automation Platform does not require an agent to be present on a target receiving an automated action. This is convenient and ideal for technologies that cannot host an agent, such as an edge device or network router, and it makes Event-Driven Ansible a simpler solution to deploy.

Starting small, thinking big: recommended use cases

Red Hat often recommends a "start small, think big" approach to growing your automation maturity, and Event-Driven Ansible is no exception. We think IT service management is a great place to start, and we suggest you look for simple tasks that are repeated very often to see the most benefit.

You can use Event-Driven Ansible Rulebooks to enhance service tickets, do basic remediation of tickets and issues, and manage the variety of end-user requests that you receive every day, like password resets.

Additionally, you can automate use cases across all of the common areas where you automate today (infrastructure, network, cloud, security, and edge) for service management and other tasks. Once you get the basics down, growing the number, scope, and sophistication of your Ansible Rulebooks is easy.

Getting started and sharing feedback

Start by reviewing this web page, where you will find more details on Event-Driven Ansible and can access additional resources such as a self-paced lab, a how-to video, and more details about this solution. You will also find a registration link to our first Office Hours event, where you can ask questions and learn tips and techniques.

Once you have some familiarity, use the developer preview code found here. In summary, your basic steps are to download and install Event-Driven Ansible from the GitHub repository, configure the event sources Event-Driven Ansible subscribes to, write your Ansible Rulebook(s), and start listening to events.

As a community project, we ask for your feedback through GitHub comments, during our Office Hours, or via email at event-driven-automation@redhat.com.

Looking ahead for Event-Driven Ansible

While this technology is a community project, we have bigger ideas for shaping this capability to meet your needs. In addition, we hope to integrate Event-Driven Ansible as a component of Red Hat Ansible Automation Platform in the future. With Red Hat Ansible Automation Platform, you would gain access to everything the platform has to offer, including RBAC and other controls, and the ability to use a single automation platform even more flexibly, both for manually initiated automation via Ansible Playbooks and for fully automated actions via Ansible Rulebooks.

I hope this has provided a good overview of Event-Driven Ansible.










Using Ansible and Packer, From Provisioning to Orchestration

Red Hat Ansible Automation Platform can help you orchestrate, operationalize and govern your hybrid cloud deployments.  In my last public cloud blog, I talked about "Two Simple Ways Automation Can Save You Money on Your AWS Bill" and, similarly to Ashton's blog "Bringing Order to the Cloud: Day 2 Operations in AWS with Ansible", we both wanted to look outside the common public cloud use case of provisioning and deprovisioning resources and instead look at automating common operational tasks.  For this blog post I want to cover how the Technical Marketing team for Ansible orchestrates a pipeline for demos and workshops with Ansible, and how we integrate that with custom AMIs (Amazon Machine Images) created with Packer.  Packer is an open source tool that allows IT operators to standardize and automate the process of building system images.

For some of our self-paced interactive hands-on labs on Ansible.com, we can quickly spin up images in seconds.  In an example automation pipeline we will:

  1. Provision a virtual instance.
  2. Use Ansible Automation Platform to install an application; in my case, I am literally installing our product Ansible Automation Platform (is that too meta?).
  3. After the application install, set up the lab guides, pre-load automation controller with some job templates, create inventory and credentials and even set up SSL certificates.  

While this is fast, it might take a few minutes to load, and web users are unlikely to be patient.  The Netflix era means that people want instant gratification!  Installing automation controller might take five to ten minutes, so I need a faster method to deploy.

cloud automation pipeline diagram

What I can do is combine our normal Ansible automation pipeline with Packer and pre-build the cloud instances so they already have the application installed, and are configured and ready to go as soon as they boot.  Packer will provision a specific machine image on my public cloud (Azure, AWS, GCP), run the commands and changes I need, and then publish a new image with all the changes I made to the base image.  In my case I use Ansible the same way.  In my Packer HCL (HashiCorp Configuration Language) file I have an Ansible provisioner:

 provisioner "ansible" {
      command = "ansible-playbook"
      playbook_file = "pre_build_controller.yml"
      user = "ec2-user"
      inventory_file_template = "controller ansible_host={{ .Host }} ansible_user={{ .User }} ansible_port={{ .Port }}"
      extra_arguments = local.extra_args

    }

The Red Hat Ansible tech marketing example can be found on GitHub.

This simple provisioner plugin executes the Ansible Playbook pre_build_controller.yml.  I can also use Ansible Automation Platform to orchestrate the whole process by kicking off Packer and then continuing on (see the sketch after the diagram below).  Anything that I can do ahead of time, I can pre-build into the image.  This means there is less automation to do at boot time (or what is sometimes referred to as "automation just in time").  The new process looks like this diagram:

create pre-built image diagram
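
As mentioned above, kicking off Packer from Ansible Automation Platform can be as simple as a command task; this is a hypothetical sketch, and the template filename is a placeholder:

- name: Build the pre-built controller image with Packer
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Run packer build against the HCL template
      ansible.builtin.command:
        cmd: packer build pre_build_controller.pkr.hcl   # placeholder filename
      changed_when: true   # a build always produces a new image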

These two processes, building images and serving a demo environment, are actually independent of each other.  Depending on how often a pre-built image needs to be rebuilt, we can schedule that in automation controller, or even generate images on-demand via webhooks.  On-demand generation means that as soon as someone changes an Ansible Playbook relevant to anything pre_build, we can have Ansible Automation Platform create the new image immediately, and even test it!

Sharing and copying cloud instances

Once we create a pre_built AMI, we need to make sure we can use it in multiple regions and on other accounts.  With public marketplace instances you can use cool automation tricks like dynamic lookups with the ec2_ami_info module, but we have now essentially created private AMIs that we need to copy to other regions, or share with other AWS accounts so they have access to these pre_built images.  To solve this problem we can use automation, and I have created an Ansible Content Collection, ansible_cloud.share_ami.
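
For reference, a dynamic lookup with ec2_ami_info might look like this sketch; the name filter is a placeholder to adapt to your own images:

- name: Find matching AMIs owned by this account
  amazon.aws.ec2_ami_info:
    owners:
      - self
    filters:
      name: "RHEL-8.6*"   # placeholder name pattern
  register: found_amis

- name: Pick the newest image
  ansible.builtin.set_fact:
    latest_ami: "{{ (found_amis.images | sort(attribute='creation_date') | last).image_id }}"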

This collection currently has two roles that will assist cloud administrators: copy and share.

Copy

This role will copy an AMI from one region to any other specified regions.  This means you can use Packer to create the AMI just once and have Ansible take care of copying it to any other regions, returning a list of new AMIs per region.

- name: copy ami
  include_role:
    name: ansible_cloud.share_ami.copy
  vars:
    ami_list: "{{ my_ami_list }}"
    copy_to_regions: "{{ my_copy_to_regions }}"

Where your variable file looks like this:

my_ami_list:
  ap-northeast-1: ami-01334example
  ap-southeast-1: ami-0b3f3example
  eu-central-1: ami-03a5732example
  us-east-1: ami-01da94de9cexample
my_copy_to_regions:
  - us-west-1
  - us-east-2

In this case, the four AMIs will be copied to us-west-1 and us-east-2, with the new AMI identifiers returned to your terminal window or the automation controller console.

Share

This role will share an AMI from one account and region with another account (in the same region).  This allows you to share your pre_built AMIs with as many accounts as you want really quickly.

- name: share ami
  include_role:
    name: ansible_cloud.share_ami.share
  vars:
    user_id_list: "{{ account_list }}"
    ami_list: "{{ my_ami_list }}"

Where your variable file looks like this:

my_ami_list:
  ap-northeast-1: ami-01334example
  ap-southeast-1: ami-0b3f3example
  eu-central-1: ami-03a5732example
  us-east-1: ami-01da94de9cexample
  us-east-2: ami-009f8b2c6dexample
account_list:
  - "11463example"
  - "90073example"
  - "71963example"
  - "07923example"

This would share these five AMIs with the four accounts listed.  There are also two optional variables for the share role: new_ami_name, which names the shared AMI (e.g. adds the tag name: "whatever you put"), and new_tag, which adds a hard-coded ansiblecloud tag (e.g. ansiblecloud: "whatever you put").  This could be further customized to add as many tags as you want to your AMIs to help keep track of them.

new_ami_name: "RHEL 8.6 with automation controller"
new_tag: "my test"

Now you can see one of the many ways that Ansible Automation Platform and Packer can easily and seamlessly work together to accomplish cloud automation tasks.  If you want more blogs on Ansible and Packer or Ansible and Terraform, please let us know!