Cloud computing has become an essential factor in IT transformation and business innovation. The highly dynamic nature of cloud environments, where new resources are constantly being added and removed, poses new challenges. One of the main challenges organizations face is the lack of visibility into the cloud environment. As cloud computing continues to grow in complexity, it can be challenging to keep track of all the different resources and applications that make up the infrastructure. This lack of visibility can make it difficult to maintain security policies and configurations, making the infrastructure vulnerable to attacks.
In this context, another challenge is the need to maintain compliance with industry regulations and standards. Depending on the industry and location, there may be specific regulations that organizations must comply with when storing and processing sensitive data in the cloud. Ensuring compliance can be a time-consuming and costly process.
Without automation and proactive monitoring, cloud environments are difficult and complex to manage. In this context, Ansible offers a plethora of tools, such as Ansible validated content and Event-Driven Ansible, that can help you to successfully mitigate security threats while also streamlining your operations and reducing costs.
In this blog post, we will show you how to leverage the Ansible validated content collection cloud.aws_ops and Event-Driven Ansible to master your cloud computing journey.
Event-Driven Ansible at a glance
Event-Driven Ansible refers to a method of running Ansible that allows it to respond automatically to events occurring within a system. This approach allows Ansible to react to changes in real-time and automate responses to events such as configuration changes, application failures, or security breaches.
Event-Driven Ansible relies on event sources such as webhooks or other notification mechanisms to trigger the automatic execution of Ansible Playbooks in response to events. For example, if a server goes down, Ansible can be configured to automatically start a new server to replace it.
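To make this concrete, here is a minimal, illustrative rulebook sketch that listens on a webhook and runs a remediation playbook when a condition matches. The playbook path, port, and payload field are hypothetical examples, not taken from the cloud.aws_ops collection:

```yaml
# Illustrative sketch: react to a webhook event with a remediation playbook.
# The port, payload field, and playbook path below are placeholders.
- name: Restart a failed service on alert
  hosts: all
  sources:
    - ansible.eda.webhook:     # built-in webhook event source plugin
        host: 0.0.0.0
        port: 5000
  rules:
    - name: Run remediation when a monitored service reports failure
      condition: event.payload.status == "down"
      action:
        run_playbook:
          name: playbooks/restart_service.yml
```

A monitoring tool would POST a JSON payload to port 5000, and any payload with `status: down` would trigger the playbook.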
To use Event-Driven Ansible, you configure it to listen for specific events and set appropriate triggers that execute playbooks in response.
Overall, Event-Driven Ansible can be an extremely powerful tool for automating and managing complex systems, but it does require a good deal of planning and configuration to set up and maintain. More information on this can be found here.
Ansible Validated Content at a glance
Ansible validated content is a collection of pre-tested, validated, and trusted Ansible roles and playbooks. This content is designed to provide a secure, reliable, and consistent way to manage infrastructure across deployments. The validated content can be used out-of-the-box, reducing the time and effort required to create custom Ansible content from scratch.
Let’s take a closer look at the Ansible validated content collection cloud.aws_ops. This collection includes a variety of Ansible roles to help with day-two operations of AWS resources. The collection’s content has been highlighted by Nuno Martins in the blog post titled “Crank up your automation with Ansible validated content”.
For this blog post, we will be focusing on two roles of this Ansible validated content collection:
- cloud.aws_ops.enable_cloudtrail_encryption_with_kms encrypts an AWS CloudTrail trail using the AWS Key Management Service (AWS KMS) customer managed key you specify.
- cloud.aws_ops.awsconfig_multiregion_cloudtrail creates or deletes an AWS CloudTrail trail for multiple regions.
Next, let's put the Ansible validated content collection cloud.aws_ops and Event-Driven Ansible to the test using a typical cloud scenario of encrypting CloudTrail logs.
Encrypting CloudTrail logs
Suppose you have a large AWS account with multiple users and services that make API calls to your resources.
AWS CloudTrail is a service that logs all the API calls made in your AWS account, including API calls made by other AWS services. By default, CloudTrail logs are stored in an S3 bucket in an unencrypted form. However, you want to ensure that your CloudTrail logs are secure and tamper-proof. To achieve this, you can enable encryption for CloudTrail logs using AWS KMS.
To enable encryption for CloudTrail logs, you would create a KMS key that is used to encrypt the S3 bucket where your CloudTrail logs are stored. You would then configure CloudTrail to use this key to encrypt the logs.
With encryption enabled, all CloudTrail logs are automatically encrypted when they are written to the S3 bucket. The logs can only be decrypted using the KMS key that you specified. This ensures that your logs are secure and tamper-proof, and can only be accessed by authorized users and services.
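Outside of the validated roles, the same configuration can also be expressed directly as an Ansible task. Below is a hedged sketch assuming the amazon.aws.cloudtrail module available in recent amazon.aws releases; the trail, bucket, and key alias names are illustrative placeholders:

```yaml
# Sketch: point an existing trail at a customer managed KMS key so that
# new log files written to S3 are encrypted. Names are placeholders.
- name: Enable KMS encryption on a CloudTrail trail
  hosts: localhost
  tasks:
    - name: Update the trail to use a customer managed KMS key
      amazon.aws.cloudtrail:
        state: present
        name: my-demo-trail
        s3_bucket_name: my-demo-trail-bucket
        kms_key_id: alias/my-demo-key
```

The validated roles wrap this kind of logic with additional validation and assertions, which is why we use them in the rest of this post.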
Encrypting AWS CloudTrail logs is important for several reasons:
- Protect sensitive information: CloudTrail logs contain a wealth of information about the AWS account, including API calls, user identities, and resource information. Encrypting CloudTrail logs helps protect this sensitive information from unauthorized access or tampering.
- Compliance requirements: Many compliance standards, such as HIPAA and PCI DSS, require log encryption to protect sensitive information. By encrypting CloudTrail logs, you can help your organization remain compliant with these standards.
- Prevent tampering: CloudTrail’s log encryption helps prevent logs from being tampered with. This helps maintain log integrity and an accurate record of all API calls made to your AWS account.
- Secure data: CloudTrail log’s encryption provides an additional layer of security for data. In the event that your S3 bucket is compromised, the encrypted logs cannot be accessed without the encryption key.
Set up automation
Let’s deploy the above-highlighted cloud scenario using a playbook that deploys and configures the AWS infrastructure. In this post, we're going to run the playbook multiple times, using different values for the operation parameter to modify the trail after the initial deployment. This will allow us to introduce configuration drift into the system and see how Event-Driven Ansible can help manage and react to drift. The operation parameter accepts the following values to introduce drift:
- disable_encryption: Disables trail encryption.
- delete_trail: Deletes the AWS CloudTrail trail.
- disable_key: Disables the KMS key used to encrypt the S3 bucket.
- delete_key: Deletes the KMS key used to encrypt the S3 bucket.
Next, let’s stand up our AWS infrastructure with the following command:
ansible-playbook manage_playbook.yml --extra-vars operation=deploy
We should now have our AWS infrastructure up and running. To enable Event-Driven Ansible in the cloud.aws_ops collection, we created two folders:
- rulebooks hosts the rulebook that tells the system what events to flag and how to respond to them.
- playbooks/eda hosts the playbooks that implement the logic to mitigate the drift. Each playbook inside playbooks/eda handles a specific drift.
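Putting the pieces together, the relevant layout in the collection looks roughly like this (abridged sketch; only the files discussed in this post are shown):

```
cloud.aws_ops/
├── rulebooks/
│   └── aws_manage_cloudtrail_encryption.yml
└── playbooks/
    └── eda/
        ├── aws_restore_cloudtrail_encryption.yml
        ├── aws_restore_cloudtrail.yml
        └── aws_restore_kms_key.yml
```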
Let’s take a look at the rulebook we wrote for this use case:
- name: Rules for cloud.aws_ops to ensure the CloudTrail exists and is encrypted
  hosts: all
  sources:
    - ansible.eda.aws_cloudtrail:
        region: 'us-east-1'
        delay_seconds: 5
  rules:
    - name: Enable Trail encryption
      condition: event.CloudTrailEvent.eventName=="UpdateTrail" and event.CloudTrailEvent.requestParameters.kmsKeyId=="" and event.CloudTrailEvent.requestParameters.name==vars.cloudtrail_name
      action:
        run_playbook:
          name: playbooks/eda/aws_restore_cloudtrail_encryption.yml
    - name: Re-create the CloudTrail
      condition: event.CloudTrailEvent.eventName=="DeleteTrail" and event.CloudTrailEvent.requestParameters.name==vars.cloudtrail_name
      action:
        run_playbook:
          name: playbooks/eda/aws_restore_cloudtrail.yml
    - name: Cancels the deletion of the KMS key and re-enables it
      condition: event.CloudTrailEvent.eventName=="ScheduleKeyDeletion" or event.CloudTrailEvent.eventName=="DisableKey"
      action:
        run_playbook:
          name: playbooks/eda/aws_restore_kms_key.yml
As already explained by Joe Pisciotta in his previous blog post “Introducing the Event-Driven Ansible developer preview”, a rulebook comprises three main components:
- sources define which event source we will use
- rules define conditionals we will try to match from the event source
- actions trigger what you need to happen should a condition be met
In our case, we used the ansible.eda.aws_cloudtrail event source plugin to get events from AWS CloudTrail. This plugin polls an AWS CloudTrail for events every 5 seconds. Next, this rulebook implements a ruleset with three rules as follows:
Rule #1: Enable trail encryption
This rule handles the case when trail encryption is disabled. It is triggered when an UpdateTrail operation is performed on the trail and the parameters contained in the UpdateTrail request match these conditions: event.CloudTrailEvent.requestParameters.kmsKeyId=="" and event.CloudTrailEvent.requestParameters.name==vars.cloudtrail_name. The action taken to mitigate this drift runs the playbooks/eda/aws_restore_cloudtrail_encryption.yml playbook, shown below.
---
- name: Include 'cloud.aws_ops.enable_cloudtrail_encryption_with_kms' role
  hosts: localhost
  tasks:
    - name: Include 'cloud.aws_ops.enable_cloudtrail_encryption_with_kms' role
      ansible.builtin.include_role:
        name: cloud.aws_ops.enable_cloudtrail_encryption_with_kms
      vars:
        enable_cloudtrail_encryption_with_kms_trail_name: "{{ cloudtrail_name }}"
        enable_cloudtrail_encryption_with_kms_kms_key_id: "{{ kms_key_alias }}"
This playbook runs the Ansible validated role cloud.aws_ops.enable_cloudtrail_encryption_with_kms, which re-enables the trail’s encryption and restores the system to its previous state.
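For orientation, the rule's condition inspects fields of the raw CloudTrail event delivered by the event source plugin. A simplified, illustrative event that would match Rule #1 could look like the following (abridged; only the inspected fields are shown, and the trail name assumes the vars.yml used later in this post):

```yaml
# Abridged, illustrative CloudTrail event as seen by the rulebook.
event:
  CloudTrailEvent:
    eventName: UpdateTrail
    requestParameters:
      name: ansible-cloudtrail-demo-eda-trail   # matches vars.cloudtrail_name
      kmsKeyId: ""                              # encryption was removed
```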
Rule #2: Re-create the trail
This rule handles the case when the trail is deleted. When the conditions event.CloudTrailEvent.eventName=="DeleteTrail" and event.CloudTrailEvent.requestParameters.name==vars.cloudtrail_name are met, the action taken is to run the playbooks/eda/aws_restore_cloudtrail.yml playbook. Let’s take a quick look at the playbook.
---
- name: Include 'cloud.aws_ops.awsconfig_multiregion_cloudtrail' and 'cloud.aws_ops.enable_cloudtrail_encryption_with_kms' roles
  hosts: localhost
  tasks:
    - name: Include 'cloud.aws_ops.awsconfig_multiregion_cloudtrail' role
      ansible.builtin.include_role:
        name: cloud.aws_ops.awsconfig_multiregion_cloudtrail
      vars:
        operation: create
        bucket_name: "{{ s3_bucket_name }}"
        key_prefix: "{{ key_prefix }}"
        trail_name: "{{ cloudtrail_name }}"
    - name: Include 'cloud.aws_ops.enable_cloudtrail_encryption_with_kms' role
      ansible.builtin.include_role:
        name: cloud.aws_ops.enable_cloudtrail_encryption_with_kms
      vars:
        enable_cloudtrail_encryption_with_kms_trail_name: "{{ cloudtrail_name }}"
        enable_cloudtrail_encryption_with_kms_kms_key_id: "{{ kms_key_alias }}"
This playbook first runs the Ansible validated cloud.aws_ops.awsconfig_multiregion_cloudtrail role, which re-creates the trail, and then the cloud.aws_ops.enable_cloudtrail_encryption_with_kms role to enable encryption on the newly created trail.
Rule #3: Cancels the deletion of the KMS key and re-enables it
This rule handles the case when the KMS key is deleted or disabled, that is, when the condition event.CloudTrailEvent.eventName=="ScheduleKeyDeletion" or event.CloudTrailEvent.eventName=="DisableKey" is met. When someone attempts to delete a KMS key, intentionally or accidentally, a ScheduleKeyDeletion event is recorded in AWS CloudTrail. The KMS key is not deleted immediately; because deleting a KMS key is destructive and potentially dangerous, AWS KMS requires setting a waiting period of 7 to 30 days. This situation is handled promptly by running the playbooks/eda/aws_restore_kms_key.yml playbook, which cancels the deletion of the KMS key. Similarly, when the KMS key is disabled, the playbook re-enables it to restore the original state of the system. The playbook is shown below:
---
- name: Cancel the deletion of the KMS key and re-enable the KMS key
  hosts: localhost
  tasks:
    - name: Gather information about the KMS key
      amazon.aws.kms_key_info:
        alias: "{{ kms_key_alias }}"
      register: __kms_key_info
    - name: Set 'kms_key_arn' variable
      ansible.builtin.set_fact:
        kms_key_arn: "{{ __kms_key_info.kms_keys.0.key_arn }}"
    - name: Cancel the deletion of the KMS key and re-enable the KMS key
      block:
        - name: Cancel the deletion of the KMS key and re-enable the KMS key
          amazon.aws.kms_key:
            state: present
            alias: "{{ kms_key_alias }}"
            enabled: true
          register: __kms_key_restore
        - name: Assert that key has been re-enabled
          ansible.builtin.assert:
            that:
              - __kms_key_restore.key_state == "Enabled"
      when: event.CloudTrailEvent.requestParameters.keyId == kms_key_arn
The playbook looks up the KMS key ARN and uses it to check that the event refers to our key before canceling the key's deletion and re-enabling it.
Let’s start our rulebook by using:
ansible-rulebook --inventory /home/alinabuzachis/dev/inventory-eda.yml --rulebook rulebooks/aws_manage_cloudtrail_encryption.yml --vars vars.yml
The inventory file used looks like:
all:
  hosts:
    localhost:
      ansible_python_interpreter: /path/to/python
      ansible_connection: local
The vars.yml file looks like:
_resource_prefix: ansible-cloudtrail-demo-eda
cloudtrail_name: "{{ _resource_prefix }}-trail"
s3_bucket_name: "{{ _resource_prefix }}-bucket"
kms_key_alias: "{{ _resource_prefix }}-key"
key_prefix: "{{ _resource_prefix }}"
At this point, we can start introducing some drift into the system. Let’s suppose that someone intentionally or accidentally disables the CloudTrail encryption. To simulate this action, we run the following command:
ansible-playbook manage_trail_encryption_play.yml --extra-vars operation=disable_encryption
Once this playbook has finished, if we move to the terminal window where the rulebook is running, we will see the log generated by the execution of the aws_restore_cloudtrail_encryption.yml playbook, which was run as the rule's action.
[Partial log]
…
TASK [cloud.aws_ops.enable_cloudtrail_encryption_with_kms : Assert that AWS CloudTrail trail was successfully encrypted] ***
ok: [localhost] => {
    "changed": false,
    "msg": "AWS CloudTrail trail was successfully encrypted"
}
By running the corresponding playbook, the drift has been mitigated and the system has been restored to its initial state.
Similarly, let’s suppose someone deletes the CloudTrail. This can be achieved by running:
ansible-playbook manage_trail_encryption_play.yml --extra-vars operation=delete_trail
Once this playbook has finished, shortly after, the output of the action taken to mitigate this drift is shown in the terminal where ansible-rulebook is running. As we expect, the cloud.aws_ops.awsconfig_multiregion_cloudtrail role runs first to recreate the trail, and then the cloud.aws_ops.enable_cloudtrail_encryption_with_kms role enables the CloudTrail encryption. A partial log is shown below:
[Partial log]
…
TASK [cloud.aws_ops.awsconfig_multiregion_cloudtrail : Verify that trail has been created/updated] ***
ok: [localhost] => {
    "msg": "Trail 'ansible-cloudtrail-demo-eda-trail' successfully created/updated."
}
…
TASK [cloud.aws_ops.enable_cloudtrail_encryption_with_kms : Assert that AWS CloudTrail trail was successfully encrypted] ***
ok: [localhost] => {
    "changed": false,
    "msg": "AWS CloudTrail trail was successfully encrypted"
}
Lastly, let’s introduce some additional drift by deleting the AWS KMS key.
ansible-playbook manage_trail_encryption_play.yml --extra-vars operation=delete_key
As we expect, logs from the action taken to mitigate this drift are shown in the terminal where ansible-rulebook is running.
PLAY [Cancel the deletion of the KMS key and re-enable the KMS key] ************
TASK [Gathering Facts] *********************************************************
ok: [localhost]
TASK [Gather information about the KMS key] ************************************
ok: [localhost]
TASK [Set 'kms_key_arn' variable] **********************************************
ok: [localhost]
TASK [Cancel the deletion of the KMS key and re-enable the KMS key] ************
ok: [localhost]
TASK [Assert that key has been re-enabled] *************************************
ok: [localhost] => {
    "changed": false,
    "msg": "All assertions passed"
}
The Ansible validated content collection cloud.aws_ops and Event-Driven Ansible create many opportunities for automated issue resolution and observation of cloud computing environments, helping you to easily automate, mitigate security issues, and maximize your mastery of cloud environments.
Where to go next
- Come visit us at AnsibleFest, now a part of Red Hat Summit 2023.
- Missed out on AnsibleFest 2022? Check out the Best of AnsibleFest 2022.
- Self-paced lab exercises - We have interactive, in-browser exercises to help you get started with Ansible Automation Platform.
- Try Ansible Automation Platform free for 60 days.