Last year, we released an experimental alpha Ansible Content Collection of generated modules that use the AWS Cloud Control API to interact with AWS services. Although the Collection is not intended for production, we are continually improving and extending its functionality and working toward full supportability in the future.
In this blog post, we will go over what else has changed and highlight what’s new in the 0.3.0 release of this Ansible Content Collection.
Much of our work in release 0.3.0 focused on delivering several new enhancements, clarifying supportability policies, and extending the automation umbrella by generating new modules. Let’s dive in!
New boto3/botocore Versioning
The amazon.cloud Collection has dropped support for botocore<1.28.0 and boto3<1.25.0. Most modules will continue to work with older versions of the AWS Software Development Kit (SDK); however, compatibility with older versions of the AWS SDK is not guaranteed and will not be tested.
New Ansible Support Policy
This Collection release drops support for ansible-core<2.11. In particular, Ansible Core 2.10 and Ansible 2.9 are not supported. For more information, visit the Ansible release documentation.
New Modules Highlights
This release brings a set of newly supported modules. They provide exciting new capabilities that facilitate automation for various workloads and use cases.
Scaling your applications faster with the new AutoScaling modules
Perhaps you have a web application that experiences a surge in traffic at certain times of the day. Without autoscaling, you would need to manually add more EC2 instances to handle the extra traffic, which costs time and money. With amazon.cloud.autoscaling_launch_configuration, however, you can define how new EC2 instances are launched when scaling policies automatically add or remove capacity based on specific criteria. An example is shown below:
```yaml
- name: Create an AWS AutoScaling LaunchConfiguration
  amazon.cloud.autoscaling_launch_configuration:
    launch_configuration_name: my-launch-config   # illustrative name
    image_id: ami-0123456789abcdef0               # replace with a valid AMI ID
    instance_type: t3.micro
    # UserData is expected to be Base64-encoded
    user_data: "{{ 'echo \"Hello, world!\" >> /var/log/myapp.log' | b64encode }}"
    block_device_mappings:
      - device_name: /dev/sda1
        ebs:
          volume_size: 20
          volume_type: gp3
    state: present
```
Suppose you have an AutoScaling group that launches EC2 instances to handle requests for your web application. When a new instance is launched, you need to perform some custom actions, such as configuring the instance with the correct software and joining it to a load balancer. To perform these actions, you can configure the lifecycle hook with amazon.cloud.autoscaling_lifecycle_hook to trigger a notification when a new instance is launched. The notification can trigger an AWS Lambda function or an Amazon SNS topic that performs the custom actions. An example is shown below:
```yaml
- name: Create an AWS AutoScaling LifecycleHook
  amazon.cloud.autoscaling_lifecycle_hook:
    auto_scaling_group_name: my-asg               # illustrative group name
    lifecycle_hook_name: my-launch-hook
    lifecycle_transition: "autoscaling:EC2_INSTANCE_LAUNCHING"
    notification_target_arn: "arn:aws:sns:us-east-1:123456789012:my-topic"  # illustrative ARN
    role_arn: "arn:aws:iam::123456789012:role/my-notification-role"         # illustrative ARN
    heartbeat_timeout: 300
    default_result: CONTINUE
    state: present
```
Perhaps you have a web application that experiences a sudden surge in traffic during certain times of the day and you need to quickly add more instances to handle the load. In addition, you want to minimize the time it takes for new instances to become available and start serving traffic. To achieve this, amazon.cloud.autoscaling_warm_pool allows you to configure the warm pool to add pre-warmed instances to the group as soon as the demand increases, rather than waiting for new instances to be launched and configured. An example is shown below:
```yaml
- name: Create an AWS AutoScaling WarmPool
  amazon.cloud.autoscaling_warm_pool:
    auto_scaling_group_name: my-asg   # illustrative group name
    min_size: 2
    max_group_prepared_capacity: 10
    pool_state: Stopped
    state: present
```
Simple deployment of your containers with the new Amazon Elastic Container Service (ECS) modules
Suppose you have a microservices-based application that runs in containers and you need to deploy and manage the containers efficiently. You also need to scale the containers according to the application load to meet user demand. amazon.cloud.ecs_cluster helps you to create a cluster and register instances (EC2 instances or Fargate) in the cluster. You can then deploy your containers onto the cluster and manage them using ECS. Here's an example:
```yaml
- name: Create an ECS cluster
  amazon.cloud.ecs_cluster:
    cluster_name: my-ecs-cluster   # illustrative name
    cluster_settings:
      - name: containerInsights
        value: enabled
    state: present
```
Suppose you have a large-scale application that requires a lot of computational resources to handle variable demand. You want to be able to scale up or down the computational capacity of your ECS cluster based on the demand, but you also want to reduce costs when demand is low. To achieve this, you can use AWS ECS capacity providers using amazon.cloud.ecs_capacity_provider. Capacity providers are used to manage the capacity of the ECS cluster infrastructure. You can associate a capacity provider to your ECS cluster using amazon.cloud.ecs_cluster_capacity_provider_association. Here's an example:
```yaml
- name: Create an ECS CapacityProvider
  amazon.cloud.ecs_capacity_provider:
    name: my-capacity-provider
    auto_scaling_group_provider:
      # illustrative AutoScaling group ARN
      auto_scaling_group_arn: "arn:aws:autoscaling:us-east-1:123456789012:autoScalingGroup:uuid:autoScalingGroupName/my-asg"
      managed_scaling:
        status: ENABLED
        target_capacity: 80
      managed_termination_protection: DISABLED
    state: present

- name: Create an ECS Cluster CapacityProvider Association
  amazon.cloud.ecs_cluster_capacity_provider_association:
    cluster: my-ecs-cluster
    capacity_providers:
      - my-capacity-provider
    default_capacity_provider_strategy:
      - capacity_provider: my-capacity-provider
        base: 1
        weight: 100
    state: present
```
Ease your multi-cloud deployment with the new Amazon Elastic Container Registry (ECR) modules
Perhaps you have a multi-cloud deployment where you need to use the same container images on multiple cloud providers. You can easily create an ECR repository using amazon.cloud.ecr_repository to store container images. Here’s an example:
```yaml
- name: Create an ECR Repository
  amazon.cloud.ecr_repository:
    repository_name: my-app-images   # illustrative name
    image_tag_mutability: IMMUTABLE
    image_scanning_configuration:
      scan_on_push: true
    state: present
```
Enhance your web application with the new WAFv2 modules
Suppose you have a web application that is prone to malicious attacks. To protect the web application, you want to deploy an AWS WAFv2 firewall in front of it to filter out malicious traffic. For this purpose, you can create an AWS WAFv2 web ACL and associate it with a load balancer or API gateway using amazon.cloud.wafv2_web_acl_association. An example is shown below:
```yaml
- name: Create a WAFv2 WebACLAssociation
  amazon.cloud.wafv2_web_acl_association:
    # illustrative ARNs; replace with your load balancer and web ACL
    resource_arn: "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-alb/50dc6c495c0c9188"
    web_acl_arn: "arn:aws:wafv2:us-east-1:123456789012:regional/webacl/my-web-acl/a1b2c3d4"
    state: present
```
You might also want to block traffic from address ranges such as 192.0.2.0/24 and 203.0.113.0/24 using amazon.cloud.wafv2_ip_set.
```yaml
- name: Create a WAFv2 IPSet
  amazon.cloud.wafv2_ip_set:
    name: my-blocked-ips   # illustrative name
    description: A set of IP addresses to block
    scope: REGIONAL
    ip_address_version: IPV4
    addresses:
      - 192.0.2.0/24
      - 203.0.113.0/24
    state: present
```
You can also monitor and analyze the traffic blocked or allowed by the web ACL using the amazon.cloud.wafv2_logging_configuration module. This module allows you to specify which AWS resource to send the logs to, as well as the format and fields to include in the logs. Additional protection can be implemented using amazon.cloud.wafv2_regex_pattern_set. The AWS WAFv2 RegexPatternSet blocks traffic that matches regular expressions. An example is shown below:
```yaml
- name: Create a WAFv2 RegexPatternSet
  amazon.cloud.wafv2_regex_pattern_set:
    name: my-blocked-patterns   # illustrative name
    description: A set of regex patterns to block
    scope: REGIONAL
    regular_expression_list:
      - "^.*(badbot|malicious).*$"   # illustrative pattern
    state: present
```
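The amazon.cloud.wafv2_logging_configuration module mentioned above could be used along these lines; this is a sketch, and the web ACL and Kinesis Data Firehose ARNs are illustrative (note that WAF logging destinations must be named with the aws-waf-logs- prefix):

```yaml
- name: Create a WAFv2 LoggingConfiguration
  amazon.cloud.wafv2_logging_configuration:
    # illustrative web ACL ARN
    resource_arn: "arn:aws:wafv2:us-east-1:123456789012:regional/webacl/my-web-acl/a1b2c3d4"
    log_destination_configs:
      # illustrative Firehose delivery stream ARN
      - "arn:aws:firehose:us-east-1:123456789012:deliverystream/aws-waf-logs-my-stream"
    state: present
```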
Lighten your log processing by using metric filters
One of the most common uses of Amazon CloudWatch is monitoring EC2 instances. Amazon CloudWatch logs can accumulate large amounts of data, so it is important to be able to filter the log data according to your needs. Filtering is achieved through the use of metric filters that can be set using amazon.cloud.logs_metric_filter as shown below.
```yaml
- name: Create a Logs Metric Filter
  amazon.cloud.logs_metric_filter:
    filter_name: my-request-filter   # illustrative name
    log_group_name: my-log-group     # assumed existing log group
    filter_pattern: "[timestamp=*Z, request_id=\"*\", event]"
    metric_transformations:
      - metric_name: Requests
        metric_namespace: MyApp
        metric_value: "1"
    state: present
```
Automate operational tasks across your AWS resources using the new SSM modules
Suppose you have an EC2 instance that requires specific software to be installed and configured before it can be used. amazon.cloud.ssm_document creates an SSM document that defines the steps to install and configure the software. Once the SSM document is created, it can be used to automate the installation and configuration process across multiple EC2 instances. For this purpose, you can create an SSM Run Command, which executes the SSM document on the targeted EC2 instances. An example is shown below:
```yaml
- name: Create an SSM Document
  amazon.cloud.ssm_document:
    name: my-ssm-document   # illustrative name
    document_type: Command
    content:
      schemaVersion: "2.2"
      description: "My SSM Document"
      mainSteps:
        - action: "aws:runShellScript"
          name: runShellScript
          inputs:
            runCommand:
              - "echo 'Hello World'"
    state: present
```
Perhaps you also need to manage resource disaster recovery to maintain business continuity and minimize downtime. amazon.cloud.ssm_resource_data_sync can be used to back up and restore resource data in case of a disaster or outage.
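A minimal sketch of such a sync is shown below; the sync name is illustrative, and an existing S3 bucket (here, my-sync-bucket) is assumed:

```yaml
- name: Create an SSM ResourceDataSync
  amazon.cloud.ssm_resource_data_sync:
    sync_name: my-resource-data-sync   # illustrative name
    bucket_name: my-sync-bucket        # assumed existing S3 bucket
    bucket_region: us-east-1
    sync_format: JsonSerDe
    state: present
```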
Improve your application's reliability and availability with the new EC2 placement group module
Perhaps you need to set up a workload that requires low-latency, high-bandwidth communication between instances. A cluster placement group can help you do this by placing instances close together in a single cluster within an availability zone to provide high-bandwidth, low-latency network performance. An example is shown below:
```yaml
- name: Create an EC2 PlacementGroup
  amazon.cloud.ec2_placement_group:
    strategy: cluster
    state: present
```
Simplify access management for AWS services with the new IAM instance profile module
Suppose you have an EC2 instance that needs to access an S3 bucket to upload and download files. You can create an IAM role with permissions to access the S3 bucket and attach it to an IAM instance profile created using amazon.cloud.iam_instance_profile. Then you can launch the EC2 instance with the instance profile associated with it. An example is shown below:
```yaml
- name: Create an IAM InstanceProfile
  amazon.cloud.iam_instance_profile:
    instance_profile_name: my-instance-profile   # illustrative name
    roles:
      - my-s3-access-role   # assumed existing IAM role with S3 permissions
    state: present
```
Manage your data with ease with the new Amazon RDS modules
This release also brings a bunch of new RDS modules such as:
- amazon.cloud.rds_db_instance - Creates and manages an Amazon RDS DB instance.
- amazon.cloud.rds_db_subnet_group - Creates and manages a database subnet group.
- amazon.cloud.rds_global_cluster - Creates and manages an Amazon Aurora global database spread across multiple AWS Regions.
- amazon.cloud.rds_option_group - Creates and manages an RDS option group.
- amazon.cloud.rds_db_cluster_parameter_group - Creates and manages an RDS DB cluster parameter group.
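As a quick preview, a minimal amazon.cloud.rds_db_instance task might look like the sketch below; the identifier and credentials are illustrative, and real passwords should live in Ansible Vault:

```yaml
- name: Create an RDS DB instance
  amazon.cloud.rds_db_instance:
    db_instance_identifier: my-database   # illustrative identifier
    engine: mysql
    db_instance_class: db.t3.micro
    allocated_storage: "20"
    master_username: admin
    master_user_password: "{{ vault_db_password }}"   # illustrative; keep secrets in Ansible Vault
    state: present
```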
One of our upcoming blog posts will be dedicated to RDS and will cover some detailed use case scenarios. Stay tuned!
Where to go next
We hope you found this blog helpful! But, more importantly, we hope it inspired you to try out the latest amazon.cloud Collection release and let us know what you think. Please stop by the Ansible AWS IRC channel #ansible-aws on Libera.Chat to share your valuable feedback or get assistance with the amazon.cloud Collection.
- Come visit us at AnsibleFest, now a part of Red Hat Summit 2023.
- Missed out on AnsibleFest 2022? Check out the Best of AnsibleFest 2022.
- Self-paced lab exercises - We have interactive, in-browser exercises to help you get started with Ansible Automation Platform.
- Try Ansible Automation Platform free for 60 days.