What’s New: Cloud Automation with amazon.aws 4.0.0

July 20, 2022 by Alina Buzachis

When it comes to Amazon Web Services (AWS) infrastructure automation, the latest release of the amazon.aws Collection brings a number of enhancements to improve the overall user experience and speed up the process from development to production.

This blog post walks through the changes and highlights what’s new in the 4.0.0 release of this Ansible Content Collection.

 

Forward-looking Changes

With the recent release, we have included numerous bug fixes and features that further solidify the amazon.aws Collection. Let's go through some of them!

 

New Features Highlights

Some of the new features available in this Collection release are listed below.

 

EC2 Subnets in AWS Outposts

AWS Outposts is a fully managed service that extends AWS infrastructure, services, and tools to on-premises locations for low-latency and local data processing needs. EC2 subnets can be created on an AWS Outpost by specifying the Amazon Resource Name (ARN) of the Outpost at creation time.

The new outpost_arn option of the ec2_vpc_subnet module allows you to do that.

- name: Create an EC2 subnet on an AWS Outpost
  amazon.aws.ec2_vpc_subnet:
    state: present
    vpc_id: vpc-123456
    cidr: 10.1.100.0/24
    outpost_arn: "{{ outpost_arn }}"
    tags:
      "Environment": "production"

 

New EC2 Instance Metadata Options

The new EC2 instance metadata options allow you to configure new or existing EC2 instances to do the following:

  • http_put_response_hop_limit - Specify the desired HTTP PUT response hop limit for instance metadata requests. Note that when specifying a value for http_put_response_hop_limit, you must also set http_endpoint to enabled.
  • http_protocol_ipv6 - Set to enabled to enable the IPv6 metadata endpoint for your instance. The IPv6 endpoint is disabled by default.
  • instance_metadata_tags - When set to enabled, this allows access to instance tags from the instance metadata; when set to disabled, it turns that access off.

- name: Create t3.nano EC2 instance with metadata_options
  amazon.aws.ec2_instance:
    state: present
    name: "instance-01"
    image_id: "{{ ec2_ami_id }}"
    tags:
      TestId: "{{ ec2_instance_tag_TestId }}"
    vpc_subnet_id: "{{ testing_subnet_a.subnet.id }}"
    instance_type: t3.nano
    metadata_options:
      http_put_response_hop_limit: 3
      http_protocol_ipv6: enabled
      instance_metadata_tags: enabled
      http_endpoint: enabled
      http_tokens: required

 

New boto3/botocore Versioning

The amazon.aws Collection has dropped support for botocore<1.19.0 and boto3<1.16.0. Most modules will continue to work with older versions of the AWS Software Development Kit (SDK); however, compatibility with older versions is not guaranteed and is not tested. When an older version of the AWS SDK is used, Ansible displays a warning.

All amazon.aws Collection modules now use boto3/botocore and have been fully tested against Python 3. Support for the original, end-of-life AWS SDK, boto, has been completely removed, including all related helper functions.

Individual modules may require a more recent boto3/botocore version to support specific features. Check out the module documentation for the minimum required version for each module.

 

Tags Management

To establish consistent behavior and reinforce the declarative nature of Ansible, we have made some improvements to tag management. Some modules in the Collection that support tagging had purge_tags=False as their default. In this release, that old default has been deprecated; the default value of purge_tags will change to True in the upcoming 5.0.0 release of the Collection.

In addition, as tag keys prefixed with aws: are reserved for AWS use and cannot be edited or deleted, we have adapted to this behavior. As such, they will be ignored for the purpose of the purge_tags option. Further details on AWS tagging can be found in the AWS documentation.
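To avoid surprises when the default flips in 5.0.0, you can set purge_tags explicitly in your tasks today. A minimal sketch (the bucket name and tags below are illustrative):

```yaml
- name: Ensure bucket tags exactly match the list below
  amazon.aws.s3_bucket:
    name: my-example-bucket          # illustrative bucket name
    state: present
    tags:
      Environment: production
      Owner: platform-team
    purge_tags: true                 # remove any other tags (aws:-prefixed keys are ignored)
```

With purge_tags set explicitly, the task's behavior stays the same regardless of which Collection release runs it.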

 

Modules Renamed

Naming is hard, and a misleading module name can complicate the user experience.

We decided to rename the aws_s3 module in this Collection release. This decision stems from the fact that the aws_s3 name is very generic and unintuitive, while the module is primarily used to manage S3 objects. Although it provides minimal support for S3 bucket creation, that feature set is very limited; advanced S3 bucket management features (e.g., managing encryption) are provided by the s3_bucket module.

Therefore, the aws_s3 module has been renamed to s3_object. We also decided to deprecate the support for creation and deletion of S3 buckets using the aws_s3 module because it is out of the module’s scope and the s3_bucket module will cover this functionality.
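For example, uploading an object under the new module name might look like the following sketch (the bucket name, object key, and file path are illustrative):

```yaml
- name: Upload a file to S3 using the renamed s3_object module
  amazon.aws.s3_object:
    bucket: my-example-bucket        # illustrative bucket name
    object: /docs/report.txt         # key under which the object is stored
    src: /tmp/report.txt             # local file to upload
    mode: put
```

Apart from the module name, the parameters carry over from aws_s3, so updating existing playbooks is largely a find-and-replace.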

Other modules are planned to be renamed, and promoted from community supported to Red Hat supported, in the upcoming 5.0.0 release of this Collection. Keep an eye out!

 

Modules Removed

The following modules have been removed from this Collection release.

 

ec2

This was officially deprecated in the 2.0.0 release of the amazon.aws Collection because it was based on the deprecated boto AWS SDK. Most of its functionality was replaced by the ec2_instance module, and the ec2 module has now been completely removed from this Collection release. Please update your playbooks to use the ec2_instance module instead. Some useful information about the migration from ec2 to ec2_instance can be found in one of our previous blogs by Sean Cavanaugh, How to Migrate your Ansible Playbooks to Support AWS boto3.
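As a rough sketch of the migration, a legacy ec2 launch task maps onto ec2_instance roughly as follows (the AMI ID, key pair, and instance name are illustrative):

```yaml
# Before (removed): the boto-based ec2 module
# - ec2:
#     image: ami-0123456789abcdef0
#     instance_type: t3.micro
#     key_name: my-keypair
#     wait: true

# After: the boto3-based ec2_instance module
- name: Launch an instance with ec2_instance
  amazon.aws.ec2_instance:
    state: present
    name: "migrated-instance"
    image_id: ami-0123456789abcdef0   # illustrative AMI ID (note: image -> image_id)
    instance_type: t3.micro
    key_name: my-keypair              # illustrative key pair name
    wait: true
```

Note that a few parameter names differ between the two modules (e.g., image becomes image_id); consult the ec2_instance module documentation for the full parameter list.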

 

aws_az_facts

The aws_az_facts alias for the aws_az_info module was deprecated in Ansible 2.9 and then removed in this Collection release. Please update your playbooks by using the aws_az_info module instead.
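Updating a playbook is essentially a one-line rename; for example (the registered variable name is illustrative):

```yaml
- name: Gather information about Availability Zones
  amazon.aws.aws_az_info:
  register: az_info

- name: Show the Availability Zone names
  ansible.builtin.debug:
    msg: "{{ az_info.availability_zones | map(attribute='zone_name') | list }}"
```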

 

What’s next?

In this blog, we highlighted what’s new in the 4.0.0 release of the amazon.aws Collection and how these new enhancements improve the overall user experience and speed up the process from development to production.

That said, using Red Hat Ansible Automation Platform and the latest amazon.aws Collection to automate your deployments on AWS can greatly increase the chances that your cloud initiative will succeed. 

We hope you found this blog helpful! But, more importantly, we hope it inspired you to try out the latest amazon.aws Collection release and let us know what you think. 

For further reading and information, visit the other blogs related to AWS automation. If you are unfamiliar with Ansible Content Collections, check out our YouTube playlist for everything on Collections. The videos will get you up to speed quickly.

Also, don’t forget to check out our Automate infrastructure workflows e-book if you want to learn more about building a unified, automated pipeline for infrastructure operations.


Alina Buzachis

Alina Buzachis, PhD, is a software engineer at Red Hat Ansible, where she works primarily on cloud technologies. Alina received her PhD in Distributed Systems in 2021, focusing on advanced microservice orchestration techniques in the Cloud-to-Thing continuum. In her spare time, Alina enjoys travelling, hiking, and cooking.
