Bullhorn #12


The Bullhorn

A Newsletter for the Ansible Developer Community
Issue #12, 2020-10-21


Welcome to The Bullhorn, our newsletter for the Ansible developer community. If you have any questions or content you’d like to share, please reach out to us at the-bullhorn@redhat.com, or comment on this GitHub issue.
 

KEY DATES

 

ANSIBLE 2.10.1 NOW GENERALLY AVAILABLE

The Ansible Community team announced the general availability of Ansible 2.10.1 on October 13th. This first minor release of the ansible-2.10 package should be a drop-in replacement for Ansible 2.9; the roles and playbooks that you currently use should work out of the box with ansible-2.10.1. For more information on what’s new, how to get it, plus caveats and known bugs, read Toshio Kuratomi’s announcement to the ansible-devel mailing list.
 

ANSIBLE-BASE 2.10.2 NOW GENERALLY AVAILABLE

The Ansible Base team announced the general release of ansible-base 2.10.2 on October 5th. This ansible-base package consists of only the Ansible execution engine, related tools (e.g. ansible-galaxy, ansible-test), and a very small set of built-in plugins, and is also bundled with the larger Ansible distribution. For more information on how to download, test, and report issues, read Rick Elrod’s announcement to the ansible-devel mailing list.
 

ANSIBLE 2.9.14 AND 2.8.16 RELEASED

The Ansible Core team announced the availability of Ansible 2.9.14 and Ansible 2.8.16 on October 5th, both of which are maintenance releases. Follow this link for Rick Elrod’s email to the ansible-devel mailing list, to obtain details on what’s new, installation instructions, and links to the full changelogs.
 

PROPOSALS FOR ANSIBLE COLLECTIONS

Feedback is needed from the wider Ansible community on the following:
  • Moving content from collections in Ansible 2.10 to collections currently outside, or: should we add new collections to 2.10.x? For example, should we allow Postgres modules to be migrated from community.general to a new community.postgres collection to be included in Ansible 2.10? #117
  • Should docs be included in collection and Ansible package tarballs? #120
  • Should tests be included in collection tarballs? #111
Please add your feedback by commenting on the GitHub discussions. You can see the full list of proposals here.
 

CHANGES IMPACTING COLLECTION CONTRIBUTORS

  • ansible-test sanity now validates more semantic versioning properties for collections (details)
You can stay up-to-date with the changes as they happen by subscribing to this GitHub issue.
 

NEW/UPDATED COMMUNITY COLLECTIONS

These collections have been released on September 30: community.general 1.2.0 and community.network 1.2.0.

For VMware, community.vmware 1.3.0 was released on October 2. This blog post introduces the new VMware REST collection.

The community.kubernetes collection is going to move to kubernetes.core shortly (see why). Also, the community.okd collection just got a new 0.3.0 release that is ready for testing, and soon, inclusion on Automation Hub under the name ‘redhat.openshift’.

The Ansible Podman collection released version 1.3.1 on October 8, which includes bug fixes and enhancements for the podman_container, podman_network, podman_volume and other modules. The dependency on the PyYAML Python package has finally been removed from all Podman modules, so the collection now requires nothing but Podman installed on the host.

Several more collections were updated on October 13: community.crypto 1.2.0 (fixes CVE-2020-25646), community.mysql 1.1.0, and cloudscale_ch.cloud 1.2.0.

Last but not least, the Ansible OpenStack Cloud collection released version 1.2.0 on October 13. Modules for managing OpenStack volumes and snapshots were added: volume_snapshot_info, volume_backup, volume_backup_info.
 

ANSIBLE CONTRIBUTOR SUMMIT (OCTOBER 2020) RECAP

The third fully virtual Ansible Contributor Summit of 2020 was held on October 12th and 15th. More than 700 participants joined us over the course of the two days, roughly 10 times the attendance of the previous event in July. We are still processing the videos and recordings from the summit; they will be made available on the Ansible Community YouTube channel and shared with the attendees along with notes and Q&A responses.

In the meantime, you can check out the summary (Day 1, Day 2) and full logs (Day 1, Day 2) from the IRC sessions.

We would appreciate your feedback to help us improve the Contributor Experience, whether or not you were able to attend the event. Here is the Contributor Survey; please take a couple of minutes to fill this in. Thank you!
 

ANSIBLE HACKTOBERFEST 2020

We usually follow up issues discussed in the Ansible Contributor Summit with an Ansible Hackathon. Since this is also the month of Hacktoberfest, we will have an Ansible Hacktoberfest event on October 30th, 2020. Please register and join us!
 

ANSIBLE COMMUNITY STATS UPDATE

Following our Contributor Summit last week, Greg will be digging into the participation stats from BlueJeans and the results from the post-event surveys, as usual. A cursory glance over the data shows 707 attendees from 67 countries across the two days! We can also see some indicators that the event helped to get people contributing, and especially that the Katacoda sessions helped with this: 19 of the 23 newcomers (83%) said Katacoda was useful, and 9 out of 10 attendees who said they didn't know how to contribute earlier now say they do.

More to come!
 

CONTENT FROM THE ANSIBLE COMMUNITY

 

ANSIBLE VIRTUAL MEETUPS

The following virtual meetups are being held in the Ansible community over the next month:

Ansible NOVA

Note: For these virtual events, the links to participate will be visible once you RSVP to attend. If you’re interested in the topics presented, you can join from anywhere in the world as long as the time zone and language works for you!
 

NETDEVOPS SURVEY 2020 NOW OPEN

The NetDevOps Survey 2020 is now live, and looking for respondents! The goal of this survey is to collect information to understand how network operators and engineers are using automation to operate their networks today. The survey has been designed to be vendor neutral, collaborative, and community-focused. All network professionals are welcome to participate; please complete the survey here by October 23.
 

FEEDBACK

Have any questions you’d like to ask, or issues you’d like to see covered? Please send us an email at the-bullhorn@redhat.com.

 

 




Best of AnsibleFest 2020


Thank you to everyone who joined us over the past two days for the AnsibleFest 2020 virtual experience. We had such a great time connecting with Ansible lovers across the globe. In case you missed some of it (or all of it), we have some event highlights to share with you! If you want to go see what you may have missed, all the AnsibleFest 2020 content will be available on demand for a year. 

Community Updates

This year at AnsibleFest 2020, Ansible Community Architect Robyn Bergeron kicked off with her keynote on Tuesday morning. We heard how with Ansible Content Collections, it's easier than ever to use Ansible the way you want or need to, as a contributor or an end user. Ansible 2.10 is now available, and Robyn explained how the feedback loop got us there. If you want to hear more about the Ansible community project, go watch Robyn's keynote on demand.

Product Updates

Ansible's own Richard Henshall talked about the Red Hat Ansible Automation Platform product updates and new releases. In 2018, we unveiled the Ansible certified partner program and now we have over 50 platforms certified. We are bridging traditional platforms, containers and edge with a new integration between Red Hat Advanced Cluster Management for Kubernetes and Ansible Automation Platform. Learn more about the new integration from our press release. This year at AnsibleFest, we also introduced private Automation Hub, where users can now manage and curate Ansible content privately, from trusted sources. You can learn more about this and other Ansible Automation Platform updates from our press release. You can also listen to Richard's full keynote in the AnsibleFest platform on demand now.

Channel Content

AnsibleFest 2020 showcased six channels of content, with something for everyone. Some popular talks included the Ansible Automation Platform Roadmap, Managing your own Private Ansible Content, Ansible Automation Platform Technical Content Strategy, How to manage your Ansible automation and how your automation is performing with analytics, and much more! All 70+ breakout sessions are available on demand now.

We hope everyone enjoyed our first virtual AnsibleFest. Thank you to all our attendees who helped make AnsibleFest 2020 the largest and most successful AnsibleFest to date. To see more highlights of the event, you can visit the AnsibleFest homepage. Don't forget, all the content will be available until October 2021, so you can go back and watch the content whenever you would like. Thank you for connecting with us this year and happy automating!




Deep Dive, ACL Configuration Management Using Ansible Network Automation Resource Modules


In October 2019 as part of the Red Hat Ansible Engine 2.9 release, the Ansible Network Automation team introduced the first resource modules.

These opinionated network modules make network automation easier and more consistent for those automating various network platforms in production. The goal for resource modules is to avoid creating and maintaining overly complex Jinja2 templates for rendering and pushing network configuration.

This blog post covers the newly released ios_acls resource module and how to automate manual processes associated with switch and router configurations. These network automation modules are used for configuring routers and switches from popular vendors such as (but not limited to) Arista, Cisco, Juniper, and VyOS. The access control lists (ACLs) network resource modules can read the ACL configuration from the network device, and provide the ability to modify it and push changes back to the device. I'll walk through several examples and describe the use cases for each state parameter (including three newly released state types) and how these are used in real world scenarios.

The Certified Content Collection

This blog uses the cisco.ios Collection maintained by the Ansible team, but there are other platforms that also have ACL resource modules, such as arista.eos, junipernetworks.junos, and vyos.vyos.

How to obtain the certified (supported) and upstream (community) Collection

The upstream community Collection can be found on Ansible Galaxy: https://galaxy.ansible.com/cisco/ios

The downstream supported Collection can be found on Automation Hub: https://cloud.redhat.com/ansible/automation-hub/cisco/ios

For more information on Ansible Content Collections, please refer to the following documentation:

https://docs.ansible.com/ansible/latest/user_guide/collections_using.html

Before starting, let's quickly explain the rationale behind the naming of the network resource modules. The newly added ACLs modules will be plural eos_acls, ios_acls, junos_acls, nxos_acls, iosxr_acls.  The older singular form modules (e.g. ios_acl, nxos_acl) will be deprecated over time. This naming change was done so that those using existing network modules would not have their Ansible Playbooks stop working and have sufficient time to migrate to the new network automation modules.

Platform support

This module is also available for the following Ansible-maintained platforms on both Automation Hub (supported) and Galaxy (community):

Platform | Full Collection path | Automation Hub link (requires subscription) | Ansible Galaxy link
Arista EOS | arista.eos.eos_acls | Automation Hub | Galaxy
Cisco IOS | cisco.ios.ios_acls | Automation Hub | Galaxy
Cisco IOS-XR | cisco.iosxr.iosxr_acls | Automation Hub | Galaxy
Cisco NX-OS | cisco.nxos.nxos_acls | Automation Hub | Galaxy
Juniper Junos | junipernetworks.junos.junos_acls | Automation Hub | Galaxy
VyOS | vyos.vyos.vyos_firewall_rules | Automation Hub | Galaxy

Getting started - Managing the ACL configuration with Ansible

An access control list (ACL) provides rules that are applied to port numbers and/or IP addresses permitted to transit or reach that network device. The order of access control entries (ACEs) is critical, because the ACE sequence determines which rules are applied to inbound and outbound network traffic.

An ACL resource module provides the same level of functionality that a user can achieve when configuring manually on the Cisco IOS device. But combined with Ansible fact gathering and the resource module approach, this is more closely aligned with how network professionals work day to day.
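For example, the same structured ACL data can also be collected via the platform facts module by requesting the acls network resource. A minimal sketch (assuming connection details are already defined in inventory):

- name: Gather only the ACL resource facts
  cisco.ios.ios_facts:
    gather_subset: min
    gather_network_resources:
      - acls

- name: Show the structured ACL facts
  debug:
    var: ansible_network_resources.acls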

I'll be using an IOS router with version 15.6(3)M2 for all of the configuration in this post. Below is the initial state of the router's ACL configuration; there are already active ACLs configured on the device.

Network device configuration

cisco#sh access-lists
Extended IP access list 110
  10 deny icmp 192.0.2.0 0.0.0.255 192.0.3.0 0.0.0.255 traceroute dscp ef ttl eq 10
  20 deny tcp host 198.51.100.0 host 198.51.110.0 eq telnet ack
Extended IP access list test_acl
  10 deny tcp 192.0.2.0 0.0.0.255 192.0.3.0 0.0.0.255 eq www fin option traceroute ttl eq 10
IPv6 access list R1_TRAFFIC
  deny tcp any eq www any eq telnet ack dscp af11 sequence 10

Using state gathered - Building an Ansible inventory

Resource modules allow the user to read in an existing network configuration and convert that into a structured data model. Using state: gathered is the equivalent of gathering Ansible facts for this specific resource. This example will read in the existing network configuration and store it as a flat file.

Ansible Playbook Example

Here is an Ansible Playbook example of using state: gathered and storing the result as YAML into host_vars.  If you are new to the concept of Ansible inventory and want to learn more about group_vars and host_vars, please refer to the Ansible User Guide: Inventory.
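For reference, before looking at the playbook below, a minimal inventory for the cisco group might look like this (hostnames, addresses, and credentials are placeholders):

all:
  children:
    cisco:
      hosts:
        rtr2:
          ansible_host: 192.0.2.10
      vars:
        ansible_connection: ansible.netcommon.network_cli
        ansible_network_os: cisco.ios.ios
        ansible_user: admin
        ansible_password: admin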

---
- name: convert configured ACLs to structured data
  hosts: cisco
  gather_facts: false
  tasks:

    - name: Use the ACLs resource module to gather the current config
      cisco.ios.ios_acls:
        state: gathered
      register: acls

    - name: Create inventory directory
      file:
        path: "{{ inventory_dir }}/host_vars/{{ inventory_hostname }}"
        state: directory

    - name: Write the ACL configuration to a file
      copy:
        content: "{{ {'acls': acls['gathered']} | to_nice_yaml }}"
        dest: "{{ inventory_dir }}/host_vars/{{ inventory_hostname }}/acls.yaml"

Execute the Ansible Playbook with the ansible-playbook command: ansible-playbook example.yml

Examine File contents

Here is the data structure that was created from reading in an existing configuration:

# lab_inventory/host_vars/rtr2/acls.yaml
acls:
- acls:
  - aces:
    - destination:
        address: 192.0.3.0
        wildcard_bits: 0.0.0.255
      dscp: ef
      grant: deny
      protocol: icmp
      protocol_options:
        icmp:
          traceroute: true
      sequence: 10
      source:
        address: 192.0.2.0
        wildcard_bits: 0.0.0.255
      ttl:
        eq: 10
    - destination:
        host: 198.51.110.0
        port_protocol:
          eq: telnet
      grant: deny
      protocol: tcp
      protocol_options:
        tcp:
          ack: true
      sequence: 20
      source:
        host: 198.51.100.0
    acl_type: extended
    name: '110'
  - aces:
    - destination:
        address: 192.0.3.0
        port_protocol:
          eq: www
        wildcard_bits: 0.0.0.255
      grant: deny
      option:
        traceroute: true
      protocol: tcp
      protocol_options:
        tcp:
          fin: true
      sequence: 10
      source:
        address: 192.0.2.0
        wildcard_bits: 0.0.0.255
      ttl:
        eq: 10
    acl_type: extended
    name: test_acl
  afi: ipv4
- acls:
  - aces:
    - destination:
        any: true
        port_protocol:
          eq: telnet
      dscp: af11
      grant: deny
      protocol: tcp
      protocol_options:
        tcp:
          ack: true
      sequence: 10
      source:
        any: true
        port_protocol:
          eq: www
    name: R1_TRAFFIC
  afi: ipv6

In the above output (and future reference):

  • afi refers to address family identifier, either IPv4 or IPv6
  • acls refers to access control lists, and contains a list of ACL dictionaries
  • aces refers to access control entries, the specific rules and their sequence numbers
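Once gathered, this structured data can be queried like any other Ansible variable. As a small sketch (assuming the acls.yaml file above has been loaded as a host variable), the following task lists the ACL names per address family:

- name: Show ACL names per address family
  debug:
    msg: "{{ item.afi }}: {{ item.acls | map(attribute='name') | list }}"
  loop: "{{ acls }}"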

Using state merged - Pushing configuration changes

The merged state will take your Ansible configuration data (for example, Ansible variables) and merge it into the network device's running configuration. This will not affect existing configuration not specified in your Ansible configuration data. Let's walk through an example.

Modify stored file

We will modify the flat file created in the first example. We will then create an Ansible Playbook to merge this new configuration into the network device's running configuration.

Reference link:

https://gist.githubusercontent.com/justjais/bb2a65c373ab4e64d1eeb47bc425c613/raw/056d2a6a44910863cbbbf38cad2273435574db84/Merged.txt

acls:
- afi: ipv4
  acls:
   - name: std_acl
     acl_type: standard
     aces:
       - grant: deny
         source:
           address: 192.168.1.200
       - grant: deny
         source:
           address: 192.168.2.0
           wildcard_bits: 0.0.0.255
   - name: 110
     aces:
       - grant: deny
         sequence: 10
         protocol_options:
           icmp:
             traceroute: true
         source:
           address: 192.0.2.0
           wildcard_bits: 0.0.0.255
         destination:
           address: 192.0.3.0
           wildcard_bits: 0.0.0.255
         dscp: ef
         ttl:
           eq: 10
       - grant: deny
         protocol_options:
           tcp:
             ack: true
         source:
           host: 198.51.100.0
         destination:
           host: 198.51.110.0
           port_protocol:
             eq: telnet
   - name: test
     acl_type: extended
     aces:
       - grant: deny
         protocol_options:
           tcp:
             fin: true
         source:
           address: 192.0.2.0
           wildcard_bits: 0.0.0.255
         destination:
           address: 192.0.3.0
           wildcard_bits: 0.0.0.255
           port_protocol:
             eq: www
         option:
           traceroute: true
         ttl:
           eq: 10
   - name: 123
     aces:
       - grant: deny
         protocol_options:
           tcp:
             ack: true
         source:
           address: 198.51.100.0
           wildcard_bits: 0.0.0.255
         destination:
           address: 198.51.101.0
           wildcard_bits: 0.0.0.255
           port_protocol:
             eq: telnet
         tos:
           service_value: 12
        - grant: deny
         protocol_options:
           tcp:
             ack: true
         source:
           address: 192.0.3.0
           wildcard_bits: 0.0.0.255
         destination:
           address: 192.0.4.0
           wildcard_bits: 0.0.0.255
           port_protocol:
             eq: www
         dscp: ef
         ttl:
           lt: 20
- afi: ipv6
  acls:
   - name: R1_TRAFFIC
     aces:
       - grant: deny
         protocol_options:
           tcp:
             ack: true
         source:
           any: true
           port_protocol:
             eq: www
         destination:
           any: true
           port_protocol:
             eq: telnet
         dscp: af11

Ansible Playbook Example:

---
- name: Merged state play
  hosts: cisco
  gather_facts: false
  tasks:
    - name: Merge ACLs config with device existing ACLs config
      cisco.ios.ios_acls:
        state: merged
        config: "{{ acls }}"

Once we run the merge play, all of the provided parameters will be configured on the Cisco IOS router, and Ansible will report changed=True.

Network device configuration:

cisco#sh access-lists
Standard IP access list std_acl
   10 deny   192.168.1.200
   20 deny   192.168.2.0, wildcard bits 0.0.0.255
Extended IP access list 110
   10 deny icmp 192.0.2.0 0.0.0.255 192.0.3.0 0.0.0.255 traceroute dscp ef ttl eq 10
   20 deny tcp host 198.51.100.0 host 198.51.110.0 eq telnet ack
Extended IP access list 123
   10 deny tcp 198.51.100.0 0.0.0.255 198.51.101.0 0.0.0.255 eq telnet ack tos 12
   20 deny tcp 192.0.3.0 0.0.0.255 192.0.4.0 0.0.0.255 eq www ack dscp ef ttl lt 20
Extended IP access list test
   10 deny tcp 192.0.2.0 0.0.0.255 192.0.3.0 0.0.0.255 eq www fin option traceroute ttl eq 10
IPv6 access list R1_TRAFFIC
   deny tcp any eq www any eq telnet ack dscp af11 sequence 10

If we dig slightly into the device output, we make the following observations:

  • Based on the AFI value, the module decides whether to configure IP or IPv6 access lists.
  • The 'acl_type' key is required for named ACLs.
  • For ACLs identified by a number rather than a name, the 'acl_type' is derived from the platform's documented ACL number ranges (e.g. standard = 1-99 and 1300-1999, extended = 100-199 and 2000-2699, and so on).
  • If the sequence number is not specified in an ACE, it will be configured based on the order provided in the play.
  • On a second run, the same merge play runs again and Ansible's idempotency comes into the picture: if nothing has changed, the play reports changed=False, confirming to the user that all of the configuration provided in the play is already present on the IOS device (one way to verify this in a play is sketched below).
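As a minimal sketch of verifying that idempotency (re-using the same acls variable), the merge task can be run twice in a play, asserting that the second run reports no change:

- name: Merge ACLs config with device existing ACLs config
  cisco.ios.ios_acls:
    state: merged
    config: "{{ acls }}"

- name: Merge the same config again (should be idempotent)
  cisco.ios.ios_acls:
    state: merged
    config: "{{ acls }}"
  register: second_run

- name: Assert that the second run made no changes
  assert:
    that:
      - not second_run.changed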

Using state replaced - Pushing configuration changes

The replaced state enforces the data model on the network device for each configured ACL/ACE. If we modify any of the ACLs/ACEs, it will enforce all the parameters this resource module is aware of. To think of this another way, the replaced state is aware of all the commands that should and shouldn't be there.

For this scenario, an ACL with some ACEs is already configured on the Cisco IOS device, and now the user wants to update the ACL with a new set of ACEs and discard all of the already configured ones. The resource module's replaced state will replace the ACL's existing ACEs with the new set of ACEs given as input by the user.

Ref gist link:

https://gist.githubusercontent.com/justjais/bb2a65c373ab4e64d1eeb47bc425c613/raw/056d2a6a44910863cbbbf38cad2273435574db84/Replaced.txt

acls:
- afi: ipv4
  acls:
   - name: 110
     aces:
       - grant: deny
         protocol_options:
           tcp:
             syn: true
         source:
           address: 192.0.2.0
           wildcard_bits: 0.0.0.255
         destination:
           address: 192.0.3.0
           wildcard_bits: 0.0.0.255
           port_protocol:
             eq: www
         dscp: ef
         ttl:
           eq: 10
   - name: 150
     aces:
       - grant: deny
         sequence: 20
         protocol_options:
           tcp:
             syn: true
         source:
           address: 198.51.100.0
           wildcard_bits: 0.0.0.255
           port_protocol:
             eq: telnet
         destination:
           address: 198.51.110.0
           wildcard_bits: 0.0.0.255
           port_protocol:
             eq: telnet
         dscp: ef
         ttl:
           eq: 10

Ansible Playbook Example

---
- name: Replaced state play
  hosts: cisco
  gather_facts: false
  tasks:
    - name: Replace ACLs config with device existing ACLs config
      cisco.ios.ios_acls:
        state: replaced
        config: "{{ acls }}"

With the above play, the user is replacing the existing ACEs of extended ACL 110 with the provided configuration and also configuring the new extended ACL 150 with its ACEs.

Before running the replaced play network device configuration:

cisco#sh access-lists
Standard IP access list std_acl
   10 deny   192.168.1.200
   20 deny   192.168.2.0, wildcard bits 0.0.0.255
Extended IP access list 110
   10 deny icmp 192.0.2.0 0.0.0.255 192.0.3.0 0.0.0.255 traceroute dscp ef ttl eq 10
   20 deny tcp host 198.51.100.0 host 198.51.110.0 eq telnet ack
Extended IP access list 123
   10 deny tcp 198.51.100.0 0.0.0.255 198.51.101.0 0.0.0.255 eq telnet ack tos 12
   20 deny tcp 192.0.3.0 0.0.0.255 192.0.4.0 0.0.0.255 eq www ack dscp ef ttl lt 20
Extended IP access list test
   10 deny tcp 192.0.2.0 0.0.0.255 192.0.3.0 0.0.0.255 eq www fin option traceroute ttl eq 10
IPv6 access list R1_TRAFFIC
   deny tcp any eq www any eq telnet ack dscp af11 sequence 10

With the replaced play run, the following commands are sent:

- ip access-list extended 110
- no 10
- no 20
- deny tcp 192.0.2.0 0.0.0.255 192.0.3.0 0.0.0.255 eq www syn dscp ef ttl eq 10
- ip access-list extended 150
- 20 deny tcp 198.51.100.0 0.0.0.255 eq telnet 198.51.110.0 0.0.0.255 eq telnet syn  dscp ef ttl eq 10

After running the replaced play network device configuration:

cisco#sh access-lists
Standard IP access list std_acl
   10 deny   192.168.1.200
   20 deny   192.168.2.0, wildcard bits 0.0.0.255
Extended IP access list 110
   10 deny tcp 192.0.2.0 0.0.0.255 192.0.3.0 0.0.0.255 eq www syn dscp ef ttl eq 10
Extended IP access list 123
   10 deny tcp 198.51.100.0 0.0.0.255 198.51.101.0 0.0.0.255 eq telnet ack tos 12
   20 deny tcp 192.0.3.0 0.0.0.255 192.0.4.0 0.0.0.255 eq www ack dscp ef ttl lt 20
Extended IP access list 150
   20 deny tcp 198.51.100.0 0.0.0.255 eq telnet 198.51.110.0 0.0.0.255 eq telnet syn dscp ef ttl eq 10
Extended IP access list test
   10 deny tcp 192.0.2.0 0.0.0.255 192.0.3.0 0.0.0.255 eq www fin option traceroute ttl eq 10
IPv6 access list R1_TRAFFIC
   deny tcp any eq www any eq telnet ack dscp af11 sequence 10

Digging into the output briefly, we can make the following observations:

  • The replaced state will negate all the pre-existing ACEs under the input ACL and then apply the configuration provided as input in the play. The same behaviour can be seen in the command output above for numbered ACL 110, where the pre-existing ACEs at sequence 10 and 20 are negated first before the newer ACE configuration is applied.
  • For extended ACL 150, since it wasn't already pre-configured on the device, the module goes ahead and applies the ACE configuration provided as input in the play. One thing to note here is that the play sets the ACE's sequence value to 20, so the ACE is configured at sequence 20 instead of 10, which would have been the case if the sequence had not been provided by the user.

With a second run of the above play, changed comes back as false, which satisfies Ansible idempotency.

Using state overridden - Pushing configuration changes

For this example, we will mix it up slightly. Pretend you are a user making a bespoke configuration on the network device (making a change outside of automation).  The state: overridden will circle back on enforcing the data model (configuration policy enforcement) and remove the bespoke change.

If the user wants to entirely re-configure the pre-configured ACLs on the Cisco IOS device, then the resource module's overridden state is the most appropriate choice. When using the overridden state, a user can override all ACLs with the user-provided ACLs.

To show the difference between how the replaced and overridden states work, we will be using a similar play to the one used for the replaced scenario, keeping the device's pre-existing configuration the same as well.

Ref gist link:

https://gist.githubusercontent.com/justjais/bb2a65c373ab4e64d1eeb47bc425c613/raw/056d2a6a44910863cbbbf38cad2273435574db84/Overridden.txt

ACLs configuration:

acls:
- afi: ipv4
  acls:
   - name: 110
     aces:
       - grant: deny
         sequence: 20
         protocol_options:
           tcp:
             ack: true
         source:
           address: 198.51.100.0
           wildcard_bits: 0.0.0.255
           port_protocol:
             eq: telnet
         destination:
           address: 198.51.110.0
           wildcard_bits: 0.0.0.255
           port_protocol:
             eq: www
         dscp: ef
         ttl:
           eq: 10
   - name: 150
     aces:
       - grant: deny
         sequence: 10
         protocol_options:
           tcp:
             syn: true
         source:
           address: 198.51.100.0
           wildcard_bits: 0.0.0.255
           port_protocol:
             eq: telnet
         destination:
           address: 198.51.110.0
           wildcard_bits: 0.0.0.255
           port_protocol:
             eq: telnet
         dscp: ef
         ttl:
           eq: 10

Ansible Playbook Example

---
- name: Overridden state play
  hosts: cisco
  gather_facts: false
  tasks:
    - name: Override ACLs config with device existing ACLs config
      cisco.ios.ios_acls:
        state: overridden
        config: "{{ acls }}"

With the above play, the user is overriding all existing ACLs with the provided configuration for extended ACLs 110 and 150; any ACLs not included in the play will be removed from the device.

Before running the Overridden play network device configuration:

cisco#sh access-lists
Standard IP access list std_acl
   10 deny   192.168.1.200
   20 deny   192.168.2.0, wildcard bits 0.0.0.255
Extended IP access list 110
   10 deny icmp 192.0.2.0 0.0.0.255 192.0.3.0 0.0.0.255 traceroute dscp ef ttl eq 10
   20 deny tcp host 198.51.100.0 host 198.51.110.0 eq telnet ack
Extended IP access list 123
   10 deny tcp 198.51.100.0 0.0.0.255 198.51.101.0 0.0.0.255 eq telnet ack tos 12
   20 deny tcp 192.0.3.0 0.0.0.255 192.0.4.0 0.0.0.255 eq www ack dscp ef ttl lt 20
Extended IP access list test
   10 deny tcp 192.0.2.0 0.0.0.255 192.0.3.0 0.0.0.255 eq www fin option traceroute ttl eq 10
IPv6 access list R1_TRAFFIC
   deny tcp any eq www any eq telnet ack dscp af11 sequence 10

With the overridden play run, the following commands are sent:

- no ip access-list standard std_acl
- no ip access-list extended 110
- no ip access-list extended 123
- no ip access-list extended 150
- no ip access-list extended test
- no ipv6 access-list R1_TRAFFIC
- ip access-list extended 150
- 10 deny tcp 198.51.100.0 0.0.0.255 eq telnet 198.51.110.0 0.0.0.255 eq telnet syn dscp ef ttl eq 10
- ip access-list extended 110
- 20 deny tcp 198.51.100.0 0.0.0.255 eq telnet 198.51.110.0 0.0.0.255 eq www ack dscp ef ttl eq 10

After running the Overridden play network device configuration:

cisco#sh access-lists
Extended IP access list 110
   20 deny tcp 198.51.100.0 0.0.0.255 eq telnet 198.51.110.0 0.0.0.255 eq www ack dscp ef ttl eq 10
Extended IP access list 150
   10 deny tcp 198.51.100.0 0.0.0.255 eq telnet 198.51.110.0 0.0.0.255 eq telnet syn dscp ef ttl eq 10

Again, digging into the overridden play output, we can observe the following:

  • The overridden state negates all of the pre-existing ACLs and deletes those configurations that are not present in the provided config.
  • For ACL configurations that are pre-existing and also in the play, the ios_acls overridden state will delete/negate all the pre-existing ACEs and then configure the new ACEs as specified in the play.
  • For any ACLs that do not yet exist, the overridden state will configure the ACL in the same manner as merged.

Now that we have covered how to configure ACLs and ACEs on a Cisco IOS device using the ios_acls resource module's merged, replaced and overridden states, it's time to talk about how to delete pre-configured ACLs and ACEs, and what level of granularity the deleted state offers the user.

Deleting configuration changes

If the user wants to delete ACLs that are pre-configured on the Cisco IOS device, based on the provided ACL configuration, then the resource module's deleted state is used.

Method 1: Delete individual ACLs by name or number (i.e. when the user needs to delete specific ACLs configured under IPv4 or IPv6)

Ref gist link: 

https://gist.githubusercontent.com/justjais/bb2a65c373ab4e64d1eeb47bc425c613/raw/056d2a6a44910863cbbbf38cad2273435574db84/Deleted.txt

ACLs that need to be deleted

acls:
- afi: ipv4
  acls:
    - name: test
      acl_type: extended
    - name: 110
    - name: 123
- afi: ipv6
  acls:
    - name: R1_TRAFFIC

Ansible Playbook Example

---
- name: Deleted state play
  hosts: cisco
  gather_facts: false
  tasks:
    - name: Delete ACLs based on ACL number
      cisco.ios.ios_acls:
        state: deleted
        config: "{{ acls }}"

Before running the Deleted play network device configuration:

cisco#sh access-lists
Standard IP access list std_acl
   10 deny   192.168.1.200
   20 deny   192.168.2.0, wildcard bits 0.0.0.255
Extended IP access list 110
   10 deny icmp 192.0.2.0 0.0.0.255 192.0.3.0 0.0.0.255 traceroute dscp ef ttl eq 10
   20 deny tcp host 198.51.100.0 host 198.51.110.0 eq telnet ack
Extended IP access list 123
   10 deny tcp 198.51.100.0 0.0.0.255 198.51.101.0 0.0.0.255 eq telnet ack tos 12
   20 deny tcp 192.0.3.0 0.0.0.255 192.0.4.0 0.0.0.255 eq www ack dscp ef ttl lt 20
Extended IP access list test
   10 deny tcp 192.0.2.0 0.0.0.255 192.0.3.0 0.0.0.255 eq www fin option traceroute ttl eq 10
IPv6 access list R1_TRAFFIC
   deny tcp any eq www any eq telnet ack dscp af11 sequence 10

With the delete-by-ACL play run, the following commands are sent:

- no ip access-list extended test
- no ip access-list extended 110
- no ip access-list extended 123
- no ipv6 access-list R1_TRAFFIC

After running the Deleted play network device configuration:

cisco#sh access-lists
Standard IP access list std_acl
   10 deny   192.168.1.200
   20 deny   192.168.2.0, wildcard bits 0.0.0.255
cisco#

Method 2: Delete ACLs based on their AFI (address family identifier), i.e. when the user needs to delete all of the ACLs configured under IPv4 or IPv6

Ref gist link:

https://gist.githubusercontent.com/justjais/bb2a65c373ab4e64d1eeb47bc425c613/raw/8c65946eae561ff569cfc5398879c51598ae050c/Deleted_by_AFI

Ansible Playbook Example

---
- name: Deleted state play
  hosts: cisco
  gather_facts: false
  tasks:
    - name: Delete ALL IPV4 configured ACLs
      cisco.ios.ios_acls:
        config:
          - afi: ipv4
        state: deleted

Before running the Deleted play network device configuration:

cisco#sh access-lists
Standard IP access list std_acl
   10 deny   192.168.1.200
   20 deny   192.168.2.0, wildcard bits 0.0.0.255
Extended IP access list 110
   10 deny icmp 192.0.2.0 0.0.0.255 192.0.3.0 0.0.0.255 traceroute dscp ef ttl eq 10
   20 deny tcp host 198.51.100.0 host 198.51.110.0 eq telnet ack
Extended IP access list 123
   10 deny tcp 198.51.100.0 0.0.0.255 198.51.101.0 0.0.0.255 eq telnet ack tos 12
   20 deny tcp 192.0.3.0 0.0.0.255 192.0.4.0 0.0.0.255 eq www ack dscp ef ttl lt 20
Extended IP access list test
   10 deny tcp 192.0.2.0 0.0.0.255 192.0.3.0 0.0.0.255 eq www fin option traceroute ttl eq 10
IPv6 access list R1_TRAFFIC
   deny tcp any eq www any eq telnet ack dscp af11 sequence 10

With the delete-by-AFI play run, the following commands are sent:

- no ip access-list standard std_acl
- no ip access-list extended test
- no ip access-list extended 110
- no ip access-list extended 123
- no ip access-list extended test

After running the Deleted play network device configuration:

cisco#sh access-lists
IPv6 access list R1_TRAFFIC
   deny tcp any eq www any eq telnet ack dscp af11 sequence 10
cisco#

Method 3: Delete ALL ACLs at once

Note: this is a critical delete operation; if not used judiciously, it will delete all pre-configured ACLs on the device.

Ref gist link: https://gist.githubusercontent.com/justjais/bb2a65c373ab4e64d1eeb47bc425c613/raw/056d2a6a44910863cbbbf38cad2273435574db84/Deleted_wo_config.txt

Ansible Playbook Example

---
- name: Deleted state play
  hosts: cisco
  gather_facts: false
  tasks:
    - name: Delete ALL configured ACLs w/o passing any config
      cisco.ios.ios_acls:
        state: deleted

Before running the Deleted play network device configuration:

cisco#sh access-lists
Standard IP access list std_acl
   10 deny   192.168.1.200
   20 deny   192.168.2.0, wildcard bits 0.0.0.255
Extended IP access list 110
   10 deny icmp 192.0.2.0 0.0.0.255 192.0.3.0 0.0.0.255 traceroute dscp ef ttl eq 10
   20 deny tcp host 198.51.100.0 host 198.51.110.0 eq telnet ack
Extended IP access list 123
   10 deny tcp 198.51.100.0 0.0.0.255 198.51.101.0 0.0.0.255 eq telnet ack tos 12
   20 deny tcp 192.0.3.0 0.0.0.255 192.0.4.0 0.0.0.255 eq www ack dscp ef ttl lt 20
Extended IP access list test
   10 deny tcp 192.0.2.0 0.0.0.255 192.0.3.0 0.0.0.255 eq www fin option traceroute ttl eq 10
IPv6 access list R1_TRAFFIC
   deny tcp any eq www any eq telnet ack dscp af11 sequence 10

With the delete-all play run, the following commands are sent:

- no ip access-list standard std_acl
- no ip access-list extended test
- no ip access-list extended 110
- no ip access-list extended 123
- no ip access-list extended test
- no ipv6 access-list R1_TRAFFIC

After running the Deleted play network device configuration:

cisco#sh access-lists
cisco#

Using state rendered - Development and working offline

The rendered state transforms the provided structured data model into platform-specific CLI commands. This state does not require a connection to the end device. For this example, it will render the provided data model into Cisco IOS syntax commands.

Ref gist link:

https://gist.githubusercontent.com/justjais/bb2a65c373ab4e64d1eeb47bc425c613/raw/8c65946eae561ff569cfc5398879c51598ae050c/Rendered.txt

ACLs Config that needs to be rendered

acls:
- afi: ipv4
  acls:
   - name: 110
     aces:
       - grant: deny
         sequence: 10
         protocol_options:
           tcp:
             syn: true
         source:
           address: 192.0.2.0
           wildcard_bits: 0.0.0.255
         destination:
           address: 192.0.3.0
           wildcard_bits: 0.0.0.255
           port_protocol:
             eq: www
         dscp: ef
         ttl:
           eq: 10
   - name: 150
     aces:
       - grant: deny
         protocol_options:
           tcp:
             syn: true
         source:
           address: 198.51.100.0
           wildcard_bits: 0.0.0.255
           port_protocol:
             eq: telnet
         destination:
           address: 198.51.110.0
           wildcard_bits: 0.0.0.255
           port_protocol:
             eq: telnet
         dscp: ef
         ttl:
           eq: 10

Ansible Playbook Example

---
- name: Rendered state play
  hosts: cisco
  gather_facts: false
  tasks:
    - name: Render the provided configuration
      cisco.ios.ios_acls:
        config: "{{ acls }}"
        state: rendered

Rendered state module execution results:

"rendered": [
   "ip access-list extended 110",
   "10 deny tcp 192.0.2.0 0.0.0.255 192.0.3.0 0.0.0.255 eq www syn dscp ef ttl eq 10",
   "ip access-list extended 150",
   "deny tcp 198.51.100.0 0.0.0.255 eq telnet 198.51.110.0 0.0.0.255 eq telnet syn dscp ef ttl eq 10"
]

NOTE: The rendered state does not push any configuration changes to the device.
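Since rendered makes no changes, a common pattern is to register the result and save the generated commands for review. A small sketch (the destination path is just an example):

- name: Render the provided configuration
  cisco.ios.ios_acls:
    config: "{{ acls }}"
    state: rendered
  register: result

- name: Save the rendered commands for review
  copy:
    content: "{{ result.rendered | join('\n') }}"
    dest: ./rendered_acls_commands.txt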

Using state parsed - Development and working offline

This state reads the configuration from the running_config option and transforms it into structured data (i.e. JSON). This is helpful if you have offline configurations, such as a backup text file, and want to transform them into structured data. It is useful for experimenting, troubleshooting, or offline creation of a source of truth for your data models.

Ref gist link:

https://gist.githubusercontent.com/justjais/bb2a65c373ab4e64d1eeb47bc425c613/raw/8c65946eae561ff569cfc5398879c51598ae050c/Parsed.txt

ACLs Config that needs to be Parsed

Ansible Playbook Example

---
- name: Parsed state play
  hosts: cisco
  gather_facts: false
  tasks:
    - name: Parse the provided ACLs configuration
      cisco.ios.ios_acls:
        running_config:
           "ipv6 access-list R1_TRAFFIC
           deny tcp any eq www any eq telnet ack dscp af11"
        state: parsed

Parsed state module execution results:

"parsed": [
       {
           "acls": [
               {
                   "aces": [
                       {
                           "destination": {
                               "any": true,
                               "port_protocol": {
                                   "eq": "telnet"
                               }
                           },
                           "dscp": "af11",
                           "grant": "deny",
                           "protocol_options": {
                               "tcp": {
                                   "ack": true
                               }
                           },
                           "source": {
                               "any": true,
                               "port_protocol": {
                                   "eq": "www"
                               }
                           }
                       }
                   ],
                   "name": "R1_TRAFFIC"
               }
           ],
           "afi": "ipv6"
       }
   ]

Conclusion

The ACLs resource modules provide an easy way for network engineers to begin automating access lists on multiple network platforms. While some configuration can remain static on network devices, ACLs may need constant updates and verification. These resource modules allow users to adopt automation in incremental steps, making it easy for organizations to embrace it. As soon as you have transformed your ACLs into structured data, any resource module from any network platform can read it. Imagine reading in ACLs from your Cisco IOS box and transforming them into Cisco IOS-XR commands.
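As a closing sketch of that idea (hostnames are placeholders, and it assumes the gathered IOS data model is accepted as-is by the IOS-XR module; in practice some platform-specific keys may need adjusting):

- name: Read ACLs from a Cisco IOS device
  hosts: ios_router
  gather_facts: false
  tasks:
    - name: Gather the structured ACL data
      cisco.ios.ios_acls:
        state: gathered
      register: ios_acls

- name: Apply the same data model to a Cisco IOS-XR device
  hosts: iosxr_router
  gather_facts: false
  tasks:
    - name: Merge the gathered ACLs onto the IOS-XR device
      cisco.iosxr.iosxr_acls:
        config: "{{ hostvars['ios_router'].ios_acls.gathered }}"
        state: merged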




Getting Started With AWS Ansible Module Development and Community Contribution


We often hear from cloud admins and developers that they're interested in giving back to Ansible and using their knowledge to benefit the community, but they don't know how to get started.  Lots of folks may even already be carrying new Ansible modules or plugins in their local environments, and are looking to get them included upstream for broader use.

Luckily, it doesn't take much to get started as an Ansible contributor. If you're already using the Ansible AWS modules, there are many ways to use your existing knowledge, skills and experience to contribute. If you need some ideas on where to contribute, take a look at the following:

  • Creating integration tests: Creating missing tests for modules is a great way to get started, and integration tests are just Ansible tasks!
  • Module porting: If you're familiar with the boto3 Python library, there's also a backlog of modules that need to be ported from boto2 to boto3.
  • Repository issue triage: And of course there's always open Github issues and pull requests. Testing bugs or patches and providing feedback on your use cases and experiences is very valuable.

The AWS Collections

Starting with Ansible 2.10, the AWS modules have been migrated out of the Ansible GitHub repo and into two new Collection repositories.

The Ansible-maintained Collection (amazon.aws) houses the modules, plugins, and module utilities that are managed by the Ansible Cloud team and are included in the downstream Red Hat Ansible Automation Platform product.

The Community Collection (community.aws) houses the modules and plugins that are supported by the Ansible community.  New modules and plugins developed by the community should be proposed to community.aws. Content in this Collection that is stable and meets other acceptance criteria has the potential to be promoted and migrated into amazon.aws.

For more information about how to contribute to any of the Ansible-maintained Collections, including the AWS Collections, refer to the Contributing to Ansible-maintained Collections section on docs.ansible.com.

AWS module development basics

For starters, make sure you've read the Guidelines for Ansible Amazon AWS module development section of the Ansible Developer Guide. Some things to keep in mind:

If the module needs to poll an API and wait for a particular status to be returned before proceeding, add a waiter to the waiters.py file in the amazon.aws collection rather than writing a loop inside your module. For example, the ec2_vpc_subnet module supports a wait parameter. When true, this instructs the module to wait for the resource to be in an expected state before returning. The module code for this looks like the following:

if module.params['wait']:
    handle_waiter(conn, module, 'subnet_exists', {'SubnetIds': [subnet['id']]}, start_time)

And the corresponding waiter:

        "SubnetExists": {
            "delay": 5,
            "maxAttempts": 40,
            "operation": "DescribeSubnets",
            "acceptors": [
                {
                    "matcher": "path",
                    "expected": True,
                    "argument": "length(Subnets[]) > `0`",
                    "state": "success"
                },
                {
                    "matcher": "error",
                    "expected": "InvalidSubnetID.NotFound",
                    "state": "retry"
                },
            ]
        },

This polls the EC2 API for describe_subnets(SubnetIds=[subnet['id']]) until the list of returned Subnets is greater than zero before proceeding. If an error of InvalidSubnetID.NotFound is returned, this is an expected response and the waiter code will continue.

Use paginators when boto returns paginated results and build the result from the .build_full_result() method of the paginator, rather than writing loops.

Be sure to handle both ClientError and BotoCoreError in your except blocks.

except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
    module.fail_json_aws(e, msg="Couldn't create subnet")

All new modules should support check_mode if at all possible.

Ansible strives to provide idempotency. Sometimes though, this is inconsistent with the way that AWS services operate. Think about how users will interact with the service through Ansible tasks, and what will happen if they run the same task multiple times.  What API calls will be made?  What changed status will be reported by Ansible on subsequent task executions?

Whenever possible, avoid hardcoding data in modules. Sometimes it's unavoidable, but if your contribution includes a hardcoded list of instance types or a hard-coded partition, this will likely be brought up in code review - for example, arn:aws: will not match the GovCloud or China regions, and your module will not work for users in these regions. If you've already determined there's no reasonable way to avoid hard-coding something, please mention your findings in the pull request.

Module Utilities

There's a substantial collection of module_utils available for working with AWS located in the amazon.aws collection:

$ ls plugins/module_utils/
acm.py  batch.py  cloudfront_facts.py  cloud.py  core.py  direct_connect.py  ec2.py  elb_utils.py  elbv2.py  iam.py  __init__.py  rds.py  s3.py  urls.py  waf.py  waiters.py

Of particular note, module_utils/core.py contains AnsibleAWSModule(), which is the required base class for all new modules. This provides some nice helpers like client() setup, the fail_json_aws() method (which will convert boto exceptions into nice error messages and handle error message type conversion for Python2 and Python3), and the class will handle boto library import checks for you.

AWS APIs tend to use and return Camel case values, while Ansible prefers Snake case. Helpers for converting between these are available in amazon.aws.module_utils.ec2, including ansible_dict_to_boto3_filter_list(), boto3_tag_list_to_ansible_dict(), and a number of tag and policy related functions.

Integration Tests

The AWS Collections primarily rely on functional integration tests to exercise module and plugin code by creating, modifying, and deleting resources on AWS. Test suites are located in the Collection repository that contains the module being tested.  The preferred style for tests looks like a role named for the module with a test suite per module. Sometimes it makes sense to combine the tests for more than one module into a single test suite, such as when a tightly coupled service dependency exists. These will generally be named for the primary module or service being tested.  For example, *_info modules may share a test with the service they provide information for. An aliases file in the root of the test directory controls various settings, including which tests are aliased to that test role.

tests/integration/targets/ecs_cluster$ ls
aliases  defaults  files  meta  tasks

tests/integration/targets/ecs_cluster$ cat aliases
cloud/aws
ecs_service_info
ecs_task
ecs_taskdefinition
ecs_taskdefinition_info
unsupported

In this case, several modules are combined into one test, because an ecs_cluster must be created before an ecs_taskdefinition can be created. There is a strong dependency here.

You may also notice that ECS is not currently supported in the Ansible CI environment.  There are a few reasons that could be, but the most common one is that we don't allow unrestricted resource usage in the CI AWS account. We have to create IAM policies that allow the minimum possible access for the test coverage. Other reasons for tests being unsupported might be because the module needs resources that we don't have available in CI, such as a federated identity provider. See the CI Policies and Terminator Lambda section below for more information.

Another test suite status you might see is unstable. That means the test has been observed to have a high rate of transient failures. Common reasons include needing to wait for the resource to reach a given state before proceeding or tests taking too long to run and exceeding the test timer. These may require refactoring of module code or tests to be more stable and reliable. Unstable tests only get run when the module they cover is modified and may be retried if they fail. If you find you enjoy testing, this is a great area to get started in!

Integration tests should generally check the following tasks or functions both with and without check mode:

  • Resource creation
  • Resource creation again (idempotency)
  • Resource modification
  • Resource modification again (idempotency)
  • Resource deletion
  • Resource deletion (of a non-existent resource)
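As a rough sketch of the creation and idempotency checks (the module and parameters are illustrative; resource_prefix is the variable ansible-test provides to namespace test resources):

- name: Create a security group (check mode)
  ec2_group:
    name: "{{ resource_prefix }}-sg"
    description: integration test security group
    state: present
  check_mode: true
  register: create_check

- name: Create the security group
  ec2_group:
    name: "{{ resource_prefix }}-sg"
    description: integration test security group
    state: present
  register: create

- name: Create the security group again (idempotency)
  ec2_group:
    name: "{{ resource_prefix }}-sg"
    description: integration test security group
    state: present
  register: create_idem

- name: Assert the expected changed results
  assert:
    that:
      - create_check is changed
      - create is changed
      - create_idem is not changed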

Use module_defaults for credentials when creating your integration test task file, rather than duplicating these parameters for every task. Values specified in module_defaults can be overridden per task if you need to test how the module handles bad credentials, missing region parameters, etc.

- name: set connection information for aws modules and run tasks
  module_defaults:
    group/aws:
      aws_access_key: "{{ aws_access_key }}"
      aws_secret_key: "{{ aws_secret_key }}"
      security_token: "{{ security_token | default(omit) }}"
      region: "{{ aws_region }}"

  block:

  - name: Test Handling of Bad Region
    ec2_instance:
      region: "us-nonexistent-7"
      ... params ...

  - name: Do Something
    ec2_instance:
      ... params ...

  - name: Do Something Else
    ec2_instance:
      ... params ...

Integration tests should make use of blocks with test tasks in one or more blocks and a final always: block that deletes all resources created by the tests.
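A rough sketch of that structure (module names and parameters are illustrative):

- name: ec2_group integration tests
  block:
    - name: Create the test security group
      ec2_group:
        name: "{{ resource_prefix }}-sg"
        description: integration test security group
        state: present

    # ... more test tasks ...

  always:
    - name: Clean up the test security group
      ec2_group:
        name: "{{ resource_prefix }}-sg"
        state: absent
      ignore_errors: true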

Unit Tests

While most modules are tested with integration tests, sometimes this is just not feasible.  An example is when testing AWS Direct Connect. The community.aws.aws_direct_connect* modules can be used to establish a network transit link between AWS and a private data center. This is not a task that can be done simply or repeatedly in a CI test system. For modules that cannot practically be integration tested, we do require unit tests for inclusion into any AWS Ansible Collection.  The placebo Python library provides a nice mechanism for recording and mocking boto3 API responses and is preferred to writing and maintaining AWS fixtures when possible.

CI Policies and Terminator Lambda

The Ansible AWS CI environment has safeguards and specific tooling to ensure resources are properly restricted, and that test resources are cleaned up in a reasonable amount of time. These tools live in the aws-terminator repository. There are three main sections of this repository to be aware of:

  1. The aws/policy/ directory
  2. The aws/terminator/ directory
  3. The hacking/ directory

The aws/policy/ directory contains IAM policies used by the Ansible CI service. We generally attempt to define the minimum AWS IAM Actions and Resources necessary to execute comprehensive integration test coverage. For example, rather than enabling ec2:*, we have multiple statement IDs (Sids) that specify different actions for different resource specifications.

We permit ec2:DescribeImages fairly broadly in the region our CI runs in:

    Resource:
      - "*"
    Condition:
      StringEquals:
        ec2:Region:
          - '{{ aws_region }}'

But we are more restrictive about which instance types can be started or run via CI:

  - Sid: AllowEc2RunInstancesInstanceType
    Effect: Allow
    Action:
      - ec2:RunInstances
      - ec2:StartInstances
    Resource:
      - arn:aws:ec2:us-east-1:{{ aws_account_id }}:instance/*
    Condition:
      StringEquals:
        ec2:InstanceType:
          - t2.nano
          - t2.micro
          - t3.nano
          - t3.micro
          - m1.large  # lowest cost instance type with EBS optimization supported

The aws/terminator/ directory contains the terminator application, which we deploy to AWS Lambda.  This acts as a cleanup service in the event that any CI job fails to remove resources that it creates.  Information about writing a new terminator class can be found in the terminator's README.

The hacking/ directory contains a playbook and two sets of policies that are intended for contributors to use with their own AWS accounts.  The aws_config/setup-iam.yml playbook creates IAM policies and associates them with two iam_groups. These groups can then be associated with your own appropriate user:

  • ansible-integration-ci: This group mirrors the permissions used by the AWS collections CI
  • ansible-integration-unsupported: The group assigns additional permissions on top of the 'CI' permissions required to run the 'unsupported' tests

Usage information to deploy these groups and policies to your AWS user is documented in the setup-iam.yml playbook.

Testing Locally

You've now written your code and your test cases, but you'd like to run your tests locally before pushing to GitHub and sending the change through CI.  Great!  You'll need credentials for an AWS account and a few setup steps. 

Ansible includes a CLI utility to run integration tests.  You can either set up a boto profile in your environment or use a credentials config file to authenticate to AWS.  A sample config file is provided by the ansible-test application included with Ansible.  Copy this file to tests/integration/cloud-config-aws.ini in your local checkout of the collection repository and fill in your AWS account details for @ACCESS_KEY, @SECRET_KEY, @SECURITY_TOKEN, @REGION.

NOTE: Both AWS Collection repositories have a tests/.gitignore file that will ignore this file path when checking in code, but you should always be vigilant when storing AWS credentials to disk or in a repository directory.

If you already have Ansible installed  on your local machine, ansible-test should already be in your PATH.  If not, you can run it from a local checkout of the Ansible project.

git clone https://github.com/ansible/ansible.git
cd ansible/
source hacking/env-setup

You will also need to ensure that any Collection dependencies are installed and accessible in your COLLECTIONS_PATHS.  Collection dependencies are listed in the tests/requirements.yml file in the Collection and can be installed with the ansible-galaxy collection install command.
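For example, to install them alongside a local collections checkout (using the collections path shown below):

ansible-galaxy collection install -r tests/requirements.yml -p ~/src/collections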

You can now run integration tests from the Collection repository:

cd ~/src/collections/ansible_collections/amazon/aws
ansible-test integration ec2_group

Tests that are unstable or unsupported will not be executed by default.  To run these types of tests, there are additional flags you can pass to ansible-test:

ansible-test integration ec2_group --allow-unstable  --allow-unsupported

If you prefer to run the tests in a container, there is a default test image that ansible-test can automatically retrieve and run that contains the necessary Python libraries for AWS tests.  This can be pulled and run by providing the --docker flag.  (Docker must already be installed and configured on your local system.)

ansible-test integration ec2_group --allow-unstable  --allow-unsupported --docker

The test container image ships with all Ansible-supported versions of Python.  To specify a particular Python version, such as 3.7, test with:

ansible-test integration ec2_group --allow-unstable  --allow-unsupported --docker --python 3.7

NOTE: Integration tests will create real resources in the specified AWS account subject to AWS pricing for the resource and region.  Existing tests should make every effort to remove resources at the end of the test run, but make sure to check that all created resources are successfully deleted after executing a test suite to prevent billing surprises.  This is especially recommended when developing new test suites or adding new resources not already covered by the test's always cleanup block.  

NOTE: Be cautious when working with IAM, security groups, and other access controls that have the potential to expose AWS account access or resources.

Submitting a Change

When your change is ready to submit, open a pull request (PR) in the GitHub repository for the appropriate AWS Collection.  Shippable CI will automatically run tests and report the results back to the PR.  If your change is for a new module or tests new AWS resources or actions, you may see permissions failures in the test.  In that case, you will also need to open a PR in the mattclay/aws-terminator repository to add IAM permissions and possibly a Terminator class to support testing the new functionality, as described in the CI Policies and Terminator Lambda section of this post.  Members of the Ansible AWS community will triage and review your contribution, and provide any feedback they have on the submission.  

Next Steps and Resources

Contributing to open source projects can be daunting at first, but hopefully this blog post provides a good technical resource on how to contribute to the AWS Ansible Collections. If you need assistance with your contribution along the way, you can find the Ansible AWS community on Freenode IRC in channel #ansible-aws.

Congratulations and welcome, you are now a contributor to the Ansible project!




Bullhorn #11


The Bullhorn

A Newsletter for the Ansible Developer Community
Issue #11, 2020-09-30


Welcome to The Bullhorn, our newsletter for the Ansible developer community. If you have any questions or content you’d like to share, please reach out to us at the-bullhorn@redhat.com, or comment on this GitHub issue.
 

KEY DATES

 

ANSIBLE 2.10.0 NOW GENERALLY AVAILABLE

The Ansible Community team announced the general availability of Ansible 2.10.0 on September 22nd! This new Ansible package should be a drop-in replacement for Ansible 2.9; the roles and playbooks that you currently use should work out of the box with ansible-2.10.0. For more information on what’s new, how to get it, plus caveats and known bugs, read Toshio Kuratomi’s announcement to the ansible-devel mailing list.
 

ANSIBLE-BASE 2.10.2 RC1 NOW AVAILABLE

The Ansible Base team announced a release candidate of ansible-base 2.10.2 on September 28th. This ansible-base package consists of only the Ansible execution engine, related tools (e.g. ansible-galaxy, ansible-test), and a very small set of built-in plugins, and is also bundled with the larger Ansible distribution. For more information on how to download, test, and report issues, read Rick Elrod’s announcement to the ansible-devel mailing list.
 

ANSIBLE 2.9.14 RC1 AND 2.8.16 RC1 AVAILABLE

The Ansible Core team announced the availability of Ansible 2.9.14 rc1 and Ansible 2.8.16 rc1 on September 28th, both of which are maintenance releases. Follow this link for Rick Elrod’s email to the ansible-devel mailing list, to obtain details on what’s new, installation instructions, and links to the full changelogs.
 

CHANGES IMPACTING COLLECTION CONTRIBUTORS AND MAINTAINERS

Follow this GitHub issue to track changes that Collection maintainers and contributors should be aware of.
 

ANSIBLE CONTRIBUTOR SUMMIT - PART OF ANSIBLEFEST 2020

Due to overwhelming interest in the AnsibleFest 2020 edition of the Ansible Contributor Summit, we are planning two days of programming for you, depending on where you are in your contribution journey. These will be held on October 12 and 15, 2020.

Take a look at the wiki page to find out how the two days will be structured, and register via the corresponding links. We look forward to your participation at the Contributor Summit!
 

NEW/UPDATED COMMUNITY COLLECTIONS

Foreman collection 1.3.0 has been released. Check out what's new, how to obtain it, and more in this blog post by Evgeni Golov.
 

ANSIBLE COMMUNITY STATS UPDATE

Following productive discussions in recent community meetings (see the minutes around #539 (comment)), Greg Sutcliffe has put together an alpha version of a Collections Dashboard. Eventually it will have summary stats across our community, but for now you can query a given collection and get some useful data.

Greg will be pre-recording a talk on this for the upcoming Contributor Summit, but for now you can play with it here and the source code (along with the list of collections to index) is here. Please do raise issues & feature requests!


 

CONTENT FROM THE ANSIBLE COMMUNITY

 

NETDEVOPS SURVEY 2020 NOW OPEN

The NetDevOps Survey 2020 is now live, and looking for respondents! The goal of this survey is to collect information to understand how network operators and engineers are using automation to operate their networks today. The survey has been designed to be vendor neutral, collaborative, and community-focused. All network professionals are welcome to participate; please complete the survey here by October 23.
 

OPEN SOURCE AUTOMATION DAYS

We will be a part of Open Source Automation Days, which will be an online event from October 19-21, 2020. Check out the speakers and topics, as well as workshops, and get tickets here if you’re interested.
 

FEEDBACK

Have any questions you’d like to ask, or issues you’d like to see covered? Please send us an email at the-bullhorn@redhat.com.