Getting Started with Ansible.utils Collection for Playbook Creators

Part 1: The Ansible.utils Collection for Playbook Creators

The Ansible ansible.utils collection includes a variety of plugins that aid in the management, manipulation, and visibility of data for the Ansible playbook developer. The most common use case for this collection is working with the complex data structures present in an Ansible playbook or inventory, or returned from modules. See each plugin's documentation page for detailed examples of how these utilities can be used in tasks. In this two-part blog series, we will overview the collection in part one and walk through a detailed example use case in part two.

Plugins inside ansible.utils

Plugins are pieces of code that augment Ansible core functionality. This code executes on the control node and provides options and extensions for the core features of Red Hat Ansible Automation Platform. The ansible.utils plugin collection includes:

  • Filter plugins
  • Lookup plugins
  • Test plugins
  • Modules

Filter plugins

Filter plugins manipulate data. With the right filter you can extract a particular value, transform data types and formats, perform mathematical calculations, split and concatenate strings, insert dates and times, and do much more. Ansible Automation Platform uses the standard filters shipped with Jinja2 and adds some specialized filter plugins. You can create custom Ansible filters as plugins. Please refer to the docs for more information.

The ansible.utils filter plugins include the following:

Lookup plugins

Lookup plugins are an Ansible-specific extension to the Jinja2 templating language. You can use lookup plugins to access data from outside sources (files, databases, key/value stores, APIs, and other services) within your playbooks. Like all templating, lookups execute and are evaluated on the Ansible Automation Platform control machine. Ansible makes the data returned by a lookup plugin available using the standard templating system. You can use lookup plugins to load variables or templates with information from external sources. You can also create custom lookup plugins. Please refer to the docs for more information.

The ansible.utils lookup plugins include:

Note: In ansible.utils some plugins were implemented as both filter and lookup plugins to give the playbook developer flexibility depending on their use case.
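For example, here is a minimal sketch (with illustrative sample data) of the same utility, index_of, used in both its filter and lookup forms:

  - hosts: localhost
    gather_facts: false
    vars:
      vlans: [10, 20, 30]
    tasks:
      - name: Find the index of VLAN 20 using the filter form
        ansible.builtin.debug:
          msg: "{{ vlans | ansible.utils.index_of('eq', 20) }}"

      - name: Find the same index using the lookup form
        ansible.builtin.debug:
          msg: "{{ lookup('ansible.utils.index_of', vlans, 'eq', 20) }}"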

Test plugins

Test plugins evaluate template expressions and return a value of True or False. With test plugins you can create conditionals to implement the logic of your tasks, blocks, plays, playbooks, and roles. Ansible Automation Platform uses the standard tests shipped as part of Jinja, and adds some specialized test plugins. Please refer to the docs for more information.
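For example, here is a minimal sketch (with an illustrative address value) of an ansible.utils test plugin used in a conditional:

  - hosts: localhost
    gather_facts: false
    vars:
      address: "10.1.1.1"
    tasks:
      - name: Run only when the value is a valid IPv4 address
        ansible.builtin.debug:
          msg: "{{ address }} is a valid IPv4 address"
        when: address is ansible.utils.ipv4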

The ansible.utils test plugins include:

Modules

Modules are the main building blocks of Ansible playbooks. Although we do not generally speak of "module plugins", a module is a type of plugin. For a developer-focused description of the differences between modules and other plugins, see Modules and plugins: what is the difference?. Please refer to the docs for more information.

The ansible.utils modules include:

Accessing and using the ansible.utils Collection

To download the utils Collection, refer to Automation hub (fully supported, requires a Red Hat Ansible Automation Platform subscription) or Ansible Galaxy (upstream):

Ansible.utils is also available in the supported execution environment along with its required Python libraries. Please refer to the docs for more details about execution environments.

Different use cases of ansible.utils

The ansible.utils collection has a variety of plugins that support a range of use cases. The following are the most common:

  • Validating business logic before pushing configurations using the validate and test plugins
  • Auditing architectural disposition and layouts using test plugins
  • Managing complex data structures in Ansible playbooks using the get_path and to_paths plugins (see the sketch after this list)
  • Conducting minor checks related to network addresses using test plugins
  • Assessing operational state using the cli_parse and validate plugins
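
For example, here is a minimal sketch (with illustrative sample data) of the get_path and to_paths filters at work:

  - hosts: localhost
    gather_facts: false
    vars:
      interfaces:
        GigabitEthernet1:
          ipv4:
            - address: 192.168.1.1/24
    tasks:
      - name: Extract a nested value with get_path
        ansible.builtin.debug:
          msg: "{{ interfaces | ansible.utils.get_path('GigabitEthernet1.ipv4[0].address') }}"

      - name: Flatten the structure into dot-delimited paths with to_paths
        ansible.builtin.debug:
          msg: "{{ interfaces | ansible.utils.to_paths }}"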

Future scope

Here are some additional ansible.utils capabilities that are on the horizon:

  • Ipaddr filter plugin support:

    • The Ipaddr filter is designed to provide an interface to the netaddr Python package from within Ansible.
    • It can operate on strings or lists of items, test various data to check if they are valid IP addresses, and manipulate the input data to extract requested information.
    • ipaddr() works with both IPv4 and IPv6 addresses in various forms. 
    • There are also additional functions available to manipulate IP subnets and MAC addresses.
    • We are currently working on supporting the ipaddr filter as part of the ansible.utils collection (see the sketch after this list).
  • Support for more validation engines in the ansible.utils.validate plugin:

    • Currently the validate plugin supports only the ansible.utils.jsonschema validation engine, but there are plans to add more.
  • Support for additional filter plugins to manipulate input data:

    • Recursive plugins: remove_keys, replace_keys, keep_keys
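
To give a feel for what the ipaddr support will look like, here is a minimal sketch of ipaddr-style usage. It uses the existing ansible.netcommon.ipaddr filter, which the planned ansible.utils support is expected to mirror, and requires the netaddr Python package; the addresses are illustrative:

  - hosts: localhost
    gather_facts: false
    tasks:
      - name: Extract the host address from a CIDR value
        ansible.builtin.debug:
          msg: "{{ '192.168.1.10/24' | ansible.netcommon.ipaddr('address') }}"

      - name: Keep only the valid IP addresses from a list
        ansible.builtin.debug:
          msg: "{{ ['192.168.1.10/24', 'not-an-ip'] | ansible.netcommon.ipaddr }}"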

Contributing to this collection

This collection is intended for plugins that are not platform or discipline specific. Simple plugin examples should be generic in nature. More complex examples can include real world platform modules to demonstrate the utility of the plugin in a playbook.

We welcome community contributions to this collection. If you find problems, please open an issue or create a PR against the ansible.utils collection repository. See Contributing to Ansible-maintained collections for complete details. See the Ansible Community Guide for details on contributing to Ansible.

Takeaways and next steps

  • ansible.utils plugins make the playbook writing experience simple and smooth
  • ansible.utils plugins are very fast, as they execute locally on the control node
  • The plugins are easy to understand, code, use, and integrate with other modules
  • Because it is a plugin ecosystem, it is easy to add new plugins to ansible.utils



Getting Started with Ansible.utils Collection for Playbook Creators

Part 2: Use case of operational state assessment using ansible.utils collection

In ansible.utils, there are a variety of plugins which we can use for operational state assessment of network devices. I gave an overview of the ansible.utils collection in part one of this two-part blog series. If you have not reviewed part one, I recommend you do so, since I will build on that information here in part two. We will see how the ansible.utils collection can be useful in operational state assessment as an example use case.

In general, a state assessment workflow has the following steps:

  • Retrieve (Source of Truth)

    • Collect the current operational state from the remote host. 
    • Convert it into normalized structured data. Structured data can be in JSON, YAML, or any other format.
    • Store it as an inventory variable.
  • Validate

    • Define the desired state criteria in a standards-based format, for example as a JSON schema.
    • Retrieve operational state at runtime.
    • Validate the current state data against the pre-defined criteria to identify if there is any deviation.
  • Remediate

    • Implement required configuration changes to correct drift. 
    • Report on the change as an audit trail.

How can ansible.utils collection help in this workflow?

The ansible.utils collection makes it easier to retrieve and parse the data so it can then be further assessed from a structured format.

Retrieving operational state in structured format using ansible.utils.cli_parse

This module is available as part of the ansible.utils collection. It has a variety of parsers which help to parse CLI or text output, and it can work with many types of remote hosts: network devices, Linux, or Windows. It supports multiple parsing engines and is extensible, which means you can create your own parsing engine. It takes a single task to run a command, parse the output, and set facts.

Before the utils collection was available, we would need to write three different tasks to run the command, parse the output from the command, and set facts. Thanks to cli_parse, we only need one task, which returns structured data from the "show command" output.

Let's see an example of ansible.utils.cli_parse task:

  tasks:
    - name: Run a command and parse results
      ansible.utils.cli_parse:
        command: show interfaces
        parser:
          name: ansible.utils.xxxx
        set_fact: interfaces

In this task we need to provide the command which will execute on the device and the parser, which is a sub-plugin of the cli_parse plugin. The set_fact option sets the converted structure in the interfaces key, which we can then refer to elsewhere in our playbook.

The above task will perform the following operations:

  • Run the command on the device 
  • Parse using the 'xxxx' engine 
  • Use a default template folder 
  • Set parsed data as fact
  • Return command output as stdout

Currently the ansible.utils.cli_parse plugin has the following parsers:

  • ansible.utils.textfsm: Python module for parsing semi-formatted text
  • ansible.utils.ttp: Template-based parsing, low regex use, Jinja-like DSL
  • ansible.netcommon.native: Internal Jinja, regex, and YAML, with no additional third-party libraries required
  • ansible.netcommon.ntc_templates: Predefined textfsm templates packaged as a Python library
  • ansible.netcommon.pyats: Cisco Test Automation & Validation Solution (11 OSs/2500 parsers)
  • ansible.utils.from_xml: Convert XML to JSON using xmltodict

All of the generic parsers are part of the ansible.utils collection and all network-related parsers are part of the ansible.netcommon collection.
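As a concrete instance of the generic task above, here is a hedged sketch using the pyats parser; it assumes the pyats and genie Python libraries are installed and that ansible_network_os is set to a pyATS-supported platform such as cisco.ios.ios:

  - name: Run a command and parse the results with pyATS
    ansible.utils.cli_parse:
      command: show interfaces
      parser:
        name: ansible.netcommon.pyats
      set_fact: interfaces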

Validating structured data and reporting errors using ansible.utils.validate

The ansible.utils.validate module is a new module available as part of the ansible.utils collection which works with all platforms. It has extensible engine support and currently works with the jsonschema validation engine, which uses the jsonschema Python library underneath. It takes a single task, which reads structured data and validates it against the data model provided in the task. The task reports success or an error depending on whether the data is valid per the schema.

Let's see an example of ansible.utils.validate task:

tasks:
  - name: Validate structured data
    ansible.utils.validate:
      data: "{{ input_data }}"
      criteria:
        - "{{ lookup('file', './criteria.json') | from_json }}"
      engine: ansible.utils.xxxx

In this task we need to provide the data, which is expected to be structured data, and the criteria, which is a list of criteria. Since we are currently using jsonschema, the criteria are in JSON format. The engine is a sub-plugin of the top-level validate plugin; here it is ansible.utils.jsonschema. Again, you can write your own engine, as it is extensible.

The above task will perform the following operations:

  • Read the input JSON data and the criteria for the data (the schema)
  • Validate using the 'xxxx' engine
  • Return a list of errors if the data does not conform to the schema criteria

Currently the ansible.utils.validate plugin supports the following validation engine:

  • ansible.utils.jsonschema: Python module to validate json data against a schema. 
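For completeness, here is a self-contained sketch using the jsonschema engine; the sample data and criteria are illustrative, and the jsonschema Python library must be installed on the control node:

  - hosts: localhost
    gather_facts: false
    vars:
      input_data:
        hostname: csrv-1
      criteria:
        type: object
        properties:
          hostname:
            type: string
        required:
          - hostname
    tasks:
      - name: Validate structured data against the criteria
        ansible.utils.validate:
          data: "{{ input_data }}"
          criteria:
            - "{{ criteria }}"
          engine: ansible.utils.jsonschema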

Now let's use the above ansible.utils plugins in an actual scenario. In this example we will see how to use ansible.utils to fetch BGP operational state data, validate it against a predefined JSON schema, and remediate configuration drift when it is detected.

For this scenario, consider that we have three CSRv routers running Cisco IOS-XE. All of them are BGP neighbors of each other, and each advertises three networks.

[Image: topology of the three CSRv routers as BGP neighbors]

Let's check the running configuration and operational state data related to BGP.

[Image: show running-config | section bgp output on CSRv1]

Let's check the CSRv1 node by executing the command show running-config | section bgp. As you can see, it has two neighbors configured, both with the same remote AS, so they are iBGP neighbors. The neighbors are activated, and soft reconfiguration inbound is enabled on them. This node also advertises three networks.

Now let's execute the command show bgp summary.

[Image: show bgp summary output on CSRv1]

The above screenshot shows that the neighbor relationships with the other two nodes are established and that the current node is receiving three prefixes from each of the other two nodes.

Now let's validate this using the routing table entries by executing the command show ip route bgp.

[Image: show ip route bgp output on CSRv1]

The above screenshot shows route table entries from node 1. As you can see, this node is aware of six routes, with the next hop being the respective BGP neighbors advertising them.

Similarly we have configured CSRv2 and CSRv3.

Now let's check the playbooks which we are using in this example with detailed steps.

Check out this code if you want to learn more details.

Playbooks are divided into two parts:

  • Gather facts and store them in a YAML file as the Source of Truth (SOT)
  • Validate structured data against the SOT and rectify drift if it is detected

Gather facts and store them in a YAML file as the SOT

Let's check the content of facts.yaml.

[Image: contents of playbooks/facts.yaml]

In the first task we gather bgp_global and bgp_address_family facts from the target devices. In the second task we store them in flat files under the host_vars directory. These files will act as the SOT (source of truth) for the BGP configuration on the target devices.
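In case the screenshot is hard to read, here is a minimal sketch of what such a facts playbook could look like. The play target, file layout, and variable handling are assumptions; cisco.ios.ios_facts and the bgp_global and bgp_address_family network resources are the pieces the blog describes:

  # playbooks/facts.yaml (sketch)
  - hosts: routers
    gather_facts: false
    tasks:
      - name: Gather BGP global and address-family facts
        cisco.ios.ios_facts:
          gather_network_resources:
            - bgp_global
            - bgp_address_family

      - name: Store the gathered facts under host_vars as the SOT
        ansible.builtin.copy:
          content: "{{ ansible_network_resources | to_nice_yaml }}"
          dest: "host_vars/{{ inventory_hostname }}.yaml"
        delegate_to: localhost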

Let's run the above playbook with the command ansible-navigator run playbooks/facts.yaml. See the ansible-navigator docs for more details.

How does this data look after execution of the playbook? Let's check playbooks/host_vars/csrv-1.yaml.

[Image: contents of playbooks/host_vars/csrv-1.yaml]

Validate structured data against SOT and rectify drift if it is detected

In this step we will inspect BGP operational state data for all the nodes in our topology and then determine if they are running as expected or if there is any configuration drift.

Now let's see the playbooks/verify.yaml playbook which will validate and rectify drift if it is present.

[Image: contents of playbooks/verify.yaml]

In the first task we use the ansible.utils.cli_parse plugin to execute the show ip route bgp command on the target device and then pass the output of this command to the pyats parser.

The pyats parser then converts the output to structured data which is stored in the result variable.

In the next task we pass the value of the result variable, along with a predefined schema, to the ansible.utils.validate plugin. The plugin then compares the data against the schema (the criteria) using the jsonschema engine. Each node has a schema file that defines the prefix ranges it receives from the other two neighbors.
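Here is a hedged sketch of these first two tasks; the result fact name and the per-host schema path are illustrative:

  - name: Parse the BGP routes from the device
    ansible.utils.cli_parse:
      command: show ip route bgp
      parser:
        name: ansible.netcommon.pyats
      set_fact: result

  - name: Validate the parsed routes against the per-host schema
    ansible.utils.validate:
      data: "{{ result }}"
      criteria:
        - "{{ lookup('file', 'criteria/' ~ inventory_hostname ~ '.json') | from_json }}"
      engine: ansible.utils.jsonschema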

As we saw from the topology and the CLI, these nodes are supposed to receive six routes in total (three from each of the neighbor nodes). These prefixes are represented as patterns in the schema, along with other properties like source_protocol, route_preferences, metric, and active state.

The schema also sets additionalProperties to false and defines the minimum and maximum number of properties as six. This ensures that validating against the schema will always tell us whether the devices are receiving exactly the set of routes they are supposed to receive.

The following is an example of a schema file for node CSRv1.

[Image: JSON schema file for node CSRv1]
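In case the screenshot is hard to read, the following is a minimal sketch of the shape of such a schema; the prefixes and matched properties are illustrative, not the actual values from this example:

  {
    "$schema": "http://json-schema.org/draft-07/schema#",
    "type": "object",
    "patternProperties": {
      "^10\\.(2|3)\\.(1|2|3)\\.0/24$": {
        "type": "object",
        "properties": {
          "source_protocol": { "const": "bgp" },
          "active": { "const": true }
        }
      }
    },
    "additionalProperties": false,
    "minProperties": 6,
    "maxProperties": 6
  }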

Let's check the remaining tasks of verify.yaml.

If the schema validation in the second task fails, the playbook enters the rescue section. This is where we use the BGP resource modules to enforce, on the target devices, the SOT which we previously saved in YAML files. The end result is remediation of any configuration drift which caused the operational state validation to fail.
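A hedged sketch of this block/rescue structure, repeating the validation task from the previous sketch; the bgp_global and bgp_address_family variables are assumed to come from the host_vars files written by facts.yaml:

  - name: Validate operational state, remediating on failure
    block:
      - name: Validate the parsed routes against the per-host schema
        ansible.utils.validate:
          data: "{{ result }}"
          criteria:
            - "{{ lookup('file', 'criteria/' ~ inventory_hostname ~ '.json') | from_json }}"
          engine: ansible.utils.jsonschema
    rescue:
      - name: Re-apply the BGP global configuration from the SOT
        cisco.ios.ios_bgp_global:
          config: "{{ bgp_global }}"
          state: merged

      - name: Re-apply the BGP address-family configuration from the SOT
        cisco.ios.ios_bgp_address_family:
          config: "{{ bgp_address_family }}"
          state: merged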

Since we have not made changes to any of the target devices, when we execute the verify.yaml playbook with the ansible-navigator run playbooks/verify.yaml command, we see that they are working as expected and the schema validation passes. See the ansible-navigator docs for more details.

[Image: playbook run output showing the schema validation passing]

Let's manually introduce erroneous changes on all devices by removing some of the routes they are advertising. Then we will run the same playbook again and see how it behaves.

[Image: removing advertised routes on the devices]

Now we have made changes in all the routers. Let's run the playbook again with ansible-navigator run playbooks/verify.yaml.

This time the schema validation fails. The remediation tasks are executed, and they add back the missing prefixes on all three nodes. Let's take a detailed look into this.

[Image: playbook run output showing failed validation followed by remediation]

The first task, as usual, fetches the output of the show command and converts it to structured data.

The second task fails because the schema validation fails with multiple errors. The data doesn't match the constraints defined in the schema. This causes the remediation tasks to execute one by one. 

Among the remediation tasks, the first one, which configures the BGP global settings, shows no changes because we did not make any changes to the BGP global attributes.

The second one, which configures the BGP address family, detects the drift and reconfigures the missing prefixes.

As we can see in the commands sent to all the target devices, the playbook adds back the routes that were deleted.

If we run this playbook once again, it will be idempotent and report no changes, thereby indicating that everything is working as expected.

In a production environment, this playbook can be triggered by external events or scheduled as a periodic job in Red Hat Ansible Automation Platform's automation controller to ensure compliance with the expected operational state.

Takeaways & Next Steps

As shown above, the ansible.utils collection: 

  • Makes operational state assessment easier, complementing Ansible Automation Platform's configuration management capabilities.
  • Acts as a single entry point for gathering the operational state of the entire inventory.
  • Provides a standardized way to define and validate the operational criteria as a structured data model.
  • Allows the steps for operational state assessment to be added as a workflow template in automation controller, which can trigger other events, like running a playbook for automated remediation or reporting to an external tool.





