Pump up your ITIL with Automation

In the world of automation and agility, it seems that the Information Technology Infrastructure Library (ITIL) no longer has a role to play, having been written off as an "old school" framework. Could this be the end of a methodology that has served numerous IT organizations for so long as a guideline and blueprint for their processes?

This series of articles shows how automation, and more specifically Red Hat Ansible Automation Platform and the principles of Infrastructure as Code (IaC), can help bring some of the ITIL topics into the agile and automated bliss.

So let's step into the topic of configuration management and what everybody still knows as the CMDB (Configuration Management Database), even though ITIL long ago renamed it the CMS (Configuration Management System). The name change was meant to highlight that the function can be fulfilled by a combination of multiple databases and tools, but that distinction won't matter here, so we'll stick to the infamous CMDB term.

Do you love your CMDB? Probably not, in my experience with numerous customers. The data is generally outdated, wrong, and considered useless, which makes its maintenance a chore. It is therefore maintained with as little effort as possible, in a careless manner, making it even less up to date, and down the spiral you go.

To avoid the crash, we need first to understand that a CMDB and the related Configuration management have two main purposes:

  1. Document the desired state of your environment - this is too often done manually, with admins required to maintain the configuration twice: once in the "real world" and once in the CMDB. To reduce that manual effort, companies often populate the CMDB from a discovery of the environment, which leads to a database that documents the current state, with no clarity as to whether it corresponds to the desired state.
  2. Support the change management process by allowing an analysis of the environment, e.g. to validate that there is enough free disk space on each server before installing the new bloated software. Given the noted lack of trust in the data quality, this analysis is generally ignored as part of the process.

Looking at the above shortcomings, we first need to structure our database more clearly, as it contains multiple kinds of data:

  1. Desired state data - this is information that comes from a service or change request and represents what one needs to have in one's environment.
  2. Actual state data - this is information discovered from the environment and representing its current state.

As data can be only desired, only actual, or both, we have three categories, which we'll label A to C for the sake of simplicity:

database category diagram

Because admins don't want to maintain the desired state twice, use the desired state in your CMDB (types A and B) as an inventory source for Ansible Automation Platform and configure your environment from it. Admins know that the better the data in the CMDB, the better the result in the real world, and the less work for them. That should be motivation enough to quickly improve the data quality of your CMDB.
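
For example, if your CMDB happens to be ServiceNow, the servicenow.itsm.now inventory plugin can expose the desired state directly as an Ansible inventory. A minimal sketch (host, credentials, and table are placeholders to adapt):

# inventory.now.yml - dynamic inventory sourced from the CMDB
plugin: servicenow.itsm.now
instance:
  host: https://CHANGEME.service-now.com
  username: changeme
  password: changeme
table: cmdb_ci_server   # pull server CIs as managed hosts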

Because the CMDB doesn't mix up desired state and actual state for data of category A, you can detect discrepancies, decide how to fix them, and use Ansible again for automated remediation. This should help you quickly align reality with the desired state, and have the right data to make decisions.
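
As a minimal sketch of such a remediation play (the cmdb_desired_services variable is hypothetical and would be fed from the CMDB inventory source):

- name: Align reality with the desired state from the CMDB
  hosts: all
  tasks:
    - name: Ensure the services recorded as desired are running and enabled
      ansible.builtin.service:
        name: "{{ item }}"
        state: started
        enabled: true
      loop: "{{ cmdb_desired_services | default([]) }}"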

The data of type C isn't of much use for automation; it's meant for decision making in your change management process. For example, you could decide to skip a patch cycle if the disks are too full. That said, don't confuse this aspect with monitoring: detecting a disk-full situation and correcting it quickly belongs to incident management, not to configuration management.

Once you've reached this first stage, you can go to the next level and use Ansible Automation Platform to automatically populate the desired state in your CMDB.

database population

Let's assume you have a service portal where customers can order new services or modify and decommission them, using a service catalogue and a dialog that drives them through the choices they need to make. Using the input variables gathered through the dialog, the service portal can, through the automation controller's API, trigger a workflow to fulfil the service. One of the first steps of the workflow is then to enter those input variables as desired state (types A and B) into the CMDB, as sketched below. This has the advantage that, should the workflow job fail, you still have the desired state documented and can trigger the action again once the root cause of the failure has been fixed.
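
Such a first workflow step could be a simple uri task posting the survey variables to the CMDB's REST API. A sketch, assuming a hypothetical CMDB endpoint and payload:

- name: Record the desired state in the CMDB before fulfilling the request
  ansible.builtin.uri:
    url: https://cmdb.example.com/api/desired_state   # hypothetical endpoint
    method: POST
    body_format: json
    body:
      service: "{{ service_name }}"        # input variables gathered by the portal dialog
      size: "{{ requested_size }}"
    status_code: 201
  delegate_to: localhost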

It would be nice to have commit, branch, and tag functions like in Git to roll back such changes easily. But perhaps someone will invent a CMDB with such functionality based on this article. In the meantime, connect Ansible Automation Platform to your CMDB, and add value and quality to your CMDB with automation.

Learn more about using Ansible Automation Platform for configuration management

Take a video tour

This eight-minute overview video highlights the components and features found in the latest version of Ansible Automation Platform, and how they come together to deliver a comprehensive enterprise automation experience.







Automation content navigator releases with Ansible Automation Platform 2.2

What is it?

Automation content navigator was released alongside Red Hat Ansible Automation Platform 2.0 and changed the way content creators build and test Ansible automation. Navigator 1.0 drew together multiple Ansible command line tools like ansible-playbook, ansible-doc, and ansible-config, and it continues to accrue seriously useful features that deliver greater flexibility to automation creators.

Coinciding with the release of Ansible Automation Platform 2.2, navigator 2.0 introduces improvements to existing functionality alongside additional features to aid in the development of automation content.

Within navigator 2.0, you will find:

  • Automation execution environment image build support 
  • Ability to interact in real-time with automation execution environments 
  • Settings subcommand to view active configuration of local environment 
  • Sample configuration file generation for new projects
  • Automatic mode selection (stdout vs. interactive) 
  • Technology preview lint support, UI improvements, Collections view support for Ansible built-ins, time zone support, color enhancements, and more!

Looking closer

Image builder support

Before the release of navigator 2.0, a separate command line application (ansible-builder) was needed to build execution environment images from human-readable YAML files. With this release, ansible-navigator installs ansible-builder and includes a new build command that passes arguments through to ansible-builder, allowing content creators to build images from a single familiar interface.

Why should I care?

All enhancements to ansible-builder can be leveraged from ansible-navigator. This functionality cements navigator's role in the content creator's workflow, supporting not only content creation and environment introspection but also execution environment builds, all from within navigator.

Things to try:

  • Add the arista.avd Collection to the supported execution environment:

==> ./builder/execution-environment.yml

---
version: 1
build_arg_defaults:
  EE_BASE_IMAGE: "registry.redhat.io/ansible-automation-platform-21/ee-supported-rhel8:latest"
dependencies:
  galaxy: requirements.yml
  system: ""
  python: ""

==> ./builder/requirements.yml

---
collections:
  - arista.avd

$ ansible-navigator builder build --workdir builder

Introducing the exec command

With a new subcommand, exec, automation creators now have the ability to open a shell in the default execution environment. This allows creators to further inspect the execution environment and leverage utilities installed within the execution environment without installing them on a local workstation.
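
Run without arguments, exec should drop you into an interactive shell inside the default execution environment (assuming a local container engine such as podman is available):

$ ansible-navigator exec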

For example, imagine you're creating some new workflows and you need to leverage an additional Collection from Ansible automation hub. Instead of installing the ansible-galaxy command-line tool on the local workstation, you can run a command within navigator to install the Collection in a directory alongside the new workflows. Because the current working directory is bind mounted to the running container, the installed Collection is placed on the local filesystem.

ansible-navigator exec -- ansible-galaxy collection install servicenow.itsm -p ./collections

After running the above command, a new directory called "collections" should exist in your current working directory (CWD). This directory is made available to the execution environment at runtime because the CWD is bind mounted to the running container. It also lets you always tell which Collections are installed within the execution environment image and which have been bind mounted into the container.

Why should I care?

Navigator lowers the barrier for creating new content! A creator now only needs to install ansible-navigator to begin creating new automation. Leveraging execution environments, the content creator doesn't even need to install ansible-core! Navigator pulls in a default execution environment that contains ansible-core and common Ansible command line utilities such as ansible-galaxy. The exec command allows these to be leveraged from within the default execution environment instead of relying on workstation configuration.

Things to try:

  • Encrypt a secret using a vault password file:
$ echo secret_vault_password > password_file
$ ansible-navigator exec -- ansible-vault encrypt_string --vault-password-file password_file 'secret'
!vault |
          $ANSIBLE_VAULT;1.1;AES256
          64323039613737313538666239363032396361613464393033343165663631653835356232373139

Encryption successful
  • Inspect the environment variables set within the default execution environment:
$ ansible-navigator exec -- env
ANSIBLE_CACHE_PLUGIN=jsonfile
DESCRIPTION=Red Hat Ansible Automation Platform Minimal Execution Environment
SSH_AUTH_SOCK=/run/user/1000/keyring/ssh

Navigator settings command

The settings subcommand surfaces the configuration of the local environment from within navigator. From the settings screen, creators can view default values and values changed by local configuration parameters. Leveraging this within an integrated development environment (IDE) such as VS Code is especially helpful with features like command+click to open a file path in the editor. For example, a creator can see that a local ansible.cfg or ansible-navigator.yml file is being sourced by navigator and open that file in the configured editor directly from the navigator settings screen.

Why should I care?

Ansible is flexible! System-wide configuration files can be sourced for multiple automation projects. It's very helpful for content creators to be able to view the default configuration, which configuration parameters have been defined in local configuration files, and which files are being sourced by the current project. All of this supports a streamlined, more predictable creator workflow.

Navigator sample settings

Imagine you are an automation content creator starting a new project. You know that this new project will:

  • use a newly built execution environment
  • require navigator to have reasonable configuration defaults

In addition, you know you want to customize navigator to use your preferred code editor.

Navigator sample settings allow creators to display a sample ansible-navigator.yml configuration file with all parameters commented out. This allows the creator to pick and choose which settings to adjust for the new project. Things like the default execution environment image name, the image pull policy, and which code editor to use when opening files from navigator are all configured from ansible-navigator.yml. Additionally, this sample settings file can be written to the local filesystem where, once edited for the new project, it can be sourced by navigator.

$ ansible-navigator settings --sample > my.yaml

Why should I care?

Multiple automation projects usually mean multiple execution environments that need to be defined as the default execution environment for the corresponding project. By allowing settings files to be created from navigator, creators do not need to rely on memory to define the parameters necessary to customize and deploy their projects.

Things to try:

  • Use the TUI to review the current settings:
$ ansible-navigator settings
  • Review the effective setting for ansible-navigator:
$ ansible-navigator settings --effective
  • Show the source for each of the current settings:
$ ansible-navigator settings --sources

Automatic mode selection

Navigator consists of a textual user interface (TUI) that operates in interactive mode by default. In interactive mode, creators run commands and navigate the interface by using a series of keystrokes. Navigator 1.0 supported standard out mode for some commands. This means that instead of opening up the full interactive user interface, creators could run commands and query information about the local environment without opening up the TUI. Standard out mode is helpful, for instance, in CI/CD pipelines where there is no need to run commands interactively.

With navigator 2.0, more commands are supported in standard out mode. For example, the collections subcommand can now run in both standard out mode and interactive mode. It's very useful for automation creators to see which Collections are available in the environment in order to figure out which modules can be leveraged in automated workflows.
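
For example, listing the available Collections without opening the TUI should be as simple as (exact output depends on the execution environment in use):

$ ansible-navigator collections --mode stdout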

Additionally, navigator now supports automatic mode selection for commands that are only offered in a single mode. Previously, the --mode stdout command line argument was necessary even for commands that only support stdout mode.

Why should I care?

Navigator is easily adapted to individual creators' workflows and preferences. Even more, by adding standard out support for more commands, navigator can now be utilized in automated build environments.

Things to try:

  • Show the help for the ansible-playbook command without specifying --mode stdout:
$ ansible-navigator run --help-playbook
  • Show the help for the ansible-builder command:
$ ansible-navigator builder --help-builder

Lint functionality (technology preview)

One very nice use of interactive mode is the newly added, experimental feature for linting Ansible content. The lint subcommand, when given a path to an Ansible Playbook or a directory of Ansible content, opens a new screen in navigator where problems and suggestions are displayed for the file(s) passed to the lint command. As the problem files are corrected and saved, the list of problems and suggestions shrinks. Coupled with a code editor's ability to control+click to open a file path, editing files with potential issues is quick and fits in well with the rest of the creator experience.

Why should I care?

Consistent content produces reliable automation. Lint support lets creators ensure that the content they produce adheres to best practices.

Things to try:

  • Lint a playbook using the latest creator execution environment:
$ ansible-navigator lint site.yaml --eei quay.io/ansible/creator-ee:latest

What now?

Automation content navigator 2.0 is available for use today! Navigator offers improvements to the authoring and testing experience. As a result, automation content creators have more tools on hand to assist in the creation and maintenance of automated workflows. 







Exploring New Possibilities with the AWS Cloud Control Collection

We recently made available an experimental alpha Collection of generated modules using the AWS Cloud Control API for interacting with AWS Services. This content is not intended for production in its current state. We are making this work available because we thought it was important to share our research and get your feedback.

In this post, we'll highlight how to try out this alpha release of the new amazon.cloud content Collection.

The AWS Cloud Control API

Launched in September 2021 and featured at AWS re:Invent, AWS Cloud Control API is a set of common application programming interfaces (APIs) that provides five operations for developers to create, read, update, delete, and list (CRUDL) resources, making it easy for developers and partners to manage the lifecycle of AWS and third-party services in a standard way.

The Cloud Control API supports hundreds of AWS resources today, with more resources across services such as Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Simple Storage Service (Amazon S3) coming in the months ahead.

AWS delivers a broad and deep portfolio of cloud services. It started with Amazon Simple Storage Service (Amazon S3) and has grown to more than 200 services. Each distinct AWS service has a specific API with its own vocabulary, input parameters, and error reporting. As these APIs are unique to each service, developers have to understand the behavior (input, responses, and error codes) of each API they use. As applications become increasingly sophisticated and developers work across more AWS services, it becomes challenging for them to learn and manage all those distinct APIs.

With the launch of AWS Cloud Control API, developers have a consistent method to manage supported services that are defined as part of their cloud infrastructure throughout their lifecycle, so there are fewer APIs to learn as developers add new services to their infrastructure.

Why AWS Cloud Control API is important to Ansible

While it does not directly affect Ansible content authors automating AWS services, we believe the Cloud Control API will be beneficial in providing a better cloud automation experience.

Most noteworthy is that it enables the rapid introduction of new AWS services and faster implementation of new features for existing ones. It will also enable more comprehensive coverage of the vast number of AWS services available. This can be further extended to include third-party services running in the AWS cloud that have adopted the Cloud Control API.

The modules contained in this Collection are generated using a tool called amazon_cloud_code_generator - developed and open sourced by the Ansible Cloud team.

amazon.cloud collection generation flow diagram

As you can see in the flow diagram, the Collection can be easily generated using tox -e refresh_modules, and it is written to the cloud subdirectory by default.
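
If you want to regenerate the modules yourself, the flow above boils down to a few commands. A sketch, assuming the generator lives in its public GitHub repository and tox is installed locally:

$ git clone https://github.com/ansible-collections/amazon_cloud_code_generator
$ cd amazon_cloud_code_generator
$ tox -e refresh_modules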

Basically, the generation process leverages some Python utility scripts that wrap the AWS CloudFormation client to scrape the Resource Type Definition Schema (meta-schema) for each supported Amazon resource, and performs the necessary processing to generate module documentation.

Additional processing logic generates all the utilities, including modules, module_utils, and tests.

For example, module_utils contains a base class that all resource modules can use; it provides the methods to create, update, delete, describe, and list resources, with the appropriate logic to wait, paginate, and gracefully handle botocore exceptions.

Using the amazon.cloud Collection

All the modules of this Collection use the boto3 Amazon Web Services (AWS) Software Development Kit (SDK) for Python and its AWS Cloud Control API (CloudControlApi) client, so boto3 and botocore need to be installed.

A basic task example

Let's take a look at a practical example of how to use the amazon.cloud Collection. Perhaps you need to provision a simple AWS S3 bucket and then describe it.

If you are already using the amazon.aws and community.aws Collections, you will see that the task syntax is very similar.

You may notice that there are no longer separate info modules; the "get" or "describe" and "list" features that the info modules provided are handled by the main module. This certainly simplifies Collection usage and improves the user experience. The first pair of tasks below uses the new amazon.cloud module; the second pair shows the equivalent with amazon.aws and community.aws.

- name: Create a simple S3 bucket with public access block configuration
  amazon.cloud.s3_bucket:
    state: present
    bucket_name: "{{ local_bucket_name }}"
    public_access_block_configuration:
      block_public_acls: true
      block_public_policy: true
      ignore_public_acls: true
      restrict_public_buckets: true
  register: _result_create

- name: Gather information about the S3 bucket
  amazon.cloud.s3_bucket:
    state: get
    bucket_name: "{{ local_bucket_name }}"
  register: _result_info

- name: Create a simple S3 bucket with public access block configuration
  amazon.aws.s3_bucket:
    state: present
    name: "{{ local_bucket_name }}"
    public_access:
      block_public_acls: true
      block_public_policy: true
      ignore_public_acls: true
      restrict_public_buckets: true
  register: _result_create

- name: Gather information about the S3 bucket
  community.aws.aws_s3_bucket_info:
    name: "{{ local_bucket_name }}"
  register: _result_info

Another relevant feature of the amazon.cloud content Collection is the structure of the returned result. In particular, the result returned by all the available operations (present, absent, list, and get or describe) is well-structured and uniform across all the modules. It always contains the identifier of the resource and a dictionary of resource-specific properties.

In this way, we can straightforwardly get the identifier of each resource and re-use it in multiple dependent resources.

This feature definitely has a positive impact on the user experience.

[
    {
        "identifier": "090ba2aa-cc0c-5a40-9b5f-a2d2b8fc6ceb",
        "properties": {
            "arn": "arn:aws:s3:::090ba2aa-cc0c-5a40-9b5f-a2d2b8fc6ceb",
            "bucket_name": "090ba2aa-cc0c-5a40-9b5f-a2d2b8fc6ceb",
            "domain_name": "090ba2aa-cc0c-5a40-9b5f-a2d2b8fc6ceb.s3.amazonaws.com",
            "dual_stack_domain_name": "090ba2aa-cc0c-5a40-9b5f-a2d2b8fc6ceb.s3.dualstack.us-east-1.amazonaws.com",
            "regional_domain_name": "090ba2aa-cc0c-5a40-9b5f-a2d2b8fc6ceb.s3.us-east-1.amazonaws.com",
            "website_url": "http://090ba2aa-cc0c-5a40-9b5f-a2d2b8fc6ceb.s3-website-us-east-1.amazonaws.com"
        }
    }
]
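
As a sketch of that re-use (the result.identifier key path is assumed from the structure shown above):

- name: Create a simple S3 bucket
  amazon.cloud.s3_bucket:
    state: present
    bucket_name: "{{ local_bucket_name }}"
  register: _result_create

- name: Re-use the returned identifier, e.g. to delete the same bucket later
  amazon.cloud.s3_bucket:
    state: absent
    bucket_name: "{{ _result_create.result.identifier }}"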

Known issues and shortcomings

  • Generated modules like these are only as good as the API and its schema; documentation may not be complete for all of the modules' options.

  • Support is missing for important AWS resources such as EC2 instances, volumes and snapshots, RDS instances and snapshots, Elastic Load Balancers, etc. Resources from some of these AWS services are expected to be supported in the coming months.

  • Idempotency (desired state) is a function of the API and may not be fully supported. In the Cloud Control API, idempotency is achieved using a ClientToken, which is valid for 36 hours once used. 

    • After that, a resource request with the same client token is treated as a new request. 
    • To overcome this limitation, the modules in this Collection handle idempotency by first performing a get_resource(TypeName='', Identifier='') operation using the resource identifier. 
  • Missing server-side pagination may have a severe impact on performance. As you may know, some AWS operations return results that are incomplete and require subsequent requests in order to attain the entire result set. Paginators are a feature of boto3 that act as an abstraction over the process of iterating over an entire result set of a truncated API operation. Cloud Control API lacks this functionality at the moment. This limitation is handled in this Collection by implementing manual client-side paginators.

  • Filtering to provide name-based identification in support of desired state (idempotency) logic, as in amazon.aws, is absent. In practice this means you cannot list all the resources and filter the results on the server side. 

    • For example, several modules do not allow the user to set a primaryIdentifier at creation time. One possible solution would be to allow the user to set a resource name and use that name to set Tag:Name, but as the API does not allow server-side resource filtering, we can only implement client-side filtering using that tag information. This approach would definitely have a severe impact on performance. 
  • Not all the resources support all the available states. In practice this means that some resources cannot be updated or listed.

What is next?

Besides being easy to regenerate with the generator tool and exposing a fairly uniform set of module APIs, the new auto-generated amazon.cloud Collection is very straightforward to use, and it makes re-using resource identifiers across multiple dependent resources simple.

We continually strive to:

  • Make the Collection's API-generated modules more usable and easier to work with. 
  • Increase resource supportability and cover wider use case scenarios more quickly.
  • Improve the overall performance of the Collection's modules.

What can we do to improve provisioning AWS cloud resources with Ansible? More broadly, what can we do to make API generated modules more usable and easier to work with? We'd like to hear what you think.

You can provide feedback by reporting any issue against the amazon.cloud GitHub repository.

Because the modules are auto-generated, you can contribute with GitHub Pull Requests by opening them against the amazon_cloud_code_generator tool and not the resulting Collection.

In conclusion

Although still in alpha, the new amazon.cloud content Collection shows enormous potential for automating your deployments on AWS with Ansible and greatly increasing the chances of your cloud initiative being successful.

We hope you found this blog post helpful! But, more importantly, we hope it inspired you to try out the latest amazon.cloud Collection release and let us know what you think.







Automation at the Edge, Summit 2022

As some of you may know, Red Hat Summit was back in person in Boston last week. For those who are not familiar, Red Hat Summit is the premier enterprise open source event for IT professionals to learn, collaborate, and innovate on technologies from the datacenter and public cloud to the edge and beyond. Red Hat made a lot of exciting announcements, several of which included Red Hat Ansible Automation Platform. If you could not make the event or would like to revisit some of the content, you can access any session on demand.

One of the big announcements at Summit was the unveiling of new levels of security from the software supply chain to the edge. In Ansible Automation Platform 2.2, Red Hat is introducing a technical preview of Ansible content signing technology. The new capability helps with software supply chain security by enabling automation teams to validate that the automation content being executed in their enterprise is verified and trusted. 

With the announcement of this new edge capability, we showcased a session for Ansible and edge that is available on demand. The session "GitOps your distributed edge computing model with Red Hat Ansible Automation Platform" covers how Ansible Automation Platform, in combination with GitOps, can decrease time to market and repair time when deploying and operating network edge infrastructure. It includes a demo that shows how to describe a deployment in Git, which acts as a single source of truth. You will see how Ansible Automation Platform enforces the correct state of the network infrastructure of a large-scale organization, and how that is tracked through IT Service Management.

Scaling automation anywhere using Ansible Automation Platform

Red Hat introduced new cross-portfolio edge capabilities, including features in Ansible Automation Platform that solve the management and automation needs of driving visibility and consistency across an organization's edge deployments.

The session "Ansible Automation Platform 2 automation mesh-starting locally, scaling globally" covers how to scale automation to successfully execute in distributed edge locations. 

Automating RHEL at the edge with Ansible

If you watched the keynote presentation, you heard about the release of a SaaS Edge Manager. However, we realize not everyone can use the cloud to manage their fleet. Below is how to add a %post section to your kickstart file that registers your devices directly into an Ansible Automation Platform inventory, so you can use it to manage your fleet.

%post
# Create an Ansible Playbook to register the device in Ansible Automation Platform
cat > /tmp/add_to_aap.yml <<EOF
---
- hosts: localhost
  vars:
    aap_url: https://AAPHOST.fqdn.com/api/v2/inventories/CHANGEME/hosts/
    aap_ks_user: changeme
    aap_ks_password: changeme
  gather_facts: true
  tasks:
    - name: Create a hostname from the MAC address
      ansible.builtin.set_fact:
        edge_hostname: "{{ ansible_default_ipv4.macaddress | replace(':','') }}"
    - name: Set the hostname based on the MAC address
      ansible.builtin.hostname:
        name: "summit-demo-{{ edge_hostname }}"
        use: systemd
    - name: Add the host to the Ansible Automation Platform inventory
      ansible.builtin.uri:
        url: "{{ aap_url }}"
        user: "{{ aap_ks_user }}"
        password: "{{ aap_ks_password }}"
        method: POST
        body:
          name: "{{ ansible_hostname }}"
          variables: '{ipaddress: "{{ ansible_all_ipv4_addresses }}", macaddress: "{{ ansible_default_ipv4.macaddress }}" }'
        force_basic_auth: yes
        status_code: 201
        body_format: json
        validate_certs: no
EOF
ansible-playbook /tmp/add_to_aap.yml
%end

Step 1: Inventory creation

  • Create the inventory in Ansible Automation Platform, and note the inventory ID.

  • You can read the ID from the URL; in this example, the inventory ID is 2:

    https://AAPHOST.fqdn.com/#/inventories/inventory/2/details

  • Assign aap_url in the vars section: aap_url: https://AAPHOST.fqdn.com/api/v2/inventories/2/hosts/

Step 2: Create credentials in Ansible Automation Platform

Create the credentials in the Access > Users tab in Ansible Automation Platform, and assign them to aap_ks_user and aap_ks_password in the playbook's vars section.
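
Before baking the credentials into the kickstart, you can sanity-check the URL and the new user with a quick API call (a sketch; adjust the host, inventory ID, and credentials):

$ curl -k -u aap_ks_user:changeme https://AAPHOST.fqdn.com/api/v2/inventories/2/hosts/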

Step 3: Check Ansible Automation Platform

You should now see your devices in Ansible Automation Platform after they boot up.

screenshot







Introducing a brand new way to automate your Azure cloud

In December of 2021, Red Hat and Microsoft announced Red Hat Ansible Automation Platform on Microsoft Azure.

This year during Red Hat Summit 2022, Red Hat announced the general availability of Red Hat Ansible Automation Platform on Microsoft Azure across North America with global availability coming soon.  

I'd like to spend some time providing more details about this offering and why you should consider Red Hat Ansible Automation Platform on Azure.

Azure Marketplace deployment

Red Hat Ansible Automation Platform on Azure deploys from the Azure Marketplace as a managed application. It deploys directly into your Azure subscription, but as the publisher of the application, Red Hat has access to a shared and secured managed resource group to support, maintain, and upgrade your deployment. More specifically, a dedicated Red Hat SRE team handles all the ongoing management of Red Hat Ansible Automation Platform on Azure, while you focus on expanding your automation strategy within your organization across the hybrid cloud.

screenshot

Azure Integrations

For many organizations using Azure today, there are huge benefits to take advantage of with Red Hat Ansible Automation Platform on Azure. It runs in your Azure subscription and integrates seamlessly with many Azure services, including Azure billing. Also, if you have a Microsoft Azure Consumption Commitment agreement (MACC), the Red Hat Ansible Automation Platform on Azure deployment costs will count towards your MACC and will be reflected on your Azure bill. Oh, and did I mention that Red Hat supports the deployment, and you can automate, automate, and automate some more!

Once you've deployed Red Hat Ansible Automation Platform on Azure, a few simple configuration steps let you integrate it with your Azure Active Directory (AD) environment for authentication.

screenshot

There's great automation content available for you to leverage, with examples to learn from if you're new to Red Hat Ansible Automation Platform on Azure.

Here's a GitHub repository that has automation content for automating many Azure resources, like Azure Load Balancers, Azure PostgreSQL, Azure Networking, Azure Security groups, and more. Shortly, I'll highlight some more as we discuss the Red Hat Ansible Certified Content Collection for Microsoft Azure.

Here's an image of some sample content.

screenshot

Content is King!

Anyone using Red Hat Ansible Automation Platform on Azure will definitely want to use the Red Hat Ansible Certified Content Collection for Microsoft Azure. But with your subscription, you have access to all the Red Hat Ansible Certified Content at your fingertips!

screenshot

The Azure Collection includes over 250 modules to interrogate, manage, and automate numerous Azure resource types: from Azure AD, to networking, to databases, to AKS, to storage, to backup, to VMs, to security groups, to IAM... and so much more.

screenshot
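
As a small taste of the Collection (a sketch; resource names and location are placeholders):

- name: Create an Azure resource group
  azure.azcollection.azure_rm_resourcegroup:
    name: demo-rg
    location: eastus

- name: Create a virtual network inside that resource group
  azure.azcollection.azure_rm_virtualnetwork:
    resource_group: demo-rg
    name: demo-vnet
    address_prefixes: "10.0.0.0/16"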

If you'd like to see the full list of modules, you can check it out on Ansible automation hub.

Here's an example of an Ansible Automation Platform workflow template linking together many job templates to perform a larger automation task. In this case, it spins up a full development environment for the DevOps team.

screenshot

Go deeper, go wider, and achieve more!

Red Hat Ansible Automation Platform on Azure includes an automation execution environment already tailored for cloud automation, so you have everything you need to get started on Azure immediately. Having said that, a question that often comes up is: "If I'm using Red Hat Ansible Automation Platform on Azure, does that mean I can only perform automation against Azure resources?" The great thing about Ansible Automation Platform in general is that it doesn't matter where you run it from. In the case of Red Hat Ansible Automation Platform on Azure, you can automate public cloud, private cloud, physical and virtual environments, and edge resources. Obviously, one requirement here is proper network connectivity to those other environments. Deploy Red Hat Ansible Automation Platform on Azure, and automate anywhere!

Often, when people think about Ansible Automation Platform, they think configuration management. However, configuration management is just one of the many use cases Ansible Automation Platform can address. Many organizations today take advantage of Ansible Automation Platform to automate network and security use cases, integrate with IT Service Management (ITSM) solutions like ServiceNow, handle Linux and Windows automation, and feed monitoring and analytics solutions.

Additionally, with the aggressive push toward application modernization, many organizations use Ansible Automation Platform to integrate into their DevOps CI/CD pipelines. Are you using Azure DevOps, Jenkins, or other CI/CD tools? Cool: have your pipelines, at any phase, kick off Ansible Automation Platform automation jobs!
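
As a sketch of such a pipeline step (the host, job template ID, and credentials are hypothetical), a single API call launches a job template on the automation controller:

$ curl -k -X POST -u pipeline-user:changeme https://AAPHOST.example.com/api/v2/job_templates/42/launch/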

The automation use cases are endless, and there are so many efficiencies and savings to be gained by leveraging Ansible Automation Platform, not to mention the reduction in human errors, and the increased cross-silo collaboration.