
This blog post is co-authored with Ian Miller.

 

5G and beyond mobile networks demand automation capabilities to rapidly scale up their service rollout. To that end, Kubernetes and cloud-native infrastructures unlock a great deal of flexibility through declarative configuration.

However, a large number of important non-declarative components (e.g. legacy OSS/BSS systems, bare metal servers, and network infrastructure) will still require imperative configuration for the foreseeable future.

In this series of two articles, we bring together powerful tools and concepts for effectively managing declarative configurations using Red Hat OpenShift, Red Hat Advanced Cluster Management for Kubernetes, and Red Hat Ansible Automation Platform for integrating any non-declarative system into closed-loop automation workflows.

 

Declarative vs Imperative, a Zero-Sum Game for 5G?

Short answer: definitely not.

Kubernetes and Red Hat OpenShift are built around a declarative model in which configuration Custom Resources (CRs) capture the desired end state and the cluster works to reconcile to it. This model fits in seamlessly with tools like GitOps and the different engines (i.e. clusters, applications, observability, and governance) provided by Red Hat Advanced Cluster Management for Kubernetes.

Both tools are thoroughly leveraged by the Red Hat Zero Touch Provisioning (ZTP) workflow for 5G RAN deployments at the Far Edge of the network. The ZTP workflow provides the ability to declaratively manage the massive deployment of OpenShift clusters through version control in Git.

These managed OpenShift clusters are then carefully configured by the ZTP workflow (using the Policy Governance engine of Red Hat Advanced Cluster Management for Kubernetes in a one-to-many binding Policy) to host O-RAN workloads, while also fulfilling the stringent requirements of 5G cloud-native infrastructure.
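
To make the one-to-many binding concrete, the sketch below shows a hypothetical Policy (all names, namespaces, and the wrapped object are illustrative) bound to a set of managed clusters through a PlacementBinding; the referenced PlacementRule, which selects the target clusters, is omitted for brevity.

---
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: example-common-config           # hypothetical name
  namespace: ztp-common                 # hypothetical policy namespace
spec:
  remediationAction: inform
  disabled: false
  policy-templates:
    - objectDefinition:
        apiVersion: policy.open-cluster-management.io/v1
        kind: ConfigurationPolicy
        metadata:
          name: example-common-config-sub
        spec:
          remediationAction: inform
          severity: low
          object-templates:
            - complianceType: musthave
              objectDefinition:          # desired state applied to every bound cluster
                apiVersion: v1
                kind: Namespace
                metadata:
                  name: example-workload-ns
---
apiVersion: policy.open-cluster-management.io/v1
kind: PlacementBinding
metadata:
  name: example-common-binding
  namespace: ztp-common
placementRef:
  apiGroup: apps.open-cluster-management.io
  kind: PlacementRule
  name: example-clusters                 # selects the fleet of spoke clusters
subjects:
  - apiGroup: policy.open-cluster-management.io
    kind: Policy
    name: example-common-config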

The key is that the administrator defines the state declaratively and controllers continuously work in the background to bring the fleet into compliance. When real-world issues inevitably arise, the system eventually comes into compliance as those issues are resolved.

  • Networking issues may make the cluster unreachable for minutes, hours, or days, but the state will reconcile when the cluster is again reachable.
  • The current state of a cluster may not support the desired end state. An operator may be pending installation, making it impossible to apply a configuration CR for it. Eventually, the error is resolved and the operator installs, at which point GitOps or Policy can apply the new configuration.

Declarative sounds nice, but how can we integrate non-declarative systems and other backend business processes?

Ansible to the rescue!

One of the key areas where hooks into Ansible automation play an important role is precisely integrating the declaratively managed clusters with external systems involved in the orchestration of the fleet, and with other backend business processes.

At significant moments in the lifecycle of clusters, or when changes in policy compliance are detected, an Ansible Playbook can be invoked to interact with external system APIs, perform complex analysis of the clusters, or even make changes through GitOps actions.
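
As a hedged sketch of how such a hook can be wired in Red Hat Advanced Cluster Management for Kubernetes, a PolicyAutomation resource can launch an Ansible job template whenever a bound policy becomes non-compliant. The job template name, secret name, and namespace below are assumptions; the secret must hold the automation controller credentials and live in the same namespace as the policy.

---
apiVersion: policy.open-cluster-management.io/v1beta1
kind: PolicyAutomation
metadata:
  name: example-policy-automation        # hypothetical name
  namespace: ztp-common                  # same namespace as the bound Policy
spec:
  policyRef: example-common-config       # the Policy to watch for violations
  mode: once                             # run the job once per violation
  automationDef:
    type: AnsibleJob
    name: example-remediation-template   # job template defined in automation controller
    secret: aap-controller-credentials   # Secret with the controller host and token
    extra_vars:
      sample_var: sample_value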

 

Let’s put some automation to work

In this section, we’ll explore how to effectively configure the Ansible Automation Platform operator to automatically trigger Ansible jobs in response to specific conditions encountered by Red Hat Advanced Cluster Management for Kubernetes. This integration ensures seamless and efficient management of the 5G network infrastructure.

Prerequisites

Before integrating the Ansible Automation Platform Operator with Red Hat Advanced Cluster Management for Kubernetes to automate Ansible jobs, there are a few prerequisites that need to be met. Figure 1 illustrates a high-level architecture for the proposed integration, including the platform components.

Figure 1: High-level architecture of AAP and Telco ZTP components integration for 5G networks.


 

  • OpenShift Hub cluster: Required to host the components of the Telco ZTP workflow. In our setup, we have deployed a virtualized compact hub cluster in a dual-stack disconnected environment using the 4.10 release.
  • Red Hat Advanced Cluster Management for Kubernetes: Required by the Telco ZTP workflow and Ansible Automation Platform. In our case, we have deployed release 2.5+ of this operator, given that the integration with Ansible Automation Platform is only supported (as a dev preview) from that release onwards.
  • Ansible Automation Platform Operator: Required to host the Ansible Content Collections (using automation hub) needed in this integration, as well as to run the playbooks (on automation controller) that will perform day2 operations on ZTP-deployed spoke clusters. Below, we provide installation instructions for the 2.3 version of this operator.
  • Storage operator: Required by the StatefulSet deployments that are part of the Ansible Automation Platform operator. A high-performance storage backend providing dynamic storage provisioning (i.e. a default StorageClass) is strongly recommended for production environments; see the command sketch after this list.
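
As referenced in the last prerequisite, a quick way to verify that a default StorageClass exists, and to mark one as default if needed, is sketched below; the StorageClass name is a placeholder.

-> oc get storageclass

-> oc patch storageclass <storage-class-name> \
   -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'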

OpenShift Hub cluster setup

The configuration of the OpenShift Hub cluster components used to deploy and manage the fleet of clusters, including Red Hat Advanced Cluster Management for Kubernetes, GitOps, and TALM operators, is covered comprehensively in the Red Hat Zero Touch Provisioning (ZTP) documentation.

It is within the declarative deployment, configuration, and lifecycle management workflows that the Ansible hooks configured in the following sections are invoked.

Ansible Automation Platform operator setup

The Ansible Automation Platform Operator coordinates the deployment, as well as the management (i.e. upgrades and full lifecycle support), of diverse Ansible automation components on top of an OpenShift cluster.

Ansible Automation Platform mirroring

Telco 5G RAN deployments are usually disconnected from the Internet. Hence, for such scenarios without any inbound and/or outbound connectivity to the Internet, the mirroring of the Ansible Automation Platform container images is required.

Note: Red Hat provides the oc-mirror tool for managing diverse mirroring operations in a declarative manner. This technology has been fully supported by Red Hat since the release of OpenShift 4.11.

For that reason, we have dedicated this subsection to accomplishing the mirroring with Red Hat’s supported tooling. Prior to starting the Ansible Automation Platform mirroring, let’s first inspect what’s available for this operator in the 4.12 catalogs.


-> oc-mirror list operators --catalog=registry.redhat.io/redhat/redhat-operator-index:v4.12 --package=ansible-automation-platform-operator

NAME                              	DISPLAY NAME             	DEFAULT CHANNEL
ansible-automation-platform-operator  Ansible Automation Platform  stable-2.3-cluster-scoped

PACKAGE                           	CHANNEL                	HEAD
ansible-automation-platform-operator  stable-2.3             	aap-operator.v2.3.0-0.1673541033
ansible-automation-platform-operator  stable-2.3-cluster-scoped  aap-operator.v2.3.0-0.1673541471

Note: For the Telco ZTP workflow, Red Hat’s recommendation is to use the stable-*-cluster-scoped channel of this operator during installation. This channel contains all the required resources to watch objects across all namespaces in the cluster.

The oc-mirror tooling requires an ImageSetConfiguration resource to manage the mirroring operations in a declarative fashion. To that end, below we provide a sample resource to mirror all the Operator Lifecycle Manager (OLM) contents required to install the Ansible Automation Platform operator locally.


-> cat << EOF > 99-aap-operator-isc.yaml
---
apiVersion: mirror.openshift.io/v1alpha2
kind: ImageSetConfiguration
storageConfig:
  registry:
    imageURL: <local-registry-address>/olm

mirror:
  operators:
    - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.12
      full: true
      packages:

... TRUNCATED ...

        - name: ansible-automation-platform-operator
          channels:
            - name: 'stable-2.3-cluster-scoped'

... TRUNCATED ...

EOF

To start the mirroring process towards a local registry, which is accessible from the disconnected 5G RAN network, we can use the following command.


-> oc-mirror --config 99-aap-operator-isc.yaml docker://<local-registry-address>/olm

Once this operation is completed, the command also generates several artifacts (i.e. CatalogSources and ImageContentSourcePolicies) under a newly created ./oc-mirror-workspace/results-*/ directory. Those objects are required to configure the OpenShift cluster appropriately.
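
For instance, assuming a single results-* directory was produced, those generated artifacts can be applied to the hub cluster directly:

-> oc apply -f ./oc-mirror-workspace/results-*/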

As a complement, below we also provide sample objects for both the CatalogSource and the ImageContentSourcePolicy of the OLM target previously described.


---
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: redhat-operator-index
  namespace: openshift-marketplace
spec:
  image: <local-registry-address>/olm/redhat/redhat-operator-index:v4.12
  sourceType: grpc

---
apiVersion: operator.openshift.io/v1alpha1
kind: ImageContentSourcePolicy
metadata:
  labels:
    operators.openshift.org/catalog: "true"
  name: operator-0
spec:
  repositoryDigestMirrors:
  - mirrors:
    - <local-registry-address>/olm/openshift4
    source: registry.redhat.io/openshift4
  - mirrors:
    - <local-registry-address>/olm/redhat
    source: registry.redhat.io/redhat
  - mirrors:
    - <local-registry-address>/olm/ansible-automation-platform
    source: registry.redhat.io/ansible-automation-platform
  - mirrors:
    - <local-registry-address>/olm/rhel8
    source: registry.redhat.io/rhel8
  - mirrors:
    - <local-registry-address>/olm/ansible-automation-platform-23
    source: registry.redhat.io/ansible-automation-platform-23

Ansible Automation Platform operator installation

At this point, we should be ready to kick off the installation of the Ansible Automation Platform Operator. As described by the official documentation, the preferred method of deployment is to install the cluster-scoped operator on a targeted namespace with manual update approval.

Below we also provide sample OLM objects to install this day2 operator in the OpenShift cluster.


-> cat << EOF | oc apply -f -
---
apiVersion: v1
kind: Namespace
metadata:
  labels:
    openshift.io/cluster-monitoring: "true"
  name: aap
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: aap-gp
  namespace: aap
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: aap-sub
  namespace: aap
spec:
  channel: stable-2.3-cluster-scoped
  installPlanApproval: Manual
  name: ansible-automation-platform-operator
  source: redhat-operator-index   # <- update this value accordingly
  sourceNamespace: openshift-marketplace
EOF

Note: The only difference compared to a connected installation is that the .spec.source field must be set to the name of the CatalogSource generated by oc-mirror in the previous step.
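
Because we selected manual update approval, the resulting InstallPlan also has to be approved before the operator actually installs. A minimal sketch follows; the InstallPlan name is cluster-generated and shown here as a placeholder.

-> oc get catalogsource -n openshift-marketplace

-> oc get installplan -n aap

-> oc patch installplan <install-plan-name> -n aap --type merge -p '{"spec":{"approved":true}}'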

For this integration, we are going to leverage two of the Custom Resource Definitions (CRDs) offered by the Ansible Automation Platform Operator, namely the AutomationHub and the AutomationController. These objects enable us to install a private automation hub and an automation controller, respectively.

Automation Hub setup

To install the Ansible automation hub, the official documentation covers this procedure comprehensively. The AutomationHub operand deploys the equivalent of an Ansible Galaxy server, but privately in the OpenShift cluster.

Note: In order to set the appropriate resource requirements for Ansible automation hub, we strongly recommend checking out the Before you Start section in the official documentation for this product.
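
As a hedged sketch only (field names should be verified against the CRD shipped with your operator version), a minimal AutomationHub resource matching the names used later in this post could look like the following; the storage class is a placeholder.

---
apiVersion: automationhub.ansible.com/v1beta1
kind: AutomationHub
metadata:
  name: automation-hub                    # matches the Route/Secret names used below
  namespace: aap
spec:
  route_tls_termination_mechanism: Edge
  storage_type: File
  file_storage_storage_class: <rwx-storage-class>   # assumption: an RWX-capable class
  file_storage_size: 10Gi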

Hub configuration

Once Ansible automation hub is successfully installed in the cluster, let’s configure it to fulfill the requirements of the approach described above. To that end, we start by defining pre_tasks that obtain the authentication credentials for Ansible automation hub.


---
- name: Get AAP Hub route info
  kubernetes.core.k8s_info:
    api_version: route.openshift.io/v1
    kind: Route
    name: automation-hub
    namespace: aap
  register: ah_hostname_raw

- name: Get AAP Hub password
  kubernetes.core.k8s_info:
    api_version: v1
    kind: Secret
    name: automation-hub-admin-password
    namespace: aap
  register: ah_password_raw

- name: Set AAP Hub host
  ansible.builtin.set_fact:
    ah_host: "{{ ah_hostname_raw.resources[0].spec.host }}"

- name: Set AAP Hub username
  ansible.builtin.set_fact:
    ah_username: "admin"

- name: Set AAP Hub password
  ansible.builtin.set_fact:
    ah_password: "{{ ah_password_raw.resources[0].data.password | b64decode }}"

- ansible.builtin.debug:
    msg:
    - "Automation Hub access credentials:"
    - "username: {{ ah_username }}"
    - "password: {{ ah_password }}"
    - "host: https://{{ ah_host }}"

Important Note: Please note that, for demonstration purposes, we have used password-based authentication; however, a token-based approach is strongly recommended for production environments.

The subsequent configuration steps are executed from a bastion host that can reach the recently installed hub. Note that, for demonstration purposes, this could be achieved by adding the {{ ah_host }} to the /etc/hosts file.
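
For example, a hypothetical entry in /etc/hosts on the bastion could look like the line below, where both the IP address (an ingress node of the hub cluster) and the hostname (the route host printed by the debug task above) are placeholders:

192.0.2.10   automation-hub-aap.apps.hub.example.com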

Create namespace

To upload any required collection to the hub, we first need to create a namespace to host it. It is important that the namespace name matches the namespace defined in the metadata of the collection archive you will be publishing.

For demonstration purposes only, the below sample command creates a namespace with the name stolostron. This namespace will host the stolostron collection that will be used later on.


---
- name: Configure AAP Hub
  hosts: localhost
  gather_facts: no
  collections:
    - infra.ah_configuration

  pre_tasks:
    - import_tasks: tasks/get_aap_hub_info.yml

  tasks:
    - name: Create namespace
      infra.ah_configuration.ah_namespace:
        name: stolostron
        state: present
        ah_host: "{{ ah_host }}"
        ah_username: "{{ ah_username }}"
        ah_password: "{{ ah_password }}"
        validate_certs: "{{ validate_certs }}"

Upload and Approve collection

Depending on the project requirements, you may need to upload diverse collections to the private automation hub. As mentioned above, we have leveraged the stolostron collection in the playbooks used for this work.

Note: The Ansible Content Collection stolostron.core has been developed by Red Hat and it contains (among other good stuff) modules and plugins for driving Red Hat Advanced Cluster Management for Kubernetes functionality from Ansible Playbooks.

Furthermore, we also need to approve uploaded collections to allow their use by external Ansible components, such as the automation controller presented in the next section.


---
- name: Configure AAP Hub
  hosts: localhost

... REDACTED ...


  tasks:

    ... REDACTED ...

    - name: Clone collection repo
      ansible.builtin.git:
        repo: https://github.com/stolostron/ansible-collection.core.git
        dest: /var/tmp/stolostron
        version: v0.0.2
      register: cloned_repo

    - name: Build collection
      infra.ah_configuration.ah_build:
        path: /var/tmp/stolostron
        force: true
        output_path: /var/tmp
      when: cloned_repo.changed

    - name: Upload and Approve collection
      infra.ah_configuration.ah_collection:
        namespace: stolostron
        name: core
        path: /var/tmp/stolostron-core-0.0.2.tar.gz
        version: 0.0.2
        auto_approve: true
        ah_host: "{{ ah_host }}"
        ah_username: "{{ ah_username }}"
        ah_password: "{{ ah_password }}"
        validate_certs: "{{ validate_certs }}"

Another possible approach is to download the collection’s tar files directly from upstream Galaxy and manually upload (and approve) those in the corresponding namespace in our private automation hub instance.
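
For instance, from a host with Internet access, the collection tarball could be fetched with ansible-galaxy and then uploaded either through the hub UI or with the infra.ah_configuration.ah_collection module shown above (the version and download path are illustrative):

-> ansible-galaxy collection download stolostron.core:0.0.2 -p /var/tmp/downloads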

Create authentication token

Finally, we create a token that will be used for authentication when interacting with the API endpoints of the private automation hub. For that purpose, we use the post_tasks below.


---
- name: Create a new token using username/password
  infra.ah_configuration.ah_token:
    state: present
    ah_host: "{{ ah_host }}"
    ah_username: "{{ ah_username }}"
    ah_password: "{{ ah_password }}"
    validate_certs: "{{ validate_certs }}"
  register: ah_token
  changed_when: false

- name: Save token to disk
  ansible.builtin.copy:
    content: "{{ ah_token.ansible_facts.ah_token.token }}"
    dest: .galaxy_token
  changed_when: false

- ansible.builtin.debug:
    msg:
    - "Automation Hub token: {{ ah_token.ansible_facts.ah_token.token }}"

 

Wrapping Up

In this first article, we focused on the significant role of Ansible Automation Platform in bridging the automation gap within hybrid scenarios. These scenarios often comprise legacy and non-declarative systems that need to be integrated into the reconciliation loop of modern cloud-native architectures.

Moreover, we have presented practical guidelines to effectively install the Ansible Automation Platform Operator in a disconnected 5G cloud-native infrastructure powered by Red Hat OpenShift. It is worth highlighting that Red Hat technologies have consistently offered open and robust solutions for a wide range of complex use cases.

In our next article, we will concentrate on providing the configurations needed in automation controller to run day2 operations at scale, triggered by the Telco ZTP workflow provided by Red Hat for Telco 5G RAN deployments.

 

Acknowledgments

The authors would like to thank the following people for their thoughtful insights and suggestions, which contributed to the improvement of this blog post:

  • Roger Lopez, Principal Technical Marketing Manager at Red Hat
  • Yuval Kashtan, Senior Principal Software Engineer at Red Hat
  • Elijah DeLee, Associate Manager Software Engineering at Red Hat

 


About the author

Leonardo Ochoa-Aday is a Senior Software Engineer at Red Hat. Before joining Red Hat, he served as the Cloud/Edge Infrastructure Lead of city-wide 5G testbeds within 5GBarcelona and earned a Ph.D. from BarcelonaTech. He is currently part of the Telco Engineering Ecosystem at Red Hat, focusing on container platforms for Telecom companies.
