What’s New and What’s Changed in the Ansible Content Collection for Kubernetes

November 11, 2020 by Timothy Appnel

Increasing business demands are driving the need for more automation to support rapid yet stable and reliable deployments of applications and supporting infrastructure. Kubernetes and cloud-native technologies are no different. That is why we recently released kubernetes.core 1.1, our first Certified Content Collection for deploying and managing Kubernetes applications and services.

Prior to the release of kubernetes.core 1.1, its contents were released as community.kubernetes. With this content becoming Red Hat supported and certified, a name change was in order. We are in the process of making that transition, starting with this release. 

In this blog post, we will go over what else has changed and what’s new in this Content Collection as it evolves from its community roots. 

 

Focus on The Future

In looking to create the kind of stable and supported release from upstream sources that Red Hat is known for, the first thing we did was review what was in community.kubernetes and elsewhere to organize it for the future. This not only led to the aforementioned name change; the content and underlying code were also reorganized to be more maintainable and ready to serve as the foundation of other Kubernetes-based collections.

We also quickly recognized that community.kubernetes was serving two distinct user groups with different needs — those working with baseline Kubernetes and those working with Red Hat OpenShift, our enterprise-class Kubernetes platform. So we made the decision to split out OpenShift-specific content and functionality into its own collection called community.okd. This transition will take some release cycles to complete, but it has begun with this release of kubernetes.core.

We’ve copied over the openshift inventory plugin, the oc connection plugin (from community.general), and the k8s_auth module (renamed to openshift_auth), along with associated utilities. We also extracted OpenShift-specific logic from the underlying k8s module code for resources like Projects that are not found in the base Kubernetes distribution. 
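As an illustration, the renamed authentication module works much as k8s_auth did. Here is a hedged sketch — the host, credentials, and namespace are placeholders, and the return value layout is based on the module documentation:

- name: Log in to the cluster and obtain an API token (host and credentials are placeholders)
  community.okd.openshift_auth:
    host: https://openshift.example.com:6443
    username: developer
    password: "{{ openshift_password }}"
  register: auth_results

- name: Use the returned token with subsequent k8s tasks
  kubernetes.core.k8s_info:
    host: https://openshift.example.com:6443
    api_key: "{{ auth_results.openshift_auth.api_key }}"
    kind: Pod
    namespace: testing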

While the OpenShift-specific code still remains in kubernetes.core for now, all development and maintenance has been moved to community.okd. Once that collection reaches 1.0, we will be deprecating that content in the Kubernetes Collection. Look for a separate blog post on community.okd in the near future.

 

A Commitment to Standards and the Broader Community

Another thing we became aware of was community concern about the k8s modules’ dependency on the openshift Python client library. OpenShift is Kubernetes at its heart, and with baseline Kubernetes and OpenShift extensions together as they were, this dependency made a lot of sense. The openshift library also provided logic to baseline K8s users that was not found in the official Kubernetes Python client library. That said, we want to be sensitive that requiring the openshift library raises questions and concerns in the broader Kubernetes community about compatibility with all Kubernetes distributions.

We’ve made a commitment to transition the kubernetes.core collection to the official Kubernetes client library. This will take some time and effort to do effectively and without breaking existing usage of this content. Our engineers have already contributed some pull requests porting logic from the openshift library to the official client library and will continue to do so as part of this transition. 

 

New Modules

While a lot of our work has gone into refactoring and reorganizing the Ansible Content Collection for Kubernetes for the future, that’s not all that’s new in kubernetes.core 1.1. 

A lot of what’s new in this release is our support of Helm 3, which we’ve already covered in its own blog post. That support wasn’t the only addition, though.
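As a quick taste of that Helm support, deploying a chart can be expressed as ordinary tasks. This is a minimal sketch; the repository, chart, and namespace names are illustrative, not prescribed by the collection:

- name: Add a chart repository (URL is illustrative)
  kubernetes.core.helm_repository:
    name: bitnami
    repo_url: https://charts.bitnami.com/bitnami

- name: Deploy a chart release into the testing namespace
  kubernetes.core.helm:
    name: web
    chart_ref: bitnami/nginx
    release_namespace: testing
    create_namespace: true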

The scenarios that follow show common uses of the new Kubernetes modules we’ve added. 

 

Scenario: Executing a command in a Pod

Perhaps you need to do a quick restart of a daemon or run some other maintenance process for a service running in a Pod on your Kubernetes cluster. The k8s_exec module enables you to execute an arbitrary command in a Pod to do just that.

- name: Execute a command
  kubernetes.core.k8s_exec:
    namespace: nodana
    pod: zuul-scheduler
    command: zuul-scheduler full-reconfigure

 

Scenario: Fetching the log of a Pod

Need to fetch a log from a specific Pod? The k8s_log module helps you retrieve the log contents of a specific Pod. Here is an example task that will get the log from the first Pod found in the testing namespace matching a selector:

- name: Log a Pod matching a label selector
  kubernetes.core.k8s_log:
    namespace: testing
    label_selectors:
    - app=example
  register: log

This is analogous to the behavior of the kubectl logs command.
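The registered variable can then be inspected in later tasks. For example, assuming the documented log_lines return value, this would print the final log line:

- name: Show the last line of the retrieved log
  ansible.builtin.debug:
    msg: "{{ log.log_lines | last }}"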

 

Scenario: Getting information about your cluster

Ansible has had the k8s_info module for fetching info on any Kubernetes objects in a cluster for some time, but there hasn’t been an easy way to get information about the cluster itself in Ansible. That is why we introduced the k8s_cluster_info module.

- name: Get Cluster information
  kubernetes.core.k8s_cluster_info:
  register: api_status

- debug:
    var: api_status
    verbosity: 1

 

Scenario: Rollback a Deployment

This is an interesting contribution we received from community contributor Julien Huon (@julienhuon).

Try as we may to get things right the first time, sometimes things go wrong and we need to roll them back. The k8s_rollback module provides a straightforward, Kubernetes-native means of performing a rollback on a Deployment or DaemonSet in your cluster.

- name: Rollback a failed deployment
  kubernetes.core.k8s_rollback:
    api_version: apps/v1
    kind: Deployment
    name: web
    namespace: testing

 

Incremental Improvements

No matter the domain (networking, Linux, security, public cloud), we are always looking for ways to make automation easier and more frictionless than writing code or doing things manually from the command line. This is not just limited to new modules and plugins, either. 

 

Built-in Template Processing

Looking at a lot of early Ansible automation written for Kubernetes, we noticed many content developers chose to store their K8s object definition files externally and read them into their plays with something like this:

- name: create foo configmap
  kubernetes.core.k8s:
    definition: "{{ lookup('template', 'foo.yml') | from_yaml }}"

This worked great, but the plays we reviewed would have several, sometimes dozens, of these lookup incantations, one for every definition param on the k8s module. We thought we could do better with built-in template processing that provides the same functionality more concisely.

- name: create foo configmap
  kubernetes.core.k8s:
    template: foo.yml

Isn’t that much easier to read and write now?

 

Native Ansible Vaulted File Support

Another area of incremental improvement was built-in support for Ansible vaulted files. If you are working in a group (maybe you’re using Ansible in a GitOps workflow) and want to store kubeconfig or Secret definition files in your Git repository, you‘ll want to encrypt them so they aren’t in plain text and easily read. Now, all of the k8s modules support native handling of Ansible vaulted files. Try it with the kubeconfig or template params.
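For example, suppose you have encrypted a kubeconfig with ansible-vault (the file paths here are placeholders); the module can then consume the vaulted file directly:

$ ansible-vault encrypt files/kubeconfig

- name: Create an object using a vaulted kubeconfig
  kubernetes.core.k8s:
    kubeconfig: files/kubeconfig
    template: foo.yml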

 

What’s Next?

Ansible lets you connect the different technologies that are ultimately needed to be successful in your efforts with Kubernetes. With the Kubernetes and Helm content in kubernetes.core, Ansible users can better manage applications on Kubernetes clusters and connect them with existing IT, with faster iterations and easier maintenance. And we’re always looking for ways to help users like you get things done in simpler, faster ways. 

Try out the latest kubernetes.core collection and let us know what you think.

If you want to dive deeper into the topic of Ansible and Kubernetes, you can also check out these resources:



 

Timothy Appnel

Timothy Appnel is a Senior Product Manager, product evangelist and "Jack of all trades" on the Ansible team at Red Hat. Tim is an old-timer in the Ansible community who has been contributing since version 0.5. The synchronize module in Ansible is all his fault.

