What’s New in the Ansible Content Collection for Kubernetes - 2.0

June 15, 2021 by Mike Graves

As the adoption of containers and Kubernetes increases to drive application modernization, IT organizations must find ways to easily deploy and manage multiple Kubernetes clusters across regions, whether in the public cloud, on-premises, or at the edge. To that end, we continue to expand the capabilities of our kubernetes.core Certified Ansible Content Collection.

In this blog post, we’ll highlight some of the exciting new changes in the 2.0 release of this Collection.

 

A New Name

Development on the kubernetes.core Collection had historically taken place in the community.kubernetes GitHub repository, which grew out of community contributions before Red Hat supported it. That codebase served as the source for both Collections. With this release, we have shifted all development to the kubernetes.core GitHub repository. Moving forward, the community.kubernetes namespace will simply redirect to the kubernetes.core Collection. If you are currently using the community.kubernetes namespace in your playbooks, we encourage you to begin switching over to kubernetes.core. This change better reflects that this codebase is a Red Hat supported Collection.
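In practice, switching over usually means updating only the fully qualified module name in your tasks; the module options themselves are unchanged. A sketch (the namespace name here is illustrative):

```yaml
# Before: task referencing the community.kubernetes namespace
- name: Create a namespace
  community.kubernetes.k8s:
    state: present
    definition:
      apiVersion: v1
      kind: Namespace
      metadata:
        name: example-ns

# After: the same task using the kubernetes.core namespace
- name: Create a namespace
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: v1
      kind: Namespace
      metadata:
        name: example-ns
```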

 

Forward-looking Changes

One of the main objectives of the 2.0 release was to better align the kubernetes.core Collection with the latest technologies and community efforts. Accordingly, this release drops support for Python 2. With Python 2 having reached end-of-life on January 1, 2020, this allows us to focus our efforts on Python 3 and better support new features and improvements to the Collection.

This release also replaces the OpenShift client with the official Kubernetes Python client. We made this change to address concerns from the community about the Collection’s dependency on the OpenShift client. The new client lets us more easily incorporate future improvements and new features into the Collection without any loss of functionality, as the differences between the two clients have already been accounted for.
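In practice, this means the Collection now depends on the kubernetes Python package rather than the openshift package. If you manage Python dependencies yourself, installing it might look like this (exact version requirements may vary by release):

```shell
# Install the official Kubernetes Python client used by kubernetes.core 2.0
pip install kubernetes
```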

 

Performance Improvements

This latest release takes the kubernetes.core Collection to the next level by improving the performance of large automation tasks. The focus was not just to deliver a Collection with Kubernetes templating capabilities, but also to enhance the experience of automating changes across a large number of resources with Ansible.

 

Turbo Mode

This release introduces the newest addition to the Collection, Ansible turbo mode. Ansible turbo mode focuses on improving the performance of Ansible Playbooks that manipulate many Kubernetes objects. This change is primarily intended for cases where you are touching hundreds of objects; it likely won’t provide any noticeable improvements for managing just a handful of objects.

Currently, the default mode of operation for the kubernetes.core Collection is to create a new connection to the Kubernetes API for each request. As the number of requests grows, this adds significant overhead. With Ansible turbo mode enabled, the connection to the API is reused, removing the overhead of creating a new connection for every request.

You can enable Ansible turbo mode in your environment by simply installing the cloud.common Collection.
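Installing the Collection is typically done with ansible-galaxy:

```shell
# Installing cloud.common makes Ansible turbo mode available to kubernetes.core
ansible-galaxy collection install cloud.common
```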

In an upcoming in-depth blog post, we will expand on the performance savings with benchmark comparisons between having Ansible turbo mode enabled and disabled. Keep an eye out for that!

 

Scenario: Apply multiple templates in one task

Previously, the easiest way to process multiple templated resource definitions was to use an Ansible loop. This was sufficient if you only had a few templates, but when applying hundreds of resource definition templates, the performance degradation from looping becomes apparent.
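That loop-based approach looked something like this (the file names are illustrative):

```yaml
# The old approach: one module invocation, and one API connection, per template
- name: Create the workers one at a time
  kubernetes.core.k8s:
    state: present
    template: "{{ item }}"
  loop:
    - crds/worker1.yml
    - crds/worker2.yml
    - crds/worker3.yml
```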

Many users worked around this by creating a single template that used a Jinja2 loop to include the additional templates. While faster, this method was complex and difficult to implement. To address this, the template parameter of the k8s module can now accept a list, as shown in the example below.

- name: Create the workers
  kubernetes.core.k8s:
    state: present
    continue_on_error: true
    template:
      - crds/worker1.yml
      - crds/worker2.yml
      - crds/worker3.yml

By default, if one template fails, the task will fail and any remaining templates will not be applied. This behavior can be changed by using the new continue_on_error boolean parameter. When a resource can’t be created and the parameter is set to true, an error message will be added to the result for the failing item and execution will continue.

 

Improved Patching

This release brings some important changes to how the k8s module handles patching existing objects. We have added the ability to control whether the k8s module should create an object if it doesn’t exist, or only patch objects that already exist. We have also moved the JSON patch strategy out into a separate module to address longstanding issues with its functionality. As a reminder, Kubernetes supports three different patching strategies: strategic merge, JSON merge, and JSON patch.

 

Scenario: Only patch existing objects

The default behavior of the k8s module is to create a new object if it does not already exist, and to use a strategic merge to patch the object if it does exist. In most cases, this is the right behavior, but sometimes you need more control over the process. For example, you may want to make sure that you are only patching an existing object rather than creating a new one. To support this use case, we’ve added a new patched state:

- name: Add label to existing namespace
  kubernetes.core.k8s:
    state: patched
    definition:
      apiVersion: v1
      kind: Namespace
      metadata:
        name: ns1
        labels:
          stage: development

When using the new patched state, if the object does not exist, it will not be created and a warning will be issued.

 

Scenario: Apply a JSON patch

The JSON patch strategy has long been a problem in the k8s module. The format of a JSON patch is completely different from a Kubernetes resource definition, so we decided to split this functionality into its own module, k8s_json_patch. Moving forward, the merge_type parameter in the k8s module will only support strategic-merge (the default) and merge. As of 2.0, the json type is deprecated for this parameter. To modify an existing object using a JSON patch:

- name: Add a new entry to a ConfigMap
  kubernetes.core.k8s_json_patch:
    kind: ConfigMap
    namespace: ns1
    name: myconfig
    patch:
      - op: add
        path: /data/setting1
        value: somevalue

Similar to the new patched state described above, the k8s_json_patch module will not create a new object if it does not already exist.
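For the remaining merge strategies, you still select them through the merge_type parameter on the k8s module. A sketch of a JSON merge patch (the resource names are illustrative):

```yaml
# Apply a JSON merge patch instead of the default strategic merge
- name: Update a ConfigMap with a JSON merge patch
  kubernetes.core.k8s:
    state: patched
    merge_type: merge
    definition:
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: myconfig
        namespace: ns1
      data:
        setting1: somevalue
```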

 

Summary

With the kubernetes.core Collection, Ansible users can better manage applications on Kubernetes clusters and existing IT environments with faster iterations and easier maintenance. 

We’re always looking for ways to help users like you get things done in simpler, faster ways. Try out the latest kubernetes.core Collection yourself and let us know what you think.

If you want to dive deeper into Ansible and Kubernetes, check out the following resources:



 

Mike Graves

Mike Graves is a Senior Software Engineer on the Ansible Cloud team. When he’s not shuffling bits around, he’s probably reading a book or gaming. You can find his work on GitHub at @gravesm.

