Command Module Deep Dive for Networks

Enterprise customers often ask the Ansible Network team about the most common use cases for network automation. For this blog post I want to talk about one of the most used (and most versatile) sets of network modules: the command modules. The command modules let you run networking commands with Ansible the same way a network engineer would type them on the command line. With Ansible, though, the output doesn't just fly by the terminal window to be lost forever; it can be registered in variables, parsed for use by subsequent tasks, and stored in host variables for future reference. Today we're going to cover basic use of the network command modules, including retaining command output with the register parameter. We'll also cover scaling to multiple network devices with hostvars and adding conditional requirements with the wait_for parameter and three related parameters: interval, retries, and match. The takeaway from this blog post is that any repeatable network operations task can be automated. Ansible is more than configuration management; it gives network operators the freedom to decouple themselves from routine tasks and save time.

There are command modules for a variety of platforms, including all the modules supported under the network offering:

Network Platform        *os_command Module
Arista EOS              eos_command
Cisco IOS / IOS-XE      ios_command
Cisco IOS-XR            iosxr_command
Cisco NX-OS             nxos_command
Juniper Junos           junos_command
VyOS                    vyos_command

Basic Command Module Usage

Here is a simple playbook using eos_command to run show version:

---
- name: COMMAND MODULE PLAYBOOK
  hosts: eos
  connection: network_cli

  tasks:
   - name: EXECUTE ARISTA EOS COMMAND
     eos_command:
       commands: show version
     register: output

   - name: PRINT OUT THE OUTPUT VARIABLE
     debug:
       var: output

There are two tasks; the first task uses eos_command with a single parameter called commands. Since I am only running one command, I can put show version on the same line as commands. If I had more than one command, I would list each one on a separate line below the commands: parameter. In this example I use the register keyword to save the output of the show version command. You can use register at the task level with any Ansible task; it defines a variable that stores the output of the task for use in subsequent tasks. In my playbook the variable is called output.

The second task uses the debug module to print the content of the output variable registered in the previous task. In this case I will see the same output I would have seen if I'd typed "show version" directly on the EOS device's command line. My playbook prints this output in the terminal window where I ran the playbook. The Ansible debug module is great for checking variables.

Below is the output from running the playbook:

PLAY [COMMAND MODULE PLAYBOOK] *****************************************************

TASK [EXECUTE ARISTA EOS COMMAND] **************************************************
ok: [eos]

TASK [PRINT OUT THE OUTPUT VARIABLE] ***********************************************
ok: [eos] => {
    "output": {
        "changed": false,
        "failed": false,
        "stdout": [
            "Arista vEOS\nHardware version:    \nSerial number:       \nSystem MAC address:  0800.27ec.005e\n\nSoftware image version: 4.20.1F\nArchitecture:           i386\nInternal build version: 4.20.1F-6820520.4201F\nInternal build ID:      790a11e8-5aaf-4be7-a11a-e61795d05b91\n\nUptime:                 1 day, 3 hours and 23 minutes\nTotal memory:           2017324 kB\nFree memory:            1111848 kB"
        ],
        "stdout_lines": [
            [
                "Arista vEOS",
                "Hardware version:    ",
                "Serial number:       ",
                "System MAC address:  0800.27ec.005e",
                "",
                "Software image version: 4.20.1F",
                "Architecture:           i386",
                "Internal build version: 4.20.1F-6820520.4201F",
                "Internal build ID:      790a11e8-5aaf-4be7-a11a-e61795d05b91",
                "",
                "Uptime:                 1 day, 3 hours and 23 minutes",
                "Total memory:           2017324 kB",
                "Free memory:            1111848 kB"
            ]
        ]
    }
}

PLAY RECAP *************************************************************************
eos                        : ok=2    changed=0    unreachable=0    failed=0

You can see in the output above that both tasks were executed successfully. At the default verbosity the first task prints no command output; it simply reports the host the task was executed on, eos, with ok in green to indicate success. The second task, using the debug module, returns output from the command that was executed. You see the same information in two different formats:

  • stdout
  • stdout_lines

The stdout key returns everything a human operator would have seen on the command line, as one large string. The stdout_lines key returns a list of strings, making the information easier to read; each item is a separate line that was returned from the command.

Here is the output to see what that looks like:

Arista EOS command line output

eos>show vers
Arista vEOS
Hardware version:
Serial number:
System MAC address: 0800.27ec.005e
Software image version: 4.20.1F
Architecture: i386
Internal build version: 4.20.1F-6820520.4201F
Internal build ID: 790a11e8-5aaf-4be7-a11a-e61795d05b91
Uptime: 1 day, 3 hours and 56 minutes
Total memory: 2017324 kB
Free memory: 1116624 kB

Ansible stdout_lines

"stdout_lines": [
    [
          "Arista vEOS",
          "Hardware version:",
          "Serial number:",
          "System MAC address: 0800.27ec.005e",
          "",
          "Software image version: 4.20.1F",
          "Architecture: i386",
          "",
          "Internal build version: 4.20.1F-6820520.4201F",
          "Internal build ID: 790a11e8-5aaf-4be7-a11a-e61795d05b91",
          "",
          "Uptime: 1 day, 3 hours and 23 minutes",
          "Total memory: 2017324 kB",
          "Free memory: 1111848 kB"
        ]

Engineers and folks familiar with JSON and YAML have already noticed another interesting detail: stdout_lines begins with two opening brackets

"stdout_lines": [
            [

The two opening brackets show that stdout_lines actually returned a list of lists of strings. If we modify our debug task slightly, we can use this feature to view selections from the output. Since the output only has one list within the list, that entire sub-list is referenced as list zero. Let's look at a single line from the return value. I want to grab the System MAC address for our test. Looking above at the output, the System MAC address is returned in the fourth line, which corresponds to index 3 (since computers start counting from zero). This means we want line 3 of list 0, which corresponds to output.stdout_lines[0][3].

- name: print out a single line of the output variable
  debug:
    var: output.stdout_lines[0][3]

The debug task returns exactly what we need:

TASK [print out a single line of the output variable] ******************************
ok: [eos] => {
    "output.stdout_lines[0][3]": "System MAC address:  0800.27ec.005e"
}

Why is that first list indexed as zero, and what is the use case for having multiple lists? It is possible to run multiple commands with one command task. Here is a playbook with three commands:

---
- hosts: eos
  connection: network_cli
  tasks:
    - name: execute Arista eos command
      eos_command:
        commands:
          - show version
          - show ip int br
          - show int status
      register: output

    - name: print out command
      debug:
        var: output.stdout_lines

The registered output variable now looks like this:

"output.stdout_lines": [
    [
        "Arista vEOS",
        "Hardware version:    ",
        "Serial number:       ",
        "System MAC address:  0800.27ec.005e",
        "",
        "Software image version: 4.20.1F",
        "Architecture:           i386",
        "Internal build version: 4.20.1F-6820520.4201F",
        "Internal build ID:      790a11e8-5aaf-4be7-a11a-e61795d05b91",
        "",
        "Uptime:                 1 day, 4 hours and 20 minutes",
        "Total memory:           2017324 kB",
        "Free memory:            1111104 kB"
    ],
    [
        "Interface              IP Address        Status    Protocol      MTU",
        "Ethernet1              172.16.1.1/24      up         up          1500",
        "Management1            192.168.2.10/24    up         up          1500"
    ],
    [
        "Port  Name    Status       Vlan    Duplex  Speed  Type     Flags",
        "Et1           connected  routed    full    unconf EbraTestPhyPort   ",
        "Et2           connected    1       full    unconf EbraTestPhyPort   ",
        "Et3           connected    1       full    unconf EbraTestPhyPort   ",
        "Ma1           connected  routed   a-full a-1G   10/100/1000"
    ]
]

List zero corresponds to the show version command, list one corresponds to show ip int br, and list two corresponds to show int status. The list number directly corresponds to the order in which the command was run.

Arista EOS Command    Relevant List
show version          output.stdout_lines[0]
show ip int br        output.stdout_lines[1]
show int status       output.stdout_lines[2]

Scaling Command Module Use: Host Variables

So what happens if we run on two or more network devices at the same time?

diagram of Ansible running multiple network devices

The variable output is saved uniquely as a host variable per inventory host. If I had three switches and ran this playbook against them, I would have an output variable for each unique host. For a demonstration, we will grab the IP address of the Ethernet1 port from the show ip int br output above, for just one of the switches, switch03. The show ip int br corresponds to the second command we ran, and the Ethernet1 interface shows up in the second line of its output, so we know we want stdout_lines[1][1]. To reference variables for a specific host, we use the hostvars keyword and do a dictionary lookup on the host we want.

This is what the debug task will look like:

- name: debug hostvar
  debug:
    var: hostvars['switch03'].output.stdout_lines[1][1]

And the output matches what we would expect:

TASK [debug hostvar] ***************************************************************
ok: [switch03] => {
    "hostvars["switch03"].output.stdout_lines[1][1]": "Ethernet1              172.16.1.3/24      up         up              1500"
}

By default a task will use variables specific to that host, but when using hostvars you can directly reference other host variables.
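
Taking this one step further, here is a hedged sketch that loops over hostvars to print that same line for every switch; it assumes the three-command task above ran on each device in the play, and uses the ansible_play_hosts magic variable to list them:

- name: show the Ethernet1 line from every switch in the play
  debug:
    var: hostvars[item].output.stdout_lines[1][1]
  loop: "{{ ansible_play_hosts }}"
  run_once: true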

Conditions in Command Module Tasks: wait_for

The wait_for parameter applies conditional logic directly after a command is run. This means that within the same task you can purposely fail if the output does not match a desired state. By default, when no wait_for parameter is specified, the task runs only once. However, if wait_for is specified, the task is run until the condition is met or the maximum number of retries (10 by default) is hit. If I turn on command logging I can easily see this with a playbook specifically meant to fail for demonstration purposes:

---
- hosts: eos
  connection: network_cli
  tasks:
    - name: execute Arista eos command
      eos_command:
        commands:
          - show int status
        wait_for:
          - result[0] contains DURHAM

This playbook will run show int status 10 times, because it will never find the word DURHAM in the output of show int status.

A show logging command shows me the command was indeed run 10 times:

Mar 24 20:33:52 eos Aaa: %ACCOUNTING-6-CMD: admin vty6 192.168.2.1 stop task_id=17 start_time=1521923632.5 timezone=UTC service=shell priv-lvl=15 cmd=show interfaces status
Mar 24 20:33:53 eos Aaa: %ACCOUNTING-6-CMD: admin vty6 192.168.2.1 stop task_id=18 start_time=1521923633.71 timezone=UTC service=shell priv-lvl=15 cmd=show interfaces status
Mar 24 20:33:54 eos Aaa: %ACCOUNTING-6-CMD: admin vty6 192.168.2.1 stop task_id=19 start_time=1521923634.81 timezone=UTC service=shell priv-lvl=15 cmd=show interfaces status
Mar 24 20:33:55 eos Aaa: %ACCOUNTING-6-CMD: admin vty6 192.168.2.1 stop task_id=20 start_time=1521923635.92 timezone=UTC service=shell priv-lvl=15 cmd=show interfaces status
Mar 24 20:33:56 eos Aaa: %ACCOUNTING-6-CMD: admin vty6 192.168.2.1 stop task_id=21 start_time=1521923636.99 timezone=UTC service=shell priv-lvl=15 cmd=show interfaces status
Mar 24 20:33:58 eos Aaa: %ACCOUNTING-6-CMD: admin vty6 192.168.2.1 stop task_id=22 start_time=1521923638.07 timezone=UTC service=shell priv-lvl=15 cmd=show interfaces status
Mar 24 20:33:59 eos Aaa: %ACCOUNTING-6-CMD: admin vty6 192.168.2.1 stop task_id=23 start_time=1521923639.22 timezone=UTC service=shell priv-lvl=15 cmd=show interfaces status
Mar 24 20:34:00 eos Aaa: %ACCOUNTING-6-CMD: admin vty6 192.168.2.1 stop task_id=24 start_time=1521923640.32 timezone=UTC service=shell priv-lvl=15 cmd=show interfaces status
Mar 24 20:34:01 eos Aaa: %ACCOUNTING-6-CMD: admin vty6 192.168.2.1 stop task_id=25 start_time=1521923641.4 timezone=UTC service=shell priv-lvl=15 cmd=show interfaces status
Mar 24 20:34:02 eos Aaa: %ACCOUNTING-6-CMD: admin vty6 192.168.2.1 stop task_id=26 start_time=1521923642.47 timezone=UTC service=shell priv-lvl=15 cmd=show interfaces status

Here we can look at a real-world example. For this playbook everything is configured to bring up an OSPF adjacency with another device, except the ip ospf area command. We will apply the command and then use the wait_for parameter to make sure the adjacency comes up (indicated by FULL). If FULL is not found within 10 retries, the task will fail.

---
- hosts: eos
  connection: network_cli
  tasks:
    - name: turn on OSPF for interface Ethernet1
      eos_config:
        lines:
          - ip ospf area 0.0.0.0
        parents: interface Ethernet1

    - name: execute Arista eos command
      eos_command:
        commands:
          - show ip ospf neigh
        wait_for:
          - result[0] contains FULL

Execute the playbook with the ansible-playbook command:

  ansible-playbook ospf.yml

PLAY [eos] *********************************************************************************************

TASK [turn on OSPF for interface Ethernet1] *******************************************************
changed: [eos]

TASK [execute Arista eos command] ****************************************************************
ok: [eos]

PLAY RECAP ******************************************************************************************
eos                    : ok=2    changed=1    unreachable=0    failed=0

Checking on the command line confirms the playbook ran successfully:

eos#show ip ospf neigh
Neighbor ID     VRF      Pri State             Dead Time   Address         Interface
2.2.2.2         default  1   FULL/DR           00:00:33    172.16.1.2      Ethernet1

In addition to contains we can use:

  • eq: Equal
  • neq: Not equal
  • gt: Greater than
  • ge: Greater than or equal
  • lt: Less than
  • le: Less than or equal

There are also three parameters that can be used in conjunction with wait_for. All of these are documented on the individual module pages:

Parameter    Description
interval     Time between each command retry
retries      The number of times the task is retried before failing (or until the condition is met)
match        Whether all of your conditionals must be met, or just any of them
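
For example, the OSPF check later in this post could be made more patient with retries and interval; in this sketch the values are arbitrary, chosen to poll every five seconds for up to two minutes:

- name: execute Arista eos command
  eos_command:
    commands:
      - show ip ospf neigh
    wait_for:
      - result[0] contains FULL
    retries: 24
    interval: 5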

Let's quickly elaborate on the match parameter:

- name: execute Arista eos command
  eos_command:
    commands:
      - show ip ospf neigh
    match: any
    wait_for:
      - result[0] contains FULL
      - result[0] contains 172.16.1.2

With match: any set, the task will succeed if the result contains either FULL or 172.16.1.2. With match: all (the default), both must be true before the task can pass. If you have multiple conditionals, it is far more likely that you want all of them met rather than just one.

Looking for a use case where you might want match: any? Imagine that you want to confirm internet access to and from a data center. For this particular data center you have five ISPs (Internet Service Providers), and five discrete BGP connections between your data center and those ISPs. An Ansible Playbook could check all five BGP connections and continue on if any of them are up and working, rather than requiring all five. Just remember that any implies OR, while all implies AND.

Parameter     Description
match: any    Implicit OR, any conditional can be met
match: all    Implicit AND, all conditionals must be met

Negative Conditions: Handling Inverse Logic

Sometimes you are looking for an absence or another negative condition in command output. It's tempting to use the neq comparison for any negative scenario, but it's not always the right choice. If you want the inverse logic of contains (the output from this command should not contain this), consider using the register keyword to store the output, followed by a when statement on a subsequent task, as shown in the sketch below. If you want to stop the playbook when conditions are not met, consider using the fail or assert modules, where you can fail on purpose. The neq shown above only makes sense if you can grab an exact value (if you can get key-value pairs or JSON) rather than a string or list of strings; otherwise you end up doing exact string compares.
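
Here is a hedged sketch of that register-plus-when pattern; the errdisabled string is just an illustrative example of something you would not want to see in the output:

- name: run show int status
  eos_command:
    commands:
      - show int status
  register: output

- name: fail on purpose if an unwanted string is present
  fail:
    msg: "an errdisabled interface was found in the output"
  when: "'errdisabled' in output.stdout[0]"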

Going Further

Read the documentation on Working with Command Output in Network Modules here.  

The more specific conditionals like ge, le, etc. can work really well on JSON output from certain networking platforms as shown in the example in the documentation.
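
As a hedged illustration of that idea (the memTotal field name assumes EOS structured output from show version | json; adjust the path for your platform), a ge conditional could verify the memory figure we saw earlier:

- name: verify the device reports at least ~2 GB of total memory
  eos_command:
    commands:
      - show version | json
    wait_for:
      - result[0].memTotal ge 2000000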




Connect Ansible Tower and Jenkins in under 5 minutes

We often hear from customers that they are using Jenkins in some capacity or another. And since I'm a consultant, I'm lucky to hear first-hand what our customers are using and how they need to integrate Ansible Tower. There has always been a way to integrate Ansible Tower and Jenkins using tower-cli, but I thought there could be a neater, closer-to-native way of doing it.

So here we go. I've recorded this short screencast to show you just how easy it is:

The screencast is not currently available. You can find helpful material in the links below or on the Ansible community forum: https://forum.ansible.com/tag/awx

Below you will find a few links from the video and a link to how to try Ansible Tower.

plugins.jenkins.io/ansible-tower

wiki.jenkins.io/display/JENKINS/Ansible+Tower+Plugin




Windows Package Management

Welcome to the third installment of our Windows-centric Getting Started Series!

In the previous post we covered how you can use Ansible and Ansible Tower to help manage your Active Directory environment. This post will go into how you can configure some of those machines on your domain. Most of this post is going to be dominated by specific modules. Ansible has a plethora of Windows modules that can be found here. As time is not a flat circle, I can't discuss all of them today, so I'll cover a few that are widely used.

MSIs and the win_package Module

So you got your domain up, you have machines added to it, now let's install some stuff on those machines. I do have a few notes before moving forward regarding the modules we'll be discussing. The win_msi module is deprecated and will be removed in Ansible 2.8 (the current version as of this post is 2.5). In its place you can use win_package, which I will be using throughout this post.

Alright, back to installing stuff. The win_package module is the place to be. It is used specifically for .msi and .exe files that need to be installed or uninstalled. These files can also be sourced locally, from a URL or from a network resource.

The parameters within the module add a lot of flexibility. As of Ansible 2.5, you can now list your arguments and the module will escape the arguments as necessary. However, it is recommended to use a string when dealing with MSI packages due to the unique escaping issues with MsiExec.
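
As a minimal sketch of that string form (the path and arguments here are placeholders for illustration, not from the original post):

- name: Install an MSI passing the arguments as a single string
  win_package:
    path: C:\temp\example.msi
    arguments: /quiet /norestart
    state: present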

Below are a few fuller examples of how you can use the win_package module. The first one shows how to install Visual C++ with a list of arguments:

- name: Install Visual C thingy with list of arguments instead of a string
  win_package:
    path: http://download.microsoft.com/download/1/6/B/16B06F60-3B20-4FF2-B699-5E9B7962F9AE/VSU_4/vcredist_x64.exe
    product_id: '{CF2BEA3C-26EA-32F8-AA9B-331F7E34BA97}'
    arguments:
    - /install
    - /passive
    - /norestart

Above, we see that the product ID is listed. While Ansible can and does extract the ID from the MSI when it's local, we don't want to force the host to download the MSI if it's not necessary. When you supply the product ID, Ansible can quickly check to see if the package is already installed without downloading a potentially huge MSI from the internet first. You can install without the product ID. An example of this can be found below: 

- name: Install Remote Desktop Connection Manager locally omitting the product_id
  win_package:
    path: C:\temp\rdcman.msi
    state: present

As I stated earlier, you can also download from a network share and specify the credentials needed to access that share. The example below shows it in action, installing 7-zip from a network resource: 

- name: Install 7zip from a network share specifying the credentials
  win_package:
    path: \\domain\programs\7z.exe
    product_id: 7-Zip
    arguments: /S
    state: present
    user_name: DOMAIN\User
    user_password: Password

Windows Package Management and Chocolatey

Unlike most Linux distros, Windows does not have a built-in package manager. Windows does have the Windows App Store, but I don't think a whole lot of those products are making their way into data centers.

There is, however, a community project called Chocolatey that provides a full package management experience for Windows users. It helps take away some of the pain that comes with managing raw setup.exe and .msi files. And wouldn't you know, we have a module for it!

But before we get into talking about the module, let's talk a little bit more about Chocolatey. For Mac users, a good comparison is Homebrew: Chocolatey is to Windows roughly what Homebrew is to macOS. Chocolatey is designed to easily work with all aspects of managing Windows software (installers, zip archives, runtime binaries, internal and third-party software) using a packaging framework that understands both versioning and dependency requirements.

The Chocolatey module is similar in use to its *nix counterparts: simple and powerful. It does have a soft requirement with regard to the Chocolatey version: it needs v0.10.5 to run, but if the module doesn't find that version, it will update Chocolatey for you. And to add some more sugar to that dessert, if Chocolatey is not present on the machine at all, the module will install it for you as well before carrying out its assigned tasks.

To get started with the module, one of the easiest examples could be installing a lightweight CLI tool. Let's use git because people's workflows are all the same, right?

- name: Install git
  win_chocolatey:
    name: git
    state: present

All joking aside, it is that easy to install git. It is just as easy to install a specific version of a package when you need one. Let's say you need Notepad++, version 6.6. It would look something like this:

- name: Install notepadplusplus version 6.6
  win_chocolatey:
    name: notepadplusplus
    version: '6.6'

One key thing to note when you're stating a version: make sure to enter it as a string (see the two tick marks around 6.6). If it is not entered as a string, it's treated as a YAML float. Many valid version numbers don't translate properly into a float (e.g., '6.10' != '6.1' for most versioning schemes, but 6.10 as a float becomes 6.1), so it's a good habit to always quote version numbers to ensure they're not re-formatted.

Some packages might require an interactive user logon to install. To pass the correct credentials, you can use become. The example below shows an installation of a package that requires become. Note that you can use become: System and it will not require you to supply a password.

- name: Install a package that requires 'become'
  win_chocolatey:
    name: officepro2013
  become: yes
  become_user: Administrator
  become_method: runas

The win_chocolatey module is powerful, but in some scenarios it will not work without become. There is no easy way to find out whether a package requires become, so the best course is to try without it and use become if that fails.

Packages and Chocolate Bars in Windows Automation

To wrap up this blog post: we covered a couple of ways you can automate the installation of packages for your Windows environment. Whether you are all-in on using Chocolatey or just need to install a few packages, Ansible has the power to do all of that and more for you, in a simple and easy-to-read format.

In our next and final post of the Getting Started with Windows Automation series, we will talk about Security and Updates in Windows using Ansible!




Connecting to a Windows Host

Welcome to the first installment of our Windows-specific Getting Started series!

Would you like to automate some of your Windows hosts with Red Hat Ansible Tower, but don't know how to set everything up? Are you worried that Red Hat Ansible Engine won't be able to communicate with your Windows servers without installing a bunch of extra software? Do you want to easily automate everyone's best friend, Clippy?

Ansible-Windows-Clippy

We can't help with the last thing, but if you said yes to the other two questions, you've come to the right place. In this post, we'll walk you through all the steps you need to take in order to set up and connect to your Windows hosts with Ansible Engine.

Why Automate Windows Hosts?

A few of the many things you can do for your Windows hosts with Ansible Engine include:

  • Starting, stopping and managing services
  • Pushing and executing custom PowerShell scripts
  • Managing packages with the Chocolatey package manager

In addition to connecting to and automating Windows hosts using local or domain users, you'll also be able to use runas to execute actions as the Administrator (the Windows alternative to Linux's sudo or su), so no privilege escalation ability is lost.

What's Required?

Before we start, let's go over the basic requirements. First, your control machine (where Ansible Engine will be executing your chosen Windows modules from) needs to run Linux. Second, Windows support has been evolving rapidly, so make sure to use the newest possible version of Ansible Engine to get the latest features!

For the target hosts, you should be running Windows 7 SP1 or later, or Windows Server 2008 SP1 or later. You don't want to be running something from the 90s like Windows NT, because this might happen:

Ansible-Windows-90s

Lastly, since Ansible connects to Windows machines and runs PowerShell scripts by using Windows Remote Management (WinRM) (as an alternative to SSH for Linux/Unix machines), a WinRM listener should be created and activated. The good news is, connecting to your Windows hosts can be done very easily and quickly using a script, which we'll discuss in the section below.

Step 1: Setting up WinRM

What's WinRM? It's a feature of Windows Vista and higher that lets administrators run management scripts remotely; it handles those connections by implementing the WS-Management Protocol, based on Simple Object Access Protocol (commonly referred to as SOAP). With WinRM, you can do cool stuff like access, edit and update data from local and remote computers as a network administrator.

The reason WinRM is perfect for use with Ansible Engine is that you can obtain hardware data from WS-Management protocol implementations running on non-Windows operating systems (in this specific case, Linux). It's basically like a translator that allows different types of operating systems to work together.

So, how do we connect?

With most versions of Windows, WinRM ships in the box but isn't turned on by default. There's a Configure Remoting for Ansible script you can run on the remote Windows machine (in a PowerShell console as an Admin) to turn on WinRM. To set up an https listener, build a self-signed cert and execute PowerShell commands, just run the script like in the example below (if you've got the .ps1 file stored locally on your machine):

Ansible-Windows-Powershell

Note: The win_psexec module will help you enable WinRM on multiple machines if you have lots of Windows hosts to set up in your environment.

For more information on WinRM and Ansible, check out the Windows Remote Management documentation page.

Step 2: Install Pywinrm

Since pywinrm dependencies aren't shipped with Ansible Engine (and these are necessary for using WinRM), make sure you install the pywinrm-related library on the machine that Ansible is installed on. The simplest method is to run pip install pywinrm in your Terminal.

Step 3: Set Up Your Inventory File Correctly

In order to connect to your Windows hosts properly, you need to make sure that you put ansible_connection=winrm in the host vars section of your inventory file, so that Ansible Engine doesn't just keep trying to connect to your Windows host via SSH.

Also, the WinRM connection plugin defaults to communicating via https, but it supports different modes like message-encrypted http. Since the "Configure Remoting for Ansible" script we ran earlier set things up with the self-signed cert, we need to tell Python, "Don't try to validate this certificate because it's not going to be from a valid CA." So in order to prevent an error, one more thing you need to put into the host vars section is: ansible_winrm_server_cert_validation=ignore

Just so you can see it in one place, here is an example host file (please note, some details for your particular environment will be different):

[win]
172.16.2.5
172.16.2.6

[win:vars]
ansible_user=vagrant
ansible_password=password
ansible_connection=winrm
ansible_winrm_server_cert_validation=ignore

Step 4: Test Connection

Let's check to see if everything is working. To do this, go to your control node's terminal and type ansible [host_group_name_in_inventory_file] -i hosts -m win_ping. Your output should look like this:

Ansible-Windows-Screen-Grab
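
Since the screen grab isn't shown here, this is roughly what a successful run looks like (host IPs assume the example inventory above; win_ping answers with pong):

172.16.2.5 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
172.16.2.6 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}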

Note: The win_ prefix on all of the Windows modules indicates that they are implemented in PowerShell and not Python.

Troubleshooting WinRM

Because WinRM can be configured in so many different ways, errors that seem Ansible Engine-related can actually be due to problems with host setup instead. Some examples of WinRM errors that you might see include an HTTP 401 or HTTP 500 error, timeout issues or a connection refusal. To get tips on how to solve these problems, visit the Common WinRM Issues section of our Windows Setup documentation page.

Conclusion

You should now be ready to automate your Windows hosts using Ansible, without the need to install a ton of additional software! Keep in mind, however, that even if you've followed the instructions above, some Windows modules have additional specifications (e.g., a newer OS or more recent PowerShell version). The best way to figure out if you're meeting the right requirements is to check the module-specific documentation pages.

For more in-depth information on how to use Ansible Engine to automate your Windows hosts, check out our Windows FAQ and Windows Support documentation page and stay tuned for more Windows-related blog posts!




Using Ansible to Mitigate Network Vulnerabilities

Even Networks Aren't Immune

Just like with Windows and Linux servers, networking devices can be exploited by vulnerabilities found in their operating systems. Many IT organizations do not have a comprehensive strategy for mitigating security vulnerabilities that span multiple teams (networking, servers, storage, etc.). Since the majority of network operations is still manual, the need to mitigate quickly and reliably across multiple platforms consisting of hundreds of network devices becomes extremely important.

In Cisco's March 2018 Semiannual Cisco IOS and IOS XE Software Security Advisory Bundled Publication, 22 vulnerabilities were detailed. While Red Hat does not report or keep track of individual networking vendors' CVEs, Red Hat Ansible Engine can be used to quickly automate mitigation of CVEs based on instructions from networking vendors.

In this blog post we are going to walk through CVE-2018-0171 which is titled "Cisco IOS and IOS XE Software Smart Install Remote Code Execution Vulnerability." This CVE is labeled as critical by Cisco, with the following headline summary:

"...a vulnerability in the Smart Install feature of Cisco IOS Software and Cisco IOS XE Software could allow an unauthenticated, remote attacker to trigger a reload of an affected device, resulting in a denial of service (DoS) condition, or to execute arbitrary code on an affected device."

Gathering Information from Networks

Users leverage Ansible modules to access devices, retrieve information and execute commands. One of the first things mitigating a CVE requires is collecting inventory: you need the networking platform and the specific version of code. CVE-2018-0171 affects the IOS and IOS-XE network operating systems, and Ansible can obtain this information easily. Let's use the ios_facts module, which returns key-value pairs for use in subsequent tasks. For example, ansible_net_model returns the model, and ansible_net_image returns the image file the device is running. For a full list, see the ios_facts module documentation page.

- name: gather facts for ios platforms
  ios_facts:
    gather_subset: all

- name: output facts to terminal window
  debug:
    msg: >
      Device {{ansible_net_hostname}}, model {{ansible_net_model}},
      running {{ansible_net_version}}

When executing the playbook we get nice output like this:

ok: [rtr1] => {
    "msg": "Device rtr1, model CSR1000V, running 16.05.02\n"
}
ok: [rtr2] => {
    "msg": "Device rtr2, model CSR1000V, running 16.05.02\n"
}
ok: [switch] => {
    "msg": "Device c3850-1, model WS-C3850-24T, running 16.06.01\n"
}

This allows us to quickly grab useful information about our network, and check it against Cisco Security Advisory. In a demo on the GitHub network-automation project we show how to use network facts to quickly build a nice HTML report.

The vulnerability CVE-2018-0171 specifies that to see if a device is vulnerable we must run the show vstack config command. In my network, I have three devices running IOS-XE, two are CSR1000V devices, and one device is a 3850. The two CSR devices don't have the command, while the 3850 switch does. To make my playbook robust enough to handle errors when a command doesn't exist, I can use the ignore_errors parameter. Otherwise, the playbook would fail and exit when a target network node doesn't have the ability to use that command. Alternatively, I could run the playbook only on switches by using a limit. For this example, let's assume we are running the Cisco 3850 which has the show vstack config command.

- name: run show vstack config
  ios_command:
    commands:
      - show vstack config
  register: showvstack

In the playbook above I used register: showvstack. The name showvstack is user-defined (I chose it; it is not reserved). By registering it, I can use the output from show vstack config later in the playbook. We can use the debug module to look at the showvstack variable to see how it's formatted.
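
A minimal task for that check might look like this (the task name is arbitrary):

- name: display the registered showvstack variable
  debug:
    var: showvstack

Running it against the switch returns: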

ok: [switch] => {
    "showvstack": {
        "changed": false,
        "failed": false,
        "stdout": [
            "Capability: Director | Client\n Oper Mode: Disabled\n Role: NA\n Vstack Director IP address: 0.0.0.0\n\n *** Following configurations will be effective only on director ***\n Vstack default management vlan: 1\n Vstack start-up management vlan: 1\n Vstack management Vlans: none\n Join Window Details:\n\t Window: Open (default)\n\t Operation Mode: auto (default)\n Vstack Backup Details:\n\t Mode: On (default)\n\t Repository:"
        ],

<<rest of output removed for brevity>>

There is a stdout and a stdout_lines. To read more on common return values, refer to the documentation. Next, we will use my new favorite module: the assert module. It enables us to check whether given expressions are true, failing the task if they are not. Cisco provides two outputs that we need to check for in the result of the show vstack config command:

switch1# show vstack config
Role: Client (SmartInstall enabled)

or

switch2# show vstack config
Capability: Client
Oper Mode: Enabled
Role: Client

We can use the assert module to check the text we saved in the showvstack variable:

- name: Check to make sure Cisco's Smart Install Client Feature is not enabled (1/2)
  assert:
    that:
      - "'SmartInstall enabled' not in showvstack.stdout"
      - "'Role' not in showvstack.stdout"
      - "'Client' not in showvstack.stdout"

Each line added in the assert module's that list is an implicit AND, meaning all three conditions need to be true for the task to pass.

Similarly we can check the second statement:

- name: Check to make sure Cisco's Smart Install Client Feature is not enabled (2/2)
  assert:
    that:
      - "'Oper Mode' not in showvstack.stdout"
      - "'Enabled' not in showvstack.stdout"
      - "'Role' not in showvstack.stdout"
      - "'Client' not in showvstack.stdout"

For this particular CVE, no workarounds are listed. For some CVEs we could use the ios_command or ios_config modules to mitigate based on the instructions the vendor provides. This CVE links to documentation on how to disable vstack using the command no vstack, which could be sent from Ansible, and it recommends blocking traffic on TCP port 4786 for older releases, which could be pushed using the ios_config module. Since no workaround is provided on the CVE, a network operator needs to make an educated decision based on their environment. By contrast, for CVE-2018-0150 there is a workaround provided, and ios_config could simply send no username cisco to mitigate the CVE.
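
If you did decide to push that command, a hedged sketch follows; note that it uses ios_config rather than ios_command, since no vstack modifies the device configuration:

- name: disable Cisco Smart Install as recommended by the vendor
  ios_config:
    lines:
      - no vstack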

Red Hat Ansible Engine and Red Hat Ansible Tower can be used to help network operators and administrators scale repetitive tasks like checking these dozens of CVEs and make sure their network is safe from vulnerabilities. On the server side, when system administrators are using Red Hat Insights, they can automatically generate playbooks for Red Hat Enterprise Linux to help with vulnerabilities and proactively identify threats to security, performance, and stability. Ansible can be the common way to execute tasks across your entire IT infrastructure.




Enable self-healing applications with Ansible and Dynatrace

The size, complexity and high rate of change in today's IT environments can be overwhelming. Enabling the performance and availability of these modern microservice environments is a constant challenge for IT organizations.

One trend contributing to this rate of change is the adoption of IT automation for provisioning, configuration management and ongoing operations. For this blog, we want to highlight the repeatable and consistent outcomes allowed by IT automation, and explore what is possible when Ansible automation is extended to the application monitoring platform Dynatrace.

Monitoring Today

Considering the size, complexity and high rate of change in today's IT environments, traditional methods of monitoring application performance and availability are necessary and commonplace in most operations teams. Application performance monitoring (APM) platforms are used to detect bottlenecks and problems that can impact the experience of your customers.

Monitoring alone, however, isn't always enough to help keep your applications running at peak performance. When issues are detected, APM platforms are designed to alert the operator of the problem and its root-cause. The Ops team can then agree on a corrective action, and implement this action against the impacted systems.

What if common or time-consuming corrective actions could be automated?

Dynatrace Automates Remediation

The Dynatrace APM platform provides AI-powered, full-stack performance monitoring of your microservice environments and their underlying infrastructure. Dynatrace enables insight into your IT operation and, via automated baselining, detects when areas of your environment do not meet performance or error-rate thresholds.

Once Dynatrace detects abnormal system behavior that affects real users, a problem alert is created that groups all incidents that share the same root-cause.

Demo application triggers a Problem alert. Dynatrace detected a degradation in response time, impacting 54 real users and more than 300 service calls:

Dynatrace Problem Alert

As soon as Dynatrace detects a problem within an environment, a problem notification can be sent out to third party systems to notify them about the incidents. Dynatrace allows users to integrate with Ansible Tower as a Notification System, allowing operators to launch Ansible Tower job templates from Dynatrace Problem Notifications.

Ansible Tower is now available as a featured third-party integration within the Dynatrace Notification System:

Ansible Tower integration with Dynatrace

The integration also allows transferring contextual information for the detected problem. This means Ansible job templates can leverage these extra variables for a context-aware, finer grained remediation in terms of executing a predefined playbook. 

Specify the Ansible Tower job template URL, credentials and an optional custom message. The Notification can be saved and will be triggered as soon as Dynatrace detects a problem in your environment:

Ansible Tower job template

Execution of a job template triggered by the Dynatrace problem notification sent to Ansible Tower:

Dynatrace executes Ansible Tower job

Note that extra variables are passed with the job template, designed to eliminate the need for the operator to provide this contextual information.

Self-Healing Applications in Action

Once your Ansible job templates are in place and customized for facilitating remediation tasks and the integration within Dynatrace is set up, the workflow for your self-healing applications looks as follows:

  • Dynatrace monitors your environment and detects problems once they affect real users
  • Dynatrace sends a problem notification to Ansible Tower
  • Ansible Tower launches the specified job template to start the remediation
  • Once the problem is resolved, Dynatrace closes the problem

As you can see, the Dynatrace - Ansible Tower integration is designed to simplify the setup of IT management automation tasks. Furthermore, the integration of Ansible Tower into the Dynatrace Problem Notifications workflow enables self-healing applications by triggering pre-defined, automatable Ansible job templates that are executed by Ansible Tower each time a problem is detected.




Getting Started with Ansible Tower's API

Welcome to another entry in the Getting Started series. The API (Application Programming Interface) or, as I like to refer to it, the Magical Land of Automation Information, can be used in quite a few ways. In this Getting Started post, we will be discussing Red Hat Ansible Tower's API and how you can use it to extract information to utilize in your playbooks and other tools.

The idea for this blog post came about when David Federlein was developing a new Ansible Tower demo and presentation. I will be making references to that codebase, which you can follow along with throughout this post. Please note that this demo utilizes Vagrant and VirtualBox so you'll need to have those applications installed if you would like to stand up the demo yourself.

Ansible Tower's API

Ansible Tower's API is fully browsable. You can navigate to your instance's REST API by typing this into your browser: http://<Tower server name>/api/v2. Once there, you can click any of the listed links and view the current objects loaded for that particular attribute in Ansible Tower. Everything you can do in Ansible Tower's UI can be done from the API; you can also use it to view everything from credentials to users. As we'll review in the next section, you can manually post to the API or make calls through a playbook.

Posting to the API

There are many different ways that you can make calls to the API, but today we are going to focus on two of the most basic:

  1. Manually from the REST API interface of Ansible Tower
  2. From a playbook

What I mean by "basic" here is that these methods are done only through Ansible Tower. As most of you might know, you can do some pretty amazing stuff with the information from Ansible Tower with other applications.

We'll not only be able to configure and modify Ansible Tower via these methods, but we'll also demonstrate that you can kick off jobs via API call as well. This will allow tighter integration with other aspects of your enterprise infrastructure and give the ability to run Red Hat Ansible Engine workloads while still restrained by the role-based access controls configured around those resources and Job Templates.

Posting Manually

For starters, the easiest (albeit not the quickest or most automated) way to post to the API is from the API interface. Here you can select an object to post to. Each object has a template at the bottom of the page that displays the fields that can be contained in a post.

For example, let's say you want to add a project to your Ansible Tower instance via the API. All you would have to do is navigate to your Ansible Tower's API screen (https://<towerip>/api/v2), select the project URL (/api/v2/projects/), and then scroll down to the bottom. Displayed there will be the content template, which will look like this:

{
    "name": "",
    "description": "",
    "local_path": "",
    "scm_type": "",
    "scm_url": "",
    "scm_branch": "",
    "scm_clean": false,
    "scm_delete_on_update": false,
    "credential": null,
    "timeout": 0,
    "organization": null,
    "scm_update_on_launch": false,
    "scm_update_cache_timeout": 0
}

Once you have that content, fill in the quotes with the relevant information from your environment. After you paste it into your field, hit POST. If that posts successfully, you can view the project in the Ansible Tower UI and also through the API.
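
For instance, a filled-in payload might look like this (the name, description and repository URL are illustrative, not from the original post; fields omitted from the POST keep the defaults shown in the template):

{
    "name": "My Demo Project",
    "description": "Playbooks for the API demo",
    "scm_type": "git",
    "scm_url": "https://github.com/ansible/ansible-tower-samples",
    "organization": 1,
    "scm_update_on_launch": true
}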

If it failed, you will receive a notification of a bad request. The method for fixing the error will show up in quotes below it. For example, if you are creating a user and fail to enter a password for that user, it will fail and return the following error:

{
    "password": [
        "This field may not be blank."
    ]
}

If you run into any issues with making a post to the API (like the above error), the OPTIONS button found at the top right of the UI next to GET can be of great help. The OPTIONS button describes the acceptable values for POST, PUT and PATCH for the specific object or endpoint you are wishing to post to.

Once the error you have found is fixed in the content field, hit "Post" one more time and note that the object has now been added to Ansible Tower successfully.

Posting Via a Playbook

Another way to post to Ansible Tower's API is through a playbook. The GitHub repo that I linked earlier in the post does this throughout the post installation plays. Almost everything done after the installation is done through the API.

To see it in action, let's sync that project that you just added into your instance. This will require some prior knowledge on the construction of Ansible Playbooks. If you need help or want to brush up on your playbook knowledge, you can visit our documentation.

The play that kicks off the job sync utilizes the URI module within Ansible. This module is used to interact with web services, such as the Ansible Tower API. This exact play can be found in the codebase that I linked above at /roles/tower/main.yml.

- name: kick off project sync
  uri:
    url: https://localhost/api/v2/projects/7/update/
    method: POST
    user: admin
    password: "{{ towerpass }}"
    validate_certs: False
    status_code:
      - 200
      - 201
      - 202
  when: response.status == 201

In this playbook task, we are telling Ansible to navigate to the API URL for your project. In this instance, it's https://localhost/api/v2/projects/7/update/. Notice that the project has a number before update. Projects are assigned a number in Ansible Tower based on the timing of their entry into Ansible Tower. This number can only be found by navigating to the API interface for projects https://<your_ip_here>/api/v2/projects/. Once there, you will need to find the project you wish to sync and then make a post to the update endpoint of that project number. The example does the update on project number 7.

Once you have found the correct project you want to update, you will need to make a post to the update endpoint. In this example, since we are updating project 7, the endpoint is https://localhost/api/v2/projects/7/update/.

For this post to work successfully with the URI module, you will need to also pass the API your user credentials that you log into Tower with. In this example, we are using the default admin user. You can use whichever user that has sufficient access to make such a post.

Kicking Off a Job

Now, the header might seem a little ambiguous. "Jake, kicking off a job isn't that hard in Ansible Tower." This is correct, but for this example, we are going to kick off a job in Ansible Tower from a playbook task, which is yet another thing you can do by making a call to the API. The specific example I am going to reference can be found in the vagrant-common role (/roles/vagrant-common/main.yml).

Now, once you get your spectacles out, the task that I am narrowing in on is found in the example below:

- name: kick off the provisioning job template
  shell: >
    curl -f -H 'Content-Type: application/json' -XPOST
    --user admin:{{ towerpass }}
    https://172.16.2.42/api/v2/job_templates/8/launch/ --insecure
  when: inventory_hostname == 'demovm4'

At first glance, you are seeing the shell module in use, running a curl command to a specific https endpoint. It just so happens that this https endpoint is the API endpoint for launching a specific job template.

That specific job template is assigned a number in Ansible Tower. In order to not have to go digging through the API to find your specific job template endpoint, a quick and easy way to find it is to navigate in the UI to the job template that you want to launch. Once there, look at the URL; the number the template is assigned to will be there.

Once you find the correct job template, the https endpoint will look something like api/v2/job_templates/8/launch/. Hit that endpoint with a -XPOST in a curl command and you should be cooking with gas.
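
If you would rather stay with native modules instead of shelling out to curl, a hedged equivalent using the uri module from earlier might look like this (same host, credentials and template number assumed; a successful launch returns 201):

- name: kick off the provisioning job template via the uri module
  uri:
    url: https://172.16.2.42/api/v2/job_templates/8/launch/
    method: POST
    user: admin
    password: "{{ towerpass }}"
    validate_certs: False
    status_code:
      - 201
  when: inventory_hostname == 'demovm4'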




Infoblox Integration in Ansible 2.5

The Ansible 2.5 open source project release includes the following Infoblox Network Identity Operating System (NIOS) enablement:

  • Five modules
  • A lookup plugin (for querying Infoblox NIOS objects)
  • A dynamic inventory script

For network professionals, this means that existing networking Ansible Playbooks can utilize existing Infoblox infrastructure for IP Address Management (IPAM), use Infoblox for tracking inventory, and more. For more information on Infoblox terminology, documentation and examples, refer to the Infoblox website.

Let's elaborate on each of these Ansible 2.5 additions. All of the following examples (and many more) are provided in the network automation community project, under the infoblox_ansible GitHub repository. The integrations for Ansible require that the control node (where Ansible is being executed from) have the infoblox-client installed. It can be found here and installed by issuing the pip install infoblox-client command.

Ansible Infoblox Modules

There are five new modules included with Ansible 2.5. They can currently be found in the development branch of the documentation.

Here is an example playbook on configuring an IPv4 network using the nios_network module:

---
- hosts: localhost
  connection: local
  tasks:
    - name: set dhcp options for a network
      nios_network:
        network: 192.168.100.0/24
        comment: sean put a comment here
        options:
          - name: domain-name
            value: ansible.com
        state: present
        provider: "{{nios_provider}}"
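
The playbook above references a nios_provider variable; as a sketch, it is typically a dictionary along these lines (host, username and password are placeholders for your NIOS instance):

nios_provider:
  host: 192.168.1.10
  username: admin
  password: infoblox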

Since this playbook did not specify the network_view parameter it will default to the default view. To run the playbook use the ansible-playbook command:

SEANs-MacBook-Pro:infoblox_ansible sean$ ansible-playbook  configure_network.yml

PLAY [localhost] ***************************************************************************************

TASK [set dhcp options for a network] ***************************************************************
changed: [localhost]

PLAY RECAP ******************************************************************************************
localhost                  : ok=1    changed=1    unreachable=0    failed=0

We can log in to the web GUI and look under Data Management -> IPAM, where we will see the new network listed:

Ansible-Infoblox-Image-1

The modules keep state (where applicable), so when we re-run the playbook, instead of reporting changed it will simply report ok and make no changes to Infoblox. This is referred to as idempotency (see the Ansible Docs glossary).

SEANs-MacBook-Pro:infoblox_ansible sean$ ansible-playbook  configure_network.yml

PLAY [localhost] ***************************************************************************************

TASK [set dhcp options for a network] ***************************************************************
ok: [localhost]

PLAY RECAP ******************************************************************************************
localhost                  : ok=1    changed=0    unreachable=0    failed=0

Ansible Infoblox Lookup Plugin

Next, let's look at the new lookup plugin for Infoblox. The Ansible documentation for the lookup plugin can be found here. The lookup plugin allows us to query different Infoblox NIOS objects, such as network views, DNS views, host records and more. In my Infoblox IPAM tab (Data Management -> IPAM) I have four top-of-rack leaf switches and two spine switches defined. I can see them under the list view for managed nodes:

Ansible-Infoblox-Image-2

Let's look at an Ansible Playbook snippet focused on grabbing information about a host record:

- name: fetch host leaf01
  set_fact:
    host: "{{ lookup('nios', 'record:host', filter={'name': 'leaf01'}, provider=nios_provider) }}"

We will set the result of the lookup plugin (specified by the keyword nios above) to the variable host. We only want the information for leaf01, so we filter based on the name. For the full playbook, check out get_host_record.yml stored in the network automation community.

Run the playbook with the ansible-playbook command:

SEANs-MacBook-Pro:infoblox_ansible sean$ ansible-playbook get_host_record.yml

PLAY [localhost] ***************************************************************************************

TASK [fetch host leaf01] ******************************************************************************
ok: [localhost]

TASK [check the leaf01 return variable] *************************************************************
ok: [localhost] => {
<SNIPPET, REST OF OUTPUT REMOVED FOR BREVITY>
    "host": {
        "ipv4addrs": [
            {
                "configure_for_dhcp": false,
                "host": "leaf01",
                "ipv4addr": "192.168.1.11"
            }
        ],
    }
}

TASK [debug specific variable (ipv4 address)] ******************************************************
ok: [localhost] => {
    "host.ipv4addrs[0].ipv4addr": "192.168.1.11"
}

TASK [fetch host leaf02] ******************************************************************************
ok: [localhost]

TASK [check the leaf02 return variable] *************************************************************
ok: [localhost] => {
<SNIPPET, REST OF OUTPUT REMOVED FOR BREVITY>

    "host": {
        "ipv4addrs": [
            {
                "configure_for_dhcp": false,
                "host": "leaf02",
                "ipv4addr": "192.168.1.12"
            }
        ],
    }
}

PLAY RECAP ******************************************************************************************
localhost                  : ok=5    changed=0    unreachable=0    failed=0

The above playbook shows how we can query Infoblox to grab specific information about Infoblox objects (in this case, specific hosts). These facts can be used throughout an Ansible play, allowing Infoblox to act as a single source of truth for information that may be changing. While the Ansible modules let you configure Infoblox, the lookup plugin lets you retrieve information from Infoblox for use in subsequent tasks. To read more about Ansible variables, facts and the set_fact module, refer to the Ansible variables documentation.
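
As a quick illustration, a hypothetical follow-on task (not part of the original playbook) could feed the address returned by the lookup straight back into inventory for later plays:

    - name: add leaf01 to inventory using the address stored in Infoblox (hypothetical)
      add_host:
        name: leaf01
        ansible_host: "{{ host.ipv4addrs[0].ipv4addr }}"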

Ansible Infoblox Dynamic Inventory

Ansible dynamic inventory scripts allow you to import inventory from another source, such as Cobbler, AWS or, in this case, Infoblox NIOS. You can read more about dynamic inventory on the Ansible dynamic inventory documentation page.

There are two files that need to be located under contrib/inventory/ in the Ansible project:

  • infoblox.yaml - specifies the provider arguments and optional filters
  • infoblox.py - Python script that retrieves inventory

Update infoblox.yaml with the login information for your NIOS instance: the username, password and an IP address or hostname. Make sure the infoblox.yaml file is located at /etc/ansible/infoblox.yaml.
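
As a rough sketch, the file pairs the same provider values used with the modules above with optional filters. The exact key names may differ between Ansible releases, so treat this as illustrative and consult the sample file shipped in contrib/inventory/ (values shown are placeholders):

---
infoblox:
  host: <NIOS hostname or IP>
  username: <NIOS username>
  password: <NIOS password>
filters:
  extattrs: {}
  view: null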

To test your setup, the infoblox.py script can be run directly from the command line:

[ec2-user@ip-172-16-103-218 infoblox]$ python infoblox.py
{
    " ": {
        "hosts": [
            "leaf01",
            "leaf02",
            "leaf03",
            "leaf04",
            "spine01",
            "spine02"
        ]
    },
<SNIPPET, REST OF OUTPUT REMOVED FOR BREVITY>

Next we will create a small debug playbook to print out the inventory_hostname for each host retrieved by the infoblox.py dynamic inventory script.

---
- hosts: all
  gather_facts: false
  tasks:
    - name: list all hosts
      debug:
        var: inventory_hostname
      delegate_to: localhost

To grab the inventory for a playbook use the -i parameter and specify the infoblox.py python script. Run the playbook with the ansible-playbook command:

[sean@rhel-7]$  ansible-playbook -i infoblox.py debug.yml

PLAY [all] ***********************************************************************************************

TASK [list all hosts] ************************************************************************************
ok: [leaf01 -> localhost] => {
    "inventory_hostname": "leaf01"
}
ok: [leaf03 -> localhost] => {
    "inventory_hostname": "leaf03"
}
ok: [leaf02 -> localhost] => {
    "inventory_hostname": "leaf02"
}
ok: [leaf04 -> localhost] => {
    "inventory_hostname": "leaf04"
}
ok: [spine01 -> localhost] => {
    "inventory_hostname": "spine01"
}
ok: [spine02 -> localhost] => {
    "inventory_hostname": "spine02"
}

PLAY RECAP ******************************************************************************************
leaf01                     : ok=1    changed=0    unreachable=0    failed=0
leaf02                     : ok=1    changed=0    unreachable=0    failed=0
leaf03                     : ok=1    changed=0    unreachable=0    failed=0
leaf04                     : ok=1    changed=0    unreachable=0    failed=0
spine01                    : ok=1    changed=0    unreachable=0    failed=0
spine02                    : ok=1    changed=0    unreachable=0    failed=0

More Information

For more information on Ansible networking, check out the Ansible Networking microsite. Infoblox NIOS can now be managed by the same Ansible Playbooks that are already configuring Cisco IOS, NX-OS, IOS-XR, Juniper Junos, Arista EOS and much more.




Getting Started with LDAP Authentication in Ansible Tower

Getting Started with LDAP Authentication in Ansible Tower

Next in the Getting Started series, we're covering the basics of configuring Red Hat Ansible Tower to allow users to log in with LDAP credentials. In this post, we'll also explain a few troubleshooting tips to help narrow down problems and correct them. As long as you have a map of your LDAP tree/forest, this post should help get users logging in with their LDAP credentials.

CONFIGURATION SETTINGS

To configure Ansible Tower for LDAP authentication, navigate to Settings (the gear icon) and then to the "Configure Tower" section. The area within these configuration settings we're focusing on is "Authentication", and the subcategory should be set to "LDAP".

Ansible-Getting-Started-Tower-LDAP-7

The fields that will be the primary focus are:

  • LDAP server URI
  • Bind DN and password
  • User/group searches

The other fields allow you to refine your LDAP searches, either to reduce the resources used in production or to map your organization's structure.

The LDAP Server URI is simply the IP or hostname of your LDAP server prepended with the protocol (for example, ldap://10.10.10.254).

Ansible-Getting-Started-Tower-LDAP-8

The Bind DN is the credential Ansible Tower uses to read the LDAP structure: a user (qualified by its group and domain) together with that user's password.

Ansible-Getting-Started-Tower-LDAP-1

REFINING USER SEARCH

With Ansible Tower able to connect to the LDAP server, refining the user search completes the configuration. The User Search entry will match the pattern specified by location and scope. In this case the user ID is the sAMAccountName value (instead of uid) since the search is against an Active Directory tree.

Ansible-Getting-Started-Tower-LDAP-4Ansible-Getting-Started-Tower-LDAP-2

USER AND GROUP SEARCH

The User and Group searches are where the most troubleshooting may be needed, depending on how complex your directory structure is. Use the ldapsearch tool from the openldap package to construct searches against the LDAP server. Begin with a basic search and refine it incrementally.

ldapsearch -x  -H ldap://10.10.10.254 -D "CN=jarvis,CN=Users,DC=shield,DC=team" -w 01Password! -b "cn=Users,dc=shield,dc=team"

This search is general and will list results at the location specified (-b "cn=Users,dc=shield,dc=team"); that location should match what you will use for your LDAP search scope against your server.
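
From there you can tighten the same search with a filter, for example to look up a single account by sAMAccountName (the account name here is hypothetical):

ldapsearch -x -H ldap://10.10.10.254 -D "CN=jarvis,CN=Users,DC=shield,DC=team" -w 01Password! -b "cn=Users,dc=shield,dc=team" "(sAMAccountName=tony)"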

The LDAP Require Group and LDAP Deny Group fields are for adding single entries to narrow your search scope by a single group. The LDAP User DN Template field will narrow down the scope to just the format you enter in the field. In the LDAP User Search field within the configuration page use:

  • SCOPE_SUBTREE: to search recursively down the directory tree
  • SCOPE_ONELEVEL: to specify a search one level down the tree only
  • SCOPE_BASE: to only search the level specified in the base DN

Use the results returned from the ldapsearch tool to choose the values to search by, for example: uid or sAMAccountName and group or groupOfNames. It's worth keeping in mind that LDAP User DN Template will supersede your LDAP User Search, so only use one or the other when setting it up.
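
Putting those pieces together, an LDAP User Search entry for the Active Directory tree used in this post might look like the following sketch:

[
 "CN=Users,DC=shield,DC=team",
 "SCOPE_SUBTREE",
 "(sAMAccountName=%(user)s)"
]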

Ansible-Getting-Started-Tower-LDAP-6

For Windows/AD Admins

These steps set up single sign-on to Ansible Tower for LDAP users. Configuring Ansible Tower to authenticate against LDAP-connected hosts is done in the Credentials section, and the same considerations apply to authentication against Windows hosts that apply to Ansible generally, including prepping WinRM on the hosts to accept connections. Before preparing and running jobs against Windows hosts in an Active Directory, make sure to have the Credentials set up appropriately!

USER ATTRIBUTE MAP

Finally, it's important to dedicate some time when testing LDAP authentication to user attribute and organization mapping. The LDAP User Attribute Map is where LDAP attributes are mapped to Ansible Tower attributes: first name, last name, email, and so on. In this case the email attribute maps to userPrincipalName in the Active Directory server being used. The default is mail for most LDAP layouts, but you will need to know your structure in order to map accordingly.
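
For the Active Directory tree in this post, that map might look like the following sketch (givenName and sn are the conventional sources for first and last name; only the email line deviates from the mail default):

{
 "first_name": "givenName",
 "last_name": "sn",
 "email": "userPrincipalName"
}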

Ansible-Getting-Started-Tower-LDAP-5

The LDAP User Flags By Group field can be used to quickly narrow down mapping. For example, users belonging to the OU named "secret" are mapped to the superusers group in Ansible Tower in the example below:
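
A sketch of such an entry, reusing the same OU that appears in the organization map further below:

{
 "is_superuser": "ou=secret,dc=shield,dc=team"
}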

Ansible-Getting-Started-Tower-LDAP-3

Mapping gets more complex when you map users to teams and organizations in Ansible Tower. The example being used has a single organization, with org admins defined as the OU named "secret" that was matched in User Flags By Group.

{
 "Shield": {
  "admins": [
   "ou=secret,dc=shield,dc=team"
  ],
  "remove_admins": false,
  "remove_users": false,
  "users": true
 }
}

Users are assigned teams using the LDAP Team Map field. The simple LDAP database in the example below is mapping two groups to two respective teams within the same organization.

{
 "secret": {
  "organization": "Shield",
  "users": "OU=secret,DC=shield,DC=team",
  "remove": false
 },
 "avengers": {
  "organization": "Shield",
  "users": "OU=avengers,DC=shield,DC=team",
  "remove": false
 }
}

Mapping users and groups to Ansible Tower will vary in difficulty based on the LDAP database layout. Use the LDAP search command to refine your group queries and match them accordingly in Ansible Tower.

Recap

  • To authenticate LDAP users logging into Ansible Tower, use: LDAP server URI, bind DN & password and user and group search
  • Using LDAP User DN Template overrides the User Search
  • Use LDAP Require Group and/or LDAP Deny Group to reduce the number of groups searched by Ansible Tower
  • LDAP User attributes in Ansible Tower are defined in LDAP User Attribute Map
  • Use LDAP User Flags By Group to set LDAP user flags in Ansible Tower
  • Groups in LDAP are mapped to organizations or teams in LDAP Organization Map and LDAP Team Map, respectively



Adding Proxy Support within Red Hat Ansible Tower

Adding Proxy Support within Red Hat Ansible Tower

Getting Started with Adding Proxy Support

There are many reasons why proxies are implemented into an environment. Some can be put in place for security, others as load balancers for your systems. No matter the use, if you have a proxy in place, Red Hat Ansible Tower may need to utilize it. For a more in-depth look at what we will be doing in this post, you can visit our docs specifically on Proxy Support within Ansible Tower here.

Adding a Load Balancer (Reverse Proxy)

In some instances, you might have Ansible Tower behind a load balancer and need that information added to your instance. Sessions in Ansible Tower associate an IP address upon creation, and Ansible Tower's policy requires that any use of the session match the original IP address.

To allow for support of a proxy, you will have to make a few changes to your Ansible Tower configuration. Previously, this would have been done in a settings.py file found on your Ansible Tower host, but as of 3.2 you can now make these changes in the UI. To make these edits, you must be an admin on the instance and navigate to Settings, and then to Ansible Tower configuration.

Once you are in the Ansible Tower Configuration, select the System tab at the top, next to Jobs. Once there, we are going to edit the Remote Host Headers box. There will already be some text in there that is set after the installation: by default, REMOTE_HOST_HEADERS is set to ['REMOTE_ADDR', 'REMOTE_HOST'].

The edit you are going to make should reflect the following line, with the relevant information from your organization's environment.

REMOTE_HOST_HEADERS = ['HTTP_X_FORWARDED_FOR', 'REMOTE_ADDR', 'REMOTE_HOST']

Once you have entered the relevant information, click the green Save button in the bottom right corner and you'll be all set.

Outbound Proxy

Setting up Ansible Tower to utilize an outbound proxy is quick and easy. One of the things we see quite often when an outbound proxy needs to be in place is a project sync failing (if you aren't using locally stored playbooks). This error appears when Ansible Tower cannot resolve the source control management (SCM) domain you are using to manage your versioned playbooks, such as github.com. To fix this issue, you will need to make some configuration changes to Ansible Tower: navigate to the admin settings (the gear in the top right hand corner) and, from there, select Configure Ansible Tower.

Navigate to the Jobs tab that can be found across the top of the page. Once you are inside the Jobs tab, scroll down until you find the extra environment variables.

You will need to enter three line entries to add your proxy to your instance. Please note, you will need to know your proxy server's URL and port to make these changes worth your while.

AWX_TASK_ENV['http_proxy'] = 'http://url:port/'
AWX_TASK_ENV['https_proxy'] = 'http://url:port/'
AWX_TASK_ENV['no_proxy'] = '127.0.0.1,localhost'

Once the information has been entered, select the green Save button in the bottom right hand corner.

Please note, if you are upgrading from a prior release, you may need to remove prior settings from configuration files before using the Ansible Tower interface to configure these settings.

Now you can use Ansible Tower's power to automate while allowing it to utilize your proxy server, ELB or whichever form of filtering you have in place for your environment. It is not a hard process to implement, but does require some prior knowledge about your particular infrastructure.