The Ansible Basics: Why Automation Matters Today in IT

May 20, 2015 by Justin Nemmers

Your development team has completed weeks of work, delivering their masterpiece, an application, to IT for deployment, but it doesn’t work.

See, the developers made use of a different port that now needs to be opened on the firewall so end users can communicate with the software. IT changed the firewall rule, but didn’t tell development, so the developers never even knew it was an issue. Later, they create another application with the same issue, except this time it will be deployed in a different environment.

No procedure or policy was created to capture all of the changes necessary to successfully deploy the app, so the same thing happens again. It’s a vicious cycle.
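To make that concrete, here is a minimal, hypothetical sketch of how the firewall change could be captured as Ansible code instead of tribal knowledge. The host group name, port number, and zone are assumptions for illustration only.

```yaml
---
# Hypothetical sketch: the firewall change travels with the application
# as a playbook instead of living in someone's head.
# The host group, port number, and zone are illustrative assumptions.
- hosts: webservers
  become: true
  tasks:
    - name: Open the application's port on the firewall
      firewalld:
        port: 8080/tcp
        zone: public
        permanent: true
        immediate: true
        state: enabled
```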

IT departments struggle to manage thousands of configurations and hundreds of applications with everyone working in silos. The teams who develop the apps frequently are not the teams that run them. Meanwhile, operations teams deploy apps they didn’t write and have to tell the development team when changes need to be made for those apps to work in a new and foreign-to-development place called "a production environment".

Sound familiar?

Today’s IT environments are extremely complex. In the past, applications and hardware were closely connected. Apps came from a single vendor, complete with their own hardware and software, delivered to your environment on the back of a truck.

Hardware was loosely standards-based, which meant organizations chose a vendor and were then tied to that vendor for their hardware and software. Though it was difficult to change vendors (from Digital to Sun Microsystems, for example), it could be done if you redesigned nearly everything in your environment.

As time went on, however, the coupling of hardware and software began to loosen.

Hardware became commoditized and open standards-based architectures allowed software providers to build their own operating environments. Suddenly, software developers could develop applications for any operating system, regardless of the hardware.

Companies no longer relied on one vendor for their hardware and software needs, which gave them more freedom. But with more choices within environments, how components were pieced together became extremely complicated.

You could now buy hardware from anyone and choose your own operating system and applications, but you now had to manage all of these pieces in-house rather than rely on your hardware provider for support.

Also, where there used to be one server with one owner running a specific list of applications, there was now one server that could run 100 virtual machines owned by multiple teams, running multiple operating systems and versions, on different networks and storage pools.

As virtual environments took hold, it was no longer possible to point to one server and easily identify what it did. In this new landscape, the “tree” that is the data center is always growing, there are many different trees, and a large application might span multiple trees in multiple environments.

Managing all of this and understanding how everything fits together isn’t easy and falls on the IT department’s shoulders.

Though a number of tools have been available for some time to help manage this complex IT environment, they are incomplete. At the time these tools were built, applications were easy to configure and deploy to a server or virtual machine because a company’s web server, database server and middleware were all in one place.

But today, as application workloads are more widely distributed, and IT applications and configurations are more complex, single point-in-time configuration management alone is simply no longer adequate.

Think about it like this: When you come home from the grocery store, there is a precise and specific set of processes – an orchestrated workflow – that needs to happen in order for you to get from inside your car to your sofa.

First, you pull into your driveway. Then, you stop the car, open the garage door, open your car door, shut the car door, walk to the house, unlock the door, etc. This orchestrated set of events needs to occur the same way, every time. (You can’t open your door before you stop the car, for example.)

Similarly in IT, there has historically never been a single tool that could accurately describe the end-to-end configuration of each application in a particular environment. Though some tools could describe the driveway, for example, they could not also accurately describe the door (its height and width, and whether the handle is on the left or right side, etc.).

Ansible uniquely allows organizations to describe not just the configurations in an environment, but also the process of how these configurations are coupled together to make an application. Your application.
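As a rough sketch of what that can look like in practice, a single playbook can describe both the configuration pieces and the order in which they are coupled together. The inventory group, package names, and file paths below are hypothetical, not a prescribed layout.

```yaml
---
# Hypothetical end-to-end description of a simple web application.
# The inventory group, package names, and paths are illustrative assumptions.
- hosts: app_servers
  become: true
  tasks:
    - name: Install the web server
      yum:
        name: httpd
        state: present

    - name: Deploy the application's configuration file
      template:
        src: templates/app.conf.j2
        dest: /etc/httpd/conf.d/app.conf
      notify: restart httpd

    - name: Ensure the web server is running and starts at boot
      service:
        name: httpd
        state: started
        enabled: true

  handlers:
    - name: restart httpd
      service:
        name: httpd
        state: restarted
```

Operations could then run the same playbook against development, test or production and get the same ordered result each time.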

With Ansible, businesses can easily and securely manage large-scale computing infrastructures on-premises and in the cloud. Ansible does this by providing enterprise-ready solutions for automating apps, systems and cloud resources for IT departments, based on the most popular open source IT automation projects in the market today.

This is an invaluable resource for our customers, many of which deal with configurations in the hundreds, with multiple configurations for multiple applications and destination environments: sometimes working toward one unified web server, other times across multiple server environments.

The key here is helping IT organizations understand the big picture of how these hundreds of configurations, applications and teams of people can successfully work together. It’s the piece of strategy that separates the IT teams that will successfully transform and adapt to rapidly changing technology from those that will continue to spend too much money struggling just to keep their heads above water.

Ideally, development teams would create a playbook that they deliver alongside their application so that IT could then use it to deploy and manage that application. When changes to the playbook are made, they are sent back to the development team so that the next time they deploy the application they are not reinventing the wheel. Everyone’s on the same page. All the time.
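One hypothetical way to keep that feedback loop small: the values operations tends to adjust (ports, paths, environment names) live in a variables file next to the playbook, so a change made for production is a small, reviewable edit that flows straight back to development. The file name and values below are assumptions for illustration.

```yaml
# group_vars/production.yml (hypothetical)
# Operations adjusts these values for the production environment and
# returns the change to development along with the playbook.
app_port: 8443
app_config_dir: /etc/httpd/conf.d
app_environment: production
```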

This eliminates a massive back and forth and miscommunication between the two teams, which also reduces delays in deployment. By automating this process with tools like Ansible, there are fewer human errors and better communication and collaboration overall. Companies can save money, compress their deployment time and time between releases, and validate compliance frequently and automatically. It injects some agility into traditional development and operations methodology.

Ansible perfectly describes an environment from the infrastructure and operating system all the way to the configurations and anything else related to the successful deployment and management of any given application.

And once a playbook has been created for the first deployment, IT departments already have a proven roadmap for how to do it right the next time. For further reading on how Ansible Tower can help roles within IT, check out Dave Johnson’s recent post, Ansible Tower in the Software Development Lifecycle.

So let’s get automating.



Justin Nemmers

Justin is the Ansible Product Owner at Ansible by Red Hat. He has spent a career helping organizations transform their IT environments by adopting new technologies and making better use of existing ones. Over his career, he has held technical, sales, marketing and product leadership roles at a number of organizations, including Red Hat, where he ran a large services team. He resides in Raleigh, NC with his wife and children and tweets from @justnems.

