Be honest. How many of you are still logging directly into the systems that you administer, via SSH, and changing things? I am. It’s a hard habit to break, but it’s one worth breaking. Luckily I don’t have very many servers of my own to manage, but changing things manually, instead of modeling those changes in a language of automation, is a sure way to build up technical debt and regret it later.
It’s been a long time since I’ve done any sort of system administration as my day job. But I talk to Ansible customers on a daily basis, and I have seen all sorts of environments: simple, complicated, small, large, well-managed, and poorly-managed. But one constant that I see throughout is increasing complexity and scale. Even for small shops with a few users, today’s platforms for data management, cloud hosting, and containers require a lot more distinct machines under management for their operation than the good old days when a couple of bare-metal LAMP servers could run a full web application.
Many people have written about the exponential growth in computing: from the early days of mainframes hosting hundreds of users and applications, to a single server rack with a bunch of power-hungry bare-metal servers, to virtualization, which breaks each of those servers into multiple virtual machines. The cloud built on this by making those VMs a commodity to be treated as “cattle, not pets”, and the coming promise of even smaller, more specialized containers will keep that exponential growth in the number of managed endpoints going. And if you believe the buzzword bingo, the “Internet of Things” is only going to make things worse. (Or better, if you’re a cloud infrastructure vendor!)
15 years ago, when you had a handful of those bare-metal LAMP machines, you may have been OK logging into machines individually and installing packages, tweaking settings, and updating applications. But when that handful of machines turns into 20, and then 200, manual management becomes untenable. Hands on keyboards become a liability. How do you recreate a machine when something goes wrong? How can you ensure that each component of the application is configured in a secure way? How do you respond quickly to big security issues like Heartbleed and Shellshock?
There are lots of infrastructure automation and configuration management tools out there, but they tend to be complicated, specialized, and burdened with a steep learning curve. They’re not accessible to non-developers or non-admins, their content is not easily audited or shared, and they have complicated bootstrapping procedures that make deployment into an existing environment difficult.
Ansible, though, is different. It uses a simple, human-readable language to write the configuration and automation steps: playbooks. If they’re written well, Ansible playbooks become their own documentation, and they can be easily shared with people all across an organization: developers, admins, support engineers. Even non-technical people can often read a playbook and understand the sequence of events and the end state of the system. Ansible intends to be a common language of automation for an entire organization.
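To see what that readability looks like in practice, here is a minimal sketch of a playbook that installs and starts a web server. The host group, package, and service names here are illustrative assumptions, not from any particular environment; the point is that even someone unfamiliar with Ansible can follow the sequence of events.

```yaml
---
# Illustrative playbook: make sure Apache is installed, started,
# and enabled at boot on every host in the "webservers" group.
# Group, package, and service names are assumptions for this example.
- name: Ensure the web server is installed and running
  hosts: webservers
  become: true
  tasks:
    - name: Install Apache
      yum:
        name: httpd
        state: present

    - name: Start and enable Apache
      service:
        name: httpd
        state: started
        enabled: true
```

Each task reads as a plain-English statement of desired end state rather than a script of commands, which is what lets a playbook double as its own documentation.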
Automation benefits everyone in an organization, and should be accessible to as many people as possible. This is why we’ve worked hard to keep the Ansible language simple and expressive, and provided tools to make Ansible content shareable and reusable. Visit Ansible Galaxy to learn more.
Once you’ve written an Ansible playbook or two, you may want to consider Ansible Tower, our web UI and REST API interface to the playbooks you already have in your organization. We have customers using Tower to share the benefits of automation with a broad range of people: Sales Engineers are creating full-blown application environments in minutes at the click of a button, front-line Support Engineers are handling sophisticated application management tasks without touching a terminal, and skilled systems administrators are saving hours a day by automating routine tasks.