Deploying Highly Available OpenShift Origin Clusters with Ansible

September 19, 2013 by bennojoy

OpenShift is a popular Platform-as-a-Service (PaaS) for hosting applications. It's relatively complicated to deploy, but in the following post you will find out how to quickly and easily deploy a production-ready OpenShift cluster on-premise or in the AWS EC2 cloud using Ansible.


A Primer into OpenShift Architecture

OpenShift Overview

OpenShift Origin is a next-generation application hosting platform that enables users to create, deploy, and manage applications within their cloud. In other words, it provides a PaaS (Platform as a Service). This frees developers from time-consuming tasks like machine provisioning and application deployment. OpenShift provides disk space, CPU resources, memory, network connectivity, and various application deployment platforms like JBoss, Python, MySQL, etc., so developers can spend their time coding and testing new applications rather than figuring out how to acquire and configure these resources.

OpenShift Components

Here's a list and a brief overview of the different components used by OpenShift.

  • Broker: is the single point of contact for all application management activities. It is responsible for managing user logins, DNS, application state, and general orchestration of the application. Customers don't contact the broker directly; instead they use the Web console, CLI tools, or JBoss tools to interact with the Broker over a REST-based API.
  • Cartridges: provide the actual functionality necessary to run the user application. OpenShift currently supports many language Cartridges like JBoss, PHP, Ruby, etc., as well as many database Cartridges such as Postgres, MySQL, MongoDB, etc. If a user needs to deploy or create a PHP application with MySQL as a backend, they can just ask the broker to deploy a PHP and a MySQL Cartridge on separate "Gears".
  • Gear: Gears provide a resource-constrained container to run one or more Cartridges. They limit the amount of RAM and disk space available to a Cartridge. For simplicity we can consider this as a separate VM or Linux container for running an application for a specific tenant, but in reality they are containers created by SELinux contexts and PAM namespacing.
  • Node: Nodes are the physical machines where Gears are allocated. Gears are generally over-allocated on Nodes since not all applications are active at the same time.
  • BSN (Broker Support Nodes): are the nodes which run the applications OpenShift itself needs for management. For example, OpenShift uses MongoDB to store various user/app details, and it also uses ActiveMQ to communicate with the different application nodes via MCollective. The nodes which host these supporting applications are called Broker Support Nodes.
  • Districts: are resource pools which can be used to separate the application nodes based on performance or environment. For example, in a production deployment we can have two Districts of Nodes, one containing nodes with lower memory/CPU/disk resources, and another for high-performance applications.

The Application Creation Process

[Figure: the application creation workflow]

The above figure depicts an overview of the steps involved in creating an application in OpenShift. If a developer wants to create or deploy a JBoss & MySQL application, they can request it from any of the available client tools: an Eclipse IDE, the command line tool (RHC), or even a web browser (the management console).

Once the user has instructed the client tool to deploy a JBoss & MySQL application, the client tool makes a web service request to the broker to provision the resources. The broker in turn queries the Nodes for Gear and Cartridge availability, and if the resources are available, two Gears are created and the JBoss and MySQL Cartridges are deployed on them. The user is then notified and can access the Gears via SSH to start deploying code.
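For example, the same request could be made with the rhc command line tool. The Cartridge names below are only illustrative, since they vary between OpenShift releases; running "rhc cartridge list" shows what the brokers in your deployment actually offer:

# see which Cartridges the broker offers
rhc cartridge list

# create a JBoss application, then add a MySQL Cartridge to it
rhc app create myapp jbossews-2.0
rhc cartridge add mysql-5.1 -a myapp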

Deployment Diagram of OpenShift via Ansible

[Figure: deployment architecture of OpenShift via Ansible]

The above diagram shows the Ansible playbooks deploying a highly-available OpenShift PaaS environment. The deployment has two servers running LVS (Piranha) for load balancing, providing HA for the Brokers. Two Broker instances also run for fault tolerance. Ansible also configures a DNS server which provides name resolution for all the new apps created in the OpenShift environment.

Three BSN (Broker Support Node) hosts provide a replicated MongoDB deployment, and the same hosts run three ActiveMQ instances as a highly-available cluster. There is no limit on the number of application nodes you can deploy; the user just needs to add the hostnames of the OpenShift nodes to the Ansible inventory and Ansible will configure all of them.

Note: As a best practice, if the deployment is in an actual production environment, it is recommended to integrate with the infrastructure's internal DNS server for name resolution, and to use LDAP or an existing Active Directory for user authentication.

Deployment Steps for OpenShift via Ansible

As a first step we need to set up a host with Ansible.

Assuming the Ansible host is a RHEL variant, install the EPEL release package:

yum install http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

Once the EPEL repository is installed, Ansible can be installed via the following command.

yum install ansible
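A quick way to confirm the installation worked is to check the version Ansible reports:

ansible --version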

It is recommended to use separate machines for the different components of OpenShift. In a test environment the services can be combined, but at least four nodes are mandatory, since the MongoDB and ActiveMQ clusters each need at least three members to work properly.

Once Ansible is set up, check out the OpenShift playbook repository from GitHub onto the Ansible management host:

git clone https://github.com/ansible/ansible-examples.git
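The OpenShift playbooks live in a subdirectory of that repository; the path below is an assumption and may differ between revisions of ansible-examples, so adjust it to match your checkout:

cd ansible-examples/openshift
ls   # look for site.yml, ec2.yml, the hosts inventory, and group_vars/ used in the steps below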

Set up the host inventory as follows.

    [dns]
    ec2-54-226-116-175.compute-1.amazonaws.com

    [mongo_servers]
    ec2-54-226-116-175.compute-1.amazonaws.com
    ec2-54-227-131-56.compute-1.amazonaws.com
    ec2-54-227-169-137.compute-1.amazonaws.com

    [mq]
    ec2-54-226-116-175.compute-1.amazonaws.com
    ec2-54-227-131-56.compute-1.amazonaws.com
    ec2-54-227-169-137.compute-1.amazonaws.com

    [broker]
    ec2-54-227-63-48.compute-1.amazonaws.com
    ec2-54-227-171-2.compute-1.amazonaws.com

    [nodes]
    ec2-54-227-146-187.compute-1.amazonaws.com

    [lvs]
    ec2-54-227-176-123.compute-1.amazonaws.com
    ec2-54-227-177-87.compute-1.amazonaws.com

Once the inventory is set up with hosts in your environment, the OpenShift stack can be deployed easily by issuing the following command:

ansible-playbook -i hosts site.yml
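Before kicking off the full deployment, it can be useful to first verify that Ansible can reach every host in the inventory (this assumes key-based SSH access to the hosts is already in place):

# ad-hoc ping of every host in the inventory file named "hosts"
ansible -i hosts all -m ping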

Deploying an Application in OpenShift

To create an application in OpenShift, access the management console via any browser. The VIP specified in group_vars/all can be used to access the management console, or the IP address of any broker node can be used instead.

https://<ip-of-broker-or-vip>/

The login page will prompt for a username and password. The default is "demo/passme". Once you are logged in, follow the onscreen instructions to create your first Application.
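Since every client tool ultimately talks to the broker over its REST-based API, the same credentials can also be used to check the broker from the command line. The endpoint path below is the usual OpenShift Origin REST API base, but treat it as illustrative for your installed version:

# -k skips TLS verification, useful if the broker presents a self-signed certificate
curl -k -u demo:passme https://<ip-of-broker-or-vip>/broker/rest/api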

Note: the playbooks install the Python 2.6 Cartridge by default, so choose Python 2.6 as the Cartridge.

On-Premise OpenShift Origin Deployment via Ansible

http://www.youtube.com/watch?v=I9_yLCuvHgo

Deploying OpenShift in EC2

The Ansible OpenShift repository also has playbooks that deploy highly-available OpenShift in EC2. The playbooks can also deploy the cluster to any EC2 API-compatible cloud, such as Eucalyptus.

Before deploying to EC2, please make sure:

  • A security group is created which allows SSH and HTTP/HTTPS traffic.
  • Your AWS access/secret keys are entered in group_vars/all.
  • The number of nodes required for the cluster is specified in the "count" variable in group_vars/all.

Once that is done, the cluster can be deployed simply by issuing the following command:

ansible-playbook -i ec2hosts ec2.yml -e id=openshift

Note: 'id' is a unique identifier for the cluster. If you are deploying multiple clusters, please make sure a different value is given for each deployment. The role of the created instances can be figured out by checking the Tags tab in the EC2 console.
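For example, a second, independent cluster could be stood up alongside the first simply by re-running the playbook with a different id (the value "openshift-staging" below is just an illustration):

ansible-playbook -i ec2hosts ec2.yml -e id=openshift-staging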

OpenShift Origin Deployment via Ansible in EC2

http://www.youtube.com/watch?v=SgP0irLnkAk

Removing the Deployed Cluster from EC2

To remove the deployed OpenShift cluster in EC2, just run the following command. The id parameter should be the same as the one given when creating the cluster.

ansible-playbook -i ec2hosts ec2_remove.yml -e id=openshift

We hope this blog post was useful and informative! We are always interested in your feedback so please send us an email at info@ansibleworks.com if you have any questions, suggestions, or comments!
