OpenShift v3: PaaS for the Software Defined Data Center ..(6/7)

Why OpenShift…

The container and container-orchestration landscape comprises quite a few players and projects jockeying for market share. However, what makes OpenShift a unique platform is how it helps enterprises surmount these challenges in five different areas:

  1. Firstly, the provenance of OpenShift is RHEL (Red Hat Enterprise Linux), possibly the industry’s dominant operating system. Linux is indeed the foundation of containers, especially given the work done around cgroups, kernel namespaces and SELinux. OpenShift v3 is the result of about five years of extensive engineering work and learning from live customer deployments, which makes it a highly robust platform.
  2. Secondly, by leveraging Red Hat’s JBoss middleware portfolio, OpenShift offers a multifaceted PaaS for any kind of application architecture, spanning application servers, messaging brokers & lightweight integration platforms, among others. OpenShift is also very mindful of supporting legacy applications that are stateful in nature.
  3. As we will see, OpenShift largely abstracts operations teams from the complexity involved in deploying & managing containerized workloads at scale.
  4. OpenShift is a true container platform, which means that containers are the basic development, build and runtime units. The platform is made to handle containers natively across the build, deploy and manage continuum. Accordingly, it also provides developers with an integrated toolchain to develop containerized applications.
  5. By leveraging best-of-breed technologies such as Kubernetes & Docker, OpenShift avoids reinventing the wheel. Using both these foundational blocks, containerized applications deployed in OpenShift can be designed to be high-performance, fault-tolerant & highly scalable.

With OpenShift v3, Red Hat offers a few groundbreaking architectural improvements over the older v2.

#1 OpenShift – A Container Management Platform

As mentioned above, OpenShift v3 is based on Linux container technology and leverages Docker containers as the standard runtime artifact. Accordingly, everything in OSE is a container in terms of how applications are built, deployed, exposed to developers and orchestrated by administrators on the underlying hardware. For those new to containers, a reread of my Docker post below is highly recommended. Docker is the runtime and packaging standard with OSE. Red Hat also provides a default Docker registry, called the Atomic Registry, with a full-fledged UI that is installed with OSE by default. This is a certified Docker registry which provides secure images for a range of open source technologies.

Why Linux Containers and Docker are the Runtime for the Software Defined Data Center (SDDC)..(4/7)

#2 OpenShift – Container Orchestration with Kubernetes

Once Docker Engine is used to provide formatted application images, OSE uses Kubernetes primitives to provision and deploy the overall cluster and to orchestrate deployments across multiple hosts.

Kubernetes is the container orchestration engine & cluster services manager of the PaaS. Red Hat has been making significant improvements to underlying services such as networking and storage. Open vSwitch is used for the underlying networking, and a Docker registry is added by the OSE team.

Kubernetes – Container Orchestration for the Software Defined Data Center (SDDC)..(5/7)

#3 OpenShift – Enabling Easier Use of Containers across the DevOps Application Lifecycle

OpenShift provides a range of capabilities and tools that enable developers to perform source code management, build and deploy processes.

OpenShift provides facilities for continuous integration (CI), and it does this in several ways. Firstly, code from multiple team members is checked in (via pushes and merged pull requests) to a common source control repository based on Git. This supports constant check-ins, and automated checks/gates can be added to run various kinds of tests. OpenShift also includes a Git-driven workflow in which a push event triggers a Docker image build.
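To make the push-triggers-a-build workflow concrete, here is a minimal sketch of an OpenShift v3 BuildConfig with a GitHub webhook trigger. All names (the app, repository URI, builder image and secret) are hypothetical placeholders, not values from this article:

```yaml
# Illustrative only: a BuildConfig whose GitHub webhook trigger starts a
# new source-to-image build (and hence a new Docker image) on every push.
apiVersion: v1
kind: BuildConfig
metadata:
  name: myapp                    # hypothetical application name
spec:
  source:
    type: Git
    git:
      uri: https://github.com/example/myapp.git   # hypothetical repo
  strategy:
    type: Source
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: jboss-eap:latest   # builder image; name is illustrative
  output:
    to:
      kind: ImageStreamTag
      name: myapp:latest         # where the built image is pushed
  triggers:
  - type: GitHub
    github:
      secret: replace-with-webhook-secret
```

The webhook URL that Git calls on each push is exposed by the Master API, so the source repository only ever talks to the Master, never to individual nodes.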

OpenShift can then automate all steps required to promote the work product from CI to delivery using CD. These steps involve automated testing, code dependency checks and promoting images from one environment to the other. OpenShift provides a web console, command-line tools & an Eclipse-based IDE plugin.
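Image promotion between environments is typically wired up with image-change triggers. The fragment below is an illustrative sketch, assuming a hypothetical "myapp" DeploymentConfig in a production project that redeploys whenever a tested image is re-tagged into myapp:prod:

```yaml
# Illustrative fragment: the ImageChange trigger redeploys "myapp"
# whenever the myapp:prod ImageStreamTag is updated (e.g. when a
# tested image is promoted to it). Names are hypothetical.
apiVersion: v1
kind: DeploymentConfig
metadata:
  name: myapp
spec:
  replicas: 2
  triggers:
  - type: ImageChange
    imageChangeParams:
      automatic: true            # roll out as soon as the tag moves
      containerNames:
      - myapp
      from:
        kind: ImageStreamTag
        name: myapp:prod
```

This keeps promotion declarative: moving a tag is the promotion event, and the platform handles the rollout.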

For management, Red Hat provides ManageIQ/CloudForms for managing OSE clusters. This tool does an excellent job of showing which containers are running on the platform, which hosts they’re running on, and a range of usage statistics such as the memory & CPU footprint of each node.

OpenShift Architecture…

OpenShift v3 (OSE) architecture mirrors the underlying Kubernetes and Docker concepts as faithfully as it can. Accordingly, it has a Master-Slave architecture as depicted below.

OpenShift Architecture (courtesy – Red Hat documentation [1])
The following key concepts are important to note –

  1. OpenShift is an application platform targeted at both developers and cloud administrators who are developing and deploying cloud-native, microservices-based applications.
  2. The OpenShift Master is the control plane of the overall architecture. It is responsible for scheduling deployments, acting as the gateway for the API, and for overall cluster management. As depicted in the illustration below, it consists of several components, such as the Master API, which handles client authentication & authorization, the Controller, which hosts the scheduler and the replication controller, & the client tools. It’s important to note that the management functionality only accesses the Master to initiate changes in the cluster and does not access the nodes directly. The Master also runs several processes that manage the cluster, including the etcd datastore, which stores all cluster state data.
  3. The second primitive in the architecture is the concept of a Node. A node refers to a host which may be virtual or physical. The node is the worker in the architecture and runs application stack components on what are called Pods.
  4. The basic application runtime unit is a Docker container (which provides container management and packaging), and OSE uses this paradigm across the lifecycle functions of container-based applications – build, provision, schedule, orchestrate and manage.
  5. All of the container & cluster management functionality is provided by Kubernetes – with some changes made by Red Hat, primarily around persistent storage and networking capabilities. Open vSwitch (OVS) provides the SDN networking implementation for communication between Pods and Services. We will discuss networking in a later section.
  6. OpenShift software installed on a RHEL 7/RHEL Atomic server (supports OSTree based updates) gives the host an OSE node personality – either a master or a slave. Instantiating a Docker image causes an application to be deployed in a container.
  7. Containers within OSE are grouped into Kubernetes Pods. Pods wrap around a group of Containers, thus application containers run inside Kubernetes pods. Kubernetes Pods are a first-class citizen in an OSE architecture.
  8. OSE also provides a private internal docker registry which can serve out container images to a range of consuming applications. The registry itself runs inside OSE as a Pod and stores all images. Red Hat Software Collections provides a range of certified images.
  9. Fluentd provides log management functionality in OSE. Fluentd runs on each OSE node and collects both application level and system logs. These logs are pushed into ElasticSearch for storage and Kibana is used for visualization. All of these packages are themselves in containers.
  10. Red Hat CloudForms is provided as a way to manage containers running on OSE. Using CloudForms, a deep view of the entire OSE cluster is provided – Inventory, Discovery, Monitoring etc. for hosts & pods in them.
  11. OpenShift v3 also introduces the concept of a project, which is built on Kubernetes namespaces. Projects are a way of separating functional applications from one another in an OSE cluster. Users and applications that pertain to one project (and namespace)  can only view resources within that project. Authorization is provided by Groups which are a collection of users.
  12. Given that containers run inside pods, Kubernetes assigns an IP address to each Pod. For example, consider a classical 3-tier architecture – a web layer, an application server and a database. Three different images, once instantiated, become three Docker containers, and each can be scaled independently of the others; thus, it is a better design for each of these to run inside its own Pod. All containers running inside the same pod share the same IP address; however, they are required to use non-conflicting ports. Services are a higher level of abstraction in OpenShift. A service (e.g. an application server, or database) is a collection of pods. The service abstraction is important to note as it enables a given runtime component (e.g. a database, or a message queue) to be reused among various applications.
  13. To reiterate, pods are the true first-class citizens inside OSE. A pod runs on a given host; however, if a service consists of 10 pods, they can all be distributed across hosts. Thus, scaling applications implies scaling pods. Pods provide a clean architecture overall by abstracting Docker images from the underlying storage and networking.
  14. Real-world applications are typically composed of multiple tiers, with containers in each; to manage this, OSE leverages the concept of a Kubernetes Service. Access to the application is managed using the Service abstraction. A service is a proxy that connects multiple pods and exposes them to the outside world. Services also build on the notion of Labels; e.g. a JBoss application server pod can be labeled “Tier=Middle Tier”. A service can group pods based on labels, which enables a range of interesting use cases and flexibility around pod access based on tags. Important examples are A/B deployments, rolling deployments etc.
  15. All underlying networking is handled by an SDN layer based on Open vSwitch (OVS). This enables cloud and network admins to assign IP address ranges for Services/Pods. These IPs are only reachable from the internal network. Open vSwitch enables administrators to design the network in a way that is best suited to their environment. In addition, traffic can be encrypted to enable the highest degree of security.
  16. OpenShift also provides an integrated routing layer to expose applications running inside pods to the outside world. The routing layer maps to the Kubernetes Ingress and Ingress Controller concepts; thus, OSE v3 also includes HAProxy as a reverse proxy. Once an application is deployed into OSE, a DNS entry is automatically created inside the load balancer, and all the pods behind a service are added as endpoints for the application.
  17. All load balancing (across front-end pods) for external client application requests is done by the Router, which is the entry point for any external requests coming in as shown in the above illustration. OpenShift enables administrators to deploy routers to nodes in a cluster. These routes can be used by developers to expose applications running inside pods to external clients and services.  The routing layer is pluggable and two plugins are provided and supported by default.
  18. OpenShift provides extensive build and deployment tools for developers. An example in this regard are the builder images provided by OpenShift. A builder image is combined with source code to create a deployable artifact – a logical application with all its binaries and dependencies. Once developers provide the source code and commit it to their Git repo, this triggers a build by the Master server. The application source is combined with the relevant builder image to create a custom image, which is then stored in the OSE registry. Using webhooks, OSE integrates with Git to automate the entire build and change process. Once an application container image is available, the deployment process takes over and deploys it on a given node, within a pod. Once deployed, a service is created, along with a DNS route in the routing layer for external users to access.
As mentioned above, HAProxy runs on a server with a static IP address. When MyApp1 and MyApp2 are deployed, corresponding entries are added in the DNS. For example –

MyApp1.mycloud.com

MyApp2.mycloud.com
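The service and routing abstractions described above can be sketched as a pair of OSE v3 resources. This is an illustrative fragment only – the service name, label selector, port and hostname are hypothetical, chosen to match the MyApp1 example:

```yaml
# Illustrative only: a Service selecting the MyApp1 pods by label, and a
# Route exposing that service at MyApp1.mycloud.com via the HAProxy router.
apiVersion: v1
kind: Service
metadata:
  name: myapp1
spec:
  selector:
    app: myapp1          # pods carrying this label become the endpoints
  ports:
  - port: 8080
    targetPort: 8080
---
apiVersion: v1
kind: Route
metadata:
  name: myapp1
spec:
  host: MyApp1.mycloud.com
  to:
    kind: Service
    name: myapp1
```

Because the route targets the service rather than individual pods, pods can be scaled, rescheduled or replaced without external clients noticing.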

Users are able to access the newly created application through the routing layer as shown above. Admins can set runtime resource utilization quotas for projects using the GUI, a major improvement over v2.
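Under the covers, the per-project quotas that admins set in the GUI correspond to Kubernetes ResourceQuota objects scoped to the project's namespace. A minimal sketch, with arbitrary example values:

```yaml
# Illustrative ResourceQuota for one project; the limits shown are
# arbitrary examples, not recommendations.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: project-quota
spec:
  hard:
    pods: "20"             # max concurrently running pods in the project
    requests.cpu: "4"      # total CPU that pods may request
    requests.memory: 8Gi   # total memory that pods may request
    limits.memory: 16Gi    # hard cap on total memory limits
```

Once such a quota is in place, deployments that would exceed it are rejected, which keeps one project from starving the rest of the cluster.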

Changes and upgrades to applications follow the same process as outlined above.

OpenShift SDN…

OpenShift as a platform provides a unified cluster network for inter-pod communication. [2] The implementation of the pod network is maintained by OpenShift SDN, which provides an overlay network using Open vSwitch (OVS). There are three SDN plug-ins provided for configuring the pod network.

  • The ovs-subnet plug-in, which provides a flat pod network for inter-pod and service communication.
  • The ovs-multitenant plug-in, which provides project-level isolation for pods and services. Each project within the cluster is assigned a unique virtual network ID (VNID), which ensures that traffic originating from its pods can be identified easily. Pods in one project cannot send packets to, or receive packets from, pods and services in other projects.
  • The ovs-networkpolicy plug-in, which allows custom isolation policies using NetworkPolicy objects.
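The plug-in choice is made in the master configuration. The fragment below is an illustrative sketch of the relevant section of an OSE v3 master-config.yaml selecting the multitenant plug-in; the CIDR ranges shown are example values, not defaults you must use:

```yaml
# Fragment of an OSE v3 master-config.yaml: choosing one of the three
# SDN plug-ins listed above. CIDRs are illustrative example ranges.
networkConfig:
  networkPluginName: redhat/openshift-ovs-multitenant
  clusterNetworkCIDR: 10.128.0.0/14    # address space carved up for pods
  serviceNetworkCIDR: 172.30.0.0/16    # internal-only service IP range
```

Switching to ovs-subnet or ovs-networkpolicy is a matter of changing networkPluginName accordingly.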

Conclusion…

With OpenShift v3, Red Hat has built a robust application platform that combines Docker and Kubernetes primitives with custom build services and third-party integrations. Expect to see Fortune 500 companies build cloud-native applications leveraging this platform in the years to come.

References…

[1] OpenShift v3 Documentation –  https://docs.openshift.com/container-platform/3.4/architecture/index.html

[2] OpenShift v3 Networking – https://docs.openshift.com/container-platform/3.5/architecture/additional_concepts/sdn.html

 

 
