The Seven Characteristics of Cloud Native Application Architectures..

We are in the middle of a series of blogs on Software Defined Datacenters (SDDC) @ http://www.vamsitalkstech.com/?p=1833. The key business imperative driving SDDC architectures is their ability to natively support digital applications. Digital applications are “Cloud Native” (CN) in the sense that these platforms are written for cloud frameworks from the outset – instead of being ported over to the cloud as an afterthought. Thus, Cloud Native application development is emerging as the most important trend in digital platforms. This blog post will define the seven key architectural characteristics of these CN applications.

Image Credit – Shutterstock

What is driving the need for Cloud Native Architectures… 

The previous post in the blog covered the monolithic architecture pattern. Monolithic architectures, which currently dominate the enterprise landscape, are coming under tremendous pressure in various ways and are increasingly being perceived to be brittle. Chief among these forces are massive user volumes, DevOps style development processes, the need to open up business functionality locked within applications to partners, and the heavy manual effort needed to deploy & manage monolithic architectures. Monolithic architectures also introduce technical debt into the datacenter, which makes it very difficult for the business lines to introduce changes as customer demands change – a key antipattern for digital deployments.

Why Legacy Monolithic Architectures Won’t Work For Digital Platforms..

Applications that require a high release velocity, present many complex moving parts, and are worked on by one or more development teams are an ideal fit for the CN pattern.

Introducing Cloud Native Applications…

There is no single and universally accepted definition of a Cloud Native application. I would like to define a CN Application as “an application built using a combination of technology paradigms that are native to cloud computing – including distributed software development, a need to adopt DevOps practices, microservices architectures based on containers, API based integration between the layers of the application, software automation from infrastructure to code, and finally orchestration & management of the overall application infrastructure.”

Further, Cloud Native applications need to be architected, designed, developed, packaged, delivered and managed based on a deep understanding of the frameworks of cloud computing (IaaS and PaaS).

Characteristic #1 CN Applications dynamically adapt to & support massive scale…

The first & foremost characteristic of a CN Architecture is the ability to dynamically support massive numbers of users, large development organizations & highly distributed operations teams. This requirement is even more critical when one considers that cloud computing is inherently multi-tenant in nature.

Within this area, the following typical concerns need to be accommodated (a short sketch follows this list) –

  1. the ability to grow the deployment footprint dynamically (Scale-up)  as well as to decrease the footprint (Scale-down)
  2. the ability to gracefully handle failures across tiers that can disrupt application availability
  3. the ability to accommodate large development teams by ensuring that components themselves provide loose coupling
  4. the ability to work with virtually any kind of infrastructure (compute, storage and network) implementation
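
The first concern above – growing and shrinking the deployment footprint – is typically handled declaratively by the orchestration layer. Below is a minimal, hedged sketch assuming a Kubernetes cluster and the official Python client; the deployment name, namespace and replica counts are purely illustrative.

```python
# Hedged sketch: scale a deployment's footprint up or down by patching its
# desired replica count; the platform then converges the actual state to it.
from kubernetes import client, config

config.load_kube_config()          # use local kubeconfig credentials
apps = client.AppsV1Api()

def scale(deployment: str, namespace: str, replicas: int) -> None:
    """Declare the desired replica count for a deployment."""
    apps.patch_namespaced_deployment_scale(
        name=deployment,
        namespace=namespace,
        body={"spec": {"replicas": replicas}},
    )

scale("orders-service", "prod", 10)   # scale up for peak traffic
scale("orders-service", "prod", 2)    # scale back down off-peak
```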

Characteristic #2 CN applications need to support a range of devices and user interfaces…

The User Experience (UX) is the most important part of a human facing application. This is particularly true of Digital applications which are omnichannel in nature. End users could not care less about the backend engineering of these applications as they are focused on an engaging user experience.

Demystifying Digital – the importance of Customer Journey Mapping…(2/3)

Accordingly, CN applications need to natively support mobile applications. This includes the ability to support a range of mobile backend capabilities – ranging from authentication & authorization services for mobile devices, location services, customer identification, push notifications, cloud messaging, toolkits for iOS and Android development etc.

Characteristic #3 They are automated to the fullest extent they can be…

The CN application needs to be abstracted completely from the underlying infrastructure stack. This is key, as development teams can then focus solely on writing their software and do not need to worry about the maintenance of the underlying OS/Storage/Network. One of the key challenges with monolithic platforms (http://www.vamsitalkstech.com/?p=5617) is their inability to efficiently leverage the underlying infrastructure, as they have a high degree of dependency on it. Further, the lifecycle of infrastructure provisioning, configuration, deployment, and scaling is mostly manual, with lots of scripts and pockets of configuration management.

The CN application, on the other hand, has to be very light on manual intervention given its scale. The provision-deploy-scale cycle is highly automated, with the application automatically scaling to meet demand and resource constraints and seamlessly recovering from failures. We discussed Kubernetes in one of the previous blogs.

Kubernetes – Container Orchestration for the Software Defined Data Center (SDDC)..(5/7)

Frameworks like these provide CN applications with resiliency and fault tolerance, and generally support very low downtime.
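
As a concrete illustration of this automation, the hedged sketch below declares an autoscaling policy once and lets the platform handle the scale-up/scale-down loop from then on. It assumes a Kubernetes cluster, the official Python client, and an existing deployment named orders-service – all of which are illustrative.

```python
# Hedged sketch: a HorizontalPodAutoscaler that grows and shrinks a
# deployment automatically based on observed CPU utilization.
from kubernetes import client, config

config.load_kube_config()
autoscaling = client.AutoscalingV1Api()

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="orders-service-hpa"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="orders-service"),
        min_replicas=2,                          # floor during quiet periods
        max_replicas=20,                         # ceiling under peak demand
        target_cpu_utilization_percentage=70,    # add pods above 70% CPU
    ),
)
autoscaling.create_namespaced_horizontal_pod_autoscaler(namespace="prod", body=hpa)
```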

Characteristic #4 They support Continuous Integration and Continuous Delivery…

For CN applications, the reduction of the vast amount of manual effort seen with monolithic applications is not confined to deployment alone. From a CN development standpoint, the ability to quickly test and perform quality control on daily software updates is an important aspect. CN applications automate the application development and deployment processes using the paradigms of CI/CD (Continuous Integration and Continuous Delivery).

The goal of CI is that every time source code is added or modified, the build process kicks off & the tests are run instantly. This helps catch errors faster and improves the quality of the application. Once the CI process is done, the CD process builds the application into an artifact suitable for deployment after combining it with suitable configuration. It then deploys it onto the execution environment with the appropriate identifiers for versioning, in a manner that supports rollback. CD ensures that the tested artifacts are deployed rapidly, with acceptance testing performed along the way.
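
A minimal sketch of that flow is below. It assumes pytest and the Docker Python SDK are available; the test path, image name and registry are illustrative, and a real pipeline would typically live in a CI server such as Jenkins rather than a standalone script.

```python
# Hedged sketch of a CI/CD step: run the test suite, and only on success
# build and push an immutable, versioned image that can be rolled back to.
import subprocess
import docker

def build_and_release(version: str) -> None:
    # Continuous Integration: every change triggers the build and the tests.
    subprocess.run(["pytest", "tests/"], check=True)

    # Continuous Delivery: package the tested code (plus its configuration)
    # into a deployable artifact, tagged so earlier versions stay available
    # for rollback. Assumes a Dockerfile at the repository root.
    image_name = "registry.example.com/orders-service"
    client = docker.from_env()
    client.images.build(path=".", tag=f"{image_name}:{version}")
    client.images.push(image_name, tag=version)

build_and_release("1.4.2")
```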

 Characteristic #5 They support multiple datastore paradigms…

The RDBMS has been a fixture of the monolithic application architecture. CN applications, however, need to work with loosely structured data formats as well as regularly structured data. This implies the need to support data streams that are not just high speed but are also better suited to NoSQL/Hadoop storage. These systems provide Schema on Read (SOR), an innovative data handling technique. In this model, a format or schema is applied to data as it is accessed from a storage location, as opposed to doing the same while it is ingested. As we will see later in the blog, individual microservices can have their own local data storage.
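
The snippet below is a deliberately simple, self-contained illustration of the schema-on-read idea (the file and field names are made up): raw JSON events are stored untouched, and a schema is applied only when the data is read back.

```python
# Hedged sketch of Schema on Read: ingest raw, loosely structured events
# as-is, and impose a schema (field name -> type) only at read time.
import json

RAW_STORE = "events.jsonl"   # append-only store of raw events

def ingest(event: dict) -> None:
    """No validation at write time – the raw record is kept untouched."""
    with open(RAW_STORE, "a") as f:
        f.write(json.dumps(event) + "\n")

def read_with_schema(fields: dict):
    """Project and type each record according to the schema supplied now."""
    with open(RAW_STORE) as f:
        for line in f:
            raw = json.loads(line)
            yield {name: cast(raw[name]) for name, cast in fields.items()}

ingest({"user": "u123", "amount": "42.50", "channel": "mobile"})
ingest({"user": "u456", "amount": "7", "channel": "web"})

# Two different consumers can apply two different schemas to the same data.
for row in read_with_schema({"user": str, "amount": float}):
    print(row)
```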

A Holistic New Age Technology Approach To Countering Payment Card Fraud (3/3)…

Characteristic #6 They support APIs as a key feature…

APIs have become the de facto model through which developers and administrators assemble Digital applications out of complex componentry such as microservices. Thus, there is a strong case to be made for adopting an API centric strategy when developing CN applications. CN applications use APIs in multiple ways – firstly, as the way to interface loosely coupled microservices (which abstract out the internals of the underlying application components). Secondly, developers use well-defined APIs to interact with the overall cloud infrastructure services. Finally, APIs enable the provisioning, deployment, and management of platform services.
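
As a minimal illustration of the first usage, the hedged sketch below exposes one business capability behind a small RESTful JSON API; the endpoint, port and field names are illustrative, and Flask is assumed to be installed.

```python
# Hedged sketch of a microservice whose internals are hidden behind an API.
from flask import Flask, jsonify

app = Flask(__name__)

# Stand-in for the service's own local datastore.
ACCOUNTS = {"1001": {"owner": "Jane Doe", "balance": 250.75}}

@app.route("/accounts/<account_id>", methods=["GET"])
def get_account(account_id):
    account = ACCOUNTS.get(account_id)
    if account is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(account)

if __name__ == "__main__":
    app.run(port=5000)
```

A consuming service would interact with it purely through the API contract – for example requests.get("http://accounts:5000/accounts/1001").json() – without knowing anything about the technology stack behind the endpoint.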

Why APIs Are a Day One Capability In Digital Platforms..

Characteristic #7 Software Architecture based on microservices…

As James Lewis and Martin Fowler define it – “..the microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and independently deployable by fully automated deployment machinery. There is a bare minimum of centralized management of these services, which may be written in different programming languages and use different data storage technologies.” [1]

Microservices are a natural evolution of Service Oriented Architecture (SOA). The application is decomposed into loosely coupled business functions and mapped to microservices. Each microservice is built for a specific, granular business function and can be worked on by an independent developer or team. As such, it is a separate code artifact and is thus loosely coupled not just from a communication standpoint (typically communicating via a RESTful API with data passed around using a JSON/XML representation) but also from a build, deployment, upgrade and maintenance process perspective. Each microservice can optionally have its own localized datastore. An important advantage of adopting this approach is that each microservice can be created using a separate technology stack from the other parts of the application. Docker containers are the right choice to run these microservices on. Microservices confer a range of advantages, ranging from easier builds to independent deployment and scaling.

A Note on Security…

It goes without saying that security is a critical part of CN applications and needs to be considered and designed for as a cross-cutting concern from the inception. Security concerns impact the design & lifecycle of CN applications, ranging from deployment to updates to image portability across environments. A range of technology choices is available to cover various areas such as application level security using Role-Based Access Control, Multifactor Authentication (MFA), and A&A (Authentication & Authorization) using protocols such as OAuth, OpenID, SSO etc. The topic of Container Security is fundamental here, and many vendors are working on ensuring that once the application is built as part of a CI/CD process as described above, it is packaged into labeled (and signed) containers which can be made part of a verified and trusted registry. This ensures that container image provenance is well understood, as well as protecting any users who download the containers for use across their environments.

Conclusion…

In this post, we have tried to look at some architecture drivers for Cloud-Native applications. It is a given that organizations moving from monolithic applications will need to take nimble, small steps to realize the ultimate vision of business agility and technology autonomy. The next post, however, will look at some of the critical foundational investments enterprises will have to make before choosing the Cloud Native route as a viable choice for their applications.

References..

[1] Martin Fowler – https://martinfowler.com/intro.html

Kubernetes – Container Orchestration for the Software Defined Data Center (SDDC)..(5/7)

The fourth and previous blog in this seven part series on Software Defined Datacenters (@ http://www.vamsitalkstech.com/?p=5010) discussed how Linux Containers & Docker are emerging as a key component of digital applications. We looked at various drivers & challenges stemming from running Containerized Applications from both a development and IT operations standpoint. In the fifth blog in this series, we will discuss another key emergent technology – Google’s Kubernetes (k8s) – which acts as the foundational runtime orchestrator for large scale container infrastructure. We will take the discussion higher up the stack in the next blog with OpenShift – Red Hat’s PaaS (Platform as a Service) platform – which includes Kubernetes and provides a powerful, agile & polyglot environment to build and manage microservices based applications.

The Importance of Container Orchestration… 

With Cloud Native application development emerging as the key trend in Digital platforms, containers offer a natural choice for a variety of reasons within the development process. In a nutshell, containers are changing the way applications are being architected, designed, developed, packaged, delivered and managed. That is why Container Orchestration has become a critical “must have”: for enterprises to derive tangible business value, they must be able to run containerized applications at scale.

Why Linux Containers and Docker are the Runtime for the Software Defined Data Center (SDDC)..(4/7)

While containers have existed in Unix based operating systems such as Solaris and FreeBSD, pioneering work in the Linux OS community has led to the mainstreaming of this disruptive technology. Now, despite all the benefits afforded to both developers and IT Operations by containers, there are critical considerations involved in running containers at scale in complex n-tier real world applications across multiple datacenters.

What are some of the key considerations in running containers at scale –

Consideration #1 – You need a Model/Paradigm/Platform for the lifecycle management of containers – 

This includes the ability to organize applications into groups of containers, schedule these applications on host servers that match their resource requirements, deploy applications as changes happen, manage complex storage integration and network topologies, and provide seamless ways to destroy, restart etc.

Consideration #2 – Administrative Lifecycle Management  –

This covers a range of lifecycle processes, from constant deployments to upgrades to monitoring. Granular issues include support for application patching with minimal downtime, support for canary releases, graceful failures in cloud-native applications, (container) capacity scale up & scale down based on traffic patterns etc.

Consideration #3 – Support development processes moving to DevOps and microservices

These reasons range from rapid feature development to the ability to easily accommodate CI/CD approaches and flexibility (as highlighted in the above point). For instance, k8s removes one of the biggest challenges with using vanilla containers along with CI/CD tools like Jenkins – the challenge of linking the individual containers that run microservices with one another. Other useful features include load balancing, service discovery, rolling updates and blue/green deployments.

While the above drivers are just general guidelines, the actual tipping point for large scale container adoption will vary from enterprise to enterprise. However, the common precursor to supporting containerized applications at scale has to be an enterprise grade management and orchestration platform. And for some very concrete reasons we will discuss, k8s is fast emerging as the de facto leader in this segment.

Introducing Kubernetes (K8s)…

Kubernetes (kube or k8s) is an open-source platform that aims to automate the scheduling, deployment and management of applications running in containers. Kubernetes (and platforms built leveraging it) are designed to bring both development and operations teams together. This affects how Cloud Native applications are architected, composed, deployed, and managed.

k8s was incubated at Google (given their expertise in running billions of container workloads at scale) over the last decade. One caveat: the famous cluster controller & container management system known as Borg is deployed extensively at Google. Borg is a predecessor to k8s, and it is generally believed that while k8s borrows its core design tenets from Borg, it only contains a subset of the features present in Borg. [4]

Again, from [4] – “Kubernetes traces its lineage directly from Borg. Many of the developers at Google working on Kubernetes were formerly developers on the Borg project. We’ve incorporated the best ideas from Borg in Kubernetes, and have tried to address some pain points that users identified with Borg over the years.”

However, k8s is not a Google-only project anymore. In 2015 it was donated to the Cloud Native Computing Foundation (CNCF). 2015 also saw the foundational k8s 1.0 release. Since then the project has been moving with a fair degree of feature & release velocity: version 1.4 was released in 2016, and with the current 1.7 release, k8s has found wide industry adoption. The last year has seen heavy contributions from the likes of Red Hat, Microsoft, Mirantis, and Fujitsu et al to the k8s codebase.

k8s is infrastructure agnostic, with clusters deployable on pretty much any Linux distribution – Red Hat, CentOS, Debian, Ubuntu etc. K8s also runs on all popular cloud platforms – AWS, Azure and Google Cloud. It is also virtually hypervisor agnostic, supporting VMware, KVM, and libvirt. It supports Docker, Windows Containers and rocket (rkt) runtimes, with support expanding as newer runtimes become available. [3]

After this brief preamble, let us now discuss the architecture and internals of this exciting technology. We will then discuss why it has begun to garner massive adoption and why it deserves a much closer look by enterprise IT teams.

The Architecture of Kubernetes…

As depicted in the below diagram, Kubernetes (k8s) follows a master-slave methodology much like Apache Mesos and Apache Hadoop.

Kubernetes Architecture

The k8s Master is the control plane of the architecture. It is responsible for scheduling deployments, acting as the gateway for the API, and for overall cluster management. As depicted in the below illustration, it consists of several components, such as an API server, a scheduler, and a controller manager. The master is responsible for the global, cluster-level scheduling of pods and handling of events. For high availability and load balancing, multiple masters can be set up. The core API server, which runs in the master, hosts a RESTful service that can be queried to maintain the desired state of the cluster and its workloads. The admin path always goes through the Master to access the worker nodes and never goes directly to the workers. The Scheduler service is used to schedule workloads on containers running on the slave nodes. It works in conjunction with the API server to distribute applications across groups of containers working on the cluster. It is important to note that the management functionality only accesses the master to initiate changes in the cluster and does not access the nodes directly.
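
A small, hedged illustration of that interaction pattern is below, assuming the official Kubernetes Python client and a reachable cluster: all reads of cluster state go to the master's API server, never to the individual nodes.

```python
# Hedged sketch: querying the cluster's state through the master's API server.
from kubernetes import client, config

config.load_kube_config()          # credentials for the API server
core = client.CoreV1Api()

# The API server answers for the whole cluster; we never talk to a node.
for node in core.list_node().items:
    print("node:", node.metadata.name)

for pod in core.list_pod_for_all_namespaces().items:
    print("pod:", pod.metadata.namespace, pod.metadata.name, pod.status.phase)
```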

The second primitive in the architecture is the concept of a Node. A node refers to a host, which may be virtual or physical. The node is the worker in the architecture and runs application stack components on what are called Pods. It needs to be noted that each node runs several Kubernetes components, such as a kubelet and a kube-proxy. The kubelet is an agent process that starts and stops groups of containers running user applications, manages images etc., and communicates with the Docker engine. The kube-proxy is a proxy networking service that redirects traffic to specific services and pods (we will define these terms in a bit). Both these agents communicate with the Master via the API server.

Nodes (which are VMs or bare metal servers) are joined together to form Clusters. As the name connotes, Clusters are a pool of resources – compute, storage and networking – that are used by the Master to run application components. Nodes, which used to be known as minions in prior releases, are the workers. Nodes host end user applications using their local resources such as compute, network and storage. Thus they include components to aid in logging, service discovery etc. Most of the administrative and control interactions are done via the kubectl script or by performing RESTful calls to the API server. The state of the cluster and the workloads running on it is constantly synchronized with the Master using all these components.  Clusters can be easily made highly available and scaled up/down on demand. They can also be federated across cloud providers and data centers if a hybrid architecture is so desired.

The next and perhaps the most important runtime abstraction in k8s is called a Pod. It is recommended that applications deployed in a K8s infrastructure be composed of lightweight and stateless microservices. These microservices can be deployed in individual or multiple containers. If the former strategy is chosen, each container only performs a specialized business activity. Though k8s also supports stateful applications, stateless applications confer a variety of benefits including loose coupling, auto-scaling etc.

The Pod is essentially the unit of infrastructure that runs an application or a set of related applications, and as such it always exists in the context of a set of Linux namespaces and cgroups. A Pod is a group of one or more containers which always run on the same host. They are always scheduled together and share a common IP address/port configuration. However, these IP assignments cannot be guaranteed to stay the same over time, which can lead to all kinds of communication issues in complex n-tier applications. Kubernetes therefore provides an abstraction called a Service – a grouping of a set of pods mapped to a common IP address.

Pod-level inter-communication happens over IPC mechanisms. Pods can also share local storage on the node, with the shared storage mounted into each of their containers. The infrastructure can provide services to the pod that span resource and process management. The key advantage here is that Pods can run related groups of applications while individual containers can be made not only more lightweight but also versioned independently, which greatly aids complex software projects where multiple teams work on their own microservices that can be created and updated on their own separate cadence.

Labels are key-value pairs that k8s uses to identify a particular runtime element, be it a node, pod or service. They are most frequently applied to pods and can be anything that makes sense to the user or the administrator. An example of a pod label set would be (app=mongodb, cluster=eu3, language=python). Label Selectors determine what Pods are targeted by a Service.
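
The hedged sketch below ties pods, labels and Services together (the names, labels and image are illustrative; the official Kubernetes Python client is assumed): the Service selects its pods purely by label and gives them a stable address.

```python
# Hedged sketch: a labelled pod plus a Service whose label selector targets it.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "mongodb-0",
                 "labels": {"app": "mongodb", "cluster": "eu3"}},
    "spec": {"containers": [{"name": "mongodb",
                             "image": "mongo:3.4",
                             "ports": [{"containerPort": 27017}]}]},
}

service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "mongodb"},
    "spec": {"selector": {"app": "mongodb"},           # the label selector
             "ports": [{"port": 27017, "targetPort": 27017}]},
}

core.create_namespaced_pod(namespace="default", body=pod)
core.create_namespaced_service(namespace="default", body=service)
```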

From an HA standpoint, administrators can declare a configuration policy that states the number of pods that they need to have running at any given point. This ensures that pod failures can be recovered from automatically by starting new pods. An important HA feature is the notion of replica sets. The Replication controller ensures that a specified set of pods is available to a given application, and in the event of failure new pods are started so that the actual state matches the desired state. Such groups of identical pods are called replica sets. Stateful workloads are covered for HA using what are called pet sets (renamed StatefulSets in later releases).

The Replication Controller component running in the Master node determines which pods it controls and then uses a pod template file (typically a JSON or YAML file) to create new pods. It is also in charge of ensuring that the number of pods stays in consonance with the replica counts. It is important to note that the Replication controller just replaces dead or dysfunctional pods on the nodes that hosted them; it does not move pods across nodes.
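
A hedged sketch of that desired-state behaviour is below (names and the image are illustrative; the official Python client is assumed). A Deployment, which manages a ReplicaSet under the covers, is given a pod template and a replica count, and the controller keeps the actual pod count in line with it.

```python
# Hedged sketch: declare three replicas of a pod template; if a pod dies,
# the controller starts a replacement so actual state matches desired state.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "web-frontend"},
    "spec": {
        "replicas": 3,                                    # desired state
        "selector": {"matchLabels": {"app": "web-frontend"}},
        "template": {                                     # the pod template
            "metadata": {"labels": {"app": "web-frontend"}},
            "spec": {"containers": [{"name": "web",
                                     "image": "nginx:1.13"}]},
        },
    },
}

apps.create_namespaced_deployment(namespace="default", body=deployment)
```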

Storage & Networking in Kubernetes  –

Local pod storage is ephemeral and is reclaimed when the pod dies or is taken offline, but if data needs to be persistent or shared between pods, K8s provides Volumes. So really, depending on the use case, k8s supports a range of storage options from local storage to network storage (NFS, Gluster, Ceph) to cloud storage (Google Cloud or AWS). More details around these emerging features can be found in the K8s official documentation. [1]
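
As a hedged sketch of persistent storage (the claim name, size, image and mount path are illustrative; the official Python client is assumed), a PersistentVolumeClaim requests storage from the cluster and a pod mounts it so data outlives any single pod.

```python
# Hedged sketch: request persistent storage via a claim and mount it in a pod.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "orders-data"},
    "spec": {"accessModes": ["ReadWriteOnce"],
             "resources": {"requests": {"storage": "5Gi"}}},
}

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "orders-db"},
    "spec": {
        "containers": [{"name": "db",
                        "image": "postgres:9.6",
                        "volumeMounts": [{"name": "data",
                                          "mountPath": "/var/lib/postgresql/data"}]}],
        "volumes": [{"name": "data",
                     "persistentVolumeClaim": {"claimName": "orders-data"}}],
    },
}

core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
core.create_namespaced_pod(namespace="default", body=pod)
```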

Kubernetes has a pluggable networking implementation that works with the design of the underlying network. Per [2], there are four networking challenges to solve:

  • Container-container communication within a host – this is based purely on IPC & localhost mechanisms
  • Interpod communication across hosts – Here Kubernetes mandates that all pods be able to communicate with one another without NAT and that the IP of a pod is the same from within the pod and outside of it.
  • Pod to Service communications – provided by the Service implementation. As we have seen above, K8s services are provided with IP addresses that clients can reach them by. These IP addresses are proxied by the kube-proxy process, which runs on all nodes and routes requests for the service to the correct pod.
  • External to Service communication – again provided by the Service implementation. This is done primarily by mapping the load balancer configuration to services running in the cluster. As outlined above, when traffic is sent to a node, the kube-proxy process ensures that it is routed to the appropriate service.

Network administrators looking to implement the K8s cluster model have a variety of choices from open source projects such as – Flannel, OpenContrail etc.

Why is Kubernetes such an exciting (and important) Cloud technology –

We have discussed the business & technology advantages of building an SDDC over the previous posts in this series. As a project, k8s has very lofty goals: to simplify the lifecycle of containers and to enable the deployment & management of distributed systems across any kind of modern datacenter infrastructure. It is designed to promote extensibility and pluggability (via APIs), as we will see in the next blog with OpenShift.

There are three specific reasons why k8s is rapidly becoming a de facto choice for Container orchestration-

  1. Once containers are used to build full-blown applications, organizations need to deal with several challenges to enable efficiency in the overall development & deployment processes. These include enabling a rapid speed of application development among various teams working on APIs, UX front ends, business logic, data etc.
  2. The ability to scale application deployments and to ensure a very high degree of uptime by leveraging a self-healing & immutable infrastructure. A range of administrative requirements from monitoring, logging, auditing, patching and managing storage & networking all come into consideration.
  3. The need to abstract developers away from the infrastructure. This is accomplished by allowing dev teams to specify their infrastructure requirements via declarative configuration.

Conclusion…

Kubernetes is emerging as the most popular platform to deploy and manage digital applications based on a microservices architecture. As a sign of its increased adoption and acceptance, Kubernetes is being embedded in Platform as a Service (PaaS) offerings, where it offers all of the same advantages for administrators (deploying application stacks) while also freeing developers from the complexity of the underlying infrastructure. The next post in this series will discuss OpenShift, Red Hat’s market leading PaaS offering, which leverages best of breed projects such as Docker and Kubernetes.

References…

[1] Kubernetes Official documentation – https://kubernetes.io/docs/concepts/storage/persistent-volumes/

[2] Kubernetes Networking – https://kubernetes.io/docs/concepts/cluster-administration/networking/

[3] Key Commits to Kubernetes – http://stackalytics.com/?project_type=kubernetes-group&metric=commits

[4] Borg: The predecessor to Kubernetes – http://blog.kubernetes.io/2015/04/borg-predecessor-to-kubernetes.html

Why Linux Containers and Docker are the Runtime for the Software Defined Data Center (SDDC)..(4/7)

The third and previous blog in this seven part series (@ http://www.vamsitalkstech.com/?p=4659) discussed Apache Mesos, a project that aims to abstract away various system resources – CPU, memory, network and disk – to provide consuming digital applications with a giant cluster from which they can utilize capacity, a key requirement of the Software Defined Datacenter (SDDC). In this fourth blog, we will discuss another important ecosystem technology & project – Linux Containers and Docker – which forms the foundational runtime component in the SDDC. The next blog will discuss Kubernetes – Google’s container orchestration platform.

Much like shipping goods in Containers over Oceans, Linux Containers offer a portable, lightweight & convenient way to ship business applications. (Image Credit – WallPapers 13)

Executive Summary…

We can agree that the Digital application is inherently a distributed application. Such applications have historically been extremely hard to develop, set up and manage across a large fleet of data center servers that are a mix of platforms and technologies. Thus it is no surprise that one of the most disruptive developments in the last five years has been the innovation in the Linux container space. Containers now enable the running of distributed applications at scale.

Due to business reasons, Digital applications demand constant updates, changes and incremental revisions in response to changing customer needs. The Software Defined Datacenter (SDDC) thus needs a runtime paradigm that enables not just efficient hardware usage but also supports standardized application environments that are portable, simplified and consistent across hybrid clouds and hypervisors. Containers fill this need and are thus emerging as the natural unit of deployment across the SDDC. Much has been written on the topic of Docker and Linux Container technology. My goal for this blog post is to distill the key insights in the container ecosystem.

The Technologies of Linux Containers & Docker

Unlike Virtual Machines, Container Engines such as Docker share a common OS (Image Credit – MSFT Azure)

Linux Containers are alike and yet different from virtual machines. They are alike in the sense that each Container shares system resources on the underlying hardware platform – CPU, RAM, and Network – as with VMs. However, while each VM maintains its separate copy of the Operating System (OS), containers share the same OS kernel while keeping themselves separate from other containers running on the same OS.  How do they do that?

Though the terms ‘Docker’ and ‘Container’ have become almost synonymous – it needs to be noted that Docker is a company focused on developing technology enablement around containers in areas such as orchestration, networking, and management. Docker was an open source project (now renamed to Moby [1]) that provided capabilities such as a standard description of container formats, utilities for application packaging, deployment & lifecycle management of applications inside Linux Containers. It provides a Docker CLI command line tool for the lifecycle management of image-based containers.

Prior to the explosion of interest in Linux containers & the founding of Docker, traditional Linux distributions (with a minimum kernel level of 3.8) supported two foundational paradigms – control groups (cgroups) and kernel namespaces. Linux containers use both these features to achieve their goal of isolation and portability. Cgroups enable the host to limit the resources each container process can use – CPU, memory, filesystem, user IDs and network. This ensures that containers running on a host cannot starve others of resources, thus avoiding the “Noisy Neighbor” problem that bedeviled a lot of cloud deployments.

Kernel namespaces provide another kind of isolation for process interactions within the OS. Containers can only view and modify resources in the same namespace. This provides a security mechanism whereby other containers and processes on the host cannot launch attacks on a given application running in a tenant container or on the host itself. Thus the combination of both these technologies ensures that multiple applications running within their individual containers can share CPU and memory without needing the overhead of virtualization. Docker also grants each container its own networking implementation, thus ensuring that resources such as sockets and interfaces can also be protected.
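
As a hedged illustration of how these cgroup limits surface to a developer (the image name and limit values are illustrative; the Docker Python SDK is assumed), each container can be started with explicit CPU and memory caps so it cannot starve its neighbours.

```python
# Hedged sketch: run a container with cgroup-backed CPU and memory limits.
import docker

client = docker.from_env()

container = client.containers.run(
    "alpine:3.6",
    "sleep 300",
    detach=True,
    mem_limit="256m",        # cgroup memory cap for this container
    cpu_period=100000,       # cgroup CFS period (microseconds)
    cpu_quota=50000,         # cgroup CFS quota: at most 0.5 of one CPU
)
print(container.short_id, container.status)
```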

Companies including Red Hat, IBM, Google, Cisco, VMware, and CoreOS have greatly aided with the development of and accessibility of containers in their platforms and products.

Layered Filesystems..

Various Image Layers in Docker. Each layer in the file system is mounted on the previous. The topmost is the Writable Container. (Image Credit – Docker)

We discussed how Container Images are immutable. This is the key advantage of using container technology such as Docker & is made possible by the notion of a Union filesystem. What are Union filesystems and how do they enforce immutability? Much like an image in the Virtual Machine sense, Containers also run from an image, which is typically a snapshot of a filesystem but tends to be much smaller than a VM image since the Container runs on the host kernel.

Union filesystems are best described as a layered architecture – each layer is created independently and then added atop the previous layer. An example of such layering would be a Linux kernel, then an OS, then a database like Oracle, then Tomcat, and a web application on top of it. The top layer is always the writable layer. The real advantage in using a union filesystem is that using these images becomes super efficient from a storage and execution standpoint. Union filesystems also help in sharing portions of the OS across containers. Simply put, an image contains everything an application needs – its dependencies, external libraries and so on. When an Image is run, it is called a Container. In the case of Docker, it uses a layered copy-on-write filesystem called AUFS (Another Union Filesystem).
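
The hedged sketch below makes the layering concrete (the Dockerfile contents, image tag and package pin are illustrative; the Docker Python SDK is assumed): each instruction in the Dockerfile produces a new read-only layer, and the layer history of the resulting image can be inspected afterwards.

```python
# Hedged sketch: build a small layered image from an in-memory Dockerfile
# and list its layers; unchanged layers are shared and cached between images.
import io
import docker

dockerfile = b"""
# base layer: OS plus the Python runtime
FROM python:3.6-slim
# each RUN instruction adds a new read-only layer on top
RUN pip install flask==0.12.2
RUN mkdir -p /app
# the writable layer is only added when a container is started from the image
CMD ["python", "-c", "print('hello from a layered image')"]
"""

client = docker.from_env()
image, _logs = client.images.build(fileobj=io.BytesIO(dockerfile), tag="myapp:1.0")

for layer in image.history():
    print(layer.get("CreatedBy", "")[:60], layer.get("Size"))
```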

Containers and Developers..

Containers are possibly the first infrastructure software category created with developers in mind. The rise of Linux Containers and Docker has coincided with the onset of agile development practices under the DevOps umbrella – CI/CD etc. Containers are an excellent choice to create agile delivery pipelines and continuous deployment. At their core, Containers enable the creation of multiple self-contained execution environments over the same operating system.

Developers are naturally excited about Linux Containers for five specific reasons –

  1. Containers allow for image consistency across OS environments. This is a huge help in accelerating the development process from development to debugging to production. Developers can just focus on building their applications (in dev environments that match the test and prod) and packaging them in containers. This just takes a lot of the inefficiency around environment dissimilarities out of the equation.
  2. Containers are treated as a standard Linux process by the kernel & thus are orders of magnitude quicker from a startup time standpoint when compared to VMs. This means that developers can start their applications in a matter of seconds as long as they run them in a container.
  3. Containers also provide development organizations the ability to standardize application development workflows and update processes. This solves the scalability problem that digital applications have caused large organizations.
  4. Digital applications are leading the move to adopt microservices. Microservices offer a way to build applications as a collection of discrete services as opposed to a monolithic architecture. By their very nature, microservices can be built and managed by different teams. Containerization affords a lightweight way of building and deploying microservices.
  5. Containers offer a portable way of delivering applications (across Operating Systems) as well as provide horizontal scalability.

Digital Application development using Containers..

Digital Application Development and Deployment Workflow using Containers.

There are a few key runtime components involved in operationalizing a small to medium to large scale container infrastructure as the above illustration depicts.

  1. Firstly, developers create container images. These images describe an application and its dependencies. An easy way to conceptualize an image is to think of it as a basic deployment template. Images are also immutable in that they are read only; any changes happen in the topmost layer, which is writable. Modifying an image means creating a new one. Images thus have a parent-child relationship. Developers create images by building their applications in their development environments, performing unit tests and then pushing to a repository. Once the container is built with the necessary dependencies, CI/CD tooling runs a battery of tests to validate business functionality. A large part of this process is usually best automated using CI/CD tools like Jenkins, CruiseControl or Buildbot etc.
  2. The built images are then made available in a Container Registry. This is either maintained internally or sourced from a trusted external source. As the name suggests, Registries maintain a catalog of container images of frequently used software – e.g. Custom applications and other software packages such as WordPress, Relational databases, Web Servers, Big Data technologies and Application Servers etc
  3. The next step is to create and deploy (runtime) containers from these images on a set of servers (a short sketch of this step follows the list). Once images are released as a result of application development, sys admins work on the provisioning of the servers to run these images. Once a Container engine is installed on the server, images are loaded onto it and they take the runtime shape of containers. Getting these images onto these servers follows either a push or a pull mechanism.
  4. Scheduling of containers on servers is also a process that is usually done by Sys Admins. This involves running containers of certain kinds on servers that match up to certain CPU, I/O and Network capacity requirements.
  5. To create complex real world deployments, not only do the servers and networking have to be created, but these containers also have to be interconnected (e.g. a web server container to an application server) using discovery mechanisms. These containers then need to also connect to a host of enterprise services. Customer traffic is then routed to the clustered containers running on these servers, and operators monitor the logs and performance of these containers and the microservices running on them.
  6. The process repeats from step #1 above.
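
Below is a hedged sketch of the deploy-and-run portion of this workflow from the operations side; the registry, image name, tag and port mapping are illustrative, and the Docker Python SDK is assumed to be available on the target server.

```python
# Hedged sketch: pull a released image from the registry onto a provisioned
# host, run it as a container, and take a basic look at its resource usage.
import docker

client = docker.from_env()

# Pull the released image from the internal (or trusted external) registry.
image = client.images.pull("registry.example.com/orders-service", tag="1.4.2")

# Create the runtime container from the image and expose its service port.
container = client.containers.run(
    image,
    detach=True,
    ports={"5000/tcp": 8080},                 # route traffic to the service
    restart_policy={"Name": "on-failure"},    # restart on crashes
)

# A very basic monitoring hook for the container and the service inside it.
print(container.name, container.status)
stats = container.stats(stream=False)
print("memory usage (bytes):", stats["memory_stats"].get("usage"))
```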

Industry Adoption of Containers.

In a few years, containers will deliver the bulk of compute workloads across public cloud providers such as Amazon AWS, Google Compute Engine and Microsoft Azure. Given that the VM options on these clouds can run multiple containers which can scale on demand, the industry will begin to gravitate to higher utilization density. The SDDC has already begun incorporating hybrid architectures that run both Linux Containers and Virtual Machines in a complementary fashion.

Customers also have choices of traditional enterprise operating systems such as Red Hat Enterprise Linux or Microsoft Windows or can also run containers on OS’s developed for the purpose of hosting containers at hyper scale. These OS’s just provide tools to manage containers and nothing else. Examples include Red Hat Atomic Platform and CoreOS. Moving up the stack, pioneers such as Google and Red Hat have added core support for containers in projects such as OpenStack, Kubernetes, Mesos, OpenShift & CloudFoundry by helping with networking and persistent storage. Kubernetes (which we will cover in the next post) also handles provisioning on multiple public cloud platforms. Config Mgmt platforms such as Ansible, Chef and Puppet now support containerized deployments.

Technical Considerations for Container Adoption

Some key considerations that industry players are tackling from the standpoint of running containers at scale –

  1. Container Orchestration – organize groups of containers into composable applications, schedule them on servers that match their resource requirements, place containers based on network topology etc.
  2. Container Networking – Containers follow a pluggable model and the network is no different. Key considerations – an enterprise network connectivity stack is needed to not only provide the interconnect between different containers but also to integrate them with existing Layer 2/3 networks. Additionally, network isolation needs to be provided for microservices running on these containers using either a dedicated IP address for each or an overlay network.
  3. Management and Monitoring – lifecycle processes here encompass a range of questions – application patching with low downtime, graceful failures in cloud native applications, container scale up & scale down based on traffic patterns etc.

Containers and your Enterprise…

So what is the best way to adopt containers across a large enterprise?

  • Develop your container strategy in the context of the Nexus of Forces (i.e., information, mobile, social and cloud) initiatives in your organization — Containers are at the junction of these technologies.
  • Institute an organizational process to examine the business value of any initiative to adopt Containers. Understand what tools and platforms to adopt that will abstract away the complexities of using containers.
  • Understand the skills required to leverage containers. Containers represent a new way of working for both developers and SysOps. Dependency management moves to the developers, but they realize tremendous benefits in adopting containers for high-velocity Digital applications.
  • Identify, measure and benchmark key success metrics that capture the ROI of the overall container investments.

Conclusion..

To sum up, the Linux (and Windows) container space is exploding both from a mindshare as well as an adoption standpoint. What is hugely encouraging is that a host of next generation platform technologies (ranging from IaaS to PaaS) are not just choosing to support containers as their basic runtime unit but are also focusing on becoming the de facto solutions supporting a host of container ecosystem use cases – provisioning, orchestration, management, CI/CD et al. The next two blogs will respectively discuss how Google Kubernetes and Red Hat OpenShift overcome these challenges and abstract away much of the complexity around container deployments.

The next blog post in this series will discuss Google Kubernetes, the dominant project in the container orchestration space.

References

[1] Introducing Moby Project –  https://blog.docker.com/2017/04/introducing-the-moby-project/