Want to go Cloud or Digital Native? You’ll Need to Make These Six Key Investments…

For an enterprise to become a Cloud Native (CN) or Digital Native (DN) business, it needs to develop a host of technology capabilities and cultural practices in support of two goals. First, IT becomes aligned with & responsive to the business. Second, IT leads the charge in inculcating a culture of constant business innovation. Given these realities, large & complex enterprises that have invested in DN capabilities often struggle to identify the highest-priority areas to target across lines of business or in shared services. In this post, I want to argue that there are six foundational capabilities large enterprises need to adopt enterprise-wide in order to revamp legacy systems.

Introduction..

The blog has discussed a range of digital applications and platforms in depth. We have covered a range of line-of-business use cases & architectures – Customer Journeys, Customer 360, Fraud Detection, Compliance, Risk Management, CRM systems and more. While the specific details vary from industry to industry, the common themes across these implementations include a seamless ability to work across multiple channels, to predictively anticipate client needs and to support business models in real time. In short, these are all Digital requirements which have been proven in the webscale world by Google, Facebook, Amazon, Netflix et al. Most traditional companies are realizing that adopting the practices of these pioneering enterprises is a must for them to survive and thrive.

However, the vast majority of Fortune 500 enterprises need to overcome significant challenges in migrating their legacy architecture stacks to a Cloud Native model. While it is very easy to slap mobile UIs built from static HTML onto existing legacy systems, those systems can never realize the true value of digital projects without a re-engineering of their core. The end goal of such initiatives is to ensure that the underlying systems are agile and responsive to business requirements. The key question then becomes how to develop and scale these capabilities across massive organizations.

Legacy Monolithic IT as a Digital Disabler…

From the top down, business leadership is demanding more agile IT delivery and faster development mechanisms to deal with competitive pressures such as social media streams, a growing number of channels, disruptive competitors and demanding millennial consumers. When one compares the Cloud Native (CN) model (@ http://www.vamsitalkstech.com/?p=5632) with the earlier monolithic deployment stack (@ http://www.vamsitalkstech.com/?p=5617), the sheer number of technical elements and trends that enterprise IT must devise strategies for becomes readily apparent.

This pressure is being applied to Enterprise IT from both directions.

Let me explain…

In most organizations, the process of identifying the correct set of IT capabilities needed for line-of-business projects looks something like the following –

  1. Line of business leadership works with product management teams to request new IT projects, either in support of new business initiatives or to revamp existing offerings
  2. IT teams follow a structured process to identify the appropriate (siloed) technology elements to create the solution
  3. Development teams follow a mix of agile and waterfall models to stand up the solution, which then gets deployed and managed by an operations team
  4. Customer needs and update requests get reflected only slowly, causing customer dissatisfaction

Given this reality, how can legacy systems and architectures reinvent themselves to become Cloud Native?

Complexity is inevitable & Enterprises that master complexity will win…

Creating a CN/DN architecture correctly requires complex organizations to make certain technology investments that speed up each step of the above process. The key challenge in the CN journey is to help incumbent enterprises kickstart their digital products fast enough to disarm the competition.

The sheer breadth of the digital IT challenge is due in large part to the number of technology trends and developments that have begun to have a demonstrable impact on IT architectures today. There are no fewer than nine – including social media and mobile technology, the Internet of Things (IoT), open ecosystems, big data and advanced analytics, and cloud computing.

Thus, the CN movement is a complex mishmash of technologies straddling infrastructure, storage, compute and management. This is an obstacle that enterprise architects and IT leadership must surmount in order to best position their enterprise for the transformation that must occur.

Six Foundational Technology Investments to go Cloud Native…

There are six foundational technology investments that underpin the creation of a Cloud Native Application Architecture – IaaS; PaaS & containers; container orchestration; data analytics & BPM; API management; and DevOps.

There are six layers that large enterprises will need to focus on to improve their systems, processes, and applications in order to achieve a Digital Native architecture. These investments can proceed in parallel.

#1 First and foremost, you will need an IaaS platform

An agile IaaS is an organization-wide foundational layer that provides elastic, on-demand capacity across a range of infrastructure services – compute, network, storage, and management. IaaS provides an agile yet scalable foundation on which to deploy everything else without incurring undue complexity in development, deployment & management. Key tenets of the private cloud approach include better resource utilization, self-service provisioning and a high degree of automation. Core IT processes such as the resource provisioning lifecycle, deployment management, change management and monitoring will need to be redone for an enterprise-grade IaaS platform such as OpenStack.
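
To make the self-service point concrete, here is a minimal sketch of programmatic provisioning against an OpenStack cloud using the openstacksdk library (pip install openstacksdk). The cloud name, image, flavor and network below are hypothetical placeholders, and a real environment would layer quotas, approvals and automation on top of this call.

```python
# A minimal sketch of self-service IaaS provisioning, assuming an
# "enterprise-cloud" entry exists in clouds.yaml and that the image,
# flavor and network names are placeholders for real ones.
import openstack

conn = openstack.connect(cloud="enterprise-cloud")

# Request a compute instance programmatically instead of filing a ticket
# and waiting on a manual server build.
server = conn.create_server(
    name="digital-app-node-01",
    image="rhel-7.4",            # hypothetical image name
    flavor="m1.large",           # hypothetical flavor
    network="app-tier-network",  # hypothetical tenant network
    wait=True,
)
print(f"Provisioned {server.name} ({server.id}), status={server.status}")
```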

#2 You will need to adopt a PaaS layer with Containers at its heart  –

Containers are possibly the first infrastructure software category created with developers in mind. The rise to prominence of Linux Containers via Docker has coincided with the onset of agile development practices under the DevOps umbrella – CI/CD etc. Containers are an excellent choice for creating agile delivery pipelines and continuous deployment. It is a very safe bet that within a few years the majority of digital applications (and mundane applications, for that matter) will be decomposed into hundreds of services deployed on and running in containers.

Adopting a market-leading Platform as a Service (PaaS) such as Red Hat’s OpenShift or CloudFoundry can provide a range of benefits – from helping with container adoption and tooling for the CI/CD process to reliable rollouts with A/B testing and Blue-Green deployments. A PaaS such as OpenShift adds auto-scaling, failover & other kinds of infrastructure management.

Why Linux Containers and Docker are the Runtime for the Software Defined Data Center (SDDC)..(4/7)

#3 You will need an Orchestration layer for Containers –

At their core, Containers enable the creation of multiple self-contained execution environments over the same operating system. However, containers are not enough in and of themselves to drive large-scale DN applications. An orchestration layer, at a minimum, organizes groups of containers into applications, schedules them on servers that match their resource requirements, and places the containers across complex network topologies. It also helps with complex tasks such as release management, Canary releases and administration. The actual tipping point for large-scale container adoption will vary from enterprise to enterprise, but the common precursor to supporting containerized applications at scale is an enterprise-grade management and orchestration platform. Again, a PaaS technology such as OpenShift provides two benefits in one – a native container model and orchestration using Kubernetes.
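
As a hedged illustration of what "organizing groups of containers into applications" looks like in practice, the sketch below creates a three-replica Deployment with the official Kubernetes Python client (pip install kubernetes). The service name, image and port are illustrative placeholders, not tied to any specific application in this post.

```python
# A minimal sketch of declaring an orchestrated workload: the orchestrator
# keeps three replicas of the container scheduled and healthy.
from kubernetes import client, config

config.load_kube_config()  # assumes a valid kubeconfig is available
apps = client.AppsV1Api()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="customer-journey-svc"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "customer-journey"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "customer-journey"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="customer-journey",
                        image="registry.example.com/customer-journey:1.0",  # hypothetical image
                        ports=[client.V1ContainerPort(container_port=8080)],
                    )
                ]
            ),
        ),
    ),
)

apps.create_namespaced_deployment(namespace="default", body=deployment)
```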

Kubernetes – Container Orchestration for the Software Defined Data Center (SDDC)..(5/7)

#4 Accelerate investments in and combine Big Data Analytics and BPM engines –

In the end, the ability to drive business processes is what makes an agile enterprise. Automation of both business processes (BPM) and data-driven decision making is a proven approach at webscale, data-driven organizations, and it makes all the difference in what is perceived to be a digital enterprise. Accordingly, the ability to tie a range of front-, mid- and back-office processes such as Customer Onboarding, Claims Management & Fraud Detection into a BPM-based system, and to allow applications to access these via a loosely coupled architecture based on microservices, is key. Additionally, leveraging Big Data architectures to process data streams in near real-time is another key capability to possess.
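
Below is a hedged sketch of the near-real-time stream processing idea using kafka-python (pip install kafka-python). The topic name, broker address and the trivial rule-based check are illustrative placeholders, not a production fraud model or BPM integration.

```python
# A minimal sketch of consuming an event stream and reacting in near real-time.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "card-transactions",                 # hypothetical topic
    bootstrap_servers=["broker1:9092"],  # hypothetical broker
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)

for message in consumer:
    txn = message.value
    # A trivial stand-in for a real scoring step (ML model call, BPM task, etc.)
    if txn.get("amount", 0) > 10_000:
        print(f"Flagging transaction {txn.get('id')} for review")
```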

Why Big Data Analytics is the Future of CRM..

#5 Invest in APIs –

APIs enable companies to constantly churn out innovative offerings while continuously adapting to & learning from customer feedback. Internet-scale companies such as Facebook provide edge APIs that enable thousands of companies to write applications that drive greater customer volumes to the Facebook platform. The term “API Economy” is increasingly in vogue; it connotes a loosely federated ecosystem of companies, consumers, business models and channels.

APIs are used to abstract out the internals of complex underlying platform services. Application developers and other infrastructure services can leverage well-defined APIs to interact with Digital platforms. These APIs enable the provisioning, deployment, and management of platform services.

Applications developed for a Digital infrastructure will be developed as small, nimble processes that communicate via APIs and over traditional infrastructure such as service mediation components (e.g. Apache Camel). These microservices-based applications will offer huge operational and development advantages over legacy applications. While one does not expect legacy but critical applications that still run on mainframes (e.g. Core Banking, Customer Order Processing) to move to a microservices model anytime soon, customer-facing applications that need responsive digital UIs definitely will.
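
To make the "small process exposing a well-defined API" idea concrete, here is a minimal Flask sketch. The endpoint path, port and payload are illustrative; a real service would add authentication, validation and a proper data layer.

```python
# A minimal sketch of a microservice exposing a REST API.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/api/v1/customers/<customer_id>", methods=["GET"])
def get_customer(customer_id):
    # In a real deployment this would query the service's own datastore.
    return jsonify({"id": customer_id, "segment": "retail", "status": "active"})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```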

Why APIs Are a Day One Capability In Digital Platforms..

#6 Be prepared, your development methodologies will gradually evolve to DevOps – 

The key non-technology component involved in delivering error-free and adaptive software is DevOps. Currently, most traditional application development and IT operations happen in silos. DevOps, with its focus on CI/CD practices, requires engineers to communicate more closely, release more frequently, deploy & automate daily, reduce deployment failures and shorten the mean time to recover from failures.

Typical software development life cycles that require lengthy validations and quality control testing prior to deployment can stifle innovation. Agile software processes, which are adaptive and rooted in evolutionary development and continuous improvement, can be combined with DevOps. DevOps focuses on tight integration between developers and the teams who deploy and run IT operations, and it is the only development methodology suited to driving large-scale Digital application development.

Conclusion..

By following a transformation roughly as outlined above, the vast majority of enterprises can derive a tremendous amount of value from their Digital initiatives. However, the current industry approach – treating Digital projects as one-off, tactical project investments – simply does not work or scale anymore. There are various organizational models one could employ from the standpoint of developing this maturity, ranging from a shared service to a line-of-business-led approach. An approach I have seen work very well is to build a Digital Center of Excellence (COE) to create contextual capabilities, best practices and rollout strategies across the larger organization. The COE should be at the forefront of pushing the above technology boundaries within the larger framework of the organization.

The Seven Characteristics of Cloud Native Application Architectures..

We are in the middle of a series of blogs on Software Defined Datacenters (SDDC) @ http://www.vamsitalkstech.com/?p=1833. The key business imperative driving SDDC architectures is their ability to natively support digital applications. Digital applications are “Cloud Native” (CN) in the sense that they are written for cloud frameworks from the start, instead of being ported to the Cloud as an afterthought. Thus, Cloud Native application development is emerging as the most important trend in digital platforms. This blog post will define the seven key architectural characteristics of these CN applications.


What is driving the need for Cloud Native Architectures… 

The previous post in the blog covered the monolithic architecture pattern. Monolithic architectures, which currently dominate the enterprise landscape, are coming under tremendous pressure and are increasingly perceived to be brittle. Chief among these forces are massive user volumes, DevOps-style development processes, the need to open up business functionality locked within applications to partners, and the heavy human effort required to deploy & manage monolithic architectures. Monolithic architectures also introduce technical debt into the datacenter, which makes it very difficult for business lines to introduce changes as customer demands change – a key antipattern for digital deployments.

Why Legacy Monolithic Architectures Won’t Work For Digital Platforms..

Applications that require a high release velocity, have many complex moving parts, and are worked on by one or many development teams are an ideal fit for the CN pattern.

Introducing Cloud Native Applications…

There is no single, universally accepted definition of a Cloud Native application. I would define a CN application as “an application built using a combination of technology paradigms that are native to cloud computing – including distributed software development, DevOps practices, microservices architectures based on containers, API-based integration between the layers of the application, software automation from infrastructure to code, and finally orchestration & management of the overall application infrastructure.”

Further, Cloud Native applications need to be architected, designed, developed, packaged, delivered and managed based on a deep understanding of the frameworks of cloud computing (IaaS and PaaS).

Characteristic #1 CN Applications dynamically adapt to & support massive scale…

The first & foremost characteristic of a CN Architecture is the ability to dynamically support massive numbers of users, large development organizations & highly distributed operations teams. This requirement is even more critical when one considers that cloud computing is inherently multi-tenant in nature.

Within this area, the typical concerns that need to be accommodated include –

  1. the ability to grow the deployment footprint dynamically (Scale-up)  as well as to decrease the footprint (Scale-down)
  2. the ability to gracefully handle failures across tiers that can disrupt application availability
  3. the ability to accommodate large development teams by ensuring that components themselves provide loose coupling
  4. the ability to work with virtually any kind of infrastructure (compute, storage and network) implementation

Characteristic #2 CN applications need to support a range of devices and user interfaces…

The User Experience (UX) is the most important part of a human-facing application. This is particularly true of Digital applications, which are omnichannel in nature. End users could not care less about the backend engineering of these applications; they care about an engaging user experience.

Demystifying Digital – the importance of Customer Journey Mapping…(2/3)

Accordingly, CN applications need to natively support mobile applications. This includes the ability to support a range of mobile backend capabilities – ranging from authentication & authorization services for mobile devices, location services, customer identification, push notifications, cloud messaging, toolkits for iOS and Android development etc.

Characteristic #3 They are automated to the fullest extent they can be…

The CN application needs to be abstracted completely from the underlying infrastructure stack. This is key, as development teams can then focus solely on writing their software and do not need to worry about the maintenance of the underlying OS/Storage/Network. One of the key challenges with monolithic platforms (http://www.vamsitalkstech.com/?p=5617) is their inability to efficiently leverage the underlying infrastructure, as they have a high degree of dependency on it. Further, the lifecycle of infrastructure provisioning, configuration, deployment, and scaling is mostly manual, with lots of scripts and pockets of configuration management.

The CN application, on the other hand, has to be very light on manual tasks given its scale. The provision-deploy-scale cycle is highly automated, with the application automatically scaling to meet demand and resource constraints and seamlessly recovering from failures. We discussed Kubernetes in one of the previous blogs.
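
As a hedged sketch of "automatically scaling to meet demand", the example below creates a HorizontalPodAutoscaler with the Kubernetes Python client. The deployment name and thresholds are placeholders; the point is that scaling policy is declared once and then enforced by the platform, not by operators.

```python
# A minimal sketch: scale the (hypothetical) customer-journey-svc Deployment
# between 3 and 30 replicas based on CPU utilization.
from kubernetes import client, config

config.load_kube_config()
autoscaling = client.AutoscalingV1Api()

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="customer-journey-hpa"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="customer-journey-svc"
        ),
        min_replicas=3,
        max_replicas=30,
        target_cpu_utilization_percentage=70,  # scale out when average CPU crosses 70%
    ),
)

autoscaling.create_namespaced_horizontal_pod_autoscaler(namespace="default", body=hpa)
```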

Kubernetes – Container Orchestration for the Software Defined Data Center (SDDC)..(5/7)

Frameworks like these support CN applications by providing resiliency and fault tolerance, and by generally keeping downtime very low.

Characteristic #4 They support Continuous Integration and Continuous Delivery…

For CN applications, the reduction of the vast amount of manual effort witnessed in monolithic applications is not confined to deployment alone. From a development standpoint, the ability to quickly test and perform quality control on daily software updates is equally important. CN applications automate the application development and deployment processes using the paradigms of CI/CD (Continuous Integration and Continuous Delivery).

The goal of CI is that every time source code is added or modified, the build process kicks off and the tests run automatically. This helps catch errors faster and improves the quality of the application. Once the CI process is done, the CD process builds the application into an artifact suitable for deployment, combining it with the appropriate configuration. It then deploys the artifact onto the execution environment with the appropriate version identifiers, in a manner that supports rollback. CD ensures that tested artifacts are deployed promptly, with acceptance testing as the gate.
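
Here is a minimal, hedged sketch of the CI portion just described: run the test suite on every change and, only if it passes, build a versioned container artifact. It assumes pytest and the Docker SDK for Python are installed; the image tag and build context are placeholders, and a real pipeline would run inside a CI server (Jenkins, GitLab CI, etc.) rather than a local script.

```python
# A toy CI step: test, then package the tested code into a versioned image.
import sys

import docker
import pytest


def ci_build(version: str) -> None:
    # Step 1: run the automated test suite; a non-zero exit code fails the build.
    if pytest.main(["-q"]) != 0:
        sys.exit("Tests failed - aborting the build")

    # Step 2: build a versioned, deployable artifact from the local Dockerfile.
    client = docker.from_env()
    image, _ = client.images.build(path=".", tag=f"registry.example.com/app:{version}")
    print(f"Built {image.tags[0]} - ready for the CD stage")


if __name__ == "__main__":
    ci_build(version="1.0.42")
```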

 Characteristic #5 They support multiple datastore paradigms…

The RDBMS has been a fixture of the monolithic application architecture. CN applications, however, need to work with loosely structured data formats as well as regularly structured data. This implies the need to support data streams that are not just high speed but also better suited to NoSQL/Hadoop storage. These systems provide Schema on Read (SOR), an innovative data handling technique in which a format or schema is applied to data as it is read from a storage location, as opposed to when it is ingested. As we will see later in the blog, individual microservices can have their own local data storage.
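
The small sketch below illustrates the Schema on Read idea: loosely structured events are stored as-is (here, JSON lines) and a schema with required fields, defaults and types is applied only when the data is read. The field names are illustrative.

```python
# A toy Schema-on-Read example: raw records are written without validation;
# structure is imposed at read time.
import json

RAW_EVENTS = [
    '{"customer": "C100", "channel": "mobile", "amount": 42.50}',
    '{"customer": "C101", "channel": "web"}',  # missing fields are fine at write time
]

def read_with_schema(raw_line: str) -> dict:
    record = json.loads(raw_line)
    # The "schema" (required fields, defaults, types) is enforced only here.
    return {
        "customer": record["customer"],
        "channel": record.get("channel", "unknown"),
        "amount": float(record.get("amount", 0.0)),
    }

for line in RAW_EVENTS:
    print(read_with_schema(line))
```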

A Holistic New Age Technology Approach To Countering Payment Card Fraud (3/3)…

Characteristic #6 They support APIs as a key feature…

APIs have become the de facto model for giving developers and administrators the ability to assemble Digital applications such as microservices from complicated componentry. Thus, there is a strong case to be made for adopting an API-centric strategy when developing CN applications. CN applications use APIs in multiple ways – first, as the way to interface loosely coupled microservices (which abstract out the internals of the underlying application components); second, as the way developers interact with the overall cloud infrastructure services; and finally, to enable the provisioning, deployment, and management of platform services.

Why APIs Are a Day One Capability In Digital Platforms..

Characteristic #7 Software Architecture based on microservices…

As James Lewis and Martin Fowler define it – “..the microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and independently deployable by fully automated deployment machinery. There is a bare minimum of centralized management of these services, which may be written in different programming languages and use different data storage technologies.” [1]

Microservices are a natural evolution of Service Oriented Architecture (SOA). The application is decomposed into loosely coupled business functions, each mapped to a microservice. Each microservice is built for a specific, granular business function and can be worked on by an independent developer or team. As such, it is a separate code artifact and is loosely coupled not just from a communication standpoint (typically a RESTful API with data passed around as JSON/XML) but also from a build, deployment, upgrade and maintenance perspective. Each microservice can optionally have its own localized datastore. An important advantage of adopting this approach is that each microservice can be created using a different technology stack from the other parts of the application. Docker containers are a natural fit for running these microservices. Microservices confer a range of advantages, from easier builds to independent deployment and scaling.

A Note on Security…

It goes without saying that security is a critical part of CN applications and needs to be considered and designed for as a cross-cutting concern from inception. Security concerns impact the design & lifecycle of CN applications, from deployment to updates to image portability across environments. A range of technology choices is available to cover areas such as application-level security using Role-Based Access Control, Multifactor Authentication (MFA), and authentication & authorization (A&A) using protocols such as OAuth, OpenID, SSO etc. Container security is fundamental to this topic, and many vendors are working to ensure that once the application is built as part of a CI/CD process as described above, it is packaged into labeled (and signed) container images which can be made part of a verified and trusted registry. This ensures that container image provenance is well understood and protects users who download the containers for use across their environments.

Conclusion…

In this post, we have looked at some of the architectural drivers for Cloud Native applications. It is a given that organizations moving away from monolithic applications will need to take small, nimble steps to realize the ultimate vision of business agility and technology autonomy. The next post will look at some of the critical foundational investments enterprises will have to make before choosing the Cloud Native route as a viable choice for their applications.

References..

[1] Martin Fowler – https://martinfowler.com/intro.html

Why Linux Containers and Docker are the Runtime for the Software Defined Data Center (SDDC)..(4/7)

The third and previous blog in this seven-part series (@ http://www.vamsitalkstech.com/?p=4659) discussed Apache Mesos, a project that aims to abstract away various system resources – CPU, memory, network and disk – to present consuming digital applications with one giant cluster from which they can draw capacity, a key requirement of the Software Defined Datacenter (SDDC). In this fourth blog, we will discuss another important ecosystem technology & project – Linux Containers and Docker – which forms the foundational runtime component of the SDDC. The next blog will discuss Kubernetes, Google’s container orchestration platform.

Much like shipping goods in Containers over Oceans, Linux Containers offer a portable, lightweight & convenient way to ship business applications. (Image Credit – WallPapers 13)

Executive Summary…

We can agree that the Digital application is inherently a distributed application. Such applications have historically been extremely hard to develop, set up and manage across a large fleet of data center servers that are a mix of platforms and technologies. It is thus no surprise that one of the most disruptive developments of the last five years has been the innovation in the Linux container space. Containers now enable running distributed applications at scale.

For business reasons, Digital applications demand constant updates, changes and incremental revisions in response to changing customer needs. The Software Defined Datacenter (SDDC) thus needs a runtime paradigm that enables not just efficient hardware usage but also supports standardized application environments that are portable, simplified and consistent across hybrid clouds and hypervisors. Containers fill this need and are thus emerging as the natural unit of deployment across the SDDC. Much has been written on the topic of Docker and Linux Container technology; my goal for this blog post is to distill the key insights in the container ecosystem.

The Technologies of Linux Containers & Docker

Unlike Virtual Machines, Container Engines such as Docker share a common OS (Image Credit – MSFT Azure)

Linux Containers are both like and unlike virtual machines. They are alike in the sense that each Container consumes system resources from the underlying hardware platform – CPU, RAM, and Network – just as a VM does. However, while each VM maintains its own separate copy of the Operating System (OS), containers share the same OS kernel while keeping themselves isolated from other containers running on the same OS. How do they do that?

Though the terms ‘Docker’ and ‘Container’ have become almost synonymous, it should be noted that Docker is a company focused on developing technology enablement around containers in areas such as orchestration, networking, and management. Docker began as an open source project (since renamed Moby [1]) that provided capabilities such as a standard description of container formats, utilities for application packaging, and deployment & lifecycle management of applications inside Linux Containers. It provides the Docker CLI command-line tool for the lifecycle management of image-based containers.

Prior to the explosion of interest in Linux containers & the founding of Docker, traditional Linux distributions (with a minimum kernel level of 3.8) already supported two foundational capabilities – control groups (cgroups) and kernel namespaces. Linux containers use both of these features to achieve their goals of isolation and portability. Cgroups enable the host to limit the resources each container process can use – CPU, memory, filesystem and network. This ensures that containers running on a host cannot starve others of resources, thus avoiding the “Noisy Neighbor” problem that has bedeviled many cloud deployments.

Kernel namespaces provide another kind of isolation, for process interactions within the OS. Containers can only view and modify resources in their own namespace. This provides a security mechanism whereby other containers and processes on the host cannot launch attacks on an application running in a tenant container, or on the host itself. The combination of these two technologies thus ensures that multiple applications running within their individual containers can share CPU and memory without needing the overhead of virtualization. Docker also grants each container its own networking implementation, ensuring that resources such as sockets and interfaces can also be protected.
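
A hedged sketch of cgroup-backed limits in practice, using the Docker SDK for Python (pip install docker): the memory and CPU caps below are illustrative values that Docker translates into cgroup settings on the host so this container cannot starve its neighbors.

```python
# Run a container with explicit resource caps (cgroups under the hood).
import docker

client = docker.from_env()

container = client.containers.run(
    "nginx:latest",
    detach=True,
    mem_limit="256m",        # memory cgroup: cap this container at 256 MiB
    nano_cpus=500_000_000,   # CPU cgroup: roughly half of one CPU core
    name="capped-nginx",
)
print(f"Started {container.name} with CPU and memory limits applied")
```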

Companies including Red Hat, IBM, Google, Cisco, VMware, and CoreOS have greatly aided the development and accessibility of containers in their platforms and products.

Layered Filesystems..

Various Image Layers in Docker. Each layer in the file system is mounted on the previous one. The topmost layer is the writable Container layer. (Image Credit – Docker)

We discussed how Container Images are immutable. This is a key advantage of container technology such as Docker & is made possible by the notion of a Union filesystem. What are Union filesystems and how do they enforce immutability? Much like an image in the Virtual Machine sense, Containers also run from an image, which is typically a snapshot of a filesystem – but container images tend to be much smaller than VM images since the Container shares the host kernel rather than bundling a full OS.

Union filesystems are best described as a layered architecture – each layer is created independently and then added atop the previous one. A typical image stack might be a base Linux layer, then an OS, then a database like Oracle, then Tomcat, and a web application on top. The top layer is always the writable layer. The real advantage of a union filesystem is that using these images becomes highly efficient from a storage and execution standpoint. Union filesystems also help in sharing portions of the OS across containers. Simply put, an image contains everything an application needs – from its dependencies to external libraries. When an image is run, it is called a Container. In the case of Docker, it uses a layered copy-on-write filesystem called AUFS (Another Union Filesystem).
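
As a hedged illustration of this layering, the snippet below pulls a public image with the Docker SDK for Python and prints its history: each history entry corresponds to a read-only layer added by a build instruction, while a running container adds the only writable layer on top.

```python
# Inspect the read-only layers that make up an image.
import docker

client = docker.from_env()
image = client.images.pull("nginx", tag="latest")

for layer in image.history():
    # Each entry shows the instruction that created the layer and its size.
    print(layer.get("CreatedBy", "")[:80], layer.get("Size", 0))
```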

Containers and Developers..

Containers are possibly the first infrastructure software category created with developers in mind. The rise to prominence of Linux Containers via Docker has coincided with the onset of agile development practices under the DevOps umbrella – CI/CD etc. Containers are an excellent choice for creating agile delivery pipelines and continuous deployment. At their core, Containers enable the creation of multiple self-contained execution environments over the same operating system.

Developers are naturally excited about Linux Containers for five specific reasons –

  1. Containers allow for image consistency across OS environments. This is a huge help in accelerating the development process from development to debugging to production. Developers can just focus on building their applications (in dev environments that match the test and prod) and packaging them in containers. This just takes a lot of the inefficiency around environment dissimilarities out of the equation.
  2. Containers are treated as standard Linux processes by the kernel & are thus orders of magnitude quicker to start than VMs. This means that developers can start their applications in a matter of seconds as long as they run them in a container.
  3. Containers also provide development organizations the ability to standardize application development workflows and update processes. This solves the scalability problem that digital applications have caused large organizations.
  4. Digital applications are leading the move to adopt microservices. Microservices offer a way to build applications as a collection of discrete services as opposed to a monolithic architecture. By their very nature, microservices can be built and managed by different teams. Containerization affords a lightweight way of building and deploying microservices.
  5. Containers offer a portable way of delivering applications (across Operating Systems) as well as provide horizontal scalability.

Digital Application development using Containers..

Digital Application Development and Deployment Workflow using Containers.

There are a few key runtime components involved in operationalizing a small, medium or large-scale container infrastructure, as the above illustration depicts.

  1. Firstly, developers create container images. These images describe an application and its dependencies. An easy way to conceptualize an image is to think of it as a basic deployment template. Images are also immutable in that they are read-only; any changes happen in the topmost layer, which is writable, and modifying an image means creating a new one. Images thus have a parent-child relationship. Developers create images by building their applications in their development environments, performing unit tests and then pushing to a repository. Once the container is built with the necessary dependencies, these tools run a battery of tests to validate business functionality. A large part of this process is usually best automated using CI/CD tools like Jenkins, CruiseControl or Buildbot (a minimal sketch of this workflow follows the list below).
  2. The built images are then made available in a Container Registry, which is either maintained internally or sourced from a trusted external source. As the name suggests, Registries maintain a catalog of container images of frequently used software – e.g. custom applications and other software packages such as WordPress, relational databases, web servers, Big Data technologies and application servers.
  3. The next step is to create and deploy (runtime) containers from these images on a set of servers. Once images are released as a result of application development, sysadmins provision the servers to run them. Once a container engine is installed on a server, the images are loaded onto it and take the runtime shape of containers. Getting these images onto the servers follows either a push or a pull mechanism.
  4. Scheduling of containers on servers is also a process usually handled by sysadmins. This involves running containers of certain kinds on servers that match their CPU, I/O and network capacity requirements.
  5. To create complex real-world deployments, not only do the servers and networking have to be created, but the containers also have to be interconnected (e.g. a web server container to an application server container) using discovery mechanisms. These containers then need to connect to a host of enterprise services. Customer traffic is then routed to the clustered containers running on these servers, and the logs and performance of these containers and the microservices running on them are monitored.
  6. The process repeats from step #1 above.
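
Here is a hedged sketch of steps 1 through 3 above using the Docker SDK for Python. The registry address, image name and port mapping are hypothetical placeholders; in practice the build and push would run on a CI server and the pull/run on the target hosts, not in a single script.

```python
# A toy build -> registry -> run workflow.
import docker

client = docker.from_env()

# 1. Build an image from a Dockerfile in the current directory.
image, _ = client.images.build(path=".", tag="registry.example.com/claims-svc:1.0")

# 2. Push the built image to a (hypothetical) trusted internal registry.
client.images.push("registry.example.com/claims-svc", tag="1.0")

# 3. On a target host, pull the image and run it as a container.
client.images.pull("registry.example.com/claims-svc", tag="1.0")
container = client.containers.run(
    "registry.example.com/claims-svc:1.0",
    detach=True,
    ports={"8080/tcp": 8080},
)
print(f"Running container {container.short_id}")
```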

Industry Adoption of Containers.

In a few years, containers will deliver the bulk of compute workloads across public cloud providers such as Amazon AWS, Google Compute Engine and Microsoft Azure. Given that the VM offerings on these clouds can run multiple containers which scale on demand, the industry will begin to gravitate toward higher utilization density. The Software Defined Datacenter has already begun incorporating hybrid architectures in which applications run on both Linux Containers and Virtual Machines in a complementary fashion.

Customers can choose traditional enterprise operating systems such as Red Hat Enterprise Linux or Microsoft Windows, or run containers on OSs developed specifically for hosting containers at hyperscale. These OSs provide just the tools to manage containers and nothing else; examples include Red Hat Atomic Platform and CoreOS. Moving up the stack, pioneers such as Google and Red Hat have added core support for containers in projects such as OpenStack, Kubernetes, Mesos, OpenShift & CloudFoundry by helping with networking and persistent storage. Kubernetes (which we will cover in the next post) also handles provisioning on multiple public cloud platforms. Configuration management platforms such as Ansible, Chef and Puppet now support containerized deployments.

Technical Considerations for Container Adoption

Some key considerations that industry players are tackling from the standpoint of running containers at scale –

  1. Container Orchestration – organizing groups of containers into composable applications, scheduling them on servers that match their resource requirements, placing containers based on network topology etc.
  2. Container Networking – Containers follow a pluggable model and the network is no different. An enterprise network connectivity stack is needed not only to provide the interconnect between different containers but also to integrate them with existing Layer 2/3 networks. Additionally, network isolation needs to be provided for microservices running on these containers, using either a dedicated IP address for each or an overlay network.
  3. Management and Monitoring – lifecycle processes spanning management and monitoring encompass a range of questions – application patching with low downtime, graceful failure handling in cloud native applications, container scale-up & scale-down based on traffic patterns etc.

Containers and your Enterprise…

So what is the best way to adopt containers across a large enterprise?

  • Develop your container strategy in the context of the Nexus of Forces (i.e., information, mobile, social and cloud) initiatives in your organization — Containers are at the junction of these technologies.
  • Institute an organizational process to examine the business value of any initiative to adopt Containers. Understand what tools and platforms to adopt that will abstract away the complexities of using containers.
  • Understand the skills required to leverage containers. Containers represent a new way of working for both developers and SysOps. Dependency management moves to the developers, but they realize tremendous benefits in adopting containers for high-velocity Digital applications.
  • Identify, measure and benchmark key success metrics that capture the ROI of the overall container investment.

Conclusion..

To sum up, the Linux (and Windows) container space is exploding, both from a mindshare and an adoption standpoint. What is hugely encouraging is that a host of next-generation platform technologies (ranging from IaaS to PaaS) are not just choosing to support containers as their basic runtime unit but are also focusing on becoming the de facto solution for a host of container ecosystem use cases – provisioning, orchestration, management, CI/CD et al. The next two blogs will respectively discuss how Google Kubernetes and Red Hat OpenShift overcome these challenges and abstract away much of the complexity around container deployments.

The next blog post in this series will discuss Google Kubernetes, the dominant project in the container orchestration space.

References

[1] Introducing Moby Project –  https://blog.docker.com/2017/04/introducing-the-moby-project/

Why Digital Disruption is the Cure for the Common Data Center..

“The foundation of digital business is the boundary-free enterprise, which is made possible by an array of time- and location-independent computing capabilities – cloud, mobile, social and data analytics plus sensors and APIs. There are no shortcuts to the digital enterprise.”

— Mike West, Analyst, Saugatuck Research, 2015

At its core, Digital is a fairly straightforward concept. It is essentially about offering customers more contextual and relevant experiences while creating internal teams that can turn on a dime to serve customers. It is clear that these kinds of consumer capabilities simply cannot be offered using an existing technology stack. This blogpost seeks to answer what this next-generation computing stack may look like.

What Digital has in Store for Enterprises…

Digital transformation is a daily fact of life at webscale shops like Google, Amazon, Apple, Facebook and Netflix. These mega shops have not just built intuitive and appealing applications but have gradually evolved them into platforms that offer discrete marketplaces serving global audiences. They also provide robust support for mobile applications that deliver services such as content, video, e-commerce and gaming via those channels. In fact, they have heralded the age of new media and in doing so have been transforming both internally (business models, internal teams & their offerings) as well as externally.

CXOs at established Fortune 1000 enterprises have been unable to find resonance in these stories from the standpoint of their enterprise’s reinvention. This makes a lot of sense, as these established companies have legacy investments and legacy stakeholders – both of which represent change inhibitors that the FANGs (Facebook, Amazon, Netflix and Google) did not have. Enterprise practitioners need to understand how Digital technology can impact both existing technology investments and the future landscape.

Where are most Enterprises at the moment…

Much of what exists in datacenters across organizations is antiquated from a technology standpoint. This ranges from hardware platforms to network devices & switches to the monolithic applications running on them. Connecting these applications are often proprietary or manual integration architectures. There are inflexible, proprietary systems & data architectures, lots of manual processes, monolithic applications and tightly coupled integration. Rapid provisioning of IT resources is a huge bottleneck, which frequently leads lines of business to adopt the public cloud to run their workloads. According to Rakesh Kumar, managing vice president at Gartner – “For over 40 years, data centers have pretty much been a staple of the IT ecosystem. Despite changes in technology for power and cooling, and changes in the design and build of these structures, their basic function and core requirements have, by and large, remained constant. These are centered on high levels of availability and redundancy, strong, well-documented processes to manage change, traditional vendor management and segmented organizational structures. This approach, however, is no longer appropriate for the digital world.” [2]

On that note, the blogpost below captures the three essential technology investments that make up Digital Transformation.

The Three Core Competencies of Digital – Cloud, Big Data & Intelligent Middleware

If Digital has to happen, IT is one of the largest stakeholders…

Digital applications present seamless experiences across channels & devices, are tailored to individual customers’ needs, understand their preferences, & need to be developed in an environment of constant product innovation.

So, which datacenter capabilities are required to deliver this?

Figuring out the best architectural foundation to support, leverage & monetize digital experiences is complex. The past few years have seen the rapid evolution of many transformational technologies – Big Data, Cognitive Computing, Cloud technology (public clouds, OpenStack, PaaS, containers, software-defined networking & storage), the Blockchain – the list goes on. These are leading enterprises to a smarter way of developing enterprise applications and to more modern, efficient, scalable, cloud-based architectures.

So, what capabilities do Datacenters need to innovate towards?


                                         The legacy Datacenter transitions to the Digital Datacenter

The illustration above is largely self-explanatory. Enterprise IT will need to embrace Cloud Computing in a major way – whatever form the core offering may take: public, private or hybrid. The compute infrastructure will range from a mix of open source virtualization to Linux containers. Containers essentially virtualize the operating system so that multiple workloads can run on a single host, instead of virtualizing a server to create multiple operating systems. These containers are easily ported across different servers without the need for reconfiguration and require less maintenance because there are fewer operating systems to manage. For instance, the OpenStack Cloud Project specifies Docker (a de facto standard), a Linux format for containers designed to automate the deployment of applications as highly portable, self-sufficient containers.

Cloud computing will also enable rapid scale-up & scale-down across the gamut of infrastructure (compute – VM/bare metal/containers; storage – SAN/NAS/DAS; network – switches/routers/firewalls) in near real-time (NRT). Investments in SDN (Software Defined Networking) will be de rigueur in order to improve software-based provisioning and time to market, and to drive network equipment costs down. The other vector that brings about datacenter innovation is automation, i.e. vastly reducing manual effort in network and application provisioning. These capabilities will be key as the vast majority of digital applications are deployed as Software as a Service (SaaS).

An in-depth discussion of these Software Defined capabilities can be found in the blogpost below.

Financial Services IT begins to converge towards Software Defined Datacenters..

Applications developed for a Digital infrastructure will be developed as small, nimble processes that communicate via APIs and over infrastructure such as service mediation components (e.g. Apache Camel). These microservices-based applications will offer huge operational and development advantages over legacy applications. While one does not expect legacy but critical applications that still run on mainframes (e.g. Core Banking, Customer Order Processing) to move to a microservices model anytime soon, customer-facing applications that need responsive digital UIs definitely will.

Which finally brings us to the most important capability of all – Data. The heart of any successful Digital implementation is Data. The definition of Data includes internal data (e.g. customer data, transaction data, customer preference data), external datasets & other relevant third-party data (e.g. from retailers). While each source of data on its own may not radically change an application’s view of its customers, the combination of all of them promises to do just that.

The significant increase in mobile devices and IoT (Internet of Things)-capable endpoints will ensure that exponential increases in data volumes occur. Digital applications will thus need to handle this data – not just to process it but also to glean real-time insights from it. Some of the biggest technology investments in ensuring unified customer journeys are in the areas of Big Data & Predictive Analytics. Enterprises should be able to leverage a common source of data that transcends silos (a data lake) to drive customer decisions in real time using advanced analytics such as Machine Learning techniques and Cognitive computing platforms, which can provide accurate and personalized insights to move the customer journey forward.

Can Datacenters incubate innovation?

Finally, one of the key IT architectural foundation strategies companies need to invest in is modern application development. Gartner calls one such feasible approach “Bimodal IT”. According to Gartner, “infrastructure & operations leaders must ensure that their internal data centers are able to connect into a broader hybrid topology“. [2] Let us consider Healthcare – a reasonably staid vertical – as an example. In a report released by EY, “Order from Chaos – Where big data and analytics are heading, and how life sciences can prepare for the transformational tidal wave,” [1] the services firm noted that an agile environment can help organizations create opportunities to turn data into innovative insights. Typical software development life cycles that require lengthy validations and quality control testing prior to deployment can stifle innovation. Agile software development, which is adaptive and rooted in evolutionary development and continuous improvement, can be combined with DevOps, which focuses on the integration between developers and the teams who deploy and run IT operations. Together, these can help life sciences organizations amp up their application development and delivery cycles. EY notes in its report that life sciences organizations can significantly accelerate project delivery, for example, “from three projects in 12 months to 12 projects in three months.”

Big Data has also evolved to enable the processing of data in a batch, interactive or low-latency manner depending on the business requirements – a massive gain for Digital projects. Big Data and DevOps will go hand in hand to deliver new predictive capabilities.

Further, businesses can create digital models of client personas and integrate these with predictive analytics tiers in such a way that an API (Application Programming Interface) approach is provided to integrate them with the overall information architecture.

Conclusion..

More and more organizations are adopting a Digital-first business strategy. The current approach in vogue – treating these as one-off, tactical project investments – simply does not work or scale anymore. There are various organizational models one could employ from the standpoint of developing analytical maturity, ranging from a shared service to a line-of-business-led approach. An approach I have seen work very well is to build a Digital Center of Excellence (COE) to create contextual capabilities, best practices and rollout strategies across the larger organization.

References –

[1] E&Y – “Order From Chaos” http://www.ey.com/Publication/vwLUAssets/EY-OrderFromChaos/$FILE/EY-OrderFromChaos.pdf

[2] Gartner – “Five Reasons Why a Modern Data Center Strategy Is Needed for the Digital World” – http://www.gartner.com/newsroom/id/3029231

Why Software Defined Infrastructure & why now..(1/6)

The ongoing digital transformation in key verticals like financial services, manufacturing, healthcare and telco has incumbent enterprises fending off a host of new market entrants. Enterprise IT’s best answer is to increase the pace of innovation as a way of driving increased differentiation in business processes. Though data analytics & automation remain the lynchpin of this approach, software defined infrastructure (SDI), built on the notions of cloud computing, has emerged as the main infrastructure differentiator – and that for a host of reasons which we will discuss in this blog series.

Software Defined Infrastructure (SDI) is essentially an idea that brings together advances in a host of complementary areas spanning infrastructure software, data, and development environments. It supports a new way of building business applications. The core idea in SDI is that massively scalable applications (in support of diverse customer needs) describe their behavior characteristics (via configuration & APIs) to the underlying datacenter infrastructure, which simply obeys those commands in an automated fashion while abstracting away the underlying complexities.
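
A toy sketch of that core idea follows: the application declares the resources it needs, and an automation layer reconciles the actual infrastructure toward that declaration. The resource model and the "provisioning" calls here are purely illustrative stand-ins for real IaaS/SDS/SDN APIs.

```python
# A toy declarative/reconcile loop illustrating the SDI pattern.
DESIRED_STATE = {
    "compute_nodes": 6,
    "storage_gb": 500,
    "network": {"load_balanced": True, "tier": "web"},
}

def reconcile(desired: dict, actual: dict) -> None:
    # Compare declared intent with what is currently provisioned and issue
    # (hypothetical) API calls to close the gap.
    if actual.get("compute_nodes", 0) < desired["compute_nodes"]:
        missing = desired["compute_nodes"] - actual.get("compute_nodes", 0)
        print(f"Provisioning {missing} additional compute nodes via the IaaS API")
    if actual.get("storage_gb", 0) < desired["storage_gb"]:
        print("Expanding the storage pool via the SDS API")
    if desired["network"]["load_balanced"] and not actual.get("load_balancer"):
        print("Creating a load balancer via the SDN controller API")

reconcile(DESIRED_STATE, actual={"compute_nodes": 4, "storage_gb": 500})
```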

SDI as an architectural pattern was originally made popular by the webscale giants – the so-called FANG companies of tech – Facebook, Amazon, Netflix and Alphabet (the erstwhile Google) – but has gradually begun making its way into the enterprise world.

Common Business IT Challenges prior to SDI – 
  1. The cost of hardware infrastructure is typically growing at a high rate every year compared to growth in the total IT budget. Cost pressures are driving an overall relook at the different tiers across the IT landscape.
  2. Infrastructure is not yet completely under the control of IT application development teams, despite business realities that dictate rapid app development to meet changing business requirements.
  3. Even small, departmental-level applications still need expensive proprietary stacks which are not only cost- and deployment-footprint-prohibitive but also take weeks to spin up in terms of provisioning cycles.
  4. Big-box proprietary solutions are prompting a hard look at Open Source technologies, which are lean and easy to use with a lightweight deployment footprint. Apps need to dictate the footprint, not vendor-provided containers.
  5. There are concerns with acquiring developers who are tooled on cutting-edge development frameworks & methodologies – you have zero developer mindshare with big-box technologies.

Key characteristics of an SDI

  1. Applications built on an SDI can detect business events in real time and respond dynamically by allocating additional resources in three key areas – compute, storage & network – based on the type of workloads being run.
  2. Using an SDI, application developers can seamlessly deploy apps while accessing higher-level programming abstractions that allow for the rapid creation of business services (web, application, messaging, SOA/microservices tiers), user interfaces and a whole host of other application elements.
  3. From a management standpoint, business application workloads are dynamically and automatically assigned to the available infrastructure (spanning public & private cloud resources) on the basis of application requirements and required SLAs, in a way that provides continuous optimization across the technology lifecycle.
  4. The SDI itself optimizes the entire application deployment via both externally provisioned APIs & internal interfaces between the five essential pieces – Application, Compute, Storage, Network & Management.

The SDI automates the technology lifecycle –

Consider the typical tasks needed to create and deploy enterprise applications. This list includes but is not limited to –

  • onboarding hardware infrastructure,
  • setting up complicated network connectivity to firewalls, routers, switches etc.,
  • making the hardware stack available for consumption by applications,
  • figuring out storage requirements and provisioning them,
  • guaranteeing multi-tenancy,
  • application development,
  • deployment,
  • monitoring,
  • updates, failover & rollbacks,
  • patching,
  • security,
  • compliance checking etc.
The promise of SDI is to automate all of this from a business, technology, developer & IT administrator standpoint.
SDI Reference Architecture –

The SDI encompasses SDC (Software Defined Compute), SDS (Software Defined Storage), SDN (Software Defined Networking), Software Defined Applications and Cloud Management Platforms (CMP) in one logical construct, as can be seen from the picture below.

                      Illustration: The different tiers of Software Defined Infrastructure

The core of the software defined approach is APIs. APIs control the lifecycle of resources (request, approval, provisioning, orchestration & billing) as well as the applications deployed on them. The SDI implies commodity hardware (x86) & a cloud-based approach to architecting the datacenter.

The ten fundamental technology tenets of the SDI –

1. Highly elastic – scale up or scale down the gamut of infrastructure (compute – VM/Baremetal/Containers, storage – SAN/NAS/DAS, network – switches/routers/Firewalls etc) in near real time

2. Highly Automated – Given the scale & multi-tenancy requirements, automation is needed at all levels of the stack (development, deployment, monitoring and maintenance).

3. Low Cost – Oddly enough, the SDI operates at a lower CapEx and OpEx compared to the traditional datacenter due to reliance on open source technology & high degree of automation. Further workload consolidation only helps increase hardware utilization.

4. Standardization –  The SDI enforces standardization and homogenization of deployment runtimes, application stacks and development methodologies based on lines of business requirements. This solves a significant IT challenge that has hobbled innovation at large financial institutions.

5. Microservice-based applications –  Applications developed for an SDI-enabled infrastructure are developed as small, nimble processes that communicate via APIs and over infrastructure like messaging & service mediation components (e.g. Apache Kafka & Camel). This offers huge operational and development advantages over legacy applications. While one does not expect Core Banking applications to move over to a microservice model anytime soon, customer-facing applications that need responsive digital UIs will definitely need to consider such approaches.

6. ‘Kind-of-Cloud’ Agnostic –  The SDI does not enforce the concept of a private cloud; rather, it encompasses a range of deployment options – public, private and hybrid.

7. DevOps friendly –  The SDI enforces not just standardization and homogenization of deployment runtimes, application stacks and development methodologies but also enables a culture of continuous collaboration among developers, operations teams and business stakeholders, i.e. cross-departmental innovation. The SDI is a natural container for workloads that are experimental in nature and can be updated/rolled back/rolled forward incrementally based on changing business requirements. The SDI enables rapid deployment capabilities across the stack, leading to faster time to market for business capabilities.

8. Data, Data & Data –  The heart of any successful technology implementation is Data. This includes customer data, transaction data, reference data, risk data, compliance data etc. The SDI provides a variety of tools that enable applications to process data in a batch, interactive or low latency manner depending on the business requirements.

9. Security –  The SDI shall provide robust perimeter defense as well as application level security with a strong focus on a Defense In Depth strategy.

10. Governance –  The SDI enforces strong governance across capabilities ranging from ITSM concerns – workload orchestration, business policy enabled deployment and autosizing of workloads – to change management, provisioning, billing, chargeback & application deployments.
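
As a small illustration of tenet 5, the sketch below shows two microservices exchanging a customer event over Apache Kafka using the kafka-python client. The broker address, topic name and payload are assumptions for the example.

```python
# Sketch: two microservices decoupled by Kafka. Broker, topic and payload
# are illustrative assumptions.
import json
from kafka import KafkaProducer, KafkaConsumer

# Service A publishes a domain event instead of calling Service B directly.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("customer-events", {"customer_id": "C-1021", "event": "address_change"})
producer.flush()

# Service B consumes the event stream at its own pace.
consumer = KafkaConsumer(
    "customer-events",
    bootstrap_servers="localhost:9092",
    group_id="notification-service",
    auto_offset_reset="earliest",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
for message in consumer:          # blocks and iterates as new events arrive
    print(message.value)
```
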

The second blog in this series will cover the challenges in running applications at massive scale.

Financial Services IT begins to converge towards Software Defined Datacenters..

Previous posts in this blog have commented on how the financial services industry is undergoing a gradual makeover, if not an outright transformation – from both a business and an IT perspective. This is being witnessed across the spectrum that makes up this crucial vertical – Retail & Consumer Banking, Stock Exchanges, Wealth Management/Private Banking & Cards etc.

The regulatory deluge (Basel III, Dodd Frank, CAT Reporting, AML & KYC etc) and the increasing sophistication of cybersecurity threats have completely changed the landscape that IT finds itself in – compared to even five years ago.

Brett King writes in his inimitable style about the age of the hyper-connected consumer, i.e. younger segments of the population who expect to be able to bank from anywhere – from a mobile device or via the Internet on their personal computers – instead of just walking into a physical branch.

Further, multiple Fintechs (like WealthFront, Kabbage, Square, LendingClub, Mint.com, cryptocurrency based startups etc.) are leading the way in pioneering a better customer experience. For an established institution with a huge early mover advantage, the ability to compete with these innovative players by using fresh technology approaches is critical to engaging customers.

All of these imperatives place a lot of pressure on Enterprise FS IT to move from an antiquated command and control model to delivering on-demand services with the speed of Amazon Web Services.

These new services are composed of applications that span paradigms such as Smart Middleware, Big Data, Realtime Analytics, Data Science, DevOps and Mobility. The common business thread to deploying all of these applications is the ability to react quickly to customer expectations and requirements.

Enter the Software Defined Datacenter (SDDC). Various definitions exist for this term, but I wager that it means – “a highly automated & self-healing datacenter infrastructure that can quickly deliver on-demand services to millions of end users and internal developers without imposing significant headcount requirements on the enterprise”.

Let’s parse this below.

The SDDC encompasses SDC (Software Defined Compute), SDS (Software Defined Storage), SDN (Software Defined Networking), Software Defined Applications and Cloud Management Platforms (CMP) in one logical construct, as can be seen in the illustration below.

                      Illustration: The different tiers of Software Defined Infrastructure

APIs are at the core of the software defined approach. APIs control the lifecycle of resources (request, approval, provisioning, orchestration & billing) as well as the applications deployed on them. The SDDC implies commodity hardware (x86) & a cloud-based approach to architecting the datacenter.
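
To make the request–approval–provisioning–billing lifecycle tangible, here is a small, purely illustrative Python model of it. The states, approval step and chargeback rates are assumptions, not any product's actual workflow engine.

```python
# Illustrative model of the SDDC resource lifecycle: request -> approval ->
# provisioning -> chargeback. States and rates are assumptions.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ResourceRequest:
    requester: str
    line_of_business: str
    vcpus: int
    ram_gb: int
    state: str = "REQUESTED"
    history: list = field(default_factory=list)

    def _transition(self, new_state: str) -> None:
        # Every state change is recorded for governance and audit.
        self.history.append((datetime.utcnow().isoformat(), self.state, new_state))
        self.state = new_state

    def approve(self) -> None:
        # A real implementation would evaluate quota, cost centre and zone policy here.
        self._transition("APPROVED")

    def provision(self) -> None:
        # In a real SDDC this step fans out to compute, storage and network controllers.
        self._transition("PROVISIONED")

    def monthly_charge(self, rate_per_vcpu: float = 15.0, rate_per_gb: float = 2.0) -> float:
        # Chargeback: usage is metered back to the tenant rather than absorbed centrally.
        return self.vcpus * rate_per_vcpu + self.ram_gb * rate_per_gb

req = ResourceRequest("jdoe", "wealth-management", vcpus=8, ram_gb=32)
req.approve()
req.provision()
print(req.state, req.monthly_charge())
```
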

The ten fundamental technology differentiators of the SDDC –

1. Highly elastic – The ability to scale up or scale down the gamut of infrastructure (compute – VMs/bare metal/containers, storage – SAN/NAS/DAS, network – switches/routers/firewalls etc.) in near real time.

2. Highly Automated – Given the scale & multi-tenancy requirements, automation is required at all levels of the stack (development, deployment, monitoring and maintenance).

3. Low Cost – Oddly enough, the SDDC operates at lower CapEx and OpEx than the traditional datacenter due to its reliance on open source technology & a high degree of automation. Further, workload consolidation helps increase hardware utilization.

4. Standardization –  The SDDC enforces standardization and homogenization of deployment runtimes, application stacks and development methodologies based on lines of business requirements. This solves a significant IT challenge that has hobbled innovation at large financial institutions.

5. Microservice based applications –  Applications developed for an SDDC enabled infrastructure are built as small, nimble processes that communicate via APIs and over infrastructure like service mediation components (e.g. Apache Camel). This offers huge operational and development advantages over legacy applications. While one does not expect Core Banking applications to move over to a microservice model anytime soon, customer facing applications that need responsive digital UIs will definitely need to consider such approaches.

6. ‘Kind-of-Cloud’ Agnostic –  The SDDC does not mandate a private cloud; rather, it encompasses a range of deployment options – public, private and hybrid.

7. DevOps friendly –  The SDDC enforces not just standardization and homogenization of deployment runtimes, application stacks and development methodologies but also enables a culture of continuous collaboration among developers, operations teams and business stakeholders, i.e. cross-departmental innovation. The SDDC is a natural container for workloads that are experimental in nature and can be updated, rolled back or rolled forward incrementally based on changing business requirements. The SDDC enables rapid deployment capabilities across the stack, leading to faster time to market for business capabilities.

8. Data, Data & Data –  The heart of any successful technology implementation is Data. This includes customer data, transaction data, reference data, risk data, compliance data etc. The SDDC provides a variety of tools that enable applications to process data in a batch, interactive or low latency manner depending on the business requirements; a brief data processing sketch follows this list.

9. Security –  The SDDC shall provide robust perimeter defense as well as application level security with a strong focus on a Defense In Depth strategy. Further, data at rest and in motion shall be encrypted.

10. Governance –  The SDDC enforces strong governance across capabilities ranging from ITSM concerns – workload orchestration, business policy enabled deployment and autosizing of workloads – to change management, provisioning, billing, chargeback & application deployments.
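
As a brief illustration of tenet 8, the PySpark sketch below runs the same risk data through a batch aggregation and then an interactive query on one engine. The HDFS paths, column names and exposure threshold are assumptions for the example.

```python
# Sketch: batch and interactive processing of risk data on one engine (PySpark).
# Paths, columns and the exposure threshold are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("risk-exposure").getOrCreate()

# Batch: aggregate end-of-day positions into gross exposure per counterparty.
positions = spark.read.parquet("hdfs:///data/risk/positions/2016-01-15")
exposure = (positions
            .groupBy("counterparty")
            .agg(F.sum("notional").alias("gross_exposure")))
exposure.write.mode("overwrite").parquet("hdfs:///data/risk/exposure/2016-01-15")

# Interactive: the same engine answers an ad hoc question from an analyst.
exposure.filter(F.col("gross_exposure") > 1e8).show()

spark.stop()
```
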

So who is doing SDDC at the moment? Most major banks have initiatives in place to gradually evolve their infrastructures to an SDDC paradigm. Bank of America (for one) has been vocal about its approach of using two stacks, one open source & OpenStack based and the other proprietary [1].

To sum up the core benefit of the SDDC approach, it brings a large enterprise closer to web scale architectures and practices.

The business dividends of the latter include –

1. Digital Transformation – Every large Bank is under growing pressure to transform lines of business or their entire enterprise into a digital operation. I define digital in this context as being able to – “adopt high levels of automation while enabling the business to support multiple channels by which products and services can be delivered to customers. Further, the culture of digital encourages constant innovation and agility, resulting in high levels of customer & employee satisfaction.”

2. Smart Data & Analytics –  Techniques that ensure that the right data is in the hands of the right employee at the right time so that contextual services can be offered in real time to customers. This has the effect of optimizing existing workflows while also enabling the creation of new business models.

3. Cost Savings – Oddly enough, the move to web-scale reduces both business and IT costs. You not only end up doing more with fewer employees due to higher levels of automation, but are also able to constantly cut costs by adopting technologies like Cloud Computing, which reduce both CapEx and OpEx. Almost all webscale IT is dominated by open source technologies & APIs, which are much more cost effective than proprietary platforms.

4. A Culture of Collaboration – The most vibrant enterprises that have implemented web-scale practices not only offer “IT/Business As A Service” but have also instituted strong cultures of symbiotic relationships between customers (both current & prospective), employees, partners and developers.

5. Building for the Future – The core idea behind implementing web-scale architecture and data management practices is “Be disruptive in your business or be disrupted by competition”. Web-scale practices enable the building of business platforms around which ecosystems can be created and then sustained based on increasing revenue.

To quote Wikipedia, a widespread transition to the SDDC will take years:

Enterprise IT will have to become truly business focused, automatically placing application workloads where they can be best processed. We anticipate that it will take about a decade until the SDDC becomes a reality. However, each step of the journey will lead to efficiency gains and make the IT organization more and more service oriented.

The virtuous loop encouraged by constant customer data & feedback enables business applications (and platforms) to behave like agile & growing organisms –  SDDC based architectures offer them the agility to get there.

References

1. http://blogs.wsj.com/cio/2015/06/26/bank-of-america-adding-workloads-to-software-defined-infrastructure/