Want to go Cloud or Digital Native? You’ll Need to Make These Six Key Investments…

For an enterprise to become a Cloud Native (CN) or Digital Native (DN) business, it must develop a host of technology capabilities and cultural practices in support of two goals. First, IT becomes aligned with & responsive to the business. Second, IT leads the charge in inculcating a culture of constant business innovation. Given these realities, large & complex enterprises that have invested in DN capabilities often struggle to identify the highest priority areas to target across lines of business or in shared services. In this post, I argue that there are six fundamental capabilities large enterprises need to adopt enterprise-wide in order to revamp their legacy systems.

Introduction..

This blog has discussed a range of digital applications and platforms in depth. We have covered line of business use cases & architectures ranging from Customer Journeys and Customer 360 to Fraud Detection, Compliance, Risk Management and CRM systems. While the specific details vary from industry to industry, the themes common to all these implementations include a seamless ability to work across multiple channels, to predictively anticipate client needs and to support business models in real-time. In short, these are all Digital requirements that have been proven in the webscale world by Google, Facebook, Amazon and Netflix et al. Most traditional companies are realizing that adopting the practices of these pioneering enterprises is a must for them to survive and thrive.

However, the vast majority of Fortune 500 enterprises need to overcome significant challenges in migrating their legacy architecture stacks to a Cloud Native model. While it is very easy to slap mobile UIs built on static HTML onto existing legacy systems, such projects can never realize the true value of digital without a re-engineering of the core. The end goal of these initiatives is to ensure that the underlying systems are agile and responsive to business requirements. The key question then becomes how to develop and scale these capabilities across massive organizations.

Legacy Monolithic IT as a Digital Disabler…

From a top-down direction, business leadership is demanding more agile IT delivery and faster development mechanisms to deal with competitive pressures such as social media streams, a growing number of channels, disruptive competitors and demanding millennial consumers. When one compares the Cloud Native (CN) model (@ http://www.vamsitalkstech.com/?p=5632) to the earlier monolithic deployment stack (@ http://www.vamsitalkstech.com/?p=5617), the sheer number of technical elements and trends that enterprise IT must devise strategies for becomes readily apparent.

This pressure is being applied on Enterprise IT from both directions.

Let me explain…

In most organizations, the process of identifying the correct set of IT capabilities needed for line of business projects looks like the following –

  1. Line of business leadership works with product management teams to request new projects from IT, either in support of new business initiatives or to revamp existing offerings
  2. IT teams follow a structured process to identify the appropriate (often siloed) technology elements to create the solution
  3. Development teams follow a mix of agile and waterfall models to stand up the solution, which then gets deployed and managed by an operations team
  4. Customer needs and update requests are reflected only slowly, causing customer dissatisfaction

Given this reality, how can legacy systems and architectures reinvent themselves to become Cloud Native?

Complexity is inevitable & enterprises that master it will win…

The correct way to create a CN/DN architecture is for complex organizations to make certain technology investments that speed up each step of the above process. The key challenge is to help incumbent enterprises kickstart their digital products quickly enough to disarm the competition.

The sheer breadth of the digital IT challenge is due in large part to the number of technology trends and developments that have begun to have a demonstrable impact on IT architectures today. There are no fewer than nine, including social media and mobile technology, the Internet of Things (IoT), open ecosystems, big data and advanced analytics, and cloud computing.

Thus, the CN movement is a complex mishmash of technologies that straddle infrastructure, storage, compute and management. Enterprise architects and IT leadership must surmount this obstacle to best position their enterprises for the transformation that must occur.

Six Foundational Technology Investments to go Cloud Native…

There are six foundational technology investments that underpin the creation of a Cloud Native application architecture – IaaS; PaaS & Containers; Container Orchestration; Data Analytics & BPM; API Management; and DevOps. These are the layers that large enterprises will need to focus on to improve their systems, processes, and applications in order to achieve a Digital Native architecture. These investments can proceed in parallel.

#1 First and foremost, you will need an IaaS platform

An agile IaaS is an organization-wide foundational layer which provides elastic capacity across a range of infrastructure services – compute, network, storage, and management. IaaS provides an agile yet scalable foundation on which to deploy everything else without incurring undue complexity in development, deployment & management. Key tenets of the private cloud approach include better resource utilization, self-service provisioning and a high degree of automation. Core IT processes such as the lifecycle of resource provisioning, deployment management, change management and monitoring will need to be re-engineered for an enterprise-grade IaaS platform such as OpenStack.
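To make this concrete, here is a minimal sketch of what self-service provisioning on an IaaS such as OpenStack can look like, expressed as a Heat orchestration template. The stack name, image, flavor and network below are illustrative assumptions, not prescriptions:

```yaml
# Minimal illustrative OpenStack Heat template: provisions one compute
# instance on an existing tenant network via the self-service API.
heat_template_version: 2016-10-14

description: Hypothetical single-server stack for a line-of-business app

parameters:
  image:
    type: string
    default: rhel-7.4          # assumed image name in Glance
  flavor:
    type: string
    default: m1.medium         # assumed Nova flavor

resources:
  app_server:
    type: OS::Nova::Server
    properties:
      image: { get_param: image }
      flavor: { get_param: flavor }
      networks:
        - network: private-net  # assumed pre-existing tenant network

outputs:
  server_ip:
    description: IP address of the provisioned server
    value: { get_attr: [app_server, first_address] }
```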

#2 You will need to adopt a PaaS layer with Containers at its heart  –

Containers are possibly the first infrastructure software category created with developers in mind. The rise of Linux Containers via Docker coincided with the onset of agile development practices under the DevOps umbrella – CI/CD etc. Containers are an excellent choice for creating agile delivery pipelines and enabling continuous deployment. It is a very safe bet that within a few years, the majority of digital applications (and mundane applications, for that matter) will transition to hundreds of services deployed on and running in containers.

Adopting a market-leading Platform as a Service (PaaS) such as Red Hat’s OpenShift or Cloud Foundry can provide a range of benefits – from easing container adoption and providing tools for the CI/CD process to enabling reliable rollouts with A/B testing and blue-green deployments. A PaaS such as OpenShift also adds auto-scaling, failover & other kinds of infrastructure management.
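As an illustration of how a PaaS ties containers into the delivery pipeline, the sketch below shows a hypothetical OpenShift source-to-image (S2I) BuildConfig that turns a Git repository into a runnable container image on every configuration change. The application name, repository URL and builder image are assumptions made for the example:

```yaml
# Illustrative OpenShift source-to-image (S2I) BuildConfig: the PaaS
# builds a container image straight from application source code.
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: payments-api                       # hypothetical application name
spec:
  source:
    type: Git
    git:
      uri: https://example.com/acme/payments-api.git   # assumed repo
  strategy:
    type: Source
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: openjdk18-openshift:latest   # assumed S2I builder image
  output:
    to:
      kind: ImageStreamTag
      name: payments-api:latest            # resulting application image
  triggers:
    - type: ConfigChange                   # rebuild when the config changes
```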

Why Linux Containers and Docker are the Runtime for the Software Defined Data Center (SDDC)..(4/7)

#3 You will need an Orchestration layer for Containers –

At their core, Containers enable the creation of multiple self-contained execution environments over the same operating system. However, containers in and of themselves are not enough to drive large-scale DN applications. An Orchestration layer, at a minimum, organizes groups of containers into applications, schedules them on servers that match their resource requirements, and places the containers across complex network topologies. It also helps with complex tasks such as release management, canary releases and administration. The actual tipping point for large-scale container adoption will vary from enterprise to enterprise. However, the common precursor to supporting containerized applications at scale is an enterprise-grade management and orchestration platform. Again, a PaaS technology such as OpenShift provides two benefits in one – a native container model and orchestration using Kubernetes.
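A minimal sketch of the declarative model such an orchestrator exposes is shown below – a Kubernetes Deployment that asks the platform to keep three replicas of a hypothetical service running on nodes that can satisfy its resource requests. All names and the image reference are illustrative assumptions:

```yaml
# Illustrative Kubernetes Deployment: the orchestration layer keeps
# three replicas of a (hypothetical) service scheduled across nodes
# that satisfy its CPU/memory requests, and manages rolling updates.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: customer-portal                    # hypothetical microservice
spec:
  replicas: 3
  selector:
    matchLabels:
      app: customer-portal
  template:
    metadata:
      labels:
        app: customer-portal
    spec:
      containers:
        - name: portal
          image: registry.example.com/portal:1.2   # assumed image
          resources:
            requests:
              cpu: 250m
              memory: 256Mi
```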

Kubernetes – Container Orchestration for the Software Defined Data Center (SDDC)..(5/7)

#4 Accelerate investments in and combine Big Data Analytics and BPM engines –

In the end, the ability to drive business processes is what makes an agile enterprise. Automation of both Business Processes (BPM) and data-driven decision making is a proven approach at webscale, data-driven organizations, and it makes all the difference in what is perceived to be a digital enterprise. Accordingly, the ability to tie a range of front, mid and back-office processes such as Customer Onboarding, Claims Management & Fraud Detection into a BPM-based system, and to allow applications to access these via a loosely coupled architecture based on microservices, is key. Additionally, leveraging Big Data architectures to process data streams in near real-time is another key capability to possess.

Why Big Data Analytics is the Future of CRM..

#5 Invest in APIs –

APIs enable companies to constantly churn out innovative offerings while continuously adapting to & learning from customer feedback. Internet-scale companies such as Facebook provide edge APIs that enable thousands of companies to write applications that drive greater customer volumes to the Facebook platform. The term API Economy is increasingly in vogue; it connotes a loosely federated ecosystem of companies, consumers, business models and channels.

APIs are used to abstract out the internals of complex underlying platform services. Application developers and other infrastructure services can leverage well-defined APIs to interact with Digital platforms. These APIs enable the provisioning, deployment, and management of platform services.

Applications developed for a Digital infrastructure will be built as small, nimble processes that communicate via APIs and over traditional infrastructure such as service mediation components (e.g. Apache Camel). These microservices-based applications will offer huge operational and development advantages over legacy applications. While one does not expect legacy but critical applications that still run on mainframes (e.g. Core Banking, Customer Order Processing etc.) to move over to a microservices model anytime soon, customer-facing applications that need responsive digital UIs will definitely move.
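To illustrate the API-first approach, here is a minimal, hypothetical OpenAPI 3.0 definition for a single edge endpoint of the kind discussed above. The API name, path and fields are assumptions made for the example:

```yaml
# Hypothetical OpenAPI 3.0 definition for one edge API endpoint that
# abstracts an internal Customer 360 platform service.
openapi: 3.0.0
info:
  title: Customer 360 API        # illustrative API name
  version: "1.0"
paths:
  /customers/{id}:
    get:
      summary: Fetch a unified customer profile
      parameters:
        - name: id
          in: path
          required: true
          schema:
            type: string
      responses:
        "200":
          description: Customer profile document
```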

Why APIs Are a Day One Capability In Digital Platforms..

#6 Be prepared, your development methodologies will gradually evolve to DevOps – 

The key non-technology component involved in delivering error-free and adaptive software is DevOps. Currently, most traditional application development and IT operations happen in silos. DevOps, with its focus on CI/CD practices, requires engineers to communicate more closely, release more frequently, deploy & automate daily, reduce deployment failures and cut the mean time to recover from failures.

Typical software development life cycles that require lengthy validations and quality control testing prior to deployment can stifle innovation. The Agile software process, which is adaptive and rooted in evolutionary development and continuous improvement, can be combined with DevOps. DevOps focuses on tight integration between developers and the teams who deploy and run IT operations, and it is the development methodology best suited to drive large-scale Digital application development.
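As a sketch of what CI/CD automation can look like in practice, the following GitLab-CI-style pipeline builds, tests and deploys on every commit. The registry, image names and deployment target are hypothetical, and the same flow can be expressed in any comparable CI tool:

```yaml
# Illustrative GitLab-CI-style pipeline: every commit is built and
# tested; commits to the main branch are deployed automatically.
stages:
  - build
  - test
  - deploy

build-image:
  stage: build
  script:
    - docker build -t registry.example.com/app:$CI_COMMIT_SHA .  # assumed registry
    - docker push registry.example.com/app:$CI_COMMIT_SHA

unit-tests:
  stage: test
  script:
    - make test            # assumed test entry point

deploy-prod:
  stage: deploy
  script:
    # roll the new image out via the orchestrator (assumed deployment name)
    - kubectl set image deployment/app app=registry.example.com/app:$CI_COMMIT_SHA
  only:
    - master
```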

Conclusion..

By following a transformation roughly outlined as above, the vast majority of enterprises can derive a tremendous amount of value from their Digital initiatives. However, the industry approach currently in vogue – treating Digital projects as one-off, tactical investments – simply does not work or scale anymore. There are various organizational models one could employ to develop this maturity, ranging from a shared service to a line of business-led approach. An approach that I have seen work very well is to build a Digital Center of Excellence (COE) to create contextual capabilities, best practices and rollout strategies across the larger organization. The COE should be at the forefront of pushing the above technology boundaries within the larger framework of the organization.

Kubernetes – Container Orchestration for the Software Defined Data Center (SDDC)..(5/7)

The fourth and previous blog in this seven-part series on Software Defined Datacenters (@ http://www.vamsitalkstech.com/?p=5010) discussed how Linux Containers & Docker are emerging as a key component of digital applications. We looked at various drivers & challenges stemming from running containerized applications from both a development and an IT operations standpoint. In this fifth blog in the series, we will discuss another key emergent technology – Google’s Kubernetes (k8s) – which acts as the foundational runtime orchestrator for large-scale container infrastructure. We will take the discussion higher up the stack in the next blog with OpenShift – Red Hat’s PaaS (Platform as a Service) – which includes Kubernetes and provides a powerful, agile & polyglot environment to build and manage microservices-based applications.

The Importance of Container Orchestration… 

With Cloud Native application development emerging as the key trend in Digital platforms, containers offer a natural choice within the development process for a variety of reasons. In a nutshell, containers are changing the way applications are architected, designed, developed, packaged, delivered and managed. That is why Container Orchestration has become a critical “must have”: for enterprises to derive tangible business value, they must be able to run containerized applications at scale.

Why Linux Containers and Docker are the Runtime for the Software Defined Data Center (SDDC)..(4/7)

While containers have long existed in Unix-based operating systems such as Solaris and FreeBSD, pioneering work in the Linux OS community has led to the mainstreaming of this disruptive technology. Despite all the benefits containers afford both developers and IT operations, there are critical considerations involved in running containers at scale in complex n-tier, real-world applications across multiple datacenters.

What are some of the key considerations in running containers at scale –

Consideration #1 – You need a Model/Paradigm/Platform for the lifecycle management of containers – 

This includes the ability to organize applications into groups of containers, schedule those applications on host servers that match their resource requirements, deploy applications as changes happen, manage complex storage integration and network topologies, and provide seamless ways to destroy, restart and redeploy containers.

Consideration #2 – Administrative Lifecycle Management  –

This covers a range of lifecycle processes, from constant deployments to upgrades to monitoring. Granular issues include support for application patching with minimal downtime, support for canary releases, graceful failure handling in cloud-native applications, and (container) capacity scale-up & scale-down based on traffic patterns.

Consideration #3 – Support development processes moving to DevOps and microservices –

The reasons vary, from rapid feature development to the ability to easily accommodate CI/CD approaches to flexibility (as highlighted in the above point). For instance, k8s removes one of the biggest challenges with using vanilla containers along with CI/CD tools like Jenkins – the challenge of linking the individual containers that run microservices with one another. Other useful features include load balancing, service discovery, rolling updates and blue/green deployments.

While the above drivers are just general guidelines, the actual tipping point for large-scale container adoption will vary from enterprise to enterprise. However, the common precursor to supporting containerized applications at scale is an enterprise-grade management and orchestration platform. And for some very concrete reasons we will discuss, k8s is fast emerging as the de facto leader in this segment.

Introducing Kubernetes (K8s)…

Kubernetes (kube or k8s) is an open-source platform that aims to automate the scheduling, deployment and management of applications running in containers. Kubernetes (and the platforms built on it) is designed to bring development and operations teams together. This affects how Cloud Native applications are architected, composed, deployed, and managed.

k8s was incubated at Google (given their expertise in running billions of container workloads at scale) over the last decade. One caveat: Google’s famous cluster controller & container management system, Borg, is still deployed extensively there. Borg is a predecessor to k8s, and it is generally believed that while k8s borrows its core design tenets from Borg, it contains only a subset of the features present in Borg. [4]

Again, from [4] – “Kubernetes traces its lineage directly from Borg. Many of the developers at Google working on Kubernetes were formerly developers on the Borg project. We’ve incorporated the best ideas from Borg in Kubernetes, and have tried to address some pain points that users identified with Borg over the years.”

However, k8s is no longer a Google-only project. In 2015 it was donated to the Cloud Native Computing Foundation (CNCF); the same year also saw the foundational k8s 1.0 release. Since then the project has been moving with a fair degree of feature & release velocity – version 1.4 was released in 2016, and with the current 1.7 release, k8s has found wide industry adoption. The last year has seen heavy contributions from the likes of Red Hat, Microsoft, Mirantis, and Fujitsu et al to the k8s codebase.

k8s is infrastructure agnostic, with clusters deployable on pretty much any Linux distribution – Red Hat, CentOS, Debian, Ubuntu etc. K8s also runs on all the popular cloud platforms – AWS, Azure and Google Cloud. It is virtually hypervisor agnostic, supporting VMware, KVM, and libvirt, and it supports Docker, Windows Containers, and rocket (rkt) runtimes, with expanding support as newer runtimes become available. [3]

After this brief preamble, let us now discuss the architecture and internals of this exciting technology. We will then discuss why it has begun to garner massive adoption and why it deserves a much closer look by enterprise IT teams.

The Architecture of Kubernetes…

As depicted in the below diagram, Kubernetes (k8s) follows a master-slave model, much like Apache Mesos and Apache Hadoop.

Kubernetes Architecture

The k8s Master is the control plane of the architecture. It is responsible for scheduling deployments, acting as the gateway for the API, and for overall cluster management. As depicted in the illustration, it consists of several components, such as an API server, a scheduler, and a controller manager, and it handles the global, cluster-level scheduling of pods and the processing of events. For high availability and load balancing, multiple masters can be set up. The core API server, which runs on the master, hosts a RESTful service that can be queried to maintain the desired state of the cluster and its workloads. The Scheduler service works in conjunction with the API server to place workloads in containers on the worker nodes, distributing applications across the cluster. It is important to note that all administrative and management functionality goes through the Master to reach the worker nodes; it never accesses the workers directly.

The second primitive in the architecture is the concept of a Node. A node is a host, which may be virtual or physical. The node is the worker in the architecture and runs application stack components on what are called Pods. Each node runs several Kubernetes components, such as a kubelet and a kube-proxy. The kubelet is an agent process that starts and stops groups of containers running user applications, manages images etc. and communicates with the Docker engine. The kube-proxy is a proxying network service that redirects traffic to specific services and pods (we will define these terms in a bit). Both of these agents communicate with the Master via the API server.

Nodes (which are VMs or bare metal servers) are joined together to form Clusters. As the name connotes, Clusters are a pool of resources – compute, storage and networking – that the Master uses to run application components. Nodes, which were known as minions in prior releases, host end-user applications using their local resources and include components to aid in logging, service discovery etc. Most administrative and control interactions are done via the kubectl script or by performing RESTful calls to the API server. The state of the cluster and the workloads running on it are constantly synchronized with the Master using all these components. Clusters can easily be made highly available and scaled up or down on demand. They can also be federated across cloud providers and datacenters if a hybrid architecture is desired.

The next, and perhaps the most important, runtime abstraction in k8s is called a Pod. It is recommended that applications deployed on a k8s infrastructure be composed of lightweight, stateless microservices. These microservices can be deployed in individual containers or in groups of containers; if the former strategy is chosen, each container performs only a specialized business activity. Though k8s also supports stateful applications, stateless applications confer a variety of benefits including loose coupling, auto-scaling etc.

The Pod is essentially the unit of infrastructure that runs an application or a set of related applications, and as such it always exists in the context of a set of Linux namespaces and cgroups. A Pod is a group of one or more containers which always run on the same host; they are scheduled together and share a common IP address/port configuration. However, these IP assignments are not guaranteed to stay the same over time, which can lead to all kinds of communication issues in complex n-tier applications. Kubernetes therefore provides an abstraction called a Service – a grouping of a set of pods mapped to a common IP address.
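A minimal, purely illustrative pod manifest follows – two containers that are always scheduled together on one host and share a single IP address, so they can talk over localhost. The pod name and images are assumptions:

```yaml
# Minimal illustrative Pod: two containers scheduled together on the
# same host, sharing one IP address and local network namespace.
apiVersion: v1
kind: Pod
metadata:
  name: web-pod                  # hypothetical name
  labels:
    app: web
spec:
  containers:
    - name: web
      image: nginx:1.13          # assumed image/tag
      ports:
        - containerPort: 80
    - name: log-shipper
      image: fluentd:latest      # assumed sidecar image
```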

Communication between containers within a pod happens over IPC mechanisms. Containers in a pod can also share local storage on the node, with the shared volumes mounted into each container. The infrastructure can provide services to the pod that span resource and process management. The key advantage here is that Pods can run related groups of applications while individual containers stay lightweight and are versioned independently – which greatly aids complex software projects where multiple teams work on their own microservices, each created and updated on its own cadence.

Labels are key-value pairs that k8s uses to identify a particular runtime element, be it a node, pod or service. They are most frequently applied to pods and can be anything that makes sense to the user or the administrator. An example of a pod label would be (app=mongodb, cluster=eu3, language=python). Label Selectors determine which pods are targeted by a Service.
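Building on the label example above, the following sketch shows a hypothetical Service whose label selector targets every pod carrying app=mongodb, giving clients a single stable virtual IP:

```yaml
# Illustrative Service: the label selector below targets every pod
# carrying app=mongodb, giving clients one stable virtual IP.
apiVersion: v1
kind: Service
metadata:
  name: mongodb
spec:
  selector:
    app: mongodb                 # matches the pod label from the text
  ports:
    - port: 27017
      targetPort: 27017
```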

From an HA standpoint, administrators can declare a configuration policy that states the number of pods that need to be running at any given point. This ensures that pod failures are recovered from automatically by starting new pods. An important HA feature is the notion of replica sets. The Replication Controller ensures that a specified set of pods is available to a given application; in the event of failure, new pods are started so that the actual state matches the desired state. Such a managed group of pod replicas is called a replica set. Stateful workloads are covered for HA using what are called pet sets (since renamed StatefulSets).

The Replication Controller component running on the Master node determines which pods it controls and then uses a pod template file (typically a JSON or YAML file) to create new pods. It is also in charge of ensuring that the number of running pods stays in consonance with the declared replica count. It is important to note that the Replication Controller only replaces dead or dysfunctional pods on the nodes that hosted them; it does not move pods across nodes.
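The sketch below shows what such a declaration can look like – an illustrative ReplicationController whose pod template and replica count express the desired state; the controller replaces dead pods to maintain that count. Names and image are assumptions:

```yaml
# Illustrative ReplicationController: declares a desired state of
# three pod replicas; dead pods are replaced to match this count.
apiVersion: v1
kind: ReplicationController
metadata:
  name: web-rc
spec:
  replicas: 3
  selector:
    app: web                     # the pods this controller owns
  template:                      # the pod template referred to above
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.13      # assumed image
```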

Storage & Networking in Kubernetes  –

Local pod storage is ephemeral and is reclaimed when the pod dies or is taken offline, but if data needs to be persistent or shared between pods, k8s provides Volumes. So really, depending on the use case, k8s supports a range of storage options from local storage to network storage (NFS, Ceph, Gluster) to cloud storage (Google Cloud or AWS). More details on these emerging features are found in the official k8s documentation. [1]
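As an illustration, the manifest below sketches a hypothetical PersistentVolumeClaim and a pod that mounts it, so the data outlives any individual pod. Names, sizes and images are assumptions:

```yaml
# Illustrative PersistentVolumeClaim: requests 5Gi of cluster storage
# that survives pod restarts; the pod below mounts it as a volume.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: db-pod                   # hypothetical database pod
spec:
  containers:
    - name: db
      image: mongo:3.4           # assumed image
      volumeMounts:
        - mountPath: /data/db
          name: data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-claim
```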

Kubernetes has a pluggable networking implementation that works with the design of the underlying network. Per the k8s networking documentation [2], there are four networking challenges to solve:

  • Container-to-container communication within a pod – this is based purely on IPC & localhost mechanisms
  • Inter-pod communication across hosts – here Kubernetes mandates that all pods be able to communicate with one another without NAT, and that the IP of a pod is the same from within the pod and outside of it
  • Pod-to-Service communication – provided by the Service implementation. As we have seen above, k8s Services are given IP addresses that clients can reach them by. These addresses are proxied by the kube-proxy process, which runs on all nodes and routes requests to the correct pod
  • External-to-Service communication – again provided by the Service implementation, primarily by mapping a load balancer configuration to Services running in the cluster. As outlined above, when traffic is sent to a node, the kube-proxy process ensures that it is routed to the appropriate Service

Network administrators looking to implement the k8s cluster networking model have a variety of choices among open source projects such as Flannel and OpenContrail.
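To round out the external-to-Service case above, here is an illustrative Service of type LoadBalancer; on a supported cloud, the platform provisions an external load balancer, and kube-proxy on each node routes the incoming traffic to a backing pod. Names and ports are assumptions:

```yaml
# Illustrative external-facing Service: on a supported cloud, type
# LoadBalancer provisions an external load balancer whose traffic is
# routed by kube-proxy on each node to a healthy backing pod.
apiVersion: v1
kind: Service
metadata:
  name: web-public
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080           # assumed container port
```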

Why is Kubernetes such an exciting (and important) Cloud technology –

We have discussed the business & technology advantages of building an SDDC over the previous posts in this series. As a project, k8s has very lofty goals: to simplify not only the lifecycle of containers but also the deployment & management of distributed systems across any kind of modern datacenter infrastructure. It is designed to promote extensibility and pluggability (via APIs), as we will see in the next blog with OpenShift.

There are three specific reasons why k8s is rapidly becoming a de facto choice for container orchestration –

  1. Once containers are used to build full-blown applications, organizations need to deal with several challenges to keep the overall development & deployment process efficient. These include enabling rapid application development among the various teams working on APIs, UX front ends, business logic, data etc.
  2. The ability to scale application deployments and ensure a very high degree of uptime by leveraging a self-healing & immutable infrastructure. A range of administrative requirements – monitoring, logging, auditing, patching and managing storage & networking – all come into consideration.
  3. The need to abstract developers away from the infrastructure. This is accomplished by allowing dev teams to specify their infrastructure requirements via declarative configuration.

Conclusion…

Kubernetes is emerging as the most popular platform for deploying and managing digital applications based on a microservices architecture. As a sign of its increased adoption and acceptance, Kubernetes is being embedded in Platform as a Service (PaaS) offerings, where it offers all of the same advantages to administrators (deploying application stacks) while freeing developers from the complexity of the underlying infrastructure. The next post in this series will discuss OpenShift, Red Hat’s market-leading PaaS offering, which leverages best-of-breed projects such as Docker and Kubernetes.

References…

[1] Kubernetes Official documentation – https://kubernetes.io/docs/concepts/storage/persistent-volumes/

[2] Kubernetes Networking – https://kubernetes.io/docs/concepts/cluster-administration/networking/

[3] Key Commits to Kubernetes – http://stackalytics.com/?project_type=kubernetes-group&metric=commits

[4] Borg: The predecessor to Kubernetes – http://blog.kubernetes.io/2015/04/borg-predecessor-to-kubernetes.html

Financial Services IT begins to converge towards Software Defined Datacenters..

Previous posts in this blog have commented on how the financial services industry is undergoing a gradual makeover, if not an outright transformation – both from a business and an IT perspective. This is being witnessed across the spectrum that makes up this crucial vertical – Retail & Consumer Banking, Stock Exchanges, Wealth Management/Private Banking & Cards etc.

The regulatory deluge (Basel III, Dodd-Frank, CAT Reporting, AML & KYC etc.) and the increasing sophistication of cybersecurity threats have completely changed the landscape IT finds itself in – compared to even five years ago.

Brett King writes in his inimitable style about the age of the hyper-connected consumer, i.e. younger segments of the population who expect to be able to bank from anywhere, be it from a mobile device or via the Internet from their personal computers, instead of just walking into a physical branch.

Further, multiple Fintechs (like Wealthfront, Kabbage, Square, LendingClub, Mint.com, cryptocurrency-based startups etc.) are leading the way in pioneering a better customer experience. For an established institution that enjoys a huge early-mover advantage, the ability to compete with these innovative players by using fresh technology approaches is critical to engaging customers.

All of these imperatives place a lot of pressure on enterprise FS IT to move from an antiquated command-and-control model to delivering on-demand services with the speed of an Amazon Web Services.

These new services are composed of Applications that encompass paradigms ranging from Smart Middleware, Big Data, Realtime Analytics, Data Science, DevOps and Mobility. The common business thread to deploying all of these applications is to be able to react quickly and expeditiously to customer expectations and requirements.

Enter the Software Defined Datacenter (SDDC). Various definitions exist for this term, but I wager that it means – “a highly automated & self-healing datacenter infrastructure that can quickly deliver on-demand services to millions of end users and internal developers without imposing significant headcount requirements on the enterprise”.

Let’s parse this below.

The SDDC encompasses SDC (Software Defined Compute), SDS (Software Defined Storage), SDN (Software Defined Networking), Software Defined Applications and Cloud Management Platforms (CMP) in one logical construct.


At the core of the software defined approach are APIs. APIs control the lifecycle of resources (request, approval, provisioning, orchestration & billing) as well as the applications deployed on them. The SDDC implies commodity (x86) hardware & a cloud-based approach to architecting the datacenter.

The ten fundamental technology differentiators of the SDDC –

1. Highly elastic – scale up or scale down the gamut of infrastructure (compute – VM/Baremetal/Containers, storage – SAN/NAS/DAS, network – switches/routers/Firewalls etc) in near real time

2. Highly Automated – Given the scale & multi-tenancy requirements, automation at all levels of the stack (development, deployment, monitoring and maintenance) is a must

3. Low Cost – Oddly enough, the SDDC operates at a lower CapEx and OpEx compared to the traditional datacenter due to reliance on open source technology & high degree of automation. Further workload consolidation only helps increase hardware utilization.

4. Standardization –  The SDDC enforces standardization and homogenization of deployment runtimes, application stacks and development methodologies based on lines of business requirements. This solves a significant IT challenge that has hobbled innovation at large financial institutions.

5. Microservice based applications –  Applications developed for an SDDC-enabled infrastructure are developed as small, nimble processes that communicate via APIs and over infrastructure like service mediation components (e.g. Apache Camel). This offers huge operational and development advantages over legacy applications. While one does not expect Core Banking applications to move over to a microservice model anytime soon, customer-facing applications that need responsive digital UIs will definitely need to consider such approaches.

6. ‘Kind-of-Cloud’ Agnostic –  The SDDC does not enforce the concept of a private cloud; rather, it encompasses a range of deployment options – public, private and hybrid.

7. DevOps friendly –  The SDDC enforces not just standardization and homogenization of deployment runtimes, application stacks and development methodologies but also enables a culture of continuous collaboration among developers, operations teams and business stakeholders, i.e. cross-departmental innovation. The SDDC is a natural container for workloads that are experimental in nature and can be updated/rolled-back/rolled-forward incrementally based on changing business requirements. The SDDC enables rapid deployment capabilities across the stack, leading to faster time to market of business capabilities.

8. Data, Data & Data –  The heart of any successful technology implementation is data. This includes customer data, transaction data, reference data, risk data, compliance data etc. The SDDC provides a variety of tools that enable applications to process data in batch, interactive, or low-latency fashion, depending on the business requirements.

9. Security –  The SDDC shall provide robust perimeter defense as well as application-level security, with a strong focus on a Defense in Depth strategy. Further, data at rest and in motion shall be protected.

10. Governance –  The SDDC enforces strong governance requirements for capabilities ranging from ITSM requirements – workload orchestration, business policy enabled deployment, autosizing of workloads to change management, provisioning, billing, chargeback & application deployments.

So who is doing SDDC at the moment? Most major banks have initiatives in place to gradually evolve their infrastructures toward an SDI paradigm. Bank of America (for one) has been vocal about its approach of using two stacks: one open source & OpenStack-based, and the other a proprietary stack. [1]

To sum up the core benefit of the SDDC approach, it brings a large enterprise closer to web scale architectures and practices.

The business dividends of the latter include –

1. Digital Transformation – Every large Bank is under growing pressure to transform lines of business, or its entire enterprise, into a digital operation. I define digital in this context as being able to “adopt high levels of automation while enabling the business to support multiple channels by which products and services can be delivered to customers.”

Further, the culture of digital encourages constant innovation and agility, resulting in high levels of customer & employee satisfaction.

2. Smart Data & Analytics –  Techniques that ensure that the right data is in the hands of the right employee at the right time so that contextual services can be offered in real time to customers. This has the effect of optimizing existing workflows while also enabling the creation of new business models.

3. Cost Savings – Oddly enough, the move to web-scale also reduces business and IT costs. You not only end up doing more with fewer employees due to higher levels of automation, but are also able to constantly cut costs by adopting technologies like Cloud Computing, which reduce both CapEx and OpEx. Almost all webscale IT is dominated by open source technologies & APIs, which are much more cost effective than proprietary platforms.

4. A Culture of Collaboration – The most vibrant enterprises that have implemented web-scale practices not only offer “IT/Business As A Service” but have also instituted strong cultures of symbiotic relationships between customers (both current & prospective), employees, partners and developers.

5. Building for the Future – The core idea behind implementing web-scale architecture and data management practices is “Be disruptive in your business or be disrupted by competition”. Web-scale practices enable the building of business platforms around which ecosystems can be created and then sustained based on increasing revenue.

To quote Wikipedia, a widespread transition to the SDDC will take years:

Enterprise IT will have to become truly business focused, automatically placing application workloads where they can be best processed. We anticipate that it will take about a decade until the SDDC becomes a reality. However, each step of the journey will lead to efficiency gains and make the IT organization more and more service oriented.

The virtuous loop encouraged by constant customer data & feedback enables business applications (and platforms) to behave like agile, growing organisms – and SDDC-based architectures offer them the agility to get there.

References

[1] Bank of America Adding Workloads to Software-Defined Infrastructure – http://blogs.wsj.com/cio/2015/06/26/bank-of-america-adding-workloads-to-software-defined-infrastructure/