Want to go Cloud or Digital Native? You’ll Need to Make These Six Key Investments…

For an enterprise to become a Cloud Native (CN) or Digital Native (DN) business, it needs to develop a host of technology capabilities and cultural practices in support of two goals. First, IT becomes aligned with & responsive to the business. Second, IT leads the charge on inculcating a culture of constant business innovation. Given these realities, large & complex enterprises that have invested in DN capabilities often struggle to identify the highest priority areas to target across lines of business or in shared services. In this post, I want to argue that there are six fundamental capabilities large enterprises need to adopt enterprise-wide in order to revamp legacy systems.

Introduction..

The blog has discussed a range of digital applications and platforms in depth. We have covered a range of line-of-business use cases & architectures – Customer Journeys, Customer 360, Fraud Detection, Compliance, Risk Management, CRM systems etc. While the specific details vary from industry to industry, the themes common to all these implementations include a seamless ability to work across multiple channels, to predictively anticipate client needs, and to support business models in real-time. In short, these are all Digital requirements which have been proven in the webscale world by Google, Facebook, Amazon, Netflix et al. Most traditional companies are realizing that adopting the practices of these pioneering enterprises is a must for them to survive and thrive.

However, the vast majority of Fortune 500 enterprises need to overcome significant challenges in migrating their legacy architecture stacks to a Cloud Native model. While it is very easy to slap mobile UIs onto existing legacy systems via static HTML, without a re-engineering of their core those systems can never realize the true value of digital projects. The end goal of such initiatives is to ensure that the underlying systems are agile and responsive to business requirements. The key question then becomes how to develop and scale these capabilities across massive organizations.

Legacy Monolithic IT as a Digital Disabler…

From a top-down direction, business leadership is requesting more agile IT delivery and faster development mechanisms to deal with competitive pressures such as social media streams, a growing number of channels, disruptive competitors and demanding millennial consumers. When one compares the Cloud Native (CN) model (@ http://www.vamsitalkstech.com/?p=5632) to the earlier monolithic deployment stack (@ http://www.vamsitalkstech.com/?p=5617), the sheer number of technical elements and trends that enterprise IT is being forced to devise strategies for becomes readily apparent.

This pressure is being applied on Enterprise IT from both directions.

Let me explain…

In most organizations, the process of identifying the correct set of IT capabilities needed for line of business projects looks like the following –

  1. Lines of business leadership works with product management teams to request IT for new projects to satisfy business needs either in support of new business initiatives or to revamp existing offerings
  2. IT teams follow a structured process to identify the appropriate (siloed) technology elements to create the solution
  3. Development teams follow a mix of agile and waterfall models to stand up the solution which then gets deployed and managed by an operations team
  4. Customer needs and update requests are reflected slowly, causing customer dissatisfaction

Given this reality, how can legacy systems and architectures reinvent themselves to become Cloud Native?

Complexity is inevitable & Enterprises that master complexity will win…

Creating a CN/DN architecture the right way means that certain technology investments need to be made by complex organizations to speed up each step of the above process. The key challenge in the CN journey is to help incumbent enterprises kickstart their digital products to disarm the competition.

The sheer breadth of the digital IT challenge is due in large part to the number of technology trends and developments that have begun to have a demonstrable impact on IT architectures today. There are no fewer than nine – including social media and mobile technology, the Internet of Things (IoT), open ecosystems, big data and advanced analytics, and cloud computing.

Thus, the CN movement is a complex mishmash of technologies that straddle infrastructure, storage, compute and management. This is an obstacle that must be surmounted by enterprise architects and IT leadership to be able to best position their enterprise for the transformation that must occur.

Six Foundational Technology Investments to go Cloud Native…

There are six foundational technology investments that underpin the creation of a Cloud Native Application Architecture – IaaS, PaaS & Containers, Container Orchestration, Data Analytics & BPM, API Management, and DevOps.

These are the six layers that large enterprises will need to focus on to improve their systems, processes, and applications in order to achieve a Digital Native architecture. These investments can proceed in parallel.

#1 First and foremost, you will need an IaaS platform

An agile IaaS is an organization-wide foundational layer which provides virtually unlimited capacity across a range of infrastructure services – compute, network, storage, and management. IaaS provides an agile and scalable foundation on which to deploy everything else without incurring undue complexity in development, deployment & management. Key tenets of the private cloud approach include better resource utilization, self-service provisioning and a high degree of automation. Core IT processes such as the lifecycle of resource provisioning, deployment management, change management and monitoring will need to be redone for an enterprise-grade IaaS platform such as OpenStack.

#2 You will need to adopt a PaaS layer with Containers at its heart  –

Containers are possibly the first infrastructure software category created with developers in mind. The rise to prominence of Linux Containers via Docker has coincided with the onset of agile development practices under the DevOps umbrella – CI/CD etc. Containers are an excellent choice for creating agile delivery pipelines and continuous deployment. It is a very safe bet that in a few years, the majority of digital applications (and mundane applications for that matter) will transition to hundreds of services deployed on and running in containers.

Adopting a market-leading Platform as a Service (PaaS) such as Red Hat’s OpenShift or Cloud Foundry can provide a range of benefits – from helping with container adoption, to tools that support the CI/CD process, to reliable rollouts with A/B testing and blue-green deployments. A PaaS such as OpenShift adds auto-scaling, failover & other kinds of infrastructure management.

Why Linux Containers and Docker are the Runtime for the Software Defined Data Center (SDDC)..(4/7)

#3 You will need an Orchestration layer for Containers –

At their core, Containers enable the creation of multiple self-contained execution environments over the same operating system. However, containers are not enough in and of themselves to drive large-scale DN applications. An Orchestration layer, at a minimum, organizes groups of containers into applications, schedules them on servers that match their resource requirements, and places the containers across complex network topologies. It also helps with complex tasks such as release management, Canary releases and administration. The actual tipping point for large-scale container adoption will vary from enterprise to enterprise. However, the common precursor to supporting containerized applications at scale has to be an enterprise-grade management and orchestration platform. Again, a PaaS technology such as OpenShift provides two benefits in one – a native container model and orchestration using Kubernetes.

Kubernetes – Container Orchestration for the Software Defined Data Center (SDDC)..(5/7)

#4 Accelerate investments in and combine Big Data Analytics and BPM engines –

In the end, the ability to drive business processes is what makes an agile enterprise. Automation in terms of both Business Processes (BPM) and data-driven decision making are proven approaches used by webscale, data-driven organizations, and this makes all the difference in what is perceived to be a digital enterprise. Accordingly, the ability to tie a range of front, mid and back-office processes such as Customer Onboarding, Claims Management & Fraud Detection into a BPM-based system, and to allow applications to access these via a loosely coupled architecture based on microservices, is key. Additionally, leveraging Big Data architectures to process data streams in near real-time is another key capability to possess.
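To make the streaming half of this concrete, here is a minimal sketch of a near-real-time event listener. It assumes Apache Kafka as the event backbone and a hypothetical card-transactions topic feeding a fraud-detection process – the post does not prescribe a specific streaming technology, so treat the broker address, topic and group names as purely illustrative.

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class FraudEventListener {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // hypothetical broker address
        props.put("group.id", "fraud-detection");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // "card-transactions" is an illustrative topic name
            consumer.subscribe(List.of("card-transactions"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    // In a real pipeline each record would feed a scoring model or a BPM workflow
                    System.out.printf("transaction %s -> %s%n", record.key(), record.value());
                }
            }
        }
    }
}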

Why Big Data Analytics is the Future of CRM..

#5 Invest in APIs –

APIs enable companies to constantly churn out innovative offerings while continuously adapting to & learning from customer feedback. Internet-scale companies such as Facebook provide edge APIs that enable thousands of companies to write applications that drive greater customer volumes to the Facebook platform. The term API Economy is increasingly in vogue and connotes a loosely federated ecosystem of companies, consumers, business models and channels.

APIs are used to abstract out the internals of complex underlying platform services. Application developers and other infrastructure services can leverage well-defined APIs to interact with Digital platforms. These APIs enable the provisioning, deployment, and management of platform services.

Applications developed for a Digital infrastructure will be built as small, nimble processes that communicate via APIs and over traditional infrastructure such as service mediation components (e.g. Apache Camel). These microservices-based applications will offer huge operational and development advantages over legacy applications. While one does not expect legacy but critical applications that still run on mainframes (e.g. Core Banking, Customer Order Processing etc.) to move over to a microservices model anytime soon, customer-facing applications that need responsive digital UIs will definitely move.
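As a small illustration of what communicating via a well-defined API looks like from the consumer side, the sketch below calls a hypothetical Customer 360 endpoint using the standard Java 11 HTTP client. The host, path and response shape are assumptions made purely for illustration.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CustomerApiClient {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // The endpoint and the "customer-360" path are purely illustrative
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.example.com/customer-360/v1/customers/42"))
                .header("Accept", "application/json")
                .GET()
                .build();

        // The caller sees only the API contract, never the platform internals
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}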

Why APIs Are a Day One Capability In Digital Platforms..

#6 Be prepared, your development methodologies will gradually evolve to DevOps – 

The key non-technology component involved in delivering error-free and adaptive software is DevOps. Currently, most traditional application development and IT operations happen in silos. DevOps, with its focus on CI/CD practices, requires engineers to communicate more closely, release more frequently, deploy & automate daily, reduce deployment failures and shorten the mean time to recover from failures.

Typical software development life cycles that require lengthy validations and quality control testing prior to deployment can stifle innovation. Agile software process, which is adaptive and is rooted in evolutionary development and continuous improvement, can be combined with DevOps. DevOps focuses on tight integration between developers and teams who deploy and run IT operations. DevOps is the only development methodology to drive large-scale Digital application development.

Conclusion..

By following a transformation roughly as outlined above, the vast majority of enterprises can derive a tremendous amount of value from their Digital initiatives. However, the industry approach currently in vogue – treating Digital projects as one-off, tactical investments – simply does not work or scale anymore. There are various organizational models that one could employ from the standpoint of developing analytical maturity, ranging from a shared service to a line-of-business-led approach. An approach that I have seen work very well is to build a Digital Center of Excellence (COE) to create contextual capabilities, best practices and rollout strategies across the larger organization. The COE should be at the forefront of pushing the above technology boundaries within the larger framework of the organization.

The Seven Characteristics of Cloud Native Application Architectures..

We are in the middle of a series of blogs on Software Defined Datacenters (SDDC) @ http://www.vamsitalkstech.com/?p=1833. The key business imperative driving SDDC architectures is their ability to natively support digital applications. Digital applications are “Cloud Native” (CN) in the sense that these platforms are originally written for cloud frameworks – instead of being ported over to the Cloud as an afterthought. Thus, Cloud Native application development is emerging as the most important trend in digital platforms. This blog post will define the seven key architectural characteristics of these CN applications.


What is driving the need for Cloud Native Architectures… 

The previous post in the blog covered the monolithic architecture pattern. Monolithic architectures, which currently dominate the enterprise landscape, are coming under tremendous pressure in various ways and are increasingly being perceived to be brittle. Chief among these forces are massive user volumes, DevOps-style development processes, the need to open up business functionality locked within applications to partners, and the heavy manual effort required to deploy & manage monolithic architectures. Monolithic architectures also introduce technical debt into the datacenter – which makes it very difficult for the business lines to introduce changes as customer demands change – a key antipattern for digital deployments.

Why Legacy Monolithic Architectures Won’t Work For Digital Platforms..

Applications that require a high release velocity, present many complex moving parts, and are worked on by few or many development teams are an ideal fit for the CN pattern.

Introducing Cloud Native Applications…

There is no single and universally accepted definition of a Cloud Native application. I would like to define a CN Application as “an application built using a combination of technology paradigms that are native to cloud computing – including distributed software development, a need to adopt DevOps practices, microservices architectures based on containers, API based integration between the layers of the application, software automation from infrastructure to code, and finally orchestration & management of the overall application infrastructure.”

Further, Cloud Native applications need to be architected, designed, developed, packaged, delivered and managed based on a deep understanding of the frameworks of cloud computing (IaaS and PaaS).

Characteristic #1 CN Applications dynamically adapt to & support massive scale…

The first & foremost characteristic of a CN Architecture is the ability to dynamically support massive numbers of users, large development organizations & highly distributed operations teams. This requirement is even more critical when one considers that cloud computing is inherently multi-tenant in nature.

Within this area, the following typical concerns need to be accommodated –

  1. the ability to grow the deployment footprint dynamically (Scale-up)  as well as to decrease the footprint (Scale-down)
  2. the ability to gracefully handle failures across tiers that can disrupt application availability
  3. the ability to accommodate large development teams by ensuring that components themselves provide loose coupling
  4. the ability to work with virtually any kind of infrastructure (compute, storage and network) implementation

Characteristic #2 CN applications need to support a range of devices and user interfaces…

The User Experience (UX) is the most important part of a human facing application. This is particularly true of Digital applications which are omnichannel in nature. End users could not care less about the backend engineering of these applications as they are focused on an engaging user experience.

Demystifying Digital – the importance of Customer Journey Mapping…(2/3)

Accordingly, CN applications need to natively support mobile applications. This includes the ability to support a range of mobile backend capabilities – ranging from authentication & authorization services for mobile devices, location services, customer identification, push notifications, cloud messaging, toolkits for iOS and Android development etc.

Characteristic #3 They are automated to the fullest extent they can be…

The CN application needs to be abstracted completely from the underlying infrastructure stack. This is key, as development teams can then focus solely on writing their software and do not need to worry about the maintenance of the underlying OS/Storage/Network. One of the key challenges with monolithic platforms (http://www.vamsitalkstech.com/?p=5617) is their inability to efficiently leverage the underlying infrastructure, as they have a high degree of dependency on it. Further, the lifecycle of infrastructure provisioning, configuration, deployment, and scaling is mostly manual, with lots of scripts and pockets of configuration management.

The CN application, on the other hand, has to be very light on manual tasks given its scale. The provision-deploy-scale cycle is highly automated, with the application automatically scaling to meet demand and resource constraints and seamlessly recovering from failures. We discussed Kubernetes in one of the previous blogs.

Kubernetes – Container Orchestration for the Software Defined Data Center (SDDC)..(5/7)

Frameworks like these provide CN Applications with resiliency and fault tolerance, and generally support very low downtime.

Characteristic #4 They support Continuous Integration and Continuous Delivery…

For CN applications, the reduction of the vast amount of manual effort witnessed in monolithic applications is not just confined to deployment. From a CN development standpoint, the ability to quickly test and perform quality control on daily software updates is an important aspect. CN applications automate the application development and deployment processes using the paradigms of CI/CD (Continuous Integration and Continuous Delivery).

The goal of CI is that every time source code is added or modified, the build process kicks off & the tests are conducted instantly. This helps catch errors faster and improve quality of the application. Once the CI process is done, the CD process builds the application into an artifact suitable for deployment after combining it with suitable configuration. It then deploys it onto the execution environment with the appropriate identifiers for versioning in a manner that support rollback. CD ensures that the tested artifacts are instantly deployed with acceptance testing.
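As a minimal, hedged illustration of the CI half of this loop, the snippet below shows the kind of automated unit test a CI server would run on every commit. The PaymentValidator class is a hypothetical stand-in for real business logic, and JUnit 5 is assumed as the test framework.

import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.math.BigDecimal;
import org.junit.jupiter.api.Test;

// A deliberately tiny, hypothetical piece of business logic...
class PaymentValidator {
    boolean isValid(BigDecimal amount) {
        return amount != null && amount.signum() > 0;
    }
}

// ...and the unit test the CI server runs against it on every commit
class PaymentValidatorTest {
    private final PaymentValidator validator = new PaymentValidator();

    @Test
    void acceptsPositiveAmounts() {
        assertTrue(validator.isValid(new BigDecimal("19.99")));
    }

    @Test
    void rejectsZeroNegativeAndMissingAmounts() {
        assertFalse(validator.isValid(BigDecimal.ZERO));
        assertFalse(validator.isValid(new BigDecimal("-5")));
        assertFalse(validator.isValid(null));
    }
}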

 Characteristic #5 They support multiple datastore paradigms…

The RDBMS has been a fixture of the monolithic application architecture. CN applications, however, need to work with loosely structured data formats as well as regularly structured data. This implies the need to support data streams that are not just high speed but also better suited to NoSQL/Hadoop storage. These systems provide Schema on Read (SOR), an innovative data handling technique. In this model, a format or schema is applied to data as it is accessed from a storage location, as opposed to doing the same while it is ingested. As we will see later in the blog, individual microservices can have their own local data storage.
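A tiny sketch of Schema on Read in practice, assuming Jackson for JSON parsing: the payload below is stored or streamed as-is, and its structure is interpreted only when the consuming service reads it. The field names are illustrative.

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class SchemaOnReadExample {
    public static void main(String[] args) throws Exception {
        // Loosely structured payload as it might arrive on a stream or land in NoSQL/HDFS;
        // nothing about its shape was enforced at ingest time
        String raw = "{\"txnId\":\"t-1001\",\"amount\":42.50,\"channel\":\"mobile\"}";

        // The "schema" is applied only now, at read time, by the consuming application
        ObjectMapper mapper = new ObjectMapper();
        JsonNode txn = mapper.readTree(raw);

        String channel = txn.path("channel").asText("unknown"); // absent fields degrade gracefully
        double amount = txn.path("amount").asDouble(0.0);
        System.out.printf("channel=%s amount=%.2f%n", channel, amount);
    }
}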

A Holistic New Age Technology Approach To Countering Payment Card Fraud (3/3)…

Characteristic #6 They support APIs as a key feature…

APIs have become the de facto model that provides developers and administrators with the ability to assemble Digital applications such as microservices using complicated componentry. Thus, there is a strong case to be made for adopting an API-centric strategy when developing CN applications. CN applications use APIs in multiple ways – firstly as the way to interface loosely coupled microservices (which abstract out the internals of the underlying application components). Secondly, developers use well-defined APIs to interact with the overall cloud infrastructure services. Finally, APIs enable the provisioning, deployment, and management of platform services.

Why APIs Are a Day One Capability In Digital Platforms..

Characteristic #7 Software Architecture based on microservices…

As James Lewis and Martin Fowler define it – “..the microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and independently deployable by fully automated deployment machinery. There is a bare minimum of centralized management of these services, which may be written in different programming languages and use different data storage technologies.” [1]

Microservices are a natural evolution of Service Oriented Architecture (SOA). The application is decomposed into loosely coupled business functions and mapped to microservices. Each microservice is built for a specific, granular business function and can be worked on by an independent developer or team. As such, it is a separate code artifact and is thus loosely coupled not just from a communication standpoint (typically communicating over a RESTful API with data passed around as a JSON/XML representation) but also from a build, deployment, upgrade and maintenance process perspective. Each microservice can optionally have its own localized datastore. An important advantage of adopting this approach is that each microservice can be created using a separate technology stack from the other parts of the application. Docker containers are the right choice to run these microservices on. Microservices confer a range of advantages, from easier builds to independent deployment and scaling.
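To ground this, here is a deliberately small sketch of one such granular microservice exposing a RESTful API, assuming Spring Boot – a natural but by no means mandated choice given the Java-centric stacks discussed elsewhere in this post. The account service, its route and its in-memory “datastore” are all illustrative.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
public class AccountServiceApplication {
    public static void main(String[] args) {
        SpringApplication.run(AccountServiceApplication.class, args);
    }
}

@RestController
class AccountController {
    // Stand-in for the microservice's own local datastore
    private final Map<String, String> accounts = new ConcurrentHashMap<>(
            Map.of("42", "{\"id\":\"42\",\"status\":\"ACTIVE\"}"));

    @GetMapping("/accounts/{id}")
    public String getAccount(@PathVariable String id) {
        // Other services see only this JSON-over-HTTP contract, never the internals
        return accounts.getOrDefault(id, "{\"error\":\"not found\"}");
    }
}

Packaged in its own container, a service like this can be built, versioned, deployed and scaled independently of every other part of the application.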

A Note on Security…

It goes without saying that security is a critical part of CN applications and needs to be considered and designed for as a cross-cutting concern from the inception. Security concerns impact the design & lifecycle of CN applications, ranging from deployment to updates to image portability across environments. A range of technology choices is available to cover various areas such as application-level security using Role-Based Access Control, Multifactor Authentication (MFA), and A&A (Authentication & Authorization) using protocols such as OAuth, OpenID, SSO etc. Container security is fundamental to this topic, and many vendors are working on ensuring that once the application is built as part of a CI/CD process as described above, it is packaged into labeled (and signed) containers which can be made part of a verified and trusted registry. This ensures that container image provenance is well understood, as well as protecting any users who download the containers for use across their environments.

Conclusion…

In this post, we have tried to look at some architecture drivers for Cloud-Native applications. It is a given that organizations moving from monolithic applications will need to take nimble, small steps to realize the ultimate vision of business agility and technology autonomy. The next post, however, will look at some of the critical foundational investments enterprises will have to make before choosing the Cloud Native route as a viable choice for their applications.

References..

[1] Martin Fowler – https://martinfowler.com/intro.html

Why Legacy Monolithic Architectures Won’t Work For Digital Platforms..

As times change, so do architectural paradigms in software development. For the more than fifteen years the industry has been developing large-scale JEE/.NET applications, the three-tier architecture has been the dominant design pattern. However, as enterprises embark or continue on their Digital Journey, they are facing a new set of business challenges which demand fresh technology approaches. We have looked into transformative data architectures at a great degree of depth in this blog; let us now consider a rethink of the Applications themselves. Applications that were earlier deemed to be sufficiently well-architected are now termed monolithic. This post solely focuses on the underpinnings of why legacy architectures will not work in the new software-defined world. My intention is not to criticize a model (the three-tier monolith) that has worked well in the past but merely to reason why it may be time for a generally well accepted newer paradigm.

Traditional Software Platform Architectures… 

Digital applications support a wider variety of frontends & channels, need to accommodate larger volumes of users, and need wider support for a range of business actors – partners, suppliers et al – via APIs. Finally, these new-age applications need to work with unstructured data formats (as opposed to the strictly structured relational format). From an operations standpoint, there is a strong need for a higher degree of automation in the datacenter. All of these requirements call for agility as the most important construct in the enterprise architecture.

As we will discuss, legacy applications (typically defined as those created more than five years ago) are beginning to emerge as one of the key obstacles in doing Digital. The issue is not just in the underlying architectures themselves but also in the development culture involved in building and maintaining such applications.

Consider the vast majority of applications deployed in enterprise data centers. These applications deliver collections of very specific business functions – e.g. onboarding new customers, provisioning services, processing payments etc. Whatever the choice of vendor application platform, the vast majority of existing enterprise applications & platforms essentially follow a traditional three-tier software architecture with a specific separation of concerns at each tier (as the vastly simplified illustration depicts below).

Traditional three-tier Monolithic Application Architecture

The first tier is the Presentation tier, which is depicted at the top of the diagram. The job of the presentation tier is to present the user experience. This includes the user interface components that present various clients with the overall web application flow and render UI components. A variety of UI frameworks that provide both flow and UI rendering are typically used here. These include Spring MVC, Apache Struts, HTML5, AngularJS et al.

The middle tier is the Business logic tier, where all the business logic for the application is centralized while being separated from the user interface layer. The business logic is usually a mix of objects and business rules written in Java using frameworks such as EJB3, Spring etc. The business logic is housed in an application server such as JBoss AS, Oracle WebLogic AS or IBM WebSphere AS – which provides enterprise services (such as caching, resource pooling, naming and identity services et al) to the business components running on these servers. This layer also contains data access logic and initiates transactions to a range of supporting systems – message queues, transaction monitors, rules and workflow engines, ESB (Enterprise Service Bus) based integration, partner systems accessed using web services, identity and access management systems et al.

The Data tier is where traditional databases and enterprise integration systems logically reside. The RDBMS rules this area in three-tier architectures & the data access code is typically written using an ORM (Object Relational Mapping) framework such as Hibernate or iBatis or plain JDBC code.

Across all of these layers, common utilities & agents are provided to address cross-cutting concerns such as logging, monitoring, security, single sign-on etc.
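To make the separation of concerns concrete, here is a deliberately simplified Java sketch of how the business and data tiers typically hang together in such an application. The class names and the customers table are illustrative, and in a real monolith all of these classes would be compiled and shipped inside the same archive.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

// Data tier: plain JDBC data-access logic (an ORM such as Hibernate is the other common choice)
class CustomerDao {
    private final DataSource dataSource;
    CustomerDao(DataSource dataSource) { this.dataSource = dataSource; }

    String findName(long customerId) throws SQLException {
        String sql = "SELECT name FROM customers WHERE id = ?";
        try (Connection con = dataSource.getConnection();
             PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setLong(1, customerId);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString("name") : null;
            }
        }
    }
}

// Business tier: business rules live here, typically inside an EJB or Spring bean
class CustomerService {
    private final CustomerDao dao;
    CustomerService(CustomerDao dao) { this.dao = dao; }

    String greet(long customerId) throws SQLException {
        String name = dao.findName(customerId);
        return name == null ? "Customer not found" : "Welcome back, " + name;
    }
}

// Presentation tier: a servlet or Spring MVC controller would call CustomerService and
// hand the result to a view template (JSP, Thymeleaf etc.) for rendering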

The application is packaged as an enterprise archive (EAR) which can be composed of a single or multiple WAR/JAR files. While most enterprise-grade applications are neatly packaged, the total package is typically compiled as a single collection of various modules and then shipped as one single artifact. It should bear mentioning that dependency & version management can be a painstaking exercise for complex applications.

Let us consider the typical deployment process and setup for a three-tier application.

From a deployment standpoint, static content is typically served from an Apache webserver which fronts a Java-based webserver (mostly Tomcat) and then a cluster of backend Java-based application servers running multiple instances of the application for High Availability. The application is Stateful (and Stateless in some cases) in most implementations. The rest of the setup with firewalls and other supporting systems is fairly standard.

While the above architectural template is fairly standard across industry applications built on Java EE, there are some very valid reasons why it has begun to emerge as an anti-pattern when applied to digital applications.

Challenges involved in developing and maintaining Monolithic Applications …

Let us consider what Digital business use cases demand of application architecture and where the monolith is inadequate at satisfying those demands.

  1. The entire application is typically packaged as a single enterprise archive (EAR file), which is a combination of various WAR and JAR files. While this certainly makes deployment easier given that there is only one executable to copy over, it makes the development lifecycle a nightmare – even a simple change in the user interface can cause a rebuild of the entire executable. This results in not just long cycles but also makes it extremely hard on teams that span various disciplines from the business to QA.
  2. What follows from such long "code-test & deploy" cycles is that the architecture becomes change-resistant, the code grows very complex over time, and the system as a whole subsequently becomes not agile at all in responding to rapidly changing business requirements.
  3. Developers are constrained in multiple ways. Firstly the architecture becomes very complex over a period of time which inhibits quick new developer onboarding. Secondly,  the architecture force-fits developers from different teams into working in lockstep thus forgoing their autonomy in terms of their planning and release cycles. Services across tiers are not independently deployable which leads to big bang releases in short windows of time. Thus it is no surprise that failures and rollbacks happen at an alarming rate.
  4. From an infrastructure standpoint, the application is tightly coupled to the underlying hardware. From a software clustering standpoint, the application scales better vertically while also supporting limited horizontal scale-out. As volumes of customer traffic increase, performance across clusters can degrade.
  5. The Applications are neither designed nor tested to operate gracefully under failure conditions. This is a key point which does not really get that much attention during design time but causes performance headaches later on.
  6. An important point is that Digital applications & their parts are beginning to be created using different languages such as Java, Scala, and Groovy etc. The Monolith essentially limits such a choice of languages, frameworks, platforms and even databases.
  7. The Architecture does not natively support the notion of API externalization or Continuous Integration and Delivery (CI/CD).
  8. As highlighted above, the architecture primarily supports the relational model. If you need to accommodate alternative data approaches such as NoSQL or Hadoop, you are largely out of luck.

Operational challenges involved in running a Monolithic Application…

The difficulties in running a range of monolithic applications across an operational infrastructure have already been summed up in the other posts on this blog.

The primary issues include –

  1. The Monolithic architecture typically dictates a vertical scaling model, which places limits on its scalability as users increase. The typical traditional approach to ameliorate this has been to invest in multiple sets of hardware (servers, storage arrays) to physically separate applications, which results in increased running costs, a higher personnel requirement and manual processes around system patching and maintenance.
  2. Capacity management tends to be a bit of a challenge as there are many fine-grained resources competing for compute, network and storage resources (vCPU, vRAM, virtual Network etc) that are essentially running on a single JVM. Lots of JVM tuning is needed from a test and pre-production standpoint.
  3. A range of functions that need to be performed around monolithic Applications lacks any kind of policy-driven workload and scheduling capability. This is because the Application does very little to drive the infrastructure.
  4. The vast majority of the work needed to provision, schedule and patch these applications is done by system administrators and, consequently, automation is minimal at best.
  5. The same is true in Operations Management. Functions like log administration, other housekeeping, monitoring, auditing, app deployment, and rollback are vastly manual with some scripting.

Conclusion…

It deserves mention that the above Monolithic design pattern will work well for Departmental (low user volume) applications which have limited business impact and for applications serving a well-defined user base with well-delineated workstreams. The next blog post will consider the microservices way of building new-age architectures. We will introduce and discuss Cloud Native Application development, which has been popularized across web-scale enterprises, especially Netflix. We will also discuss how this new paradigm overcomes many of the above-discussed limitations from both a development and operations standpoint.

Kubernetes – Container Orchestration for the Software Defined Data Center (SDDC)..(5/7)

The fourth and previous blog in this seven-part series on Software Defined Datacenters (@ http://www.vamsitalkstech.com/?p=5010) discussed how Linux Containers & Docker are emerging as a key component of digital applications. We looked at various drivers & challenges stemming from running Containerized Applications from both a development and IT operations standpoint. In the fifth blog in this series, we will discuss another key emergent technology – Google’s Kubernetes (k8s) – which acts as the foundational runtime orchestrator for large-scale container infrastructure. We will take the discussion higher up the stack in the next blog with OpenShift – Red Hat’s PaaS (Platform as a Service) platform – which includes Kubernetes and provides a powerful, agile & polyglot environment to build and manage microservices-based applications.

The Importance of Container Orchestration… 

With Cloud Native application development emerging as the key trend in Digital platforms, containers offer a natural choice for a variety of reasons within the development process. In a nutshell, Containers are changing the way applications are being architected, designed, developed, packaged, delivered and managed. That is why Container Orchestration has become a critical “must have”: for enterprises to derive tangible business value, they must be able to run large-scale containerized applications.

Why Linux Containers and Docker are the Runtime for the Software Defined Data Center (SDDC)..(4/7)

While containers have existed in Unix based operating systems such as Solaris and FreeBSD, pioneering work in the Linux OS community has led to the mainstreaming of this disruptive technology. Now, despite all the benefits afforded to both developers and IT Operations by containers, there are critical considerations involved in running containers at scale in complex n-tier real world applications across multiple datacenters.

What are some of the key considerations in running containers at scale –

Consideration #1 – You need a Model/Paradigm/Platform for the lifecycle management of containers – 

This includes the ability to organize applications into groups of containers, schedule these applications on host servers that match their resource requirements, deploy applications as changes happen, manage complex storage integration and network topologies, and provide seamless ways to destroy, restart and redeploy containers.

Consideration #2 – Administrative Lifecycle Management  –

This covers a range of lifecycle processes ranging from constant deployments to upgrades to monitoring. Granular issues include support for application patching with minimal downtime, support for canary releases, graceful failures in cloud-native applications, and (container) capacity scale-up & scale-down based on traffic patterns.

Consideration #3 – Support Development Processes moving to DevOps and microservices

These reasons range from rapid feature development, to the ability to easily accommodate CI/CD approaches, to flexibility (as highlighted in the above point). For instance, k8s removes one of the biggest challenges with using vanilla containers along with CI/CD tools like Jenkins – the challenge of linking individual containers that run microservices with one another. Other useful features include load balancing, service discovery, rolling updates and blue/green deployments.

While the above drivers are just general guidelines, the actual tipping point for large-scale container adoption will vary from enterprise to enterprise. However, the common precursor to supporting containerized applications at scale has to be an enterprise-grade management and orchestration platform. And for some very concrete reasons we will discuss, k8s is fast emerging as the de facto leader in this segment.

Introducing Kubernetes (K8s)…

Kubernetes (kube or k8s) is an open-source platform that aims to automate the scheduling, deployment and management of applications running in containers. Kubernetes (and platforms built leveraging it) are designed to bring both development and operations teams together. This affects how Cloud Native applications are architected, composed, deployed, and managed.

k8s was incubated at Google (given their expertise in running billions of container workloads at scale) over the last decade. One caveat: the famous cluster controller & container management system known as Borg is deployed extensively at Google. Borg is a predecessor to k8s, and it is generally believed that while k8s borrows its core design tenets from Borg, it only contains a subset of the features present in Borg. [4]

Again, from [4] – “Kubernetes traces its lineage directly from Borg. Many of the developers at Google working on Kubernetes were formerly developers on the Borg project. We’ve incorporated the best ideas from Borg in Kubernetes, and have tried to address some pain points that users identified with Borg over the years.”

However, k8s is not a Google-only project anymore. In 2015 it was donated to the Cloud Native Computing Foundation (CNCF); the same year also saw the foundational k8s 1.0 release. Since then the project has been moving with a fair degree of feature & release velocity. Version 1.4 was released in 2016. With the current 1.7 release, k8s has found wide industry adoption. The last year has seen heavy contributions from the likes of Red Hat, Microsoft, Mirantis, and Fujitsu et al to the k8s codebase.

k8s is infrastructure agnostic, with clusters deployable on pretty much any Linux distribution – Red Hat, CentOS, Debian, Ubuntu etc. K8s also runs on all popular cloud platforms – AWS, Azure and Google Cloud. It is also virtually hypervisor agnostic, supporting VMWare, KVM, and libvirt. It supports Docker, Windows Containers and rkt (rocket) runtimes, with support expanding as newer runtimes become available. [3]

After this brief preamble, let us now discuss the architecture and internals of this exciting technology. We will then discuss why it has begun to garner massive adoption and why it deserves a much closer look by enterprise IT teams.

The Architecture of Kubernetes…

As depicted in the below diagram, Kubernetes (k8s) follows a master-slave methodology much like Apache Mesos and Apache Hadoop.

Kubernetes Architecture

The k8s Master is the control plane of the architecture. It is responsible for scheduling deployments, acting as the gateway for the API, and for overall cluster management. As depicted in the illustration, it consists of several components, such as an API server, a scheduler, and a controller manager. The master is responsible for the global, cluster-level scheduling of pods and handling of events. For high availability and load balancing, multiple masters can be set up. The core API server, which runs in the master, hosts a RESTful service that can be queried to maintain the desired state of the cluster and to maintain workloads. The admin path always goes through the Master to access the worker nodes and never goes directly to the workers. The Scheduler service is used to schedule workloads on containers running on the slave nodes. It works in conjunction with the API server to distribute applications across groups of containers working on the cluster. It’s important to note that the management functionality only accesses the master to initiate changes in the cluster and does not access the nodes directly.

The second primitive in the architecture is the concept of a Node. A node refers to a host which may be virtual or physical. The node is the worker in the architecture and runs application stack components on what are called Pods. It needs to be noted that each node runs several kubernetes components such as a kubelet and a kube proxy. The kubelet is an agent process that works to start and stop groups of containers running user applications, manages images etc and communicates with the Docker engine. The kube-proxy works as a proxy networking service that redirects traffic to specific services and pods (we will define these terms in a bit). Both these agents communicate with the Master via the API server.

Nodes (which are VMs or bare metal servers) are joined together to form Clusters. As the name connotes, Clusters are a pool of resources – compute, storage and networking – that are used by the Master to run application components. Nodes, which used to be known as minions in prior releases, are the workers. Nodes host end user applications using their local resources such as compute, network and storage. Thus they include components to aid in logging, service discovery etc. Most of the administrative and control interactions are done via the kubectl script or by performing RESTful calls to the API server. The state of the cluster and the workloads running on it is constantly synchronized with the Master using all these components.  Clusters can be easily made highly available and scaled up/down on demand. They can also be federated across cloud providers and data centers if a hybrid architecture is so desired.

The next and perhaps the most important runtime abstraction in k8s is called a Pod. It is recommended that applications deployed in a K8s infrastructure be composed of lightweight and stateless microservices. These microservices can be deployed in individual or multiple containers. If the former strategy is chosen, each container only performs a specialized business activity. Though k8s also supports stateful applications, stateless applications confer a variety of benefits including loose coupling, auto-scaling etc.

The Pod is essentially the unit of infrastructure that runs an application or a set of related applications and as such it always exists in the context of a set of Linux namespaces or cgroups. A Pod is a group of one or more containers which always run on the same host. They are always scheduled together and share a common IP address/port configuration. However, these IP assignments cannot be guaranteed to stay the same over time. This can lead to all kinds of communication issues over complex n-tier applications. Kubernetes provides an abstraction called a Service – which is a grouping of a set of pods mapped to a common IP address.

Pod-level inter-communication happens over IPC mechanisms. Pods also share the local storage provided by the node, with shared volumes mounted into their containers. The infrastructure can provide services to the pod that span resources and process management. The key advantage here is that Pods can run related groups of applications while individual containers can be made not only more lightweight but also versioned independently, which greatly aids in complex software projects where multiple teams are working on their own microservices which can be created and updated on their own separate cadence.

Labels are key-value pairs that k8s uses to identify a particular runtime element, be it a node, pod or service. They are most frequently applied to pods and can be anything that makes sense to the user or the administrator. An example of a pod label set would be (app=mongodb, cluster=eu3, language=python). Label Selectors determine what Pods are targeted by a Service.
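Purely as an illustration of how labels drive selection, the sketch below lists the pods matching a label selector from Java. It assumes the fabric8 kubernetes-client library (6.x) and a hypothetical payments namespace – the post itself does not mandate any particular client library.

import io.fabric8.kubernetes.api.model.PodList;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;

public class LabelSelectorExample {
    public static void main(String[] args) {
        // Connects using the local kubeconfig or the in-cluster service account
        try (KubernetesClient client = new KubernetesClientBuilder().build()) {
            // Select pods the same way a Service would: by label, not by name or IP
            PodList pods = client.pods()
                    .inNamespace("payments")          // hypothetical namespace
                    .withLabel("app", "mongodb")
                    .withLabel("cluster", "eu3")
                    .list();
            pods.getItems().forEach(p ->
                    System.out.println(p.getMetadata().getName()));
        }
    }
}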

From an HA standpoint, administrators can declare a configuration policy that states the number of pods that they need to have running at any given point. This ensures that pod failures can be recovered from automatically by starting new pods. An important HA feature is the notion of replica sets. The Replication controller ensures that a specified set of pods is available to a given application, and in the event of failure, new pods can be started to ensure that the actual state matches the desired state. Such a group of identically managed pods is called a replica set. Workloads that are stateful are covered for HA using what are called pet sets (renamed StatefulSets in later releases).

The Replication Controller component running in the Master node determines which pods it controls and then uses a pod template file (typically a JSON or YAML file) to create new pods. It is also in charge of ensuring that the number of pods stays in consonance with replica counts. It is important to note that while the Replication controller just replaces dead or dysfunctional pods on the nodes that hosted them, it does not move pods across nodes.
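Continuing with the same fabric8 client assumption (and noting that current clusters typically manage replica counts through Deployments/ReplicaSets rather than bare Replication Controllers), this is roughly how a desired replica count can be declared and read back; the namespace and deployment name are illustrative.

import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;

public class DesiredReplicasExample {
    public static void main(String[] args) {
        try (KubernetesClient client = new KubernetesClientBuilder().build()) {
            // Declare a new desired replica count; the controller's reconciliation loop
            // then starts or stops pods until the actual state matches it
            client.apps().deployments()
                    .inNamespace("payments")      // hypothetical namespace and name
                    .withName("orders-api")
                    .scale(5);

            Integer desired = client.apps().deployments()
                    .inNamespace("payments")
                    .withName("orders-api")
                    .get().getSpec().getReplicas();
            System.out.println("Desired replicas: " + desired);
        }
    }
}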

Storage & Networking in Kubernetes  –

Local pod storage is ephemeral and is reclaimed when the pod dies or is taken offline, but if data needs to be persistent or shared between pods, K8s provides Volumes. So really, depending on the use case, k8s supports a range of storage options from local storage to network storage (NFS, Ceph, Gluster) to cloud storage (Google Cloud or AWS). More details around these emerging features are found in the K8s official documentation. [1]

Kubernetes has a pluggable networking implementation that works with the design of the underlying network. Per the official documentation [2], there are four networking challenges to solve:

  • Container-container communication within a host – this is based purely on IPC & localhost mechanisms
  • Interpod communication across hosts – Here Kubernetes mandates that all pods be able to communicate with one another without NAT and that the IP of a pod is the same from within the pod and outside of it.
  • Pod to Service communications – provided by the Service implementation. As we have seen above, K8s services are provided with IP addresses that clients can reach them by. These IP addresses are proxied by the kube-proxy process, which runs on all nodes and routes traffic sent to a service on to the correct pod (a minimal Service definition sketch follows this list).
  • External to Service communication – again provided by the Service implementation. This is done primarily by mapping the load balancer configuration to services running in the cluster. As outlined above, when traffic is sent to a node, the kube-proxy process ensures that it is routed to the appropriate service.
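Here is the minimal Service definition sketch referenced above: a Service that gives the pods labelled app=orders a single stable, proxied address. It again assumes the fabric8 kubernetes-client builders, and the names and ports are illustrative.

import io.fabric8.kubernetes.api.model.IntOrString;
import io.fabric8.kubernetes.api.model.Service;
import io.fabric8.kubernetes.api.model.ServiceBuilder;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;

public class ServiceDefinitionExample {
    public static void main(String[] args) {
        // A Service giving the pods labelled app=orders one stable, proxied address
        Service ordersService = new ServiceBuilder()
                .withNewMetadata().withName("orders").endMetadata()
                .withNewSpec()
                    .addToSelector("app", "orders")   // a label selector, never hard-coded pod IPs
                    .addNewPort().withPort(80).withTargetPort(new IntOrString(8080)).endPort()
                .endSpec()
                .build();

        try (KubernetesClient client = new KubernetesClientBuilder().build()) {
            client.services().inNamespace("payments").createOrReplace(ordersService);
        }
    }
}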

Network administrators looking to implement the K8s cluster model have a variety of choices from open source projects such as – Flannel, OpenContrail etc.

Why is Kubernetes such an exciting (and important) Cloud technology –

We have discussed the business & technology advantages of building an SDDC over the previous posts in this series. As a project, k8s has very lofty goals: to simplify the lifecycle of containers and to enable the deployment & management of distributed systems across any kind of modern datacenter infrastructure. It is designed to promote extensibility and pluggability (via APIs), as we will see in the next blog with OpenShift.

There are three specific reasons why k8s is rapidly becoming a de facto choice for Container orchestration-

  1. Once containers are used to build full-blown applications, organizations need to deal with several challenges to enable efficiency in the overall development & deployment processes. These include enabling a rapid speed of application development among various teams working on APIs, UX front ends, business logic, data etc.
  2. The ability to scale application deployments and to ensure a very high degree of uptime by leveraging a self-healing & immutable infrastructure. A range of administrative requirements around monitoring, logging, auditing, patching and managing storage & networking all come into consideration.
  3. The need to abstract developers away from the infrastructure. This is accomplished by allowing dev teams to specify their infrastructure requirements via declarative configuration.

Conclusion…

Kubernetes is emerging as the most popular platform to deploy and manage digital applications based on a microservices architecture. As a sign of its increased adoption and acceptance, Kubernetes is being embedded in Platform as a Service (PaaS) offerings, where it offers all of the same advantages to administrators (deploying application stacks) while also freeing developers from the complexity of the underlying infrastructure. The next post in this series will discuss OpenShift, Red Hat’s market-leading PaaS offering, which leverages best-of-breed projects such as Docker and Kubernetes.

References…

[1] Kubernetes Official documentation – https://kubernetes.io/docs/concepts/storage/persistent-volumes/

[2] Kubernetes Networking – https://kubernetes.io/docs/concepts/cluster-administration/networking/

[3] Key Commits to Kubernetes – http://stackalytics.com/?project_type=kubernetes-group&metric=commits

[4] Borg: The predecessor to Kubernetes – http://blog.kubernetes.io/2015/04/borg-predecessor-to-kubernetes.html

Why Platform as a Service (PaaS) Adoption will take off in 2017..


Since the time Steve Ballmer went ballistic professing his love for developers, it has been a virtual mantra in the technology industry that developer adoption is key to the success of a given platform. On the face of it, Platform as a Service (PaaS) is a boon to enterprise developers who are tired of the inefficiencies of old-school application development environments & stacks. Further, a couple of years ago, PaaS seemed to be the flavor of the future given the focus on Cloud Computing. This blog post focuses on the advantages of the generic PaaS approach while discussing its lagging rate of adoption in the cloud computing market – as compared with its cloud cousins – IaaS (Infrastructure as a Service) and SaaS (Software as a Service).

Platform as a Service (PaaS) as the foundation for developing Digital, Cloud Native Applications…

Call them Digital or Cloud Native or Modern. The nature of applications in the industry is slowly changing. So are the cultural underpinnings of the development process itself – from waterfall to agile to DevOps. At the same time, Cloud Computing and Big Data are enabling the creation of smart data applications. Leading business organizations are cognizant of the need to attract and retain the best possible talent – often competing with the FANGs (Facebook, Amazon, Netflix & Google).

Couple all this with the immense industry and venture capital interest around container-oriented & cloud-native technologies like Docker, and you have a vendor arms race in the making. And the prize is to be chosen as the standard for building industry applications.

Thus, infrastructure is enabling but, in the end, it is the applications that are Queen or King.

That is where PaaS comes in.

Why Digital Disruption is the Cure for the Common Data Center..

Enter Platform as a Service (PaaS)…

Platform as a Service (PaaS) is one of the three main cloud delivery models, the other two being IaaS (infrastructure such as compute, network & storage services) and SaaS (business applications delivered over a cloud). A collection of different cloud technologies, PaaS focuses exclusively on application development & delivery. PaaS advocates a new kind of development based on native support for concepts like agile development, unit testing, continuous integration and automatic scaling, while providing a range of middleware capabilities. Applications developed on a PaaS can be deployed as services & managed across thousands of application instances.

In short, PaaS is the ideal platform for creating & hosting digital applications. What can PaaS provide that older application development toolchains and paradigms cannot?

While the overall design approach and features vary across every PaaS vendor – there are five generic advantages from a high level –

  1. PaaS enables a range of Application, Data & Middleware components to be delivered as API-based services to developers on any given Infrastructure as a Service (IaaS). These capabilities include Messaging as a service, Database as a service, Mobile capabilities as a service, Integration as a service, Workflow as a service, Analytics as a service for data-driven applications etc. Some PaaS vendors also provide the ability to automate & manage APIs for business applications deployed on them – API Management.
  2. PaaS provides easy & agile access to the entire suite of technologies used while creating complex business applications. These range from programming languages to application server (and lightweight) runtimes to CI/CD toolchains to source control repositories.
  3. PaaS provides the services which enable a seamless & highly automated manner of managing the complete life cycle of building and delivering web applications and services on the internet. Industry players are infusing software delivery processes with practices such as continuous delivery (CD) and continuous integration (CI). For large-scale applications such as those built in web-scale shops, financial services, manufacturing, telecom etc – PaaS abstracts away the complexities of building, deploying & orchestrating infrastructure, thus enabling instantaneous developer productivity. This is a key point – with its focus on automation, PaaS can save application and system administrators precious time and resources in managing the lifecycle of elastic applications.
  4. PaaS enables your application to be largely cloud agnostic & can enable applications to be run on any cloud platform, whether public or private. This means that a PaaS application developed on Amazon AWS can easily be ported to Microsoft Azure, VMWare vSphere, Red Hat RHEV etc.
  5. PaaS can help smoothen organizational culture and barriers – the adoption of a PaaS forces an agile culture in your organization, one that pushes cross-pollination among different business, dev and ops teams. Most organizations that are just now beginning to go bimodal for greenfield applications can benefit immensely from choosing a PaaS as a platform standard.

The Barriers to PaaS Adoption Will Continue to Fall In 2017..

In general, PaaS market growth rates do not seem to line up well when compared with the other broad sections of the cloud computing space, namely IaaS (Infrastructure as a Service) and SaaS (Software as a Service). 451 Research’s Market Monitor forecasts that the total market for cloud computing (including PaaS, IaaS and infrastructure software as a service – ITSM, backup, archiving) will hit $21.9B in 2016, more than doubling to $44.2B by 2020. Of that, some analyst estimates contend that PaaS will be a relatively small $8.1 billion.


  (Source – 451 Research)

Ironically, the very factors behind the advantages that PaaS confers have also contributed to its relatively low rate of adoption as compared to IaaS and SaaS.

The reasons for this anemic rate of adoption include, in my opinion  –  

  1. Poor Conception of the Business Value of PaaS – This is the biggest factor holding back explosive growth in this category. PaaS is a tremendously complicated technology & vendors have not helped by stressing the complex technology underpinnings (containers, supported programming languages, developer workflow, orchestration, scheduling etc.) as opposed to helping clients understand the tangible business drivers & value that enterprise CIOs can derive from this technology. Common drivers include faster time to market for digital capabilities, man hours saved in maintaining complex applications, the ability to attract new talent etc. These factors will vary for every customer but it is up to frontline Sales teams to deliver this message in a manner that is appropriate to the client.
  2. Yes, you can do DevOps without PaaS, but PaaS goes a long way – Many Fortune 500 organizations are drawing up DevOps strategies which do not include a PaaS & are based on a simplified CI/CD pipeline. This is to the detriment of both the customer organization & the industry, as PaaS can vastly simplify a range of complex runtime & lifecycle services that would otherwise need to be cobbled together by the customer as the application moves from development to production. There is simply a lack of knowledge in the customer community about where a PaaS fits in a development & deployment toolchain.
  3. Smorgasbord of Complex Infrastructure Choices – The average leading PaaS includes a range of open source technologies ranging from containers to runtimes to datacenter orchestration to scheduling to cluster management tools. This makes it very complex from the perspective of Corporate IT – not just in terms of running POCs and initial deployments but also in managing a highly complex stack. It is incumbent on the open source projects to abstract away the complex inner workings to drive adoption – whether by design or by technology alliances.
  4. You don’t need Cloud for PaaS but not enough Technology Leaders get that – This one is a matter of perception. A pre-existing (IaaS) cloud computing strategy is not a necessary precondition for adopting a PaaS.
  5. The false notion that PaaS is only fit for massively scalable, greenfield applications – Industry leading PaaS’s (like Red Hat’s OpenShift) support a range of technology approaches that can help cut technical debt. They do not limit deployment to any one application server platform – be it JBoss EAP, WebSphere or WebLogic – or to a lightweight framework like Spring.
  6. PaaS will help increase automation thus cutting costs – For developers of applications in Greenfield/New Age spheres such as IoT, PaaS can enable the creation of thousands of instances in a “Serverless” fashion. PaaS based applications can be composed of microservices which are essentially self maintaining – i.e. self healing and able to scale up or down; these microservices are typically delivered by IT as Docker containers using automated toolchains (a minimal health-check sketch follows this list). The biggest cost in large datacenters – human involvement – is drastically reduced if PaaS is used, while agility, business responsiveness and efficiency increase.
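
As a small illustration of the self-healing point in item 6 above, the sketch below shows the kind of health endpoint an orchestrated PaaS typically probes to decide whether to restart a container. Flask is used purely for brevity; the endpoint path and the checks are illustrative assumptions, not a platform requirement.

```python
# Minimal sketch of a health endpoint a PaaS/orchestrator can probe.
# The path and the checks are illustrative assumptions.
from flask import Flask, jsonify

app = Flask(__name__)

def dependencies_ok() -> bool:
    """Placeholder for real checks (database reachable, queue connected, etc.)."""
    return True

@app.route("/healthz")
def healthz():
    if dependencies_ok():
        return jsonify(status="ok"), 200
    # A non-200 response signals the platform to restart or reschedule the instance.
    return jsonify(status="unhealthy"), 503

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```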

Conclusion…

My goal for this post was to share a few of my thoughts on the benefits of adopting a game changing technology. Done right, PaaS can provide a tremendous boost to building digital applications, thus boosting the bottom line. Beginning in 2017, we will witness PaaS satisfying critical industry use cases as leading organizations build end-to-end business solutions that cover many architectural layers.

References…

[1] http://www.forbes.com/sites/louiscolumbus/2016/03/13/roundup-of-cloud-computing-forecasts-and-market-estimates-2016/#3d75915274b0

The Three Core Competencies of Digital – Cloud, Big Data & Intelligent Middleware

“Ultimately, the cloud is the latest example of Schumpeterian creative destruction: creating wealth for those who exploit it; and leading to the demise of those that don’t.” – Joe Weinman, author of Cloudonomics: The Business Value of Cloud Computing


The  Cloud As a Venue for Digital Workloads…

As 2016 draws to a close, it can safely be said that no industry leader questions the existence of the new Digital Economy and the fact that every firm out there needs to create a digital strategy. Myriad organizations are taking serious business steps toward making their platforms highly customer-centric via a renewed focus on operational metrics. They are also working on creating new business models using their Analytics investments. Examples of these verticals include Banking, Insurance, Telecom, Healthcare, Energy etc.

As a general trend, the Digital Economy brings immense opportunities while exposing firms to risks as well. Customers are now demanding highly contextual products, services and experiences – all accessible via easy APIs (Application Programming Interfaces).

Big Data Analytics (BDA) software revenues will grow from nearly $122B in 2015 to more than $187B in 2019 – according to Forbes [1]. At the same time, it is clear that exploding data generation across the global economy has become a clear & present business phenomenon. Data volumes are rapidly expanding across industries. However, it is not just the production of data that has increased; the explosion is also driving the need for organizations to derive business value from it. As IT leaders know well, digital capabilities need low cost yet massively scalable & agile information delivery platforms – which only Cloud Computing can provide.

For a more detailed technical overview, please visit the link below.

http://www.vamsitalkstech.com/?p=1833

Big Data & Big Data Analytics drive consumer interactions.. 

The onset of Digital Architectures in enterprise businesses implies the ability to drive continuous online interactions with global consumers/customers/clients or patients. The goal is not just to provide engaging visualization but also to personalize services clients care about across multiple channels of interaction. The only way to attain digital success is to understand your customers at a micro level while constantly making strategic decisions on your offerings to the market. Big Data has become the catalyst in this massive disruption as it can help businesses in any vertical solve their need to understand their customers better & perceive trends before the competition does. Big Data thus provides the foundational platform for successful business platforms.

The three key areas where Big Data & Cloud Computing intersect are – 

  • Data Science and Exploration
  • ETL, Data Backups and Data Preparation
  • Analytics and Reporting

Big Data drives business use cases in Digital in myriad ways – key examples include –

  1. Obtaining a realtime Single View of an entity (typically a customer across multiple channels, product silos & geographies)
  2. Customer Segmentation by helping businesses understand their customers down to the individual micro level as well as at a segment level
  3. Customer sentiment analysis by combining internal organizational data, clickstream data, sentiment analysis with structured sales history to provide a clear view into consumer behavior.
  4. Product Recommendation engines which provide compelling personal product recommendations by mining realtime consumer sentiment, product affinity information with historical data.
  5. Market Basket Analysis, observing consumer purchase history and enriching this data with social media, web activity, and community sentiment regarding past purchases and future buying trends (a minimal sketch follows this list).
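
To make the market basket example (item 5 above) concrete, here is a minimal, hedged PySpark sketch using the FP-Growth implementation in Spark MLlib. The inline transactions are invented purely for illustration; a real pipeline would read purchase history from the enterprise data platform.

```python
# Minimal market basket analysis sketch using Spark MLlib's FP-Growth.
# The inline transactions are invented purely for illustration.
from pyspark.sql import SparkSession
from pyspark.ml.fpm import FPGrowth

spark = SparkSession.builder.appName("market-basket-sketch").getOrCreate()

transactions = spark.createDataFrame(
    [
        (1, ["milk", "bread", "butter"]),
        (2, ["bread", "butter"]),
        (3, ["milk", "bread"]),
        (4, ["milk", "butter", "jam"]),
    ],
    ["basket_id", "items"],
)

fp = FPGrowth(itemsCol="items", minSupport=0.5, minConfidence=0.6)
model = fp.fit(transactions)

model.freqItemsets.show()        # frequently co-purchased item sets
model.associationRules.show()    # e.g. bread -> butter, with confidence and lift

spark.stop()
```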

Further, Digital implies the need for sophisticated, multifactor business analytics that need to be performed in near real time on gigantic data volumes. The only deployment paradigm capable of handling such needs is Cloud Computing – whether public or private. Cloud was initially touted as a platform to rapidly provision compute resources. Now with the advent of Digital technologies, the Cloud & Big Data will combine to process & store all this information. According to IDC, by 2020 spending on Cloud based Big Data Analytics will outpace on-premise spending by a factor of 4.5. [2]

Intelligent Middleware provides Digital Agility.. 

Digital Applications are modular, flexible and responsive to a variety of access methods – mobile & non-mobile. These applications are also highly process driven and support the highest degree of automation. The need of the hour is to provide enterprise architecture capabilities around designing flexible digital platforms that are built around efficient use of data, speed, agility and a service oriented architecture. The choice of open source is key as it allows for a modular and flexible architecture that can be modified and adopted in a phased manner – as you will shortly see.

The intention in adopting a SOA (or even a microservices) architecture for Digital capabilities is to allow lines of business an ability to incrementally plug in lightweight business services like customer on-boarding, electronic patient records, performance measurement, trade surveillance, risk analytics, claims management etc.

Intelligent Middleware adds significant value in six specific areas –

  1. Supports a high degree of Process Automation & Orchestration thus enabling the rapid conversion of paper based business processes to a true digital form in a manner that lends itself to continuous improvement & optimization
  2. Business Rules help by adding a high degree of business flexibility & responsiveness
  3. Native Mobile Application support enables platforms to serve a range of devices & consumer behavior across those front ends
  4. Platform As a Service engines which enable rapid application & business capability development across a range of runtimes and container paradigms
  5. Business Process Integration engines which enable disparate systems and business processes to be stitched together rapidly
  6. Middleware brings the notion of DevOps into the equation. Digital projects bring several technology & culture challenges which can be solved by a greater degree of collaboration, continuous development cycles & new toolchains without giving up proven integration with existing (or legacy) systems.

Intelligent Middleware not only enables Automation & Orchestration but also provides an assembly environment to string different (micro)services together. Finally, it also enables less technical analysts to drive the application lifecycle as much as possible.
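
To make the idea of “stringing (micro)services together” a little more tangible, the sketch below shows a thin assembly function that calls two REST services in sequence and passes context between them. The endpoints and payload fields are hypothetical, and a real implementation would typically live in the middleware’s own orchestration or BPM tooling rather than hand-written glue code.

```python
# Minimal sketch of assembling two (micro)services into one business flow.
# The endpoints and payload fields are hypothetical illustrations.
import requests

ONBOARDING_URL = "http://onboarding-svc/api/v1/customers"   # hypothetical service
RISK_SCORING_URL = "http://risk-svc/api/v1/score"           # hypothetical service

def onboard_and_score(customer: dict) -> dict:
    """Create a customer record, then enrich it with a risk score."""
    created = requests.post(ONBOARDING_URL, json=customer, timeout=5)
    created.raise_for_status()
    customer_id = created.json()["id"]

    scored = requests.post(RISK_SCORING_URL, json={"customer_id": customer_id}, timeout=5)
    scored.raise_for_status()

    return {"customer_id": customer_id, "risk": scored.json()}

if __name__ == "__main__":
    print(onboard_and_score({"name": "Jane Doe", "segment": "retail"}))
```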

Further, Digital business projects call out for mobile native applications – which a forward looking middleware stack will support. Middleware is a key component for driving innovation and improving operational efficiency.

Five Key Business Drivers for combining Big Data, Intelligent Middleware & the Cloud…

The key benefits of combining the above paradigms to create new Digital Applications are –

  • Enable Elastic Scalability Across the Digital Stack
    Cloud computing can handle the storage and processing of any amount and any kind of data. This calls for the collection & curation of data from dynamic and highly distributed sources such as consumer transactions, B2B interactions, machines such as ATMs & geo location devices, click streams, social media feeds, server & application log files and multimedia content such as videos. It needs to be noted that these data volumes consist of multi-varied formats, differing schemas, transport protocols and velocities. Cloud computing provides the underlying elastic foundation to analyze these datasets.
  • Support Polyglot Development, Data Science & Visualization
    Cloud technologies are polyglot in nature. Developers can choose from a range of programming languages (Java, Python, R, Scala and C# etc) and development frameworks (such as Spark and Storm). Cloud offerings also enable data visualization using a range of tools from Excel to BI Platforms.
  • Reduce Time to Market for Digital Business Capabilities
    Enterprises can avoid time consuming installation, setup & other upfront procedures; for example, they can deploy Hadoop in the cloud without buying new hardware or incurring other up-front costs. In the same vein, big data analytics should support self service across the lifecycle – from data acquisition and preparation to analysis & visualization.
  • Support a multitude of Deployment Options – Private/Public/Hybrid Cloud 
    A range of scenarios for product development, testing, deployment, backup or cloudbursting are efficiently supported in pursuit of cost & flexibility goals.
  • Fill the Talent Gap
    Open Source technology is the common thread across Cloud, Big Data and Middleware. The hope is that the ubiquity of open source will serve as a critical lever in closing the IT–Business skills gap.

As opposed to building standalone or one-off business applications, a ‘Digital Platform Mindset’ is a more holistic approach capable of producing higher rates of adoption & thus revenues. Platforms abound in the web-scale world at shops like Apple, Facebook & Google. Digital Applications are constructed like lego blocks and they reuse customer & interaction data to drive cross sell and up sell among different product lines. The key is to start off with products that enjoy high customer attachment & retention. While increasing brand value, it is also key to ensure that customers & partners can collaborate on improvements to the various applications hosted on top of the platform.

References

[1] Forbes Roundup of Big Data Analytics (BDA) Report

http://www.forbes.com/sites/louiscolumbus/2016/08/20/roundup-of-analytics-big-data-bi-forecasts-and-market-estimates-2016/#b49033b49c5f

[2] IDC FutureScape: Worldwide Big Data and Analytics 2016 Predictions

Why Software Defined Infrastructure & why now..(1/6)

The ongoing digital transformation in key verticals like financial services, manufacturing, healthcare and telco has incumbent enterprises fending off a host of new market entrants. Enterprise IT’s best answer is to increase the pace of innovation as a way of driving increased differentiation in business processes. Though data analytics & automation remain the lynchpin of this approach, software defined infrastructure (SDI) built on the notions of cloud computing has emerged as the main infrastructure differentiator – and that for a host of reasons which we will discuss in this series.

Software Defined Infrastructure (SDI) is essentially an idea that brings together advances in a host of complementary areas spanning infrastructure software, data, as well as development environments. It supports a new way of building business applications. The core idea in SDI is that massively scalable applications (in support of diverse customer needs) describe their behavior characteristics (via configuration & APIs) to underlying datacenter infrastructure, which simply obeys those commands in an automated fashion while abstracting away the underlying complexities.

SDI as an architectural pattern was originally made popular by the web scale giants – the so-called FANG companies of tech — Facebook, Amazon, Netflix and Alphabet (the erstwhile Google) – but has gradually begun making its way into the enterprise world.

Common Business IT Challenges prior to SDI – 
  1. The cost of hardware infrastructure is typically growing at a high percentage every year as compared to growth in the total IT budget. Cost pressures are driving an overall relook at the different tiers across the IT landscape.
  2. Infrastructure is not yet completely under the control of the IT/application development teams, despite business realities that dictate rapid app development to meet changing business requirements.
  3. Even small, departmental level applications still need expensive proprietary stacks which are not only prohibitive in cost and deployment footprint but also take weeks to spin up in terms of provisioning cycles.
  4. Big box proprietary solutions are prompting a hard look at Open Source technologies, which are lean and easy to use with a lightweight deployment footprint. Apps need to dictate the footprint, not vendor provided containers.
  5. Concerns with acquiring developers who are tooled on cutting edge development frameworks & methodologies. You have zero developer mindshare with Big Box technologies.

Key characteristics of an SDI

  1. Applications built on a SDI can detect business events in realtime and respond dynamically by allocating additional resources in three key areas – compute, storage & network – based on the type of workloads being run.
  2. Using an SDI, application developers can seamlessly deploy apps while accessing higher level programming abstractions that allow for the rapid creation of business services (web, application, messaging, SOA/ Microservices tiers), user interfaces and a whole host of application elements.
  3. From a management standpoint, business application workloads are dynamically and automatically assigned to the available infrastructure (spanning public & private cloud resources) on the basis of application requirements and required SLAs, in a way that provides continuous optimization across the technology life cycle.
  4. The SDI itself optimizes the entire application deployment via both externally provisioned APIs & internal interfaces between the five essential pieces – Application, Compute, Storage, Network & Management.

The SDI automates the technology lifecycle –

Consider the typical tasks needed to create and deploy enterprise applications. This list includes but is not limited to –

  • onboarding hardware infrastructure,
  • setting up complicated network connectivity to firewalls, routers, switches etc,
  • making the hardware stack available for consumption by applications,
  • figure out storage requirements and provision those
  • guarantee multi-tenancy
  • application development
  • deployment,
  • monitoring
  • updates, failover & rollbacks
  • patching
  • security
  • compliance checking etc.
The promise of SDI is to automate all of this from a business, technology, developer & IT administrator standpoint.

SDI Reference Architecture –

The SDI encompasses SDC (Software Defined Compute), SDS (Software Defined Storage), SDN (Software Defined Networking), Software Defined Applications and Cloud Management Platforms (CMP) into one logical construct, as can be seen from the illustration below.

                      Illustration: The different tiers of Software Defined Infrastructure

The core of the software defined approach is APIs. APIs control the lifecycle of resources (request, approval, provisioning, orchestration & billing) as well as the applications deployed on them. The SDI implies commodity hardware (x86) & a cloud based approach to architecting the datacenter.
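
To illustrate the point that APIs control the resource lifecycle, the sketch below requests a compute instance from a hypothetical SDI/cloud-management REST endpoint and polls until it is active. The URL, fields and token are assumptions made purely for illustration – real SDI stacks (OpenStack, vSphere, the public clouds) expose their own, different APIs.

```python
# Minimal sketch of API-driven provisioning against a hypothetical SDI endpoint.
# All URLs, fields and credentials here are illustrative assumptions.
import time
import requests

SDI_API = "https://sdi.example.internal/api/v1"   # hypothetical endpoint
TOKEN = "replace-with-a-real-token"

def provision_vm(name: str, cpus: int, memory_gb: int) -> str:
    """Ask the SDI to create a compute instance and return its identifier."""
    resp = requests.post(
        f"{SDI_API}/compute/instances",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"name": name, "cpus": cpus, "memory_gb": memory_gb},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["instance_id"]

def wait_until_active(instance_id: str, poll_seconds: int = 5) -> None:
    """Poll the instance state until the platform reports it as ready."""
    while True:
        state = requests.get(
            f"{SDI_API}/compute/instances/{instance_id}",
            headers={"Authorization": f"Bearer {TOKEN}"},
            timeout=10,
        ).json()["state"]
        if state == "ACTIVE":
            return
        time.sleep(poll_seconds)

if __name__ == "__main__":
    vm_id = provision_vm("digital-app-node-1", cpus=4, memory_gb=16)
    wait_until_active(vm_id)
    print(f"{vm_id} is ready")
```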

The ten fundamental technology tenets of the SDI –

1. Highly elastic – scale up or scale down the gamut of infrastructure (compute – VM/Baremetal/Containers, storage – SAN/NAS/DAS, network – switches/routers/Firewalls etc) in near real time

2. Highly Automated – Given the scale & multi-tenancy requirements, automation is needed at all levels of the stack (development, deployment, monitoring and maintenance)

3. Low Cost – Oddly enough, the SDI operates at a lower CapEx and OpEx compared to the traditional datacenter due to reliance on open source technology & high degree of automation. Further workload consolidation only helps increase hardware utilization.

4. Standardization –  The SDI enforces standardization and homogenization of deployment runtimes, application stacks and development methodologies based on lines of business requirements. This solves a significant IT challenge that has hobbled innovation at large financial institutions.

5. Microservice based applications –  Applications developed for an SDI enabled infrastructure are developed as small, nimble processes that communicate via APIs and over infrastructure like messaging & service mediation components (e.g. Apache Kafka & Camel), as sketched after this list. This offers huge operational and development advantages over legacy applications. While one does not expect Core Banking applications to move over to a microservice model anytime soon, customer facing applications that need responsive digital UIs will definitely need to consider such approaches.

6. ‘Kind-of-Cloud’ Agnostic –  The SDI does not enforce the concept of private cloud, or rather it encompasses a range of deployment options – public, private and hybrid.

7. DevOps friendly –  The SDI enforces not just standardization and homogenization of deployment runtimes, application stacks and development methodologies but also enables a culture of continuous collaboration among developers, operations teams and business stakeholders i.e cross departmental innovation. The SDI is a natural container for workloads that are experimental in nature and can be updated/rolled-back/rolled forward incrementally based on changing business requirements. The SDI enables rapid deployment capabilities across the stack leading to faster time to market of business capabilities.

8. Data, Data & Data –  The heart of any successful technology implementation is Data. This includes customer data, transaction data, reference data, risk data, compliance data etc etc. The SDI provides a variety of tools that enable applications to process data in a batch, interactive, low latency manner depending on what the business requirements are.

9. Security –  The SDI shall provide robust perimeter defense as well as application level security with a strong focus on a Defense In Depth strategy.

10. Governance –  The SDI enforces strong governance requirements for capabilities ranging from ITSM requirements – workload orchestration, business policy enabled deployment, autosizing of workloads to change management, provisioning, billing, chargeback & application deployments.
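
Relating to tenet 5 above, the sketch below shows the kind of lightweight, event-driven communication such microservices rely on: one service publishing a domain event to a Kafka topic using the kafka-python client. The topic name and event structure are illustrative assumptions.

```python
# Minimal sketch: a microservice publishing a domain event over Kafka.
# Topic name and event structure are illustrative assumptions.
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers=["localhost:9092"],
    value_serializer=lambda event: json.dumps(event).encode("utf-8"),
)

def publish_customer_onboarded(customer_id: str) -> None:
    """Emit an event that other services (risk, CRM, analytics) can react to."""
    producer.send("customer-events", {"type": "CustomerOnboarded", "id": customer_id})
    producer.flush()

if __name__ == "__main__":
    publish_customer_onboarded("C-1001")
```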

The next & second blog in this series will cover the challenges in running massive scale applications.

My take on Gartner’s Top 10 Strategic Technology Trends for 2016

Gartner_top_2016

“Dream no small dreams for they have no power to move the hearts of men.” — Goethe

It is that time of the year again when the mavens at Gartner make their annual predictions regarding the top Strategic trends for the upcoming year. ‘Strategic’ here means an emerging technology trend that will impact long term business, thus influencing plans & budgets. As before, I will be offering up my own take on these while grounding the discussion in terms of the Social, Mobile, Big Data Analytics & Cloud (SMAC) stack that is driving the ongoing industry revolution.
  1. The Digital Mesh
    The rise of the machines has been well documented but enterprises are waking up to the possibilities only recently.  Massive data volumes are now being reliably generated from diverse sources of telemetry as well as endpoints at corporate offices (as a consequence of BYOD). The former devices include sensors used in manufacturing, personal fitness devices like FitBit, Home and Office energy management sensors, Smart cars, Geo-location devices etc. Couple these with the ever growing social media feeds, web clicks, server logs and more – one sees a clear trend forming which Gartner terms the Digital Mesh.  The Digital Mesh leads to an interconnected information deluge which encompasses classical IoT endpoints along with audio, video & social data streams. This leads to huge security challenges and opportunity from a business perspective  for forward looking enterprises (including Governments). Applications will need to combine these into one holistic picture of an entity – whether individual or institution. 
  2. Information of Everything
    The IoT era brings an explosion of data that flows across organizational, system and application boundaries. Look for advances in technology, especially in Big Data and Visualization, to help consumers harness this information in the right form, enriched with the right context. In the Information of Everything era, massive amounts of effort will thus be expended on data ingestion, quality and governance challenges.
  3. Ambient User Experiences
    Mobile applications first began forcing enterprises to support multiple channels of interaction with their consumers. For example, Banking now requires an ability to engage consumers in a seamless experience across an average of four to five channels – Mobile, eBanking, Call Center, Kiosk etc. The average enterprise user is familiar with BYOD in the age of self service. The Digital Mesh only exacerbates this gap in user experiences, as information consumers navigate applications and consume services across a mesh that is multi-channel and needs to provide Customer 360 across all these engagement points. Applications developed in 2016 and beyond must take an approach that ensures a smooth experience across the spectrum of endpoints and the platforms that span them from a Data Visualization standpoint.
  4. Autonomous Agents and Things

    Smart machines like robots, personal assistants like Apple Siri, and automated home equipment will rapidly evolve & become even smarter as their algorithms get more capable and gain a better understanding of their own environments. In addition, Big Data & Cloud computing will continue to mature and offer day to day capabilities around systems that employ machine learning to make predictions & decisions. We will see increased application of Smart Agents in diverse fields like financial services, healthcare, telecom and media.

  5. Advanced Machine Learning
    Most business problems are data challenges, and an approach centered around data analysis helps extract meaningful insights from data, thus helping the business. It is now common for enterprises to possess the capability to acquire, store and process large volumes of data using a low cost approach leveraging Big Data and Cloud Computing. At the same time the rapid maturation of scalable processing techniques allows us to extract richer insights from data. What we commonly refer to as Data Science – a combination of econometrics, machine learning, statistics, visualization, and computer science – extracts valuable business insights hiding in data and builds operational systems to deliver that value. Data Science has also evolved a new branch – “Deep Neural Nets” (DNN). DNNs are what make it possible for smart machines and agents to learn from data flows and to make the products that use them even more automated & powerful. Deep Machine Learning involves the art of discovering data insights in a human-like pattern. The web scale world (led by Google and Facebook) has been vocal about its use of Advanced Data Science techniques and the move of Data Science into Advanced Machine Learning (a toy sketch follows this list).
  6. 3D Printing Materials

    3D printing continues to evolve and advance across a wide variety of industries. 2015 saw a wider range of materials – including carbon fiber, glass, nickel alloys and electronics – used in the 3D printing process. More and more industries continue to incorporate the printing and assembly of composite parts constructed using such materials – prominent examples include Tesla and SpaceX. We are at the beginning of a 20 year revolution which will lead to sea changes in industrial automation.

  7. Adaptive Security
    A cursory study of the top data breaches in 2015 reads like a “Who’s Who” of actors in society across Governments, Banks, Retail establishments etc. The enterprise world now understands that a comprehensive & strategic approach to Cybersecurity has progressed from being an IT challenge a few years ago to a business imperative. As Digital and IoT ecosystems evolve into loose federations of API accessible and cloud native applications, more and more assets are in danger of being targeted by extremely well funded and sophisticated adversaries. For instance – it is an obvious truth that data from millions of IoT endpoints requires data ingest & processing at scale. The challenge from a security perspective is multilayered and arises not just from malicious actors but also from a lack of a holistic approach that combines security with data governance, audit trails and quality attributes. Traditional solutions cannot handle this challenge, which is exacerbated by the expectation that in an IoT & Digital Mesh world, data flows will be multidirectional across a grid of application endpoints. Expect to find applications in 2016 and beyond incorporating Deep Learning and Real Time Analytics into their core security design with a view to analyzing large scale data at very low latency.
  8. Advanced System Architecture
    The advent of the digital mesh and ecosystem technologies like autonomous agents (powered by Deep Neural Nets) will make increasing demands on computing architectures from a power consumption, system intelligence as well as a form factor perspective. The key is to provide increased performance while mimicking neuro-biological architectures. The name given to this style of building electronic circuits is neuromorphic computing. System designers will have increased choice in terms of using field programmable gate arrays (FPGAs) or graphics processing units (GPUs). While both FPGAs and GPUs have their pros and cons, devices & computing architectures using these as a foundation are well suited to deep learning and other pattern matching algorithms leveraged by advanced machine learning. Look for further reductions in form factors at lower power consumption while allowing advanced intelligence in the IoT endpoint ecosystem.
  9. Mesh App and Service Architecture
    The micro services architecture approach, which combines the notion of autonomous, cooperative yet loosely coupled applications built as a conglomeration of business focused services, is a natural fit for the Digital Mesh. The most important additive consideration for micro services based architectures in the age of the Digital Mesh is what I’d like to term Analytics Everywhere. Applications in 2016 and beyond will need to recognize that Analytics are pervasive, relentless, realtime and thus embedded into our daily lives. Every interaction a user has with a micro services based application will need a predictive capability built into the application architecture itself. Thus, 2016 will be the year when Big Data techniques are no longer the preserve of classical Information Management teams but move to the umbrella Application development area which encompasses the DevOps and Continuous Integration & Delivery (CI-CD) spheres.

  10. IoT Architecture and Platforms
    There is no doubt in anyone’s mind that IoT (Internet Of Things) is a technology megatrend that will reshape enterprises, government and citizens for years to come. IoT platforms will complement Mesh Apps and Service Architectures with a common set of platform capabilities built around open communication, security, scalability & performance requirements. These will form the basic components of IoT infrastructure including but not limited to machine to machine interfaces, location based technology, micro controllers, sensors, actuators and the communication protocols (based on an all IP standard).
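
Coming back to trend 5 (Advanced Machine Learning) above, the snippet below is a toy scikit-learn sketch of the supervised train/predict loop that underpins it – a small multi-layer perceptron on synthetic data rather than a true deep neural net, included purely to make the workflow concrete.

```python
# Toy sketch of the supervised learning loop behind "Advanced Machine Learning".
# Uses a small multi-layer perceptron on synthetic data; not a production model.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# Synthetic "customer interaction" features with a binary outcome label.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=42)
model.fit(X_train, y_train)

predictions = model.predict(X_test)
print("holdout accuracy:", accuracy_score(y_test, predictions))
```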


The Final Word –

One feels strongly that  Open Source will drive the various layers that make up the Digital Mesh stack (Big Data, Operating Systems, Middleware, Advanced Machine Learning & BPM). IoT will be a key part of Digital Transformation initiatives.

However, the challenge of developing Vertical capabilities on these IoT platforms is threefold – specifically in the areas of augmenting micro services based Digital Mesh applications, which are largely lacking at the time of writing:

  • Data Ingest in batch or near realtime (NRT) or realtime from dynamically changing, disparate and physically distributed sensors, machines, geo location devices, clickstreams, files, and social feeds via highly secure lightweight agents
  • Provide secure data transfer using point-to-point and bidirectional data flows in real time
  • Curate these flows with Simple Event Processing (SEP) capabilities via tracing, parsing, filtering, joining, transforming, forking or cloning of data flows while adding business context to these flows. As mobile clients, IoT applications, social media feeds etc are being brought onboard into existing applications from an analytics perspective, traditional IT operations face pressures from both business and development teams to provide new and innovative services.

The creation of these smart services will further depend on the vertical industries that these products serve as well as requirements for the platforms that host them. E.g industrial automation, remote healthcare, public transportation, connected cars, home automation etc.

Finally, 2016 also throws up some interesting questions around Cyber Security, namely –

a. Can an efficient Cybersecurity posture be a lasting source of competitive advantage?
b. Given that most breaches are long running in nature, with systems slowly compromised over months, how does one leverage Big Data and Predictive Modeling to rewire and re-architect creaky defenses?
c. Most importantly, how can applications implement security in a manner that constantly adapts and learns?

If there were just a couple of sentences to sum up Gartner’s forecast for 2016 in a succinct manner, it would be: “The emergence of the Digital Mesh & the rapid maturation of IoT will serve to accelerate business transformation across industry verticals. The winning enterprises will begin to make smart technology investments in Big Data, DevOps & Cloud practices to harness these changes.”

Design & Architecture of a Next Gen Market Surveillance System..(2/2)

This article is the final installment in a two part series that covers one of the most critical issues facing the financial industry – Investor & Market Integrity Protection via Global Market Surveillance. While the first (and previous) post discussed the scope of the problem across multiple global jurisdictions, this post will discuss a candidate Big Data & Cloud Computing Architecture that can help market participants (especially the front line regulators – the Stock Exchanges themselves) & SROs (Self Regulatory Organizations) implement these capabilities in their applications & platforms.

Business Background –

The first article in this two part series laid out the five business trends that are causing a need to rethink existing Global & Cross Asset Surveillance based systems.

To recap them below –

  1. The rise of trade lifecycle automation across the Capital Markets value chain and the increasing use of technology across the lifecycle contribute to an environment where huge numbers of securities change hands (in huge quantities) in milliseconds across 25+ global venues of trading; automation leads to an increase in trading volumes which adds substantially to the risk of fraud
  2. The presence of multiple avenues of trading (ATF – alternative trading facilities and MTF – multilateral trading facilities) creates opportunities for information and price arbitrage that were never a huge problem before – multiple markets and multiple products across multiple geographies with different regulatory requirements. This has been covered in a previous post in this blog at –
    http://www.vamsitalkstech.com/?p=412
  3. As a natural consequence of all of the above – the globalization of trading where market participants are spread across multiple geographies – it becomes all the more difficult to provide a consolidated audit trail (CAT) that views all activity under a single source of truth, as well as traceability of orders across those venues; this is extremely key as fraud is becoming increasingly sophisticated, e.g. the rise of insider trading rings
  4. Existing application (e.g. ticker plants, surveillance systems, DevOps) architectures are becoming brittle and underperforming as data and transaction volumes continue to go up & data storage requirements keep rising every year. This leads to massive gaps in compliance data. Another significant gap is found while performing a range of post trade analytics – many of which are beyond the simple business rules being leveraged right now and increasingly need to move into the machine learning & predictive domain. Surveillance now needs to include non traditional sources of data, e.g. trader email/chat/link analysis etc, that can point to under-the-radar rogue trading activity before it causes the financial system huge losses – e.g. the London Whale, the LIBOR fixing scandal etc
  5. Again as a consequence of increased automation, backtesting of data has become a challenge – as well as being able to replay data across historical intervals. This is key in mining for patterns of suspicious activity like bursty spikes in trading as well as certain patterns that could indicate illegal insider selling

The key issue becomes – how do antiquated surveillance systems move into the era of Cloud & Big Data enabled innovation as a way of overcoming these business challenges?

Technology Requirements –

An intelligent surveillance system needs to store trade data, reference data, order data, and market data, as well as all of the relevant communications from all the disparate systems, both internally and externally, and then match these things appropriately. The system needs to account for multiple levels of detection capabilities starting with a) configuring business rules (that describe a fraud pattern) as well as b) dynamic capabilities based on machine learning models (typically thought of as being more predictive). Such a system also needs to parallelize execution at scale to be able to meet demanding latency requirements for a market surveillance platform.

The most important technical essentials for such a system are –

  1. Support end to end monitoring across a variety of financial instruments across multiple venues of trading. Support a wide variety of analytics that enable the discovery of interrelationships between customers, traders & trades as the next major advance in surveillance technology.
  2. Provide a platform that can ingest from tens of millions to billions of market events (spanning a range of financial instruments – Equities, Bonds, Forex, Commodities and Derivatives etc) on a daily basis from thousands of institutional market participants
  3. The ability to add new business rules (via either a business rules engine and/or a model based system that supports machine learning) is a key requirement. As we can see from the first post, market manipulation is an activity that seems to constantly push the boundaries in new and unforeseen ways
  4. Provide advanced visualization techniques thus helping Compliance and Surveillance officers manage the information overload.
  5. The ability to perform deep cross-market analysis, i.e. to be able to look at financial instruments & securities trading across multiple geographies and exchanges
  6. The ability to create views and correlate data that are both wide and deep. A wide view will look at related securities across multiple venues; a deep view will look for a range of illegal behaviors that threaten market integrity such as market manipulation, insider trading, watch/restricted list trading and unusual pricing.
  7. The ability to provide in-memory caches of data  for rapid pre-trade compliance checks.
  8. Ability to create prebuilt analytical models and algorithms that pertain to trading strategy (pre-trade models, e.g. best execution and analysis). The most popular way to link R and Hadoop is to use HDFS as the long-term store for all data, and use MapReduce jobs (potentially submitted from Hive or Pig) to encode, enrich, and sample data sets from HDFS into R (a Python analogue of this pattern is sketched after this list).
  9. Provide Data Scientists and Quants with development interfaces using tools like SAS and R.
  10. The results of the processing and queries need to be exported in various data formats, a simple CSV/txt format or more optimized binary formats, JSON formats, or even into custom formats.  The results will be in the form of standard relational DB data types (e.g. String, Date, Numeric, Boolean).
  11. Based on back testing and simulation, analysts should be able to tweak the model and also allow subscribers (typically compliance personnel) of the platform to customize their execution models.
  12. A wide range of Analytical tools need to be integrated that allow the best dashboards and visualizations.
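
Requirement 8 above describes the common pattern of keeping all data in HDFS and pulling encoded, enriched, sampled data sets out for model development. The sketch below is a minimal PySpark analogue of that pattern rather than the R/Hadoop toolchain itself; the HDFS paths, column names and the toy enrichment rule are illustrative assumptions.

```python
# Minimal sketch of sampling trade data held in HDFS for model development.
# The HDFS paths and column names are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("surveillance-sampling-sketch").getOrCreate()

trades = spark.read.parquet("hdfs:///data/trades/2016/")   # hypothetical path

# Enrich: flag unusually large orders relative to a simple notional threshold.
enriched = trades.withColumn(
    "large_order", (F.col("quantity") * F.col("price") > 1_000_000).cast("int")
)

# Draw a 1% sample for interactive model development (e.g. in R, SAS or Python).
sample = enriched.sample(fraction=0.01, seed=7)
sample.write.mode("overwrite").parquet("hdfs:///analytics/trade_samples/")

spark.stop()
```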

Application & Data Architecture –

The dramatic technology advances in Big Data & Cloud Computing enable the realization of the above requirements. Big Data is dramatically changing the surveillance approach with advanced analytic solutions that are powerful and fast enough to detect fraud in real time and also to build models based on historical data (and deep learning) to proactively identify risks.

To enumerate the various advantages of using Big Data  –

a) Real time insights –  Generate insights at a latency of a few milliseconds
b) A Single View of Customer/Trade/Transaction 
c) Loosely coupled yet Cloud Ready Architecture
d) Highly Scalable yet Cost effective

There are strong technology reasons why Hadoop is emerging as the best choice for fraud detection. From a component perspective, Hadoop supports multiple ways of running models and algorithms that are used to find patterns of fraud and anomalies in the data to predict customer behavior. Examples include Bayesian filters, Clustering, Regression Analysis, Neural Networks etc. Data Scientists & Business Analysts have a choice of MapReduce, Spark (via Java, Python, R), Storm and SAS, to name a few, to create these models. Fraud model development, testing and deployment on fresh & historical data become very straightforward to implement on Hadoop. The last few releases of enterprise Hadoop distributions (e.g. Hortonworks Data Platform) have seen huge advances from a Governance, Security and Monitoring perspective.

A shared data repository called a Data Lake is created that can capture every order creation, modification, cancelation and ultimate execution across all exchanges. This lake provides more visibility into all data related to intra-day trading activities. The trading risk group accesses this shared data lake to process more position, execution and balance data. This analysis can be performed on fresh data from the current workday or on historical data, and it is available for at least five years – much longer than before. Moreover, Hadoop enables ingest of data from recent acquisitions despite disparate data definitions and infrastructures. All the data that pertains to trade decisions and the trade lifecycle needs to be made resident in a general enterprise storage pool that runs on HDFS (Hadoop Distributed Filesystem) or a similar Cloud based filesystem. This repository is augmented by incremental feeds of intra-day trading activity data that will be streamed in using technologies like Sqoop, Kafka and Storm.

The above business requirements can be accomplished leveraging the many different technology paradigms in the Hadoop Data Platform. These include technologies such as enterprise grade message broker – Kafka, in-memory data processing via Spark & Storm etc.


                  Illustration :  Candidate Architecture  for a Market Surveillance Platform 

The overall logical flow in the system –

  • Information sources are depicted at the left. These encompass a variety of institutional, system and human actors potentially sending thousands of real time messages per second or sending over batch feeds.
  • A highly scalable messaging system helps bring these feeds into the architecture as well as normalize them and send them in for further processing. Apache Kafka is chosen for this tier. Realtime data is published by the upstream trade processing systems over Kafka queues. Each of the transactions has hundreds of attributes that can be analyzed in real time to detect patterns of usage. We leverage Kafka integration with Apache Storm to read messages one at a time and persist the data into an HBase cluster. In a modern data architecture built on Apache Hadoop, Kafka (a fast, scalable and durable message broker) works in combination with Storm, HBase (and Spark) for real-time analysis and rendering of streaming data.
  • Trade data is thus streamed into the platform (on a T+1 basis), which ingests, collects, transforms and analyzes core information in real time. The analysis can be both simple and complex event processing, based on pre-existing rules that can be defined in a rules engine, which is invoked from Storm (a minimal rule-checking consumer is sketched after this list). A Complex Event Processing (CEP) tier can process these feeds at scale to understand relationships among them, where the relationships among these events are defined by business owners in a non-technical language or by developers in a technical one. Apache Storm integrates with Kafka to process incoming data. The Storm architecture is covered briefly in the below section.
  • HBase provides near real-time, random read and write access to tables (or ‘maps’) storing billions of rows and millions of columns. In this case, once we store this rapidly and continuously growing dataset from the information producers, we are able to perform super fast lookups for analytics irrespective of the data size.
  • Data that has analytic relevance and needs to be kept for offline or batch processing can be handled using the storage platform based on the Hadoop Distributed Filesystem (HDFS) or Amazon S3. The idea is to deploy Hadoop oriented workloads (MapReduce, or Machine Learning) to understand trading patterns as they occur over a period of time. Historical data can be fed into the Machine Learning models created above and commingled with streaming data as discussed in the earlier steps.
  • Horizontal scale-out (read Cloud based IaaS) is preferred as a deployment approach as this helps the architecture scale linearly as the loads placed on the system increase over time. This approach enables the Market Surveillance engine to distribute the load dynamically across a cluster of cloud based servers based on trade data volumes.
  • To take an incremental approach to building the system, all data first resides in a general enterprise storage pool, which makes it accessible to many analytical workloads including Trade Surveillance, Risk, Compliance, etc. A shared data repository across multiple lines of business provides more visibility into all intra-day trading activities. Data can also be fed into downstream systems in a seamless manner using technologies like Sqoop, Kafka and Storm. The results of the processing and queries can be exported in various data formats – a simple CSV/txt format, more optimized binary formats, JSON formats – or you can plug in a custom SerDe for custom formats. Additionally, with Hive or HBase, data within HDFS can be queried via standard SQL using JDBC or ODBC. The results will be in the form of standard relational DB data types (e.g. String, Date, Numeric, Boolean). Finally, REST APIs in HDP natively support both JSON and XML output by default.
  • Operational data across a bunch of asset classes, risk types and geographies is thus available to risk analysts during the entire trading window when markets are still open, enabling them to reduce risk of that day’s trading activities. The specific advantages to this approach are two-fold: Existing architectures typically are only able to hold a limited set of asset classes within a given system. This means that the data is only assembled for risk processing at the end of the day. In addition, historical data is often not available in sufficient detail. HDP accelerates a firm’s speed-to-analytics and also extends its data retention timeline
  • Apache Atlas is used to provide governance capabilities in the platform that use both prescriptive and forensic models, which are enriched by a given business’s data taxonomy and metadata. This allows for tagging of trade data across the different business data views, which is a key requirement for good data governance and reporting. Atlas also provides audit trail management as data is processed in a pipeline in the lake
  • Another important capability that Hadoop can provide is the establishment and adoption of a lightweight entity ID service – which aids dramatically in the holistic viewing & audit tracking of trades. The service will consist of entity assignment for both institutional and individual traders. The goal here is to get each target institution to propagate the Entity ID back into their trade booking and execution systems, then transaction data will flow into the lake with this ID attached providing a way to do Customer & Trade 360.
  • Output data elements can be written out to HDFS and managed by HBase. From here, reports and visualizations can easily be constructed. One can optionally layer in search and/or workflow engines to present the right data to the right business user at the right time.
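
To make the rules-driven detection step in the flow above concrete, here is a deliberately simplified sketch: a kafka-python consumer reads trade events, applies one toy surveillance rule (an order-size spike), and persists flagged events to HBase via the happybase client. The topic name, table, column family and threshold are illustrative assumptions; a production deployment would run such logic inside a CEP/rules engine invoked from Storm rather than in a standalone Python loop.

```python
# Simplified sketch: consume trade events, apply one toy surveillance rule,
# and persist flagged events to HBase. Names and thresholds are assumptions.
import json
from kafka import KafkaConsumer
import happybase

ORDER_SIZE_THRESHOLD = 100_000   # toy rule: flag unusually large orders

consumer = KafkaConsumer(
    "trade-events",                              # hypothetical topic
    bootstrap_servers=["localhost:9092"],
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

hbase = happybase.Connection("localhost")
alerts = hbase.table("surveillance_alerts")      # hypothetical table

def breaks_rule(trade: dict) -> bool:
    """One illustrative rule; real systems evaluate many rules and models."""
    return trade.get("quantity", 0) > ORDER_SIZE_THRESHOLD

for message in consumer:
    trade = message.value
    if breaks_rule(trade):
        row_key = f"{trade['symbol']}|{trade['order_id']}".encode("utf-8")
        alerts.put(row_key, {
            b"details:payload": json.dumps(trade).encode("utf-8"),
            b"details:rule": b"order_size_spike",
        })
```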

The Final Word [1] –

We have discussed FINRA as an example of a forward looking organization that has been quite vocal about their usage of Big Data. So how successful has this approach been for them?

The benefits Finra has seen from big data and cloud technologies prompted the independent regulator to use those technologies as the basis for its proposal to build the Consolidated Audit Trail, the massive database project intended to enable the SEC to monitor markets in a high-frequency world. Over the summer, the number of bids to build the CAT was narrowed down to six in a second round of cuts. (The first round of cuts brought the number to 10 from more than 30.) The proposal that Finra has submitted together with the Depository Trust and Clearing Corporation (DTCC) is still in contention. Most of the bids to build and run the CAT for five years are in the range of $250 million, and Finra’s use of AWS and Hadoop makes its proposal the most cost-effective, Randich says.

References –

[1] http://www.fiercefinanceit.com/story/finra-leverages-cloud-and-hadoop-its-consolidated-audit-trail-proposal/2014-10-16