Why Legacy Monolithic Architectures Won’t Work For Digital Platforms..

As times change, so do architectural paradigms in software development. For more than fifteen years, while the industry has been developing large scale JEE/.NET applications, the three-tier architecture has been the dominant design pattern. However, as enterprises embark on or continue their Digital Journey, they are facing a new set of business challenges which demand fresh technology approaches. We have looked into transformative data architectures in considerable depth on this blog; let us now consider a rethink of the applications themselves. Applications that were earlier deemed sufficiently well-architected are now termed monolithic. This post focuses solely on the underpinnings of why legacy architectures will not work in the new software-defined world. My intention is not to criticize a model (the three-tier monolith) that has worked well in the past, but to reason about why it may be time for a newer, increasingly well-accepted paradigm.

Traditional Software Platform Architectures… 

Digital applications support a wider variety of frontends & channels, need to accommodate larger volumes of users, and need to expose APIs to a wider range of business actors – partners, suppliers et al. Finally, these new age applications need to work with unstructured data formats (as opposed to the strictly structured relational format). From an operations standpoint, there is a strong need for a higher degree of automation in the datacenter. All of these requirements call for agility as the most important construct in the enterprise architecture.

As we will discuss, legacy applications (typically defined as those created more than five years ago) are beginning to emerge as one of the key obstacles to Digital initiatives. The issue lies not just in the underlying architectures themselves but also in the development culture involved in building and maintaining such applications.

Consider the vast majority of applications deployed in enterprise data centers. These applications deliver collections of very specific business functions – e.g. onboarding new customers, provisioning services, processing payments etc. Whatever the choice of vendor application platform, the vast majority of existing enterprise applications & platforms essentially follow a traditional three-tier software architecture with a specific separation of concerns at each tier (as the vastly simplified illustration below depicts).

Traditional three-tier Monolithic Application Architecture

The first tier is the Presentation tier, depicted at the top of the diagram. Its job is to deliver the user experience: the user interface components that drive the overall web application flow for the various clients and render the UI itself. A variety of UI frameworks that provide both flow and UI rendering are typically used here – Spring MVC, Apache Struts, HTML5, AngularJS et al.

The middle tier is the Business logic tier, where all the business logic for the application is centralized and separated from the user interface layer. The business logic is usually a mix of objects and business rules written in Java using frameworks such as EJB3, Spring etc. The business logic is housed in an application server such as JBoss AS, Oracle WebLogic AS or IBM WebSphere AS, which provides enterprise services (such as caching, resource pooling, naming and identity services et al) to the business components running on it. This layer also contains data access logic and initiates transactions against a range of supporting systems – message queues, transaction monitors, rules and workflow engines, ESB (Enterprise Service Bus) based integration, partner systems accessed via web services, identity and access management systems et al.

The Data tier is where traditional databases and enterprise integration systems logically reside. The RDBMS rules this tier in three-tier architectures, & the data access code is typically written using an ORM (Object Relational Mapping) framework such as Hibernate or iBatis, or as plain JDBC code.

Across all of these layers, common utilities & agents are provided to address cross-cutting concerns such as logging, monitoring, security, single sign-on etc.

The application is packaged as an enterprise archive (EAR), which can be composed of a single WAR/JAR file or many of them. While most enterprise-grade applications are neatly packaged, the total package is typically compiled as a single collection of modules and then shipped as one artifact. It bears mentioning that dependency & version management can be a painstaking exercise for complex applications.

Let us consider the typical deployment process and setup for a three-tier application.

From a deployment standpoint, static content is typically served from an Apache webserver which fronts a Java-based webserver (mostly Tomcat) and then a cluster of backend Java-based application servers running multiple instances of the application for High Availability. In most implementations the application is stateful (stateless in some cases). The rest of the setup, with firewalls and other supporting systems, is fairly standard.

While the above architectural template is fairly standard across industry applications built on Java EE, there are some very valid reasons why it has begun to emerge as an anti-pattern when applied to digital applications.

Challenges involved in developing and maintaining Monolithic Applications …

Let us consider what Digital business use cases demand of application architecture and where the monolith falls short in satisfying those demands.

  1. The entire application is typically packaged as a single enterprise archive (EAR file), which is a combination of various WAR and JAR files. While this certainly makes deployment easier, given that there is only one executable to copy over, it makes the development lifecycle a nightmare: even a simple change in the user interface can force a rebuild of the entire executable. This results not just in long cycles but also makes life extremely hard for teams that span various disciplines from the business to QA.
  2. What follows from such long “code-test & deploy” cycles is that the architecture becomes change-resistant, the code grows very complex over time, and the system as a whole becomes anything but agile in responding to rapidly changing business requirements.
  3. Developers are constrained in multiple ways. Firstly, the architecture becomes very complex over time, which inhibits quick onboarding of new developers. Secondly, the architecture force-fits developers from different teams into working in lockstep, forgoing their autonomy in terms of planning and release cycles. Services across tiers are not independently deployable, which leads to big-bang releases in short windows of time. It is thus no surprise that failures and rollbacks happen at an alarming rate.
  4. From an infrastructure standpoint, the application is tightly coupled to the underlying hardware. From a software clustering standpoint, the application scales better vertically while also supporting limited horizontal scale-out. As volumes of customer traffic increase, performance across clusters can degrade.
  5. The Applications are neither designed nor tested to operate gracefully under failure conditions. This is a key point which does not really get that much attention during design time but causes performance headaches later on.
  6. An important point is that Digital applications & their parts are beginning to be created using a variety of languages such as Java, Scala, and Groovy. The Monolith essentially limits such a choice of languages, frameworks, platforms and even databases.
  7. The Architecture does not natively support the notion of API externalization or Continuous Integration and Delivery (CI/CD).
  8. As highlighted above, the architecture primarily supports the relational model. If you need to accommodate alternative data approaches such as NoSQL or Hadoop, you are largely out of luck.

Operational challenges involved in running a Monolithic Application…

The difficulties in running a range of monolithic applications across an operational infrastructure have already been summed up in the other posts on this blog.

The primary issues include –

  1. The Monolithic architecture typically dictates a vertical scaling model, which imposes limits on scalability as user volumes increase. The typical traditional approach to ameliorate this has been to invest in multiple sets of hardware (servers, storage arrays) to physically separate applications, which results in increased running costs, higher personnel requirements and manual processes around system patching and maintenance.
  2. Capacity management tends to be a bit of a challenge as many fine-grained components compete for compute, network and storage resources (vCPU, vRAM, virtual network etc) while essentially running on a single JVM. A lot of JVM tuning is needed from a test and pre-production standpoint.
  3. The range of functions that need to be performed around monolithic applications lacks any kind of policy-driven workload and scheduling capability. This is because the application does very little to drive the infrastructure.
  4. The vast majority of the work needed to provision, schedule and patch these applications is done by system administrators; consequently, automation is minimal at best.
  5. The same is true in Operations Management. Functions like log administration, other housekeeping, monitoring, auditing, app deployment, and rollback are largely manual, with some scripting.

Conclusion…

It deserves mention that the above Monolithic design pattern will work well for Departmental (low user volume) applications which have limited business impact, and for applications serving a well-defined user base with well-delineated workstreams. The next blog post will consider the microservices way of building new age architectures. We will introduce and discuss Cloud Native Application development, which has been popularized across web-scale enterprises, especially Netflix. We will also discuss how this new paradigm overcomes many of the above-discussed limitations from both a development and operations standpoint.

Kubernetes – Container Orchestration for the Software Defined Data Center (SDDC)..(5/7)

The fourth and previous blog in this seven part series on Software Defined Datacenters (@ http://www.vamsitalkstech.com/?p=5010) discussed how Linux Containers & Docker are emerging as a key component of digital applications. We looked at various drivers & challenges stemming from running Containerized Applications from both a development and IT operations standpoint. In the fifth blog in this series, we will discuss another key emergent technology – Google’s Kubernetes (k8s) – which acts as the foundational runtime orchestrator for large scale container infrastructure. We will take the discussion higher up the stack in the next blog with OpenShift – Red Hat’s PaaS (Platform as a Service) platform – which includes Kubernetes and provides a powerful, agile & polyglot environment to build and manage microservices based applications.

The Importance of Container Orchestration… 

With Cloud Native application development emerging as the key trend in Digital platforms, containers offer a natural choice for a variety of reasons within the development process. In a nutshell, containers are changing the way applications are architected, designed, developed, packaged, delivered and managed. That is why Container Orchestration has become a critical “must have”: for enterprises to derive tangible business value, they must be able to run containerized applications at large scale.

Why Linux Containers and Docker are the Runtime for the Software Defined Data Center (SDDC)..(4/7)

While containers have existed in Unix based operating systems such as Solaris and FreeBSD, pioneering work in the Linux OS community has led to the mainstreaming of this disruptive technology. Now, despite all the benefits afforded to both developers and IT Operations by containers, there are critical considerations involved in running containers at scale in complex n-tier real world applications across multiple datacenters.

What are some of the key considerations in running containers at scale –

Consideration #1 – You need a Model/Paradigm/Platform for the lifecycle management of containers – 

This includes the ability to organize applications into groups of containers, schedule those applications on host servers that match their resource requirements, deploy applications as changes happen, manage complex storage integration and network topologies, and provide seamless ways to destroy, restart and redeploy containers.

Consideration #2 – Administrative Lifecycle Management  –

This covers a range of lifecycle processes, from constant deployments to upgrades to monitoring. Granular issues include support for application patching with minimal downtime, support for canary releases, graceful failure handling in cloud-native applications, and (container) capacity scale-up & scale-down based on traffic patterns.

Consideration #3 – Support development processes moving to DevOps and microservices –

These reasons range from rapid feature development to the ability to easily accommodate CI/CD approaches and overall flexibility (as highlighted in the above point). For instance, k8s removes one of the biggest challenges with using vanilla containers along with CI/CD tools like Jenkins – the challenge of linking individual containers that run microservices with one another. Other useful features include load balancing, service discovery, rolling updates and blue/green deployments.

While the above drivers are just general guidelines, the actual tipping point for large scale container adoption will vary from enterprise to enterprise. However, the common precursor to supporting containerized applications at scale has to be an enterprise grade management and orchestration platform. And for some very concrete reasons we will discuss, k8s is fast emerging as the de facto leader in this segment.

Introducing Kubernetes (K8s)…

Kubernetes (kube or k8s) is an open-source platform that aims to automate the scheduling, deployment and management of applications running in containers. Kubernetes (and the platforms built by leveraging it) is designed to bring development and operations teams together. This affects how Cloud Native applications are architected, composed, deployed, and managed.

k8s was incubated at Google, which has a decade of expertise in running billions of container workloads at scale. One caveat: the famous cluster controller & container management system known as Borg is deployed extensively at Google. Borg is a predecessor to k8s, and it is generally believed that while k8s borrows its core design tenets from Borg, it contains only a subset of the features present in Borg. [4]

Again, from [4] – “Kubernetes traces its lineage directly from Borg. Many of the developers at Google working on Kubernetes were formerly developers on the Borg project. We’ve incorporated the best ideas from Borg in Kubernetes, and have tried to address some pain points that users identified with Borg over the years.”

However, k8s is not a Google-only project anymore. In 2015 it was donated to the Cloud Native Computing Foundation (CNCF), and the same year saw the foundational k8s 1.0 release. Since then the project has been moving with a fair degree of feature & release velocity; version 1.4 was released in 2016. With the current 1.7 release, k8s has found wider industry adoption. The last year has seen heavy contributions from the likes of Red Hat, Microsoft, Mirantis, and Fujitsu et al to the k8s codebase.

k8s is infrastructure agnostic, with clusters deployable on pretty much any Linux distribution – Red Hat, CentOS, Debian, Ubuntu etc. K8s also runs on all popular cloud platforms – AWS, Azure and Google Cloud. It is also virtually hypervisor agnostic, supporting VMware, KVM, and libvirt. It supports Docker, Windows Containers and rkt (rocket) runtimes, with expanding support as newer runtimes become available. [3]

After this brief preamble, let us now discuss the architecture and internals of this exciting technology. We will then discuss why it has begun to garner massive adoption and why it deserves a much closer look by enterprise IT teams.

The Architecture of Kubernetes…

As depicted in the below diagram, Kubernetes (k8s) follows a master-slave architecture, much like Apache Mesos and Apache Hadoop.

Kubernetes Architecture

The k8s Master is the control plane of the architecture. It is responsible for scheduling deployments, acting as the gateway for the API, and for overall cluster management. As depicted in the below illustration, it consists of several components, such as an API server, a scheduler, and a controller manager. The master is responsible for the global, cluster-level scheduling of pods and handling of events. For high availability and load balancing, multiple masters can be set up. The core API server, which runs in the master, hosts a RESTful service that can be queried to maintain the desired state of the cluster and to manage workloads. The admin path always goes through the Master and never directly to the worker nodes; management functionality only accesses the master to initiate changes in the cluster. The Scheduler service is used to schedule workloads on containers running on the worker nodes, and it works in conjunction with the API server to distribute applications across groups of containers working on the cluster.

The second primitive in the architecture is the concept of a Node. A node refers to a host, which may be virtual or physical. The node is the worker in the architecture and runs application stack components on what are called Pods. Each node runs several Kubernetes components, such as a kubelet and a kube-proxy. The kubelet is an agent process that starts and stops groups of containers running user applications, manages images etc and communicates with the Docker engine. The kube-proxy is a proxy networking service that redirects traffic to specific services and pods (we will define these terms in a bit). Both these agents communicate with the Master via the API server.

Nodes (which are VMs or bare metal servers) are joined together to form Clusters. As the name connotes, Clusters are a pool of resources – compute, storage and networking – that are used by the Master to run application components. Nodes, which used to be known as minions in prior releases, are the workers. Nodes host end user applications using their local resources such as compute, network and storage. Thus they include components to aid in logging, service discovery etc. Most of the administrative and control interactions are done via the kubectl script or by performing RESTful calls to the API server. The state of the cluster and the workloads running on it is constantly synchronized with the Master using all these components.  Clusters can be easily made highly available and scaled up/down on demand. They can also be federated across cloud providers and data centers if a hybrid architecture is so desired.
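As a concrete illustration, the same API-server interactions that kubectl performs can be scripted. Below is a minimal sketch using the official Kubernetes Python client (the `kubernetes` package); it assumes a working kubeconfig and is illustrative only.

```python
from kubernetes import client, config

config.load_kube_config()        # reads ~/.kube/config; use load_incluster_config() inside a pod
core_v1 = client.CoreV1Api()     # client for the core (v1) API group served by the master

# List the worker nodes registered with the master.
for node in core_v1.list_node().items:
    print("node:", node.metadata.name)

# List all pods across namespaces - roughly what `kubectl get pods --all-namespaces` shows.
for pod in core_v1.list_pod_for_all_namespaces().items:
    print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)
```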

The next and perhaps the most important runtime abstraction in k8s is called a Pod. It is recommended that applications deployed in a K8s infrastructure be composed of lightweight and stateless microservices. These microservices can be deployed in individual or multiple containers. If the former strategy is chosen, each container only performs a specialized business activity. Though k8s also supports stateful applications, stateless applications confer a variety of benefits including loose coupling, auto-scaling etc.

The Pod is essentially the unit of infrastructure that runs an application or a set of related applications, and as such it always exists in the context of a set of Linux namespaces and cgroups. A Pod is a group of one or more containers which always run on the same host. They are always scheduled together and share a common IP address/port configuration. However, these IP assignments cannot be guaranteed to stay the same over time, which can lead to all kinds of communication issues in complex n-tier applications. Kubernetes therefore provides an abstraction called a Service – a grouping of a set of pods mapped to a common IP address.
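To make the Pod and Service abstractions concrete, here is a hedged sketch using the Python client; the image, names, labels and ports are illustrative assumptions, not a prescribed deployment.

```python
from kubernetes import client, config

config.load_kube_config()
core_v1 = client.CoreV1Api()

# A single-container Pod, labelled so that a Service can select it.
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="mongodb-0",
                                 labels={"app": "mongodb", "cluster": "eu3"}),
    spec=client.V1PodSpec(containers=[
        client.V1Container(name="mongodb", image="mongo:3.4",
                           ports=[client.V1ContainerPort(container_port=27017)])
    ]),
)
core_v1.create_namespaced_pod(namespace="default", body=pod)

# A Service: a stable virtual IP/port that maps to every pod matching the selector,
# insulating clients from the fact that individual pod IPs change over time.
svc = client.V1Service(
    metadata=client.V1ObjectMeta(name="mongodb"),
    spec=client.V1ServiceSpec(selector={"app": "mongodb"},
                              ports=[client.V1ServicePort(port=27017, target_port=27017)]),
)
core_v1.create_namespaced_service(namespace="default", body=svc)
```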

Pod-level inter-communication happens over IPC mechanisms. Pods also share local storage on the node, with shared volumes mounted into each of their containers. The infrastructure can provide services to the pod that span resources and process management. The key advantage here is that Pods can run related groups of applications while individual containers remain lightweight and independently versioned, which greatly aids complex software projects where multiple teams work on their own microservices that can be created and updated on their own separate cadence.

Labels are key-value pairs that k8s uses to identify a particular runtime element, be it a node, pod or service. They are most frequently applied to pods and can be anything that makes sense to the user or the administrator. An example of a set of pod labels would be (app=mongodb, cluster=eu3, language=python). Label Selectors determine which Pods are targeted by a Service.
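A quick sketch of how label selectors are used in practice, reusing the hypothetical labels from the example above:

```python
# Querying pods by label selector - the same mechanism a Service uses to find its pods.
from kubernetes import client, config

config.load_kube_config()
core_v1 = client.CoreV1Api()

matching = core_v1.list_namespaced_pod(
    namespace="default",
    label_selector="app=mongodb,cluster=eu3",   # comma acts as a logical AND
)
for pod in matching.items:
    print(pod.metadata.name, pod.metadata.labels)
```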

From an HA standpoint, administrators can declare a configuration policy that states the number of pods that they need to have running at any given point. This ensures that pod failures can be recovered from automatically by starting new pods. An important HA feature is the notion of replica sets. The Replication Controller ensures that a specified number of pods is available to a given application; in the event of failure, new pods are started so that the actual state matches the desired state. Such a group of identically managed pods is called a replica set. Stateful workloads are covered for HA using what were called pet sets (renamed StatefulSets in recent releases).

The Replication Controller component running in the Master determines which pods it controls and then uses a pod template (typically a JSON or YAML file) to create new pods. It is also in charge of ensuring that the number of pods stays in consonance with replica counts. It is important to note that while the Replication Controller just replaces dead or dysfunctional pods, it does not move running pods across nodes.
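As a sketch of this desired-state idea, the snippet below declares a Replication Controller that keeps three replicas of a hypothetical pod template running; the controller, not the operator, replaces pods that die. Names and the image are assumptions for illustration.

```python
from kubernetes import client, config

config.load_kube_config()
core_v1 = client.CoreV1Api()

rc = client.V1ReplicationController(
    metadata=client.V1ObjectMeta(name="webapp-rc"),
    spec=client.V1ReplicationControllerSpec(
        replicas=3,                              # desired state: always three pods
        selector={"app": "webapp"},
        template=client.V1PodTemplateSpec(       # pod template used to create new replicas
            metadata=client.V1ObjectMeta(labels={"app": "webapp"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(name="webapp", image="nginx:1.13")
            ]),
        ),
    ),
)
core_v1.create_namespaced_replication_controller(namespace="default", body=rc)
```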

Storage & Networking in Kubernetes  –

Local pod storage is ephemeral and is reclaimed when the pod dies or is taken offline, but if data needs to be persistent or shared between pods, K8s provides Volumes. Depending on the use case, k8s supports a range of storage options, from local storage to network storage (NFS, Gluster, Ceph) to cloud storage (Google Cloud or AWS). More details around these emerging features are found in the K8s official documentation. [1]
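A small sketch of requesting persistent storage via a PersistentVolumeClaim; the claim name and storage size are illustrative assumptions, and the cluster is assumed to have a matching PersistentVolume or a dynamic provisioner.

```python
from kubernetes import client, config

config.load_kube_config()
core_v1 = client.CoreV1Api()

# Claim 5Gi of persistent storage; pods then mount the claim instead of a host path,
# so the data outlives any individual pod.
pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="mongodb-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        resources=client.V1ResourceRequirements(requests={"storage": "5Gi"}),
    ),
)
core_v1.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
```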

Kubernetes has a pluggable networking implementation that works with the design of the underlying network. Per [2], there are four networking challenges to solve:

  • Container-container communication within a host – this is based purely on IPC & localhost mechanisms
  • Interpod communication across hosts – Here Kubernetes mandates that all pods be able to communicate with one another without NAT and that the IP of a pod is the same from within the pod and outside of it.
  • Pod to Service communications – provided by the Service implementation. As we have seen above, K8s services are given IP addresses that clients can reach them by. These IP addresses are proxied by the kube-proxy process, which runs on all nodes and routes traffic sent to a service on to the correct backing pod.
  • External to Service communication – again provided by the Service implementation. This is done primarily by mapping the load balancer configuration to services running in the cluster. As outlined above, when traffic is sent to a node, the kube-proxy process ensures that it is routed to the appropriate service.

Network administrators looking to implement the K8s cluster model have a variety of choices from open source projects such as – Flannel, OpenContrail etc.

Why is Kubernetes such an exciting (and important) Cloud technology –

We have discussed the business & technology advantages of building an SDDC in the previous posts in this series. As a project, k8s has very lofty goals: to simplify the lifecycle of containers and to enable the deployment & management of distributed systems across any kind of modern datacenter infrastructure. It is designed to promote extensibility and pluggability (via APIs), as we will see in the next blog with OpenShift.

There are three specific reasons why k8s is rapidly becoming a de facto choice for Container orchestration-

  1. Once containers are used to build full-blown applications, organizations need to deal with several challenges to enable efficiency in the overall development & deployment processes. These include enabling a rapid speed of application development among the various teams working on APIs, UX front ends, business logic, data etc.
  2. The ability to scale application deployments and to ensure a very high degree of uptime by leveraging a self-healing & immutable infrastructure. A range of administrative requirements – monitoring, logging, auditing, patching and managing storage & networking – also come into consideration.
  3. The need to abstract developers away from the infrastructure. This is accomplished by allowing dev teams to specify their infrastructure requirements via declarative configuration, as the sketch below illustrates.
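As an illustration of point 3, a container spec can declare its resource needs declaratively and let the scheduler find a node that satisfies them. The service name, image and resource values below are arbitrary assumptions.

```python
from kubernetes import client

# Developers declare what the container needs; the k8s scheduler decides where it runs.
container = client.V1Container(
    name="pricing-service",                      # hypothetical microservice
    image="example.com/pricing-service:1.0",     # hypothetical image
    resources=client.V1ResourceRequirements(
        requests={"cpu": "500m", "memory": "256Mi"},   # guaranteed minimum
        limits={"cpu": "1", "memory": "512Mi"},        # hard ceiling
    ),
)
# This container definition would then be embedded in a pod template,
# exactly as in the Replication Controller sketch shown earlier.
```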

Conclusion…

Kubernetes is emerging as the most popular platform to deploy and manage digital applications based on a microservices architecture. As a sign of its increased adoption and acceptance, Kubernetes is being embedded in Platform as a Service (PaaS) offerings, where it offers all of the same advantages for administrators (deploying application stacks) while also freeing developers from the complexity of the underlying infrastructure. The next post in this series will discuss OpenShift, Red Hat’s market leading PaaS offering, which leverages best-of-breed projects such as Docker and Kubernetes.

References…

[1] Kubernetes Official documentation – https://kubernetes.io/docs/concepts/storage/persistent-volumes/

[2] Kubernetes Networking
– https://kubernetes.io/docs/concepts/cluster-administration/networking/

[3] Key Commits to Kubernetes – http://stackalytics.com/?project_type=kubernetes-group&metric=commits

[4] Borg: The predecessor to Kubernetes – http://blog.kubernetes.io/2015/04/borg-predecessor-to-kubernetes.html

Blockchain For the Enterprise: Key Considerations..

With advances in various Blockchain based DLT (distributed ledger technology) platforms such as HyperLedger & Ethereum et al, enterprises have begun to take baby steps to adapt the Blockchain (BC) to industrial scale applications. This post discusses some of the stumbling blocks the author has witnessed enterprises running into as they look to get started on this journey.

Image Credit – Blockchain Technologies

Blockchain meets the Enterprise…

The Blockchain is a system & architectural design pattern for recording (immutable) transactions while providing an unalterable historical audit trail. This approach (proven with the hugely successful Bitcoin) guarantees a high degree of security, transparency, and anonymity for distributed applications purpose built for it. Bitcoin is but the first application of this ground breaking design pattern.

Due to its origins in the Bitcoin ecosystem, the Blockchain has been strongly associated with the cryptocurrency movement. However, a wide range of potential enterprise applications has been identified in industries such as financial services, healthcare, manufacturing, and retail – as depicted in the below illustration.

The Evolution of Blockchain from a crypto-anarchist platform to a platform for distributed business applications.

Last year, we took an in-depth look into the business potential of the Blockchain design pattern at the below post.

The immense potential of the Blockchain..(3/5)

We can then define the Enterprise Blockchain as “a highly secure, resilient, algorithmic & accurate globally distributed ledger (or global database or the biggest filesystem or the largest spreadsheet) that provides an infrastructure pattern to build multiple types of applications that help companies (across every vertical), consumers and markets discover new business models, transact, trade & exchange information & assets.”

While some early deployments and initial standards-making activity have been seen in financial services and healthcare, the technology is also finding significant adoption in optimizing internal operations for globally diversified conglomerates. For instance, tech major IBM claims to host one of the largest blockchain enterprise deployments. The application, known as IGF (IBM Global Financing), provides working capital to some 4,000+ customers, distributors, and partners. IBM uses its blockchain platform to manage disputes in the $48 billion IGF program. [1] The near linear scalability of the blockchain ensures that IGF can gradually increase the number of members participating in the network.

Image Credit – Mark Morris and IBM [3]
In particular, the Financial Services Industry has had several bodies aiming to create standards around use cases such as consumer and correspondent banking payments and around the trade lifecycle. Some examples of these are R3 Corda, HyperLedger, and Ethereum. However, there is still a large amount of technology innovation, adoption and ecosystem development that needs to happen before the technology is consumable by your everyday bank or manufacturer or insurer.

The Four Modes of Blockchain Adoption in the Enterprise…

There are certain criteria that need to be met for a business process to benefit from a distributed ledger. First, the business process should involve various actors (both internal and external to the organization). Second, there should be no need for a central authority or middleman to verify everyday transactions, except when disputes arise. Third, the process should call for a strict audit trail as well as transaction immutability. The assets stored on the blockchain can really be anything – data, contracts or transactions etc.

At a high level, there are four modes of adoption, or, ways in which a BC technology can make its way into an enterprise –

  • Organic Proofs of Concept (POCs) – These are driven by innovation groups inside the company tasked with exploring the latest technology advances. Oftentimes, these are technology-driven initiatives in search of a business problem. The approach works like this – management targets specific technology areas in which the firm needs to develop capabilities. The innovation team works on defining an appropriate technical approach, reference stack & architecture (in this case for applications that have been determined to be suitable to POC on a DLT) et al. The risk in this approach is that much of the best practices, learnings etc from other organizations, vendors, and solution providers are not leveraged.
  • Participation in Industry Consortia – A consortium is a group of companies engaged in a similar business task. These initiatives are driven by like-minded enterprises banding together (within specific sectors such as financial services, insurance, and healthcare) to define common use-cases that can benefit from sector-specific common standards from a DLT standpoint & the ensuing network effects. Consortia tend to mitigate risk from both a business and a cost standpoint, as several companies typically band together to explore the technology. However, they can often be difficult to pull off due to competitive and cultural reasons.
  • Regulatory push – In many cases, regulators are pushing industry leaders to look into use cases (such as Risk Management, back-office processes, and Fraud Detection) which can benefit from adopting distributed ledger technology (DLT).
  • Partnerships with Blockchain start-ups – These arrangements enable the (slow to move) incumbent market-leading enterprises to partner with the brightest entrepreneurial minds in the BC world, who are building path-breaking applications that will upend business models. The focus of such efforts has been to identify a set of use-cases & technology approaches that would immensely help the organization apply BC technology to its internal and external business challenges. The advantage of this approach is that the skills shortage established companies face when tackling immature technology projects can be ameliorated by working with younger organizations.

Having noted all this, the majority of proofs of concept driven out of enterprises are failing or performing suboptimally.

I feel that this is due to various reasons, some of which we will discuss below. It should be noted that we are assuming strong buy-in around BC and DLT at the highest levels of the organization; scepticism about this proven design pattern and how to overcome it is quite another topic altogether.

The Key Considerations for a Successful Enterprise Blockchain or Distributed Ledger  (DLT)…

CONSIDERATION #1 – Targeting the right business use case for the DLT…

As we saw in the above sections, the use cases identified for DLT need to reflect a few foundational themes – non-reliance on a middleman, a business process supporting a truly distributed deployment, building trust among a large number of actors/counterparties, the ability to support distributed consensus, and transparency. Due to its flat, peer-to-peer nature, Blockchain/DLT conclusively eliminates the need for any middleman. It is important that a target use case be realistic from both a functional requirement standpoint and a business process understanding. The majority of enterprise applications can do perfectly well with a centralized database, and applying DLT technology to them can cause projects to fail.

CONSIDERATION #2 – The Revenge of the Non Functional requirements…

Generally speaking, the current state of DLT platforms is that they fall short in a few key areas that enterprises usually take for granted in other platforms such as Cloud Computing, Middleware, Data platforms etc. These include data privacy, transaction throughput, high speed of performance etc. If one recalls, the community Blockchain (that Bitcoin was built on) prioritized anonymity over privacy. This can be undesirable in areas such as payments processing or healthcare, where the identity of consumers is governed by strict KYC (Know Your Customer) mandates. Thus, from an industry standpoint, most DLT platforms are 24 months or so away from coming up to par in these areas in a manner that enterprises can leverage.

Some of the other requirements, such as performance and scalability, are sometimes not directly tied to business features, but lack of support for them can stymie any ambitious intended use of the technology. For instance, a key requirement in payments processing and supplier management is the ability for the platform to process a large number of transactions per second. Most DLTs can only process around ten transactions per second on a permissionless network – far from the ideal throughput needed in use-cases such as payments processing, IoT etc.

The good news is that the DLT community is acutely aware of the enhancements that need to be made to the underlying platforms (e.g. changes to block size and block time) to increase throughput, but these changes will take time to make their way into the mass market given the rigorous engineering work that needs to happen.

CONSIDERATION #3 – Neglecting Enterprise Integration Requirements…

The Blockchain/DLT is not a data management paradigm; this is important for adopters to understand. Also, there currently exist very few standards or guidance on integrating distributed applications (Dapps) custom built for DLTs with underlying enterprise assets. These assets include enterprise middleware stacks, identity management platforms, corporate security systems, application data silos, BPM (Business Process Management) and Robotic Process Automation systems etc. For the BC to work for any business capability as a complete business solution, the necessary integration must be provided with a reasonable number of the backend systems that influence the business case – and most such integration is sorely lacking. Interoperability is still in its infancy despite vendor claims.

CONSIDERATION #4 – Understand that Smart Contracts are still in their infancy…

The blockchain introduces the important notion of programmable digital instruments or contracts. An important illustration of the possibilities of blockchain is the concept of a “Smart Contract”. Instead of static data objects that are inserted into the distributed ledger, a Smart Contract is a program that can generate downstream actions when appropriate conditions are met; it only becomes immutable once accepted into the ledger. Business rules are embedded in a contract and automatically trigger when certain conditions are met – e.g. a credit pre-qualification, or assets transferred after a payment is made or after legal approval is provided (see the illustrative sketch below).
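For intuition only, here is a deliberately simplified Python sketch of the "business rules that trigger on conditions" idea behind a smart contract. Real smart contracts are written in platform-specific languages (for example Solidity on Ethereum, or chaincode on Hyperledger Fabric) and execute on the ledger itself; the class, names and rules below are purely illustrative assumptions.

```python
# Illustrative only: an escrow-style "contract" whose rule fires once its conditions are met.
from dataclasses import dataclass, field

@dataclass
class EscrowContract:
    buyer: str
    seller: str
    amount: float
    payment_received: bool = False
    legal_approval: bool = False
    events: list = field(default_factory=list)   # stand-in for the immutable audit trail

    def record(self, event: str) -> None:
        self.events.append(event)

    def confirm_payment(self) -> None:
        self.payment_received = True
        self.record("payment_received")
        self._maybe_settle()

    def approve(self) -> None:
        self.legal_approval = True
        self.record("legal_approval")
        self._maybe_settle()

    def _maybe_settle(self) -> None:
        # The embedded business rule: transfer the asset only when both conditions hold.
        if self.payment_received and self.legal_approval:
            self.record(f"asset_transferred:{self.seller}->{self.buyer}:{self.amount}")

contract = EscrowContract(buyer="importer-A", seller="exporter-B", amount=100_000)
contract.confirm_payment()
contract.approve()
print(contract.events)
```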

Smart Contracts are being spoken about as the key functionality of any Blockchain-based DLT platform. While this hype is justified in some sense, it should be noted that smart contracts are not standards-based across the major DLT platforms. This means that they are not easily auditable & verifiable across local and global jurisdictions, or when companies use different underlying commercial DLTs. The technology will evolve over the next few years, but it is still very early days to run large scale, production grade applications that leverage Smart Contracts.

CONSIDERATION #5 – SECURITY and DATA PRIVACY CONCERNS…

The promise of the original blockchain platform which ran Bitcoin was very simple. It provided a truly secure, trustable and immutable record on which any digital asset could be run. Parties using the system were all in a permissionless mode, which meant that their identities were hidden from one another and from any central authority. While this may work for Bitcoin-like projects, the vast majority of industry verticals will need strong legal agreements and membership management capabilities to go with them. Accordingly, these platforms will need to be permissioned.

CONSIDERATION #6 – Blockchain Implementations need to be treated as AN INTEGRAL part of Digital Transformation…

Blockchain as a technology definitely sounds more exotic than the Digital projects which have all the mindshare at the moment. However, an important way to visualize the organizational BC is that it provides an environment of instantaneous collaboration with business partners and customers – a core theme of Digital Transformation, as one can appreciate. Accordingly, Blockchain/DLT proofs of concept should be centrally funded & governed, and skills should be grown in this area from a development, administration and project management standpoint. Projects should be tracked using fair business metrics, and appropriate governance mechanisms instituted, as with any other digital initiative.

Conclusion…

Surely, Blockchain based distributed ledgers are going to usher in the next generation of distributed business processes. These will enable the easy transaction, exchange, and contracting of digital assets. However, before enterprises rush in, they need to perform an adequate degree of due diligence to avoid some of the pitfalls highlighted above.

References…

[1] IDC Insights – “IBM Wants to Make 2017 the Year of Blockchain Enterprise Deployment” https://www.idc.com/getdoc.jsp?containerId=EMEA42454617

[2] Coindesk – “Spanish Bank BBVA Joins Hyperledger Blockchain Project” –  https://www.coindesk.com/bbva-hyperledger-blockchain-project/

[3] “Blockchain: Supply Chain Dispute Resolution Killer Solution” – Mark Morris

https://www.linkedin.com/pulse/blockchain-supply-chain-dispute-resolution-killer-solution-morris

The Tao of Data Monetization in Banking and Insurance & Strategies to Achieve the Same…

“We live in a world awash with data. Data is proliferating at an astonishing rate—we have more and more data all the time, and much of it was collected in order to improve decisions about some aspect of a business, government, or society. If we can’t turn that data into better decision making through quantitative analysis, we are both wasting data and probably creating suboptimal performance.”
Tom Davenport, 2013  – Professor Babson College, Best Selling Author and Leader at Deloitte Analytics

Data Monetization is the organizational ability to turn data into cost savings & revenues in existing lines of business and to create new revenue streams.

Digitization is driving Banks and Insurance companies to reinvent themselves…

Enterprises operating in the financial services and insurance industries have typically taken a very traditional view of their businesses. As waves of digitization have begun slowly upending their established business models, firms have begun to recognize the importance of harnessing the substantial data assets they have built over decades. These assets include fine-grained data about internal operations, customer information and external sources (as depicted in the below illustration). So what does the financial opportunity look like? PwC’s Strategy& estimates that the incremental revenue from monetizing data could potentially be as high as US$ 300 billion [1] every year beginning 2019. This is across all the important segments of financial services – capital markets, commercial banking, consumer finance & banking, and insurance. FinTechs are also looking to muscle into this massive data opportunity.

The compelling advantages of Data Monetization have been well articulated across new business lines, customer experience, cost reduction et al. One of the key aspects of Digital transformation is data and the ability to create new revenue streams or to save costs using data assets.

..Which leads to a huge Market Opportunity for Data Monetization…

In 2013, PwC estimated that the market opportunity in data monetization was a nascent US$ 175 million. This number is expected to grow immensely over the next five years, with consumer banking and capital markets leading the way.

Digital first has been a reality in the Payments industry with Silicon Valley players like Google and Apple launching their own payments solutions (in the form of Google Pay and Apple Pay).

Visionary Banks & FinTechs are taking the lead in Data Monetization…

Leading firms such as Goldman Sachs & AIG have invested heavily in capabilities around data monetization. In 2012, Goldman purchased the smallest of the three main credit-reporting firms – TransUnion. Within three years, Goldman had converted TransUnion into a data-mining machine. In addition to credit reporting, TransUnion now gathers billions of data points about American consumers. This data is constantly analyzed and then sold to lenders, insurers, and others. Using data monetization, Goldman Sachs has made nearly $600 million in profit and is expected to make about five times its initial $550 million investment. [2]

From the WSJ article…

By the time of its IPO in 2015, TransUnion had 30 million gigabytes of data, growing at 25% a year and ranging from voter registration in India to drivers’ accident records in the U.S. The company’s IPO documents boasted that it had anticipated the arrival of online lenders and “created solutions that catered to these emerging providers.”

As are forward looking Insurers …

The insurance industry is reckoning with a change in consumer behavior. Younger consumers are very comfortable using online portals to shop for plans, compare them, purchase them and do other activities that increase the amount of data being collected by the companies. While data, and the models that operate on it, have always been critical in the insurance industry, their use has been strongest in the actuarial areas. The industry has now begun heavily leveraging data monetization strategies across areas such as new customer acquisition, customer underwriting, dynamic pricing et al. A new trend is to form partnerships with automakers to tap into a range of telematics information such as driver behavior, vehicle performance, and location data. In fact, automakers are already ingesting and constantly analyzing this data with the intention of leveraging it for a range of use-cases, which include selling this data to insurance companies.

Leading carriers such as AXA are leveraging their data assets to strengthen broker and other channel relationships. AXA’s EB360 platform helps brokers with a range of analytics-infused functions – e.g. tracking the status of applications, managing compensation and commissions, and monitoring progress on business goals. AXA has also optimized user interfaces to ensure that data entry is minimized while supporting rapid quoting, helping brokers easily manage their business and thus strengthening the broker-carrier relationship. [3]

Introducing Five Data Monetization Strategies across Financial Services & Insurance…

Let us now identify and discuss five strategies that enable financial services firms to progressively monetize their data assets. It must be mentioned that doing so requires an appropriate business transformation strategy to be put in place. Such a strategy includes clear business goals, ranging from improving core businesses to entering lateral business areas.

Monetization Strategy #1 – Leverage Data Collected during Business Operations to Ensure Higher Efficiency in Business Operations…

The simplest and easiest way to monetize data is to begin collecting the disparate data generated during the course of regular operations. An example in Retail Banking is to collect data on customer branch visits, online banking usage logs, clickstreams etc. Once collected, the newer data needs to be fused with existing Book of Record Transaction (BORT) data to obtain added intelligence on branch utilization, branch design & optimization, customer service improvements etc. It is very important to ensure that the right metrics are agreed upon and tracked across the monetization journey.

Monetization Strategy #2 – Leverage Data to Improve Customer Service and Satisfaction…

The next progressive step in leveraging both internal and external data is to use it to drive new revenue streams in existing lines of business. This requires fusing both internal and external data to create new analytics and visualization. This is used to drive use cases relating to cross sell and up-sell of products to existing customers.

Demystifying Digital – Reference Architecture for Single View of Customer / Customer 360..(3/3)

Monetization Strategy #3 – Use Data to Enter New Markets…

A range of third-party data needs to be integrated and combined with internal data to arrive at a true picture of a customer. Once the Single View of a Customer has been created, the Bank/Insurer has the ability to introduce marketing and customer retention and other loyalty programs in a dynamic manner. These include the ability to combine historical data with real time data about customer interactions and other responses like clickstreams – to provide product recommendations and real time offers.

Demystifying Digital – the importance of Customer Journey Mapping…(2/3)

An interesting angle on this is to provide new adjacent products much like the above TransUnion example illustrates.

Monetization Strategy #4 – Establish a Data Exchange…

The Data Exchange is a mechanism where firms can fill in holes in their existing data about customers, their behaviors, and preferences. Data exchanges can be created using a consortium based approach that includes companies that span various verticals. Companies in the consortium can elect to share specific datasets in exchange for data while respecting data privacy and regulatory constraints.

Monetization Strategy #5 – Offer Free Products to Gather Customer Data…

Online transactions in both Banking and Insurance are increasing in number year on year. If data is truly customer gold, then it is imperative for companies to collect as much of it as they can. The goal is to create products that can drive longer & continuous online interactions with global customers. Personal Financial Planning tools and complementary banking and insurance services are examples of free products firms can offer to augment existing offerings.

A recent topical example in Telecom is Verizon Up, a program from the wireless carrier where consumers can earn credits (that they can use for a variety of paid services – phone upgrades, concert tickets, uber credits and movie premieres etc) in exchange for providing access to their browsing history, app usage, and location data. Verizon also intends to use the data to deliver targeted advertising to their customers. [4]

Consumers can win Lady Gaga tickets in Verizon’s new rewards program, which requires that they enroll in its targeted advertising program. PHOTO: ADREES LATIF/REUTERS

How Data Science Is a Core Capability for any Data Monetization Strategy…

Data Science and Machine Learning approaches are the true differentiators and the key ingredients in any data monetization strategy. Further, it is a given that technology investments in Big Data platforms, and analytic investments in areas such as machine learning and artificial intelligence, are also needed to stay on the data monetization curve.

How does this tie into the practical use-cases discussed above? Let us consider the following use cases of common Data Science algorithms –

  • Customer Segmentation & Classification – for a given set of data, predict, for each individual in a population, the discrete set of classes that the individual belongs to. An example classification is – “For all retail banking clients in a given population, who are most likely to respond to an offer to move to a higher segment?” (a sketch of this use case appears after this list).
  • Pattern recognition and analysis – discover new combinations of business patterns within large datasets. E.g. combine a customer’s structured data with clickstream data analysis. A major bank in NYC is using this data to bring troubled mortgage loans to quick settlements.
  • Customer Sentiment analysis is a technique used to find degrees of customer satisfaction and how to improve them with a view of increasing customer net promoter scores (NPS).
  • Market basket analysis is commonly used to find associations between products that are purchased together, with a view to improving how products are marketed – e.g. recommendation engines that understand which banking products to recommend to customers.
  • Profiling algorithms aim to characterize the normal or typical behavior of an individual or group within a larger population. They are frequently used in anomaly detection systems such as those that detect AML (Anti Money Laundering) violations and credit card fraud.
  • Clustering algorithms divide data into groups, or clusters, of items that have similar properties.
  • Causal Modeling algorithms attempt to find out what business events influence others.
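To ground the first use case in the list above, here is a minimal, hedged scikit-learn sketch on synthetic data; the feature names, thresholds and figures are purely illustrative assumptions, not real banking data.

```python
# Sketch: which retail banking clients are most likely to respond to an upgrade offer?
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 5_000
X = np.column_stack([
    rng.normal(60_000, 15_000, n),   # avg_monthly_balance (hypothetical feature)
    rng.integers(0, 25, n),          # branch_visits_last_quarter (hypothetical)
    rng.integers(0, 200, n),         # online_logins_last_quarter (hypothetical)
])
# Synthetic "responded to offer" label, purely for illustration.
y = (X[:, 0] > 70_000) & (X[:, 2] > 100)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)
model = RandomForestClassifier(n_estimators=200, random_state=42).fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```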

Conclusion..

Banks and Insurers who develop data monetization capabilities will be positioned to create new service offerings and revenues. Done right (while maintaining data privacy & consumer considerations), the monetization of data represents a truly transformational opportunity for financial services players in the quest to become highly profitable.

References..

[1] PwC Strategy& – “The Data Gold Rush” – https://www.strategyand.pwc.com/media/file/Strategyand_The-Data-Gold-Rush.pdf

[2] WSJ – “How Goldman Sachs Made More Than $1 Billion With Your Credit Score”

https://www.wsj.com/articles/how-goldman-sachs-made-more-than-1-billion-with-your-credit-score-1491742835

[3] McKinsey Quarterly – “Harnessing the potential of data in insurance..”

http://www.mckinsey.com/industries/financial-services/our-insights/harnessing-the-potential-of-data-in-insurance

[4] WSJ – “Verizon Wants to Build an Advertising Juggernaut. It Needs Your Data First”

https://www.wsj.com/articles/verizon-wants-to-build-an-advertising-juggernaut-it-needs-your-data-first-1504603801

Why Data Garbage-In means Analytics Garbage-Out..

This is the third in a series of blogs on Data Science that I am jointly authoring with Maleeha Qazi (https://www.linkedin.com/in/maleehaqazi/). We have previously covered some of the inefficiencies that result from a siloed data science process @ http://www.vamsitalkstech.com/?p=5046 & the ideal way Data Scientists would like their models deployed for maximal benefit and use – as a Service @ http://www.vamsitalkstech.com/?p=5321. As the name of this third blog post suggests, the success of a data science initiative depends on data: if the data going into the process is “bad”, then the results cannot be relied upon. Our goal is also to suggest some practical steps that enterprises can take from a data quality & governance process standpoint.

“However, under the strong influence of the current AI hype, people try to plug in data that’s dirty & full of gaps, that spans years while changing in format and meaning, that’s not understood yet, that’s structured in ways that don’t make sense, and expect those tools to magically handle it.” – Monica Rogati (Data Science Advisor and ex-VP, Jawbone – 2017) [1]

Image Credit – The Daily Omnivore

Introduction

Different posts on this blog have discussed Data Science and other analytical approaches in some depth. What is apparent is that whatever the kind of analytics – descriptive, predictive, or prescriptive – the availability of a wide range of quality data sources is key. However, along with the volume and variety of data, the veracity, or truth, of the data is just as important. This blog post discusses the main factors that determine the quality of data from a Data Scientist’s perspective.

The Top Issues of Data Quality

As highlighted in the above illustration, the top quality issues that data assets typically face are the following:

  1. Incomplete Data: The data provided for analysis should span the entire cross-section of known data about how the organization views its customers and products. This includes data generated by the various applications that belong to the business, and external data bought from various vendors to enrich the knowledge base. The completeness criterion measures whether all of the information about the entities under consideration is available and usable.
  2. Inconsistent & Inaccurate Data: Consistency measures whether data values give conflicting information, and whether all data elements conform to specific and uniform formats and are stored in a consistent manner. Inaccurate data has duplicate, missing or erroneous values, and does not reflect an accurate picture of the state of the business at the point in time it was pulled (a simple illustrative check appears after this list).
  3. Lack of Data Lineage & Auditability: The data framework needs to support auditability, i.e. provide an audit trail of how data values were derived from source to the point of analysis, including the various transformations performed along the way.
  4. Lack of Contextuality: Data needs to be accompanied by meaningful metadata – data that describes the concepts within the dataset.
  5. Temporal Inconsistency: This measures whether the data was temporally consistent and meaningful given the time at which it was recorded.
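As referenced in the list above, here is a minimal pandas sketch of how a data scientist might profile completeness, accuracy and format consistency before modeling; the file and column names are hypothetical.

```python
import pandas as pd

# Hypothetical extract of customer records; real checks would be driven by the data dictionary.
df = pd.read_csv("customers.csv", dtype=str)

report = {
    "row_count": len(df),
    "missing_by_column": df.isna().mean().round(3).to_dict(),               # completeness
    "duplicate_customer_ids": int(df["customer_id"].duplicated().sum()),    # accuracy
    "unparseable_dates_of_birth": int(
        pd.to_datetime(df["date_of_birth"], errors="coerce").isna().sum()   # format consistency
    ),
}
print(report)
```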

What Business Challenges does Poor Data Quality Cause…

Image Credit – DataMartist

Data Quality causes the following business challenges in enterprises:

  1. Customer dissatisfaction: Across industries like Banking, Insurance, Telecom & Manufacturing, the ability to get a unified view of the customer & their journey is at the heart of the enterprise’s ability to promote relevant offerings & detect customer dissatisfaction. Currently, most industry players are woeful at putting together this comprehensive Single View of their Customers (SVC). Due to operational silos, each department possesses its own siloed & limited view of the customer across multiple channels. These views are typically inconsistent, lack synchronization with other departments, & miss a high amount of potential cross-sell and upsell opportunities. This is a data quality challenge at its core.
  2. Lost revenue: The Customer Journey problem is an age-old issue which has gotten exponentially more complicated over the last five years as the staggering rise of mobile technology and the Internet of Things (IoT) have vastly increased the number of enterprise touch points through which customers can discover and purchase new products/services. In an OmniChannel world, an increasing number of transactions are being conducted online. In verticals such as Retail, Banking & Insurance, online transactions now approach an average of 40% of the total. Adding to the problem, more and more consumers are posting product reviews and feedback online. Companies thus need to react in real-time to piece together the source of consumer dissatisfaction.
  3. Time and cost in data reconciliation: Every large enterprise nowadays runs expensive data re-engineering projects due to its data quality challenges. These are an inevitable first step in other digital projects, and they cause huge cost and time overheads.
  4. Increased time to market for key projects: Poor data quality causes poor data agility, which increases the time to market for key projects.
  5. Poor data means suboptimal analytics: Poor data quality causes the analytics done using it to be suboptimal – algorithms will end up giving wrong conclusions because the input provided to them is inconsistent at best & incorrect at worst.

Why is Data Quality a Challenge in Enterprises

Image Credit – DataMartist

The top reasons why data quality has been a huge challenge in the industry are:

  1. Prioritization conflicts: For most enterprises, the focus of their business is the product(s)/service(s) being provided; book-keeping is a mandatory but secondary concern. And since keeping the business running is the most important priority, keeping the books accurate for financial matters is usually the only data aspect that gets the technical attention it deserves. Other data aspects are usually ignored.
  2. Organic growth of systems: Most enterprises have gone through a series of book-keeping methods and applications, most of which have no compatibility with one another. Warehousing data from various systems as they are deprecated, merging in data streams from new systems, and fixing data issues as these processes happen is not prioritized till something on the business end fundamentally breaks. Band-aids are usually cheaper and easier to apply than to try and think ahead to what the business will need in the future, build it, and back-fill it with all the previous systems’ data in an organized fashion.
  3. Lack of time/energy/resources: Nobody has infinite time, energy, or resources. Making all the systems an enterprise chooses to use at any point in time talk to one another, share information between applications, and keep a single consistent view of the business is a near-impossible task. Many well-trained resources, and much time & energy, are required to make sure this can be set up and successfully orchestrated on a daily basis. But how much is a business willing to pay for this? Most do not see a short-term ROI and hence lose sight of the long-term problems that can be caused by ignoring the quality of data collected.
  4. What do you want to optimize?: There are only so many balls an enterprise can have up in the air to focus on without dropping one, and prioritizing those can be a challenge. Do you want to optimize the performance of the applications that need to use, gather and update the data, OR do you want to make sure data accuracy/consistency (one consistent view of the data for all applications in near real-time) is maintained regardless? One will have to suffer for the other.

How to Tackle Data Quality

Image Credit – DataMartist


With the advent of Big Data and the need to derive value from ever-increasing volumes and variety of data, data quality becomes an important strategic capability. While every enterprise is different, certain common themes emerge as we consider the quality of data:

  1. The sheer number of transaction systems found in a large enterprise causes multiple challenges across the data quality dimensions. Organizations need to have valid frameworks and governance models to ensure the data’s quality.
  2. Data quality has typically been thought of as just data cleansing and fixing missing fields. However, it is very important to address the originating business processes that cause this data to take multiple dimensions of truth. For example, centralize customer onboarding in one system across channels rather than having every system do its own onboarding.
  3. It is clear from the above that data quality and its management is not a one-time or siloed application exercise. As part of a structured governance process, it is very important to adopt data profiling and other capabilities to ensure high-quality data.

Conclusion

Enterprises need to define both quantitative and qualitative metrics to ensure that data quality goals are captured across the organization. Once this is done, an iterative process needs to be followed to ensure that a set of capabilities dealing with data governance, auditing, profiling, and cleansing is applied to continuously ensure that data is brought up to, and kept at, a high standard. Doing so can have salubrious effects on customer satisfaction, product growth, and regulatory compliance.

References

[1] Monica Rogati “The AI hierarchy of Needs” – https://hackernoon.com/the-ai-hierarchy-of-needs-18f111fcc007

Anti Money Laundering (AML) – Industry Insights & Reference Architectures…

This blog has from time to time discussed issues around the defensive portion of the financial services industry (Banking, Payment Processing, Insurance etc.). Anti Money Laundering (AML) is a critical area where institutions need to protect themselves and their customers from malicious activity. This post summarizes seven key blogs on the topic of AML published at VamsiTalksTech.com. It aims to serve as a handy guide for business and technology audiences tasked with implementing complex AML projects.

Image Credit – FIBA Anti-Money Laundering Compliance Conference

Introduction

Money laundering has emerged as an umbrella crime which facilitates public corruption, drug trafficking, tax evasion, terrorism financing etc. Banks and other financial institutions are expected to conduct business in a manner that protects their countries of operation and consumers from security risks such as laundering, terrorist financing, and corruption (the ML/TF risks). Given the global reach of financial products, a variety of regulatory authorities are concerned about money laundering. Technology has become key to meeting regulatory expectations as well as to reducing costs in these onerous programs. As the below graphic from PwC [1] demonstrates, this is one of the most pressing issues facing the financial services industry.

The above infographic from PwC provides a handy visual guide to the state of global AML programs.

The Six Critical Gaps in Global AML Programs…

From an industry standpoint, the highest priority issues that are being pointed out by regulators include the following –

  1. Institutions failing to develop AML frameworks that are unique to the risks run by organizations given their product and geographic mix
  2. Failure to develop real-time insights into business transactions and to assign them elevated risk based on their elements
  3. Failure to develop AML models that draw from the widest possible sources of data – both internal and external – to understand a true picture of the business
  4. Inability to demonstrate a consistent approach across geographies
  5. Failure to leverage the latest developments in analytics, including Machine Learning, to enable the automation of AML programs (a minimal illustration follows this list)
  6. Lack of appropriate business governance & change management in setting, monitoring and managing AML compliance programs, policies and procedures
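
As a purely illustrative example of the Machine Learning gap called out in point 5, the sketch below scores a handful of hypothetical transactions with an unsupervised Isolation Forest from scikit-learn. Real AML analytics are far richer – entity resolution, network analysis, known typologies, case management integration – and the feature names here are assumptions made only for the sketch.

import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical transaction features - real AML models use far richer inputs
txns = pd.DataFrame({
    "amount": [120.0, 75.5, 9800.0, 64.2, 15000.0, 88.0],
    "txns_last_24h": [2, 1, 14, 3, 22, 2],
    "countries_last_30d": [1, 1, 5, 1, 7, 2],
})

# Unsupervised model that isolates outlying transactions
model = IsolationForest(n_estimators=100, contamination=0.1, random_state=42)
model.fit(txns)

# Lower scores are more anomalous; predict() marks outliers with -1
txns["anomaly_score"] = model.decision_function(txns)
txns["flag_for_review"] = model.predict(txns) == -1

print(txns.sort_values("anomaly_score"))

Flagged transactions would then feed an investigation workflow rather than trigger automatic action, keeping the analyst in the loop.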

With this background in mind, the complete list of AML blogs on VamsiTalksTech is included below.

# 1 – Why Banks should Digitize their Operations and how this will help their AML programs –

Digitization implies a mix of business models predicated on agile systems, rapid & iterative development and, more importantly, a Data First strategy. These have significant impacts on AML programs, in addition to helping increase market share.

A Digital Bank is a Data Centric Bank..

# 2 – Why Data Silos are a huge challenge in many cross organization projects such as AML –

Organizational Data Silos inhibit the effectiveness of AML programs as compliance officers cannot gain a single view of a customer or single view of a suspicious transaction or view the social graph in critical areas such as trade finance. This blog discusses the Silo anti-pattern and ways to mitigate silos from proliferating.

Why Data Silos Are Your Biggest Source of Technical Debt..

# 3 – The Major Workstreams Around AML Programs

The headline is self-explanatory but we discuss the five major work streams on global AML projects – Customer Due Diligence, Entity Analysis, Downstream Analytics, Ongoing Monitoring and Investigation Lifecycle.

Deter Financial Crime by Creating an Effective Anti Money Laundering (AML) Program…(1/2)

# 4 – Predictive Analytics Across the AML workstreams –

Here we examine how Predictive Analytics can be applied across all of the five work streams.

How Big Data & Predictive Analytics transform AML Compliance in Banking & Payments..(2/2)

# 5 – The Business Need for Big Data in AML programs  –

This post discusses the most important developments in building AML systems using Big Data Technology-

Building AML Regulatory Platforms For The Big Data Era

# 6 – A Detailed Look at how Enterprises can use Big Data and Advanced Analytics to reduce AML costs –

How to leverage Big Data and Advanced Analytics to detect a range of suspicious transactions and actors.

Big Data – Banking’s New Weapon In War Against Financial Crime..(1/2)

# 7 – Reference Architecture for AML  –

We discuss a Big Data enabled Reference Architecture of an enterprise-wide AML program.

Big Data – Banking’s New Weapon In War Against Financial Crime..(2/2)

Conclusion

According to PricewaterhouseCoopers, estimates of global money laundering flows were between 2-5% of global GDP [1] in 2016 – however, only 1% of these transactions were caught. Certainly, the global financial industry has a long way to go before it can effectively stop these nefarious actors, but there should be no mistaking that technology is a huge part of the answer.

References –

  1. PricewaterhouseCoopers 2016 AML Survey – http://www.pwc.com/gx/en/services/advisory/forensics/economic-crime-survey/anti-money-laundering.html

Data Science in the Cloud A.k.a. Models as a Service (MaaS)..

This is the second in a series of blogs on Data Science that I am jointly authoring with Maleeha Qazi (https://www.linkedin.com/in/maleehaqazi/). We have previously covered some of the inefficiencies that result from a siloed data science process @ http://www.vamsitalkstech.com/?p=5046. All of the actors in the data science space can agree that becoming responsive to business demands is the overarching goal of the process. In this second blog post, we will discuss Model as a Service (MaaS), an approach to ensuring that models and their insights can be leveraged throughout a large organization.

Image Credit – Logistics Industry Blog

Introduction

Hardware as a Service (HaaS), Software as a Service (SaaS), Database as a Service (DBaaS), Infrastructure as a Service (IaaS), Platform as a service (PaaS), Network as a Service (NaaS), Backend as a service (BaaS), Storage as a Service (STaaS). While every IT delivery model is going the way of the cloud, does Data Science lag behind in this movement?  In such an environment, what do Data Scientists dream of to ensure that their models are constantly being trained on high quality and high volume production grade data?… Models as a Service (MaaS).

The Predictive Analytics workflow…

The Predictive Analytics workflow always starts with a business problem in mind. For example: “A marketing project to detect which customers are likely to buy new products or services in the next six months based on their historical & real time product usage patterns” or “Detect real-time fraud in credit card transactions.”

Illustration – The Predictive Analysis Workflow in a financial services setting

In use cases like these, the goal of the data science process is to segment & filter customers by corralling them into categories that enable easy ranking. Once this is done, the business can set up easy and intuitive visualizations to present the results.

A lot of times, business groups have a hard time explaining what they would like to see – both in terms of input data and output format. In such cases, a prototype makes things easier from a requirements gathering standpoint. Once the problem is defined, the data scientist/modeler identifies the raw data sources (both internal and external) which are pertinent to the business challenge. They spend a lot of time collating the data (from a variety of sources like Oracle/SQL Server, DB2, Mainframes, Greenplum, Excel sheets, external datasets, etc.). The cleanup process involves dealing with missing values, corrupted data elements, formatting fields to be homogeneous, etc.

This data wrangling phase involves writing code to join various data elements so that a complete dataset is gathered in the Data Lake from a raw features standpoint, at the correct granularity for the problem at hand.  If more data is obtained as the development cycle is underway, the Data Science team has to go back & redo the process to incorporate the new data feeds. The modeling phase is where sophisticated algorithms come into play. Feature engineering takes in business concepts & raw data features and creates predictive features from them. The Data Scientist takes the raw & engineered features and creates a model by applying various algorithms & testing to find the best one. Once the model has been refined, & tested for accuracy and performance, it is ideally deployed as a service.
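
A highly simplified sketch of this workflow in Python with pandas and scikit-learn is shown below. Everything in it – the input file, the feature names, the propensity-style target and the choice of algorithm – is an assumption made for illustration; a real engagement involves far more wrangling, feature engineering and model selection than fits in a few lines.

import joblib
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Raw features pulled from (hypothetical) source systems into the Data Lake
df = pd.read_csv("customer_product_usage.csv")

# Data wrangling: drop rows missing key fields
df = df.dropna(subset=["customer_id", "monthly_usage", "tenure_months"])

# Feature engineering: derive a predictive feature from raw columns
df["usage_per_tenure_month"] = df["monthly_usage"] / df["tenure_months"].clip(lower=1)

features = ["monthly_usage", "tenure_months", "usage_per_tenure_month"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["bought_new_product"], test_size=0.2, random_state=42)

# Modeling: fit a candidate algorithm and check its accuracy on held-out data
model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

# Persist the refined model so it can be deployed as a service
joblib.dump(model, "product_propensity_model.joblib")

The final line persists the trained model artifact, which sets up the deployment discussion that follows.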

Challenges with the existing approach

The challenges with the above approach are:

  1. Business Scalability – Predictive analytics as highlighted above resembles a typical line of business project or initiative. The benefits of the learning from localized application initiatives are largely lost to the larger organization if you don’t allow multiple applications and business initiatives to access the models built.
  2. Lack of Data Richness – The models created by individual teams are not always enriched by cross organizational data constantly being generated by different business applications. In addition to that, the vast majority of industrial applications do not leverage all possible kinds of unstructured data & 3rd party data in their business applications. Enabling the models to be exposed to a range of data (both internal and external) can only enrich the insights generated.
  3. Cross Application Applicability – This challenge deals with how business intelligence insights from disparate applications (which leverage different models) can be used to enhance business areas they weren’t originally created for. This could allow for customer-centered insights in real-time. For example, consider a customer sales application and a call center application. Can cross application insights be used to understand that customers are calling into the call center because it has been hard to use the website to order products?
  4. Data Monetization  – What is critical in the ability to create new commercial business models is agile analytics around existing and new data sources. If it follows that enterprise businesses are being increasingly built around data assets, then it must naturally follow that data as a commodity can be traded or re-imagined to create revenue streams off of it. As an example, pioneering payment providers now offer retailers analytical services to help them understand which products perform best and how to improve the micro-targeting of customers. Thus, data is the critical prong of any digital initiative. This has led to efforts to monetize on data by creating platforms that support ecosystems of capabilities. To vastly oversimplify this discussion, the ability to monetize data needs two prongs – to centralize it in the first place and then to perform strong predictive modeling at large scale where systems need to constantly learn and optimize their interactions, responsiveness & services based on client needs & preferences. Thus, centralizing models offer more benefits than the typical enterprise can imagine.

Enter Model As A Service…

MaaS takes in business variables (often hundreds or thousands of inputs) and provides as output model results upon which business decisions can be predicated, as well as visualizations that augment and support business decision support systems. As depicted in the above illustration, once different predictive models are built, tested and validated, they are ready to be used in real world production deployments. MaaS is essentially a way of deploying these advanced models as a part of software applications, where they are offered as a software subscription.

MaaS also enables a cleaner separation of the application development process and the Data Science workflow.
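
To make “models offered as a software subscription” concrete, here is a deliberately minimal sketch that wraps the hypothetical propensity model from the previous section behind a REST endpoint using Flask. A production MaaS deployment would add authentication, input validation, versioned endpoints, monitoring and autoscaling on top of this.

import joblib
from flask import Flask, jsonify, request

app = Flask(__name__)

# Model artifact produced by the Data Science workflow (hypothetical file name)
model = joblib.load("product_propensity_model.joblib")
FEATURES = ["monthly_usage", "tenure_months", "usage_per_tenure_month"]

@app.route("/v1/propensity", methods=["POST"])
def score():
    payload = request.get_json(force=True)
    row = [[payload[name] for name in FEATURES]]   # order inputs as the model expects
    probability = model.predict_proba(row)[0][1]   # probability of the positive class
    return jsonify({"model_version": "v1", "propensity": float(probability)})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)

Any application in the organization – a CRM screen, a campaign engine, a call center tool – can now POST the required features over HTTP and receive a score, without ever embedding the model itself.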

Business Benefits from a MaaS approach

  1. Exposing models to different lines of business increases their usefulness and opens them up to feedback that helps increase their accuracy.
  2. MaaS opens the models to any application that wants to take advantage of them. This encourages Data Scientists to work with a much broader set of business teams than they normally would have access to.
  3. The provision of dashboards and business intelligence across the organization becomes much easier than with a siloed approach.
  4. MaaS as an approach fundamentally encourages an agile approach to managing data assets and also to rationalizing them. For any MaaS initiative to succeed, timely access needs to be provided to potentially hundreds of data sources in an organization. MaaS encourages a move to viewing data as a reusable asset across the organization.

Technical advantages of the MaaS approach

  • Separation of concerns: software & data feeds maintained by IT, models maintained by Data Scientists.
  • Versioning of models can be separated from versioning of the system(s) using the models.
  • The same models can be utilized by multiple software packages for consistency.
  • Consistent handling of data sources: e.g. which “master” source provides what types of data for all the models, so that a customer looks the same regardless of the model acting on the data for insights.
  • A single point for putting a “watch” on the performance of a model.
  • Controlled usage of models.
  • MaaS ensures that the analytic process can be automated from a deployment standpoint.

Conclusion

MaaS can enable organizations to move their analytic practices and capabilities to the next level. It enables the best of both worlds – the ability to centralize the data science capabilities across an organization while keeping customer data securely inside the organization. Done right, it can enable the democratization of data science insights across a large enterprise.

Why Linux Containers and Docker are the Runtime for the Software Defined Data Center (SDDC)..(4/7)

The third and previous blog in this seven part series (@ http://www.vamsitalkstech.com/?p=4659) discussed Apache Mesos, a project that aims to abstract away system resources – CPU, memory, network and disk – to provide consuming digital applications with a giant cluster from which they can utilize capacity, a key requirement of the Software Defined Datacenter (SDDC). In this fourth blog, we will discuss another important ecosystem technology & project – Linux Containers and Docker – which forms the foundational runtime component in the SDDC. The next blog will discuss Kubernetes – Google’s container orchestration platform.

Much like shipping goods in Containers over Oceans, Linux Containers offer a portable, lightweight & convenient way to ship business applications. (Image Credit – WallPapers 13)

Executive Summary…

We can agree that the Digital application is inherently a distributed application. Such applications have historically been extremely hard to develop, set up and manage across a large fleet of data center servers that are a mix of platforms and technologies. Thus it is no surprise that one of the most disruptive developments in the last five years has been the innovation in the Linux container space. Containers now enable the running of distributed applications at scale.

Due to business reasons, Digital applications demand constant updates, changes and incremental revisions in response to changing customer needs. The Software Defined Datacenter (SDDC) thus needs a runtime paradigm that enables not just efficient hardware usage but also supports standardized application environments that are portable, simplified and consistent across hybrid clouds and hypervisors. Containers fill this need and are thus emerging as the natural unit of deployment across the SDDC. Much has been written on the topic of Docker and Linux Container technology. My goal for this blog post is to distill the key insights in the container ecosystem.

The Technologies of Linux Containers & Docker

Unlike Virtual Machines, Container Engines such as Docker share a common OS (Image Credit – MSFT Azure)

Linux Containers are similar to and yet different from virtual machines. They are alike in the sense that each Container shares system resources on the underlying hardware platform – CPU, RAM, and Network – as VMs do. However, while each VM maintains its own separate copy of the Operating System (OS), containers share the same OS kernel while keeping themselves separate from other containers running on the same OS. How do they do that?

Though the terms ‘Docker’ and ‘Container’ have become almost synonymous, it needs to be noted that Docker is a company focused on developing technology enablement around containers in areas such as orchestration, networking, and management. Docker began as an open source project (now renamed to Moby [1]) that provided capabilities such as a standard description of container formats, and utilities for application packaging, deployment & lifecycle management of applications inside Linux Containers. It provides the Docker CLI, a command line tool for the lifecycle management of image-based containers.

Prior to the explosion of interest in Linux containers & the founding of Docker, traditional Linux distributions (with a minimum kernel level of 3.8) supported two foundational paradigms – control groups (cgroups) and kernel namespaces. Linux containers use both these features to achieve their goal of isolation and portability. Cgroups enable the host to limit the resources each container process can use from a CPU, memory, block I/O and network standpoint. This ensures that containers running on a host cannot starve others of resources, thus avoiding the “Noisy Neighbor” problem that bedeviled a lot of cloud deployments.

Kernel namespaces provide another kind of isolation, for process interactions within the OS. Containers can only view and modify resources in their own namespaces. This provides a security mechanism whereby other containers and processes on the host cannot launch attacks on a given application running in a tenant container, or on the host itself. The combination of these two technologies ensures that multiple applications running within their individual containers can share CPU and memory without needing the overhead of virtualization. Docker also grants each container its own networking implementation, ensuring that resources such as sockets and interfaces can also be protected.
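
The snippet below shows how these kernel features surface to an operator. It is a sketch that assumes the Docker SDK for Python (the docker package) is installed alongside a local Docker daemon; the memory and CPU arguments passed to the engine translate into cgroup limits on the container process, while the process itself runs in its own namespaces.

import docker

client = docker.from_env()

# Run a short-lived container with explicit resource limits; the engine
# enforces these limits on the container process via cgroups
container = client.containers.run(
    "python:3.9-slim",
    command=["python", "-c", "print('hello from an isolated namespace')"],
    mem_limit="256m",        # cgroup memory limit - avoids the noisy neighbor problem
    nano_cpus=500_000_000,   # roughly half a CPU core
    detach=True,
)

container.wait()                   # block until the process inside exits
print(container.logs().decode())   # output produced inside the container
container.remove()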

Companies including Red Hat, IBM, Google, Cisco, VMware, and CoreOS have greatly aided with the development of and accessibility of containers in their platforms and products.

Layered Filesystems..

Various Image Layers in Docker. Each layer in the file system is mounted on the previous. The topmost is the Writable Container. (Image Credit – Docker)

We discussed how Container Images are immutable. This is the key advantage of using container technology such as Docker & is made possible by the notion of a Union filesystem. What are Union filesystems and how do they enforce immutability? Much like a Virtual Machine image, Containers also run from an image, which is typically a snapshot of a filesystem, but container images tend to be much smaller than VM images since the Container runs on the host kernel.

Union filesystems are best described as a layered architecture – each layer is created independently and then added atop the previous layer. An example of such a layered stack would be a Linux kernel – then an OS – then a database like Oracle – then Tomcat – and a web application on top of it. The top layer is always the Writable layer. The real advantage of using a union filesystem is that using these images becomes super efficient from a storage and execution standpoint. Union filesystems also help in sharing portions of the OS across containers. Simply put, an image contains everything an application needs – from its dependencies to external libraries. When an Image is run, it is called a Container. In the case of Docker, it uses a layered copy-on-write filesystem called AUFS (Another Union Filesystem).

Containers and Developers..

Containers are possibly the first infrastructure software category created with developers in mind. The prominence of Linux Containers and Docker has coincided with the onset of agile development practices under the DevOps umbrella – CI/CD etc. Containers are an excellent choice to create agile delivery pipelines and enable continuous deployment. At their core, Containers enable the creation of multiple self-contained execution environments over the same operating system.

Developers are naturally excited about Linux Containers for five specific reasons –

  1. Containers allow for image consistency across OS environments. This is a huge help in accelerating the development process from development to debugging to production. Developers can just focus on building their applications (in dev environments that match the test and prod) and packaging them in containers. This just takes a lot of the inefficiency around environment dissimilarities out of the equation.
  2. Containers are treated as a standard Linux process by the kernel & thus are orders of magnitude quicker to start up than VMs. This means that developers can start their applications in a matter of seconds as long as they run them in a container.
  3. Containers also provide development organizations the ability to standardize application development workflows and update processes. This solves the scalability problem that digital applications have caused large organizations.
  4. Digital applications are leading the move to adopt microservices. Microservices offer a way to build applications as a collection of discrete services, as opposed to a monolithic architecture. By their very nature, microservices can be built and managed by different teams. Containerization affords a lightweight way of building and deploying microservices.
  5. Containers offer a portable way of delivering applications (across Operating Systems) as well as provide horizontal scalability.

Digital Application development using Containers..

Digital Application Development and Deployment Workflow using Containers.

There are a few key runtime components involved in operationalizing a small to medium to large scale container infrastructure as the above illustration depicts.

  1. Firstly, developers create container images. These images describe an application and its dependencies. An easy way to conceptualize an image is to think of it as a basic deployment template. Images are also immutable in that they are read only; any changes happen in the topmost layer, which is writable. Modifying an image means creating a new one. Images thus have a parent-child relationship. Developers create images by building their applications in their developer environments, performing unit tests and then pushing to a repository. Once the container is built with the necessary dependencies, these tools run a battery of tests to validate business functionality. A large part of this process is usually best automated using CI/CD tools like Jenkins, CruiseControl or Buildbot etc. (A compressed sketch of steps 1–3 follows this list.)
  2. The built images are then made available in a Container Registry. This is either maintained internally or sourced from a trusted external source. As the name suggests, Registries maintain a catalog of container images of frequently used software – e.g. Custom applications and other software packages such as WordPress, Relational databases, Web Servers, Big Data technologies and Application Servers etc
  3. The next step is to create and deploy (runtime) containers from these images on a set of servers. Once images are released as a result of application development, sys admins work on the provisioning of the servers to run these images. Once a Container engine is installed on the server, images are loaded onto it and take the runtime shape of containers. Getting these images onto these servers follows either a push or a pull mechanism.
  4. Scheduling of containers on servers is also a process that is usually done by Sys Admins. This involves running containers of certain kinds on servers that match up to certain CPU, I/O and Network capacity requirements.
  5. To create complex real world deployments, not only do the servers and networking have to be created but these containers also have to be interconnected (e.g. a web server container to an application server) using Discovery mechanisms. These containers then also need to connect to a host of enterprise services. Customer traffic is then routed to the clustered containers running on these servers. Operations teams monitor the logs and performance of these containers and the microservices running on them.
  6. The process repeats from step #1 above.
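
A compressed sketch of steps 1–3, again using the Docker SDK for Python (docker-py 3.x assumed), is shown below. The image tag, registry name and build path are placeholders; in practice a CI/CD tool such as Jenkins drives the build/test/push stages and an orchestrator handles scheduling and discovery.

import docker

client = docker.from_env()

REPO = "registry.example.com/team/my-web-app"   # hypothetical internal registry

# Step 1: build an immutable image from the application's Dockerfile
image, build_logs = client.images.build(path="./my-web-app", tag=f"{REPO}:1.0")

# Step 2: push the image to the container registry
client.images.push(REPO, tag="1.0")

# Step 3: pull the image on a target host and run it as a container
client.images.pull(REPO, tag="1.0")
container = client.containers.run(
    f"{REPO}:1.0",
    detach=True,
    ports={"8080/tcp": 80},   # expose the app so traffic can be routed to it
    name="my-web-app-1",
)
print(container.status)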

Industry Adoption of Containers.

In a few years, containers will deliver the bulk of compute workloads across public cloud providers such as Amazon AWS, Google Compute Engine and Microsoft Azure. Given that the VM options on these clouds can run multiple containers which can scale on demand, the industry will begin to gravitate toward higher utilization density. The Software Defined Datacenter has already begun incorporating a hybrid model consisting of applications running on both Linux Containers and Virtual Machines in a complementary fashion.

Customers also have choices of traditional enterprise operating systems such as Red Hat Enterprise Linux or Microsoft Windows or can also run containers on OS’s developed for the purpose of hosting containers at hyper scale. These OS’s just provide tools to manage containers and nothing else. Examples include Red Hat Atomic Platform and CoreOS. Moving up the stack, pioneers such as Google and Red Hat have added core support for containers in projects such as OpenStack, Kubernetes, Mesos, OpenShift & CloudFoundry by helping with networking and persistent storage. Kubernetes (which we will cover in the next post) also handles provisioning on multiple public cloud platforms. Config Mgmt platforms such as Ansible, Chef and Puppet now support containerized deployments.

Technical Considerations for Container Adoption

Some key considerations that industry players are tackling from the standpoint of running containers at scale –

  1. Container Orchestration –  Organize groups of containers into composable applications, scheduling them on servers that match their resource requirements, placement of containers based on network topology etc.
  2. Container Networking – Containers follow a pluggable model and the network is no different. Key considerations – an enterprise network connectivity stack is needed to not only provide the interconnect between different containers but also to integrate them with existing Layer 2/3 networks. Additionally, network isolation needs to be provided for microservices running on these containers using either a dedicated IP address for each or an overlay network.
  3. Management and Monitoring – Lifecycle processes around management and monitoring encompass a range of questions – application patching with low downtime, graceful failure handling in cloud native applications, container scale up & scale down based on traffic patterns etc.

Containers and your Enterprise…

So what is the best way to adopt containers across a large enterprise?

  • Develop your container strategy in the context of the Nexus of Forces (i.e., information, mobile, social and cloud) initiatives in your organization — Containers are at the junction of these technologies.
  • Institute an organizational process to examine the business value of any initiative to adopt Containers. Understand what tools and platforms to adopt that will abstract away the complexities of using containers.
  • Understand the skills required to leverage containers. Containers represent a new way of working for both developers and SysOps. Dependency management moves to the developers, but they realize tremendous benefits in adopting containers for high-velocity Digital applications.
  • Identify, measure and benchmark key success metrics that capture the ROI of the overall container investments.

Conclusion..

To sum up, the Linux (and Windows) container space is exploding both from a mindshare as well as an adoption standpoint. What is hugely encouraging is that a host of next generation platform technologies (ranging from IaaS to PaaS) are not just choosing to support containers as their basic runtime unit but are also focusing on becoming the de facto solution supporting a host of container ecosystem use cases – provisioning, orchestration, management, CI/CD et al. The next two blogs will respectively discuss how Google Kubernetes and Red Hat OpenShift overcome these challenges and abstract away much of the complexity around container deployments.

The next blog post in this series will discuss Google Kubernetes, the dominant project in the container orchestration space.

References

[1] Introducing Moby Project –  https://blog.docker.com/2017/04/introducing-the-moby-project/

The Deployment Architecture of an Enterprise API Management Platform..

We discussed the emergence of Application Programming Interfaces (APIs) as providing a key business capability in Digital Platforms @ http://www.vamsitalkstech.com/?p=3834. The next post then discussed the foundational technology, integration & governance capabilities that any Enterprise API Platform must support @ http://www.vamsitalkstech.com/?p=5102.  This final post in the API series will discuss a deployment model for an API Management Platform.

Background..

The first two posts in this series discussed the business background to API Management and the need for an Enterprise API Strategy. While details will vary across vendor platforms, the intention of this post is to discuss the key runtime components of an API management platform, the overall developer workflow in creating APIs, and the runtime workflow that enables client applications to access them.

Architectural Components of an API Management Platform..

The important runtime components of an API management platform are depicted in the below illustration. Note that we have abstracted out network components (firewalls, reverse proxies, VLANs, switches etc) as well as the internal details of application architecture which would normally be impacted by an API Platform.

The major components of an API Management Platform and the request flow across the architecture.

Let us cover the core components of the above:

  1. API Gateway -The API Gateway has emerged as the dominant deployment artifact in API Architectures. As the name suggests, Gateways are based on a facade design pattern. The Gateway (or typically a set of highly available Gateways) acts as a proxy to traffic between client applications (used by customers, partners and employees) and back end services (ranging from mainframes to microservices). The Gateway is essentially an appliance or a software process that abstracts all API traffic into an organization and exposes business capabilities typically via a REST interface. Clients are exposed to different views of the same API – coarse grained or granular – depending on the kind of client application (thick/thin) and access control permissions.  Gateways include protocol translation and request routing as their core functionality. It is also not uncommon to deploy multiple Gateways – in an internal and external fashion – depending on business requirements in terms of partner interactions etc. Gateways also include functionality such as caching requests for performance, load balancing, authentication, serving static content etc. The API Gateway can thus be managed using a set of policy controls. Performance characteristics such as throughput, scalability, caching, load balancing and failover are managed using a cluster of API Gateways.  The introduction of an API Gateway also ensures that application design is impacted going forward. API Gateways can be implemented in many forms – as a software platform or as an appliance. Public cloud providers have also begun offering mature API Gateways that integrate well with a range of backend services that they provide both from an IaaS and a PaaS standpoint. For instance, Amazon’s API Gateway integrates natively with AWS Lambda and EC2 Container Service for microservice deployments on AWS. (A minimal gateway sketch follows this list.)
  2. Security -Though it is not a standalone runtime artifact, Security tends to be called out as one of the most important logical requirements of an API Management platform. APIs have to follow the same access control mechanisms and security constraints for different user roles etc. as their underlying datasources. This is key as backend applications and organizational data need to be protected from a variety of threats – denial of service attacks, malware, access control violations etc. Accordingly, policy based protection using API keys, JSON/XML signature scanning & threat protection, encryption for data in motion and at rest, OAuth support etc. all need to be provided as standard features.
  3. Developer portal -A Developer portal is the entry point for developers and can also serve as a developer onboarding tool. Thus, typically it is a web based portal integrated with the API Gateway. Developers use the portal to study API specs, download SDKs for different programming languages, register their APIs and to monitor their API performance. It also provides a visual interface to help developers build/test their APIs and also provides support for a high degree of automation using a continuous delivery model. For internal developers, the ability to provide self service consumption of API developer stacks (Node.js/ JavaScript frameworks/Java runtimes/ PaaS integration etc) is a highly desirable capability.
  4. Management and Monitoring -Ensuring that the exposed APIs are maintaining their QOS (Quality of Service), as well as helping admins monitor their quota of resource consumption, is key from an Operations standpoint. Further, the M&M functionality should also aid operators in resolving complex systems issues and ensuring a high degree of availability during upgrades etc.
  5. Billing and Chargeback -Here we refer to the ability to tie in the usage of APIs to back office applications that can charge users based on their metered usage of the backend applications. This is typically provided through logging and auditing capability.
  6. Governance -From a Governance standpoint, the platform needs to provide the ability to track APIs across their lifecycle, a handy catalog of available APIs, an ability to audit their usage and the underlying assets they expose, and the ability for the business to set policies on their usage etc.
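
To make the facade pattern in point 1 concrete, here is a deliberately minimal gateway sketch in Python/Flask: it checks an API key, applies a trivial policy and proxies the request to a backend service, returning a coarse-grained view of the payload. The key store, backend URL and response shape are all hypothetical; commercial gateways add protocol translation, caching, quotas, load balancing and much more.

import requests
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

# Hypothetical API key store and backend service location
API_KEYS = {"partner-123": {"plan": "gold"}}
BACKEND_URL = "http://accounts-service.internal:8080"

@app.route("/api/v1/accounts/<account_id>", methods=["GET"])
def get_account(account_id):
    # Policy enforcement: authenticate the client application via its API key
    api_key = request.headers.get("X-API-Key")
    if api_key not in API_KEYS:
        abort(401)

    # Facade: translate the public API call into a backend service call
    backend_resp = requests.get(f"{BACKEND_URL}/accounts/{account_id}", timeout=5)
    if backend_resp.status_code != 200:
        abort(backend_resp.status_code)

    # Expose only a coarse-grained view of the backend payload to the client
    data = backend_resp.json()
    return jsonify({"id": data.get("id"), "status": data.get("status")})

if __name__ == "__main__":
    app.run(port=8000)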

API Design Process..

Most API Platforms provide a developer toolkit with varying degrees of integration with a runtime platform. Handy SDKs for iOS, Android and Javascript development are provided.

An internal developer uses the developer toolkit (e.g. Eclipse with an offline plugin) and/or an API Designer tool included with a vendor platform to create the API based on organizational policies. An extensive CLI (Command Line Interface) is also provided to perform all functions which can be done using the GUI. These include local unit & system test capabilities and an ability to publish the tested APIs to a repository from where the runtime can access, deploy and update the APIs.

From a data standpoint, multiple databases including RDBMS, NoSQL are supported for data access. During the creation of the API, depending on whether the developer already has an existing data model in mind, the business logic is mapped closely with the data schema, or, one can also work top down to create the backend once the API interface has been defined using a model driven approach. These also include settings for security permissions with support for OAuth and any other third party authentication dependencies.

Once defined and tested, the API is published onto the runtime. During this process access control privileges, access policies and the endpoint itself are defined. The API is then ready for external consumption and discovery.

Runtime Flow Across the Architecture..

In the simplest case – once the API has been deployed and tested, it is made available for public discovery and consumption. Client Applications then begin to leverage the API and this can be done in a variety of ways. For example – user interactions on mobile applications, webpages and B2B services trigger calls to the API Gateway. The Gateway performs a range of functions to process the request – from security authorization to load-balancing – before accessing policies set up for that particular API. The Gateway then invokes the API by calling the backend system, typically via message oriented middleware such as an ESB or a Message Broker. Once the backend responds with the appropriate payload, the data is sent to the requesting application. Systems and Administration teams can view detailed operational metrics and logs to monitor API performance.
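
From the consuming application’s side, this flow reduces to a simple authenticated HTTP call, as the snippet below illustrates. The endpoint and API key are placeholders that, in practice, a developer would obtain from the developer portal.

import requests

# Placeholder endpoint and key, as issued via the developer portal
GATEWAY_URL = "https://api.example-bank.com/api/v1/accounts/12345"
headers = {"X-API-Key": "partner-123", "Accept": "application/json"}

response = requests.get(GATEWAY_URL, headers=headers, timeout=5)
response.raise_for_status()   # surface 4xx/5xx errors returned by the gateway
print(response.json())        # coarse-grained account view returned by the gateway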

A Note on Security..

It should come as no surprise that the security aspect of an API Management Platform is one of the most critical aspects of the implementation. While API Security is a good subject for a followup post and too exhaustive to be covered in a short blurb – several standards such as OAuth2, OpenID Connect, JSON Security & Policy languages are all topics that need to be explored by both organizational developers and administrators.  Extensive flow mapping and scenario testing are mandated here. Also, endpoint security from a client application standpoint is key. Your Servers, Desktops, Supported Mobile devices need to be updated and secured with the latest antivirus & other standard IT Security/access control policies.

Conclusion..

In this post, we tried to highlight the major components of an API Management Platform from a technology standpoint. While there are a range of commercial & open source platforms, it is important to evaluate them from a feature standpoint as well as from an ecosystem capability perspective as developers begin implementing microservices based Digital Architectures.

The Why and How of an Enterprise API Strategy..

We discussed the emergence of Application Programming Interfaces (APIs) as a key business capability in Digital Platforms @ http://www.vamsitalkstech.com/?p=3834. We also saw how APIs can serve as a business interaction driven integration layer. APIs provide a layer that connects backend business services to Digital applications across multiple channels. In this second post we will discuss the foundational business, technology, integration & governance capabilities that any Enterprise API Platform must support. The next and final post will discuss an API centric deployment architecture for a medium to large enterprise.

What is your API vision?

The first post in this series (http://www.vamsitalkstech.com/?p=3834) covered the need for industry players to treat APIs as a way of reinventing many aspects of their business and their consumer relationships. From a high level standpoint, this can be done in one of three ways –

  1. Inculcating Digital Innovation both inside out – extending the boundaries of a large global or national enterprise – and outside in, by enabling partners to build innovative applications.
  2. Exposing Data Assets and combining them with advanced analytics to enable customers to consume enterprise business services across the globe.
  3. Taking a Platform first approach to building new applications and enabling API nativity in such greenfield development.

Not every Borders Bookstores-like company can turn into an Amazon, but the ability to create new lines of revenue implies closer integration with business partners. The creation of APIs enables this integration as we saw in the previous post, but it is really the treatment of APIs as an enterprise enabler that ensures the scalability of innovation. Hence the need for an enterprise API strategy, which senior executives need to be able to devise both from a tactical standpoint and with the strategic vision in mind.

As with all things in digital technology, API Management is founded on strong business use cases. So let us begin by examining a smattering of these.

Industrial Use Cases for API Management Platforms..

Let us first discuss the major business use cases for APIs in a business enterprise.

  1. The simplest use case for any API implementation is to provide Information Retrieval. This ranges from a Free API (which typically accesses non-private information) to a Paid API (which securely accesses business sensitive data stored in Book of Record Transaction (BORT) systems). E.g. Patient Medical Records, Supply Chain data, Bank Customer Account Information, Insurance Policies etc.
  2. Other complementary use cases include exposing functionality in internal applications (that typically perform Document & File Management) across a range of business scenarios – typically via a Private API.
  3. Across Partner & Supplier Applications, support the invocation of business logic that typically performs a business process, using an internal or trusted partner API.
  4. Support for Mobile applications and web front ends for applications ranging from field employee enablement to online payments etc using consumer facing public APIs
  5. The most complex use case is support for Data Monetization using advanced analytics. The last post discussed how APIs need to help monetize business assets; this implies an ability to provide complex analytic support for select APIs that extend brands by connecting to a range of backend sources.

The technology and platform requirements for an API strategy will cascade from these use cases – all of which should fairly resonate across several industry verticals.

Business Requirements for API Strategy..

The goal of an enterprise API strategy should be to support the creation of a centralized API platform which appeals to various audiences – Customers, Internal & External Developers, Lines of business and Operations teams.

There are ten distinct business challenges that an enterprise API strategy needs to account for.

  1. First and foremost, an API strategy needs to support the ability of existing business systems to expose their business assets for consumption in Digital scenarios. This implies not just supporting a cloud native/micro-service model of application development but also a range of legacy systems such as RDBMS’s, ERP, CRM systems etc. The ability to front these systems with RESTful APIs, at a minimum, will ensure that these can participate in a digital business process without a lot of upfront rewriting. Adapters are needed that provide deep integration with these sources and allow for efficient API performance using techniques such as query optimization, pagination, support for business policies etc. The API platform also needs to support easy ways of composing APIs and orchestrating them across backend applications which are not always cloud native. The capability of API Composition, where backend APIs are orchestrated to perform a higher business function, is highly desirable.
  2. The API Management Platform needs to support a High Performance Architecture capable of supporting high volumes of client applications – at a high end potentially millions of API calls per minute.
  3. The Platform needs to provide five nines of Infrastructure and Application reliability. Lost API messages mean missed revenue – it is as simple as that. Thus, APIs need to be highly available and support a high degree of redundancy.
  4. APIs increase the attack surface of an enterprise. Accordingly, the strategy needs to account for the provision of bulletproof Security against a range of threat vectors – malicious API client applications, Malware, Denial of Service (DOS) attacks etc. Also ensuring strong Identity Management capabilities for client applications across complex backend services
  5. The ability to Monitor the APIs for performance, throttling etc. to guarantee SLAs (Service Level Agreements). It is also important to provide the ability to generate granular business & IT reporting on API usage across a range of metrics.
  6. As discussed in the last blog @ http://www.vamsitalkstech.com/?p=3834, an API ecosystem provides support for multiple players – customers, partners, employees etc. Accordingly, the platform needs to support multiple versions of underlying APIs that expose different views of business assets. This is key so that consumers can obtain value around the capabilities that are aligned with their interests.
  7. An ability to support Data Monetization via richer analytics than has been possible before, providing a great degree of context. This ability to reason around context is what makes it possible to design new business models which cannot currently be imagined due to the lack of agility in the data and analytics space. This integration helps these systems leverage the digital intelligence and insights across (potentially) millions of devices across complex areas of operation.
  8. Application developers need access to APIs with a view to including them in their business applications. Accordingly, an API Management strategy should provide strong capabilities for Developers via a Portal. The Portal helps them right from on-boarding, with help around exploring organizational backend capabilities, API documentation, Quickstart Guides, online videos, API testing capabilities, API version history, and search & discovery tools. It should be noted that multiple developer portal views must be supported – both for internal and external communities of developers. Internal developers will want to do a range of tasks that support lines of business, business automation tasks, and workforce related IT access applications etc. They will create, package and upload APIs to the portal. External API developers range from Partners to Customer communities. They typically access these APIs, subscribe to them and run a range of dev-test tasks using the Portal.
  9. Supporting Governance across potentially hundreds of API definitions. The topic of Governance is the most critical area and tools need to help right from the definition of business case to assigning actors (who may already be defined in business directories) to managing deployment schedules to change management etc. Business policies need to be supported to enable business and IT stakeholders to retire APIs.
  10. Finally, an API strategy cannot be divorced from the Industry Vertical that the enterprise operates in. This implies that starter set APIs, templates, SDKs etc be provided as modules for verticals like Financial Services, Insurance, Telecom, Healthcare, Manufacturing and Connected Cars etc.

Conclusion..

APIs are a product line and should be treated as such, which implies an ability to manage them across their lifecycle. Developers create API client applications, the corporation makes these API definitions available for communities of developers to consume in their applications. Sys admins secure, deploy & manage these APIs.

The end goal of an API strategy is to ensure that the process of creating, securing, orchestrating & monitoring these API interfaces is intuitive, consistent and scalable across a large organization. We will round off this three part series on APIs by defining a technical deployment architecture in the next & final post.