Why Data Silos Are Your Biggest Source of Technical Debt..

“Any enterprise CEO really ought to be able to ask a question that involves connecting data across the organization, be able to run a company effectively, and especially to be able to respond to unexpected events. Most organizations are missing this ability to connect all the data together.” – Tim Berners-Lee (English computer scientist, best known as the inventor of the World Wide Web)

Image Credit – Device42

We have discussed vertical industry business challenges across sectors like Banking, Insurance, Retail and Manufacturing in some level of detail over the last two years. Though enterprise business models vary by industry, there is a common Digital theme raging across all industries in 2017. Every industry is witnessing an upswing in the number of younger and digitally aware customers. Estimates of this influential population run as high as 40% in areas such as Banking and Telecommunications. They represent a tremendous source of revenue but can also defect just as easily if the services offered aren’t compelling or easy to use – as is especially evident in the Banking industry.

These customers are Digital Natives, i.e. they are highly comfortable with technology and use services such as Google, Facebook, Uber, Netflix and Amazon almost hourly in their daily lives. As a consequence, they expect a similarly seamless & contextual experience while engaging with Banks, Telcos, Retailers and Insurance companies over (primarily) digital channels. Enterprises thus face a twofold challenge – to store all this data as well as to harness it for real-time insights in a way that is connected with internal marketing & sales.

As many studies have shown, companies that constantly harness data about their customers and perform speedy advanced analytics outshine their competition. Does that seem a bombastic statement? Not when you consider that almost half of all online dollars spent in the United States in 2016 were spent on Amazon, and almost all digital advertising revenue growth in 2016 was accounted for by two biggies – Google and Facebook. [1]

According to The Economist, the world’s most valuable commodity is no longer Oil, but Data. The few large companies depicted in the picture are now virtual monopolies [1] (Image Credit – David Parkins)

Let us now return to the average Enterprise. The vast majority of enterprise applications (averaging 1,000+ applications at large enterprises, according to research firm NetSkope) generally lag the innovation cycle. This is because they’re created using archaic technology platforms by teams that conform to rigid development practices. The Fab Four (Facebook, Amazon, Google, Netflix) and others have shown that Enterprise Architecture is a business differentiator, but the Fortune 500 have not gotten that message as yet. Hence they largely predicate their software development on vendor-provided technology instead of open approaches. This anti-pattern is further exacerbated by legacy organizational structures, which ultimately leads to these applications holding a very parochial view of customer data. These applications can typically be classified into one of these buckets – ERP, Billing Systems, Payment Processors, Core Banking Systems, Service Management Systems, General Ledger, Accounting Systems, CRM, Corporate Email, Salesforce, Customer On-boarding and so on.

These enterprise applications are then typically managed by disparate IT groups scattered across the globe. They often serve different stakeholders who have broadly overlapping interests but conflicting organizational priorities. These applications then produce and store data in silos – localized by geography, department, line of business, or channel.

Organizational barriers only serve to impede data sharing further – for reasons ranging from competitive dynamics around who owns the customer relationship, to regulatory constraints, to internal politics. You get the idea; it is all a giant mishmash.

Before we get any further, we need to define that dreaded word – Silo.

What Is a Silo?

A mind-set present in some companies when certain departments or sectors do not wish to share information with others in the same company. This type of mentality will reduce the efficiency of the overall operation, reduce morale, and may contribute to the demise of a productive company culture. (Source – Business Dictionary [2])

Data is the Core Asset in Every Industry Vertical, but most of it is siloed across Departments, Lines of Business and Geographies..

Let us be clear, most Industries do not suffer from a shortage of data assets. Consider a few of the major industry verticals and a smattering of the kinds of data that players in these areas commonly possess – 

Data In Banking– 

  • Customer Account data e.g. Names, Demographics, Linked Accounts etc
  • Core Banking Data going back decades
  • Transaction Data which captures the low level details of every transaction (e.g debit, credit, transfer, credit card usage etc)
  • Wire & Payment Data
  • Trade & Position Data
  • General Ledger Data e.g AP (accounts payable), AR (accounts receivable), cash management & purchasing information etc.
  • Data from other systems supporting banking reporting functions.

Data In Healthcare– 

  • Structured Clinical data e.g. Patient ADT information
  • Free hand notes
  • Patient Insurance information
  • Device Telemetry 
  • Medication data
  • Patient Trial Data
  • Medical Images – e.g. CAT Scans, MRIs, CT images etc

Data In Manufacturing– 

  • Supply chain data
  • Demand data
  • Pricing data
  • Operational data from the shop floor 
  • Sensor & telemetry data 
  • Sales campaign data

The typical flow of data in an enterprise follows a familiar path –

  1. Data is captured in large quantities as a result of business operations (customer orders, e-commerce transactions, supply chain activities, partner integration, clinical notes et al). These feeds are captured using a combination of techniques – mostly ESBs (Enterprise Service Buses) and Message Brokers. (A minimal sketch of publishing such an event onto a broker follows this list.)
  2. The raw data streams then flow into respective application-owned silos where, over time, a great amount of data movement (via copying, replication and transformation operations – the dreaded ETL) occurs using proprietary vendor-developed systems. Vendors in this space have not only developed shrink-wrapped products that make them tens of billions of dollars annually but have also imposed massive human capital requirements on enterprises to program & maintain these data flows.
  3. Once all of the relevant data has been normalized, transformed and processed, it is copied over into business reporting systems where it is used to perform a range of functions – typically reporting for use cases such as Customer Analytics, Risk Reporting, Business Reporting and Operational improvements.
  4. Rinse and repeat..
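
To make step 1 concrete, here is a minimal sketch of publishing a single operational event onto a message broker from Python. It assumes a local Kafka broker and the kafka-python client; the topic name and event fields are hypothetical and only illustrate the capture pattern, not any specific vendor pipeline.

```python
# A minimal sketch: publish one raw business event onto a Kafka topic so that
# downstream systems (application silos or a data lake) can consume it.
# Assumes a local Kafka broker and the kafka-python package; all names are hypothetical.
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

order_event = {
    "order_id": "12345",
    "customer_id": "C-789",
    "amount": 129.99,
    "channel": "ecommerce",
}

# Send the raw event as-is; any transformation is deferred to the consumers.
producer.send("customer-orders", value=order_event)
producer.flush()
```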

Due to this old-school methodology of working with customer & operational data, most organizations have no real-time data processing capabilities in place, and they thus live in a largely reactive world. What that means is that their view of a given customer’s world is typically a week to ten days old.

Another factor to consider is that the data sources described above are what can be called structured or traditional data. However, organizations are now on-boarding large volumes of unstructured data, as captured in the blogpost linked below. Oftentimes, Business Analysts, Data Scientists and Data Architects can get access to external data faster than internal data.

Getting access to internal data typically means jumping through multiple hoops – which department is paying for the feeds, the format of the feeds, regulatory issues, cyber security policy approvals, SOX/PCI compliance et al. The list is long and impedes the ability of the business to get things done quickly.

Infographic: The Seven Types of Non Traditional Data that can drive Business Insights

Data and Technical Debt… 

Since Ward Cunningham coined the term ‘Technical Debt‘, it has typically been used in an IT, DevOps, containers and data center context. However, technology areas like DevOps, PaaS, Cloud Computing with IaaS, Application Middleware, Data centers etc. in and of themselves add no direct economic value to customers unless they are able to intelligently process Data. Data is the most important technology asset compared to other IT infrastructure considerations. You do not have to take my word for that. It so happens that The Economist just published an article discussing the fact that the likes of Google, Facebook, Amazon et al are now virtual data monopolies and that global corporations are far behind in the competitive race to own Data [1].

Thus, it is ironic that while the majority of traditional Fortune 500 companies are still stuck in silos, Silicon Valley companies are not just fast becoming the biggest owners of global data but are also monetizing it on the way to record profits. Alphabet (Google’s corporate parent), Amazon, Apple, Facebook and Microsoft are the five most valuable listed firms in the world. Case in point – their combined profits were around $25bn in the first quarter of 2017, and together they make up more than half the value of the NASDAQ composite index. [1]

The Five Business Challenges that Data Fragmentation causes (or) Death by Silo … 

How intelligently a company harnesses its data assets determines its overall competitive position. This truth is being evidenced in sectors like Banking and Retail, as we have seen in previous posts.

What is interesting is that in some countries concerned about the pace of technological innovation, national regulatory authorities are creating legislation to force slow-moving incumbent corporations to unlock their data assets. For example, in the European Union, as a result of regulatory mandates – PSD2 & the Open Bank Standard – a range of agile players across the value chain (e.g. FinTechs) will soon be able to obtain seamless access to a variety of retail bank customer data via standard & secure APIs.

Once obtained, the data can be reimagined by these companies in manifold ways to offer new products & services that the banks themselves cannot. A simple use case: personal financial management platforms (PFMs) that help consumers make better personal financial decisions, at the expense of the Banks that own the data. Indeed, FinTechs have generally been able to make more productive use of client data than banks have. They do this by providing clients with intuitive access to cross-asset data, tailoring algorithms based on behavioral characteristics, and providing clients with a more engaging and unified experience.

Why can’t the slow-moving established Banks do this? They suffer from a lack of data agility due to the silos that have been built up over years of operations and acquisitions. None of these are challenges for the FinTechs, which can build on a greenfield technology environment.

To recap, let us consider the five ways in which Data Fragmentation hurts enterprises – 

#1 Data Silos Cause Missed Top line Sales Growth  –

Data produced by disparate applications and stored in scattered silos makes it hard to build a Single View of a customer across channels, products and lines of business. This makes everything across the customer lifecycle a pain – from smooth on-boarding, to customer service, to marketing analytics. It impedes the ability to segment customers intelligently and to cross-sell & up-sell. This sheer inability to understand customer journeys (across different target personas) also leads to customer retention issues. When underlying data sources are fragmented, communication between business teams moves over to other internal mechanisms such as email, chat and phone calls. This is a recipe for delayed business decisions which are ultimately ineffective, as they depend more on intuition than on data.

#2 Data Silos are the Root Cause of Poor Customer Service  –

Across industries like Banking, Insurance, Telecom & Manufacturing, the ability to get a unified view of the customer & their journey is at the heart of the enterprise’s ability to understand customer preferences & needs. This is also crucial in promoting relevant offerings and in detecting customer dissatisfaction. Currently, most enterprises are woefully inadequate at putting together this comprehensive Single View of their Customers (SVC). Due to operational silos, each department possesses only a siloed & limited view of the customer relative to other departments (or channels). These views are typically inconsistent in and of themselves as they lack synchronization with other departments. The net result is that companies typically miss a large number of potential cross-sell and up-sell opportunities.

#3 – Data Silos produce Inaccurate Analytics 

First off, most analysts need to wait a long time to acquire the relevant data they need to test their hypotheses. And since the data they work on is of poor quality as a result of fragmentation, so are the analytics that operate on it.

Let us take an example from Banking. Mortgage lending, an already complex business process, has been made even more so by the data silos built around Core Banking, Loan Portfolio and Consumer Lending applications. Qualifying borrowers for mortgages needs to be based not just on the historical data used as part of the origination & underwriting process (credit reports, employment & income history etc.) but also on data that has not been mined hitherto (social media data, financial purchasing patterns etc.). It is a well-known fact that there are huge segments of the population (especially millennials) who are broadly eligible but under-banked, as they do not satisfy some of the classical business rules needed to obtain mortgage approvals. Each of the silos stores only partial customer data. Thus, Banks do not possess an accurate and holistic picture of a customer’s financial status and are unable to quickly qualify the customer for a mortgage at the best available rate. (A toy sketch of blending such non-traditional signals into a scoring model follows.)
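
As a deliberately tiny illustration of this point, the Python sketch below blends traditional underwriting features with one non-traditional signal in a toy scoring model using scikit-learn. The column names (including "monthly_card_spend"), the sample values and the model choice are all hypothetical assumptions; the real prerequisite is that this data is unified rather than scattered across silos.

```python
# Illustrative only: a toy mortgage pre-qualification scorer that blends traditional
# underwriting features with a hypothetical non-traditional purchasing signal.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

applicants = pd.DataFrame({
    "credit_score":       [620, 710, 580, 690, 740, 600],
    "annual_income":      [48000, 85000, 39000, 72000, 95000, 52000],
    "years_employed":     [1, 6, 2, 4, 8, 3],
    "monthly_card_spend": [900, 2100, 700, 1600, 2500, 1100],  # non-traditional signal
    "approved":           [0, 1, 0, 1, 1, 0],
})

X = applicants.drop(columns="approved")
y = applicants["approved"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, stratify=y, random_state=42)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Probability of approval for unseen applicants; a unified customer view is the prerequisite.
print(model.predict_proba(X_test)[:, 1])
```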

#4 – Data Silos hinder the creation of new Business Models  


The abundance of data created over the last decade is changing the nature of business. If enterprise businesses are increasingly being built around data assets, then it naturally follows that data as a commodity can be traded or re-imagined to create new revenue streams. As an example, pioneering payment providers now offer retailers analytical services to help them understand which products perform best and how to improve the micro-targeting of customers. Thus, data is the critical prong of any digital initiative. This has led to efforts to monetize data by creating platforms that support ecosystems of capabilities. To vastly oversimplify the discussion, the ability to monetize data needs two prongs – centralizing it in the first place, and then performing strong predictive modeling at large scale, where systems constantly learn and optimize their interactions, responsiveness & services based on client needs & preferences. Data Silos hurt this overall effort more than the typical enterprise can imagine.

#5 – Data Silos vastly magnify Cyber, Risk and Compliance challenges – 

Enterprises have to perform a range of back-office functions such as Risk Data Aggregation & Reporting, Anti Money Laundering Compliance and Cyber Security Monitoring.

Cybersecurity – The biggest threat to the Digital Economy..(1/4)

It must naturally follow that as more and more information assets are stored across the organization, it is a manifold headache to deal with securing each and every silo from a range of bad actors – extremely well funded and sophisticated adversaries ranging from criminals to cyber thieves to hacktivists. On the business compliance front, sectors like Banking & Insurance need to maintain large AML and Risk Data Aggregation programs – silos are the bane of both. Every industry needs fraud detection capabilities as well, which need access to unified data.

Conclusion

My intention for this post is clearly to raise more questions than provide answers. There is no question Digital Platforms are a massive business differentiator but they need to have access to an underlying store of high quality, curated, and unified data to perform their magic. Industry leaders need to begin treating high quality Data as the most important business asset they have & to work across the organization to rid it of Silos.

References..

[1]  The Economist – “The world’s most valuable resource is no longer oil, but data” – http://www.economist.com/news/leaders/21721656-data-economy-demands-new-approach-antitrust-rules-worlds-most-valuable-resource

[2] Definition of Silo Mentality – http://www.businessdictionary.com/definition/silo-mentality.html

Data Lakes power the future of Industrial Analytics..(1/4)

The first post in this four part series on Data lakes will focus on the business reasons to create one. The second post will delve deeper into the technology considerations & choices around data ingest & processing in the lake to satisfy myriad business requirements. The third will tackle the critical topic of metadata management, data cleanliness & governance. The fourth & final post in the series will focus on the business justification to build out a Big Data Center of Excellence (COE).

“Business owners at the C level are saying, ‘Hey guys, look. It’s no longer inordinately expensive for us to store all of our data. I want all of you to make copies. OK, your systems are busy. Find the time, get an extract, and dump it in Hadoop.’” – Mike Lang, CEO of Revelytix

The onset of Digital Architectures in enterprise businesses implies the ability to drive continuous online interactions with global consumers/customers/clients or patients. The goal is not just to provide engaging visualization but also to personalize services clients care about across multiple modes of interaction. Mobile applications first began forcing enterprises to support multiple channels of interaction with their consumers. For example, Banking now requires an ability to engage consumers in a seamless experience across an average of four to five channels – Mobile, eBanking, Call Center, Kiosk etc. Healthcare is a close second, where caregivers expect patient, medication & disease data at their fingertips with a few finger swipes on an iPad app.

Big Data has been the chief catalyst in this disruption. The Data Lake architectural & deployment pattern makes it possible to first store all this data & then enables the panoply of Hadoop ecosystem projects & technologies to operate on it to produce business results.

Let us consider a few of the major industry verticals and the sheer data variety that players in these areas commonly possess – 

The Healthcare & Life Sciences industry possesses some of the most diverse data across the spectrum, ranging from – 

  • Structured Clinical data e.g. Patient ADT information
  • Free hand notes
  • Patient Insurance information
  • Device Telemetry 
  • Medication data
  • Patient Trial Data
  • Medical Images – e.g. CAT Scans, MRIs, CT images etc

The Manufacturing industry players are leveraging the below datasets and many others to derive new insights in a highly process oriented industry-

  • Supply chain data
  • Demand data
  • Pricing data
  • Operational data from the shop floor 
  • Sensor & telemetry data 
  • Sales campaign data

Data In Banking– Corporate IT organizations in the financial industry have been tackling data challenges for many years now, due to strict silo-based approaches that inhibit data agility.
Consider some of the traditional sources of data in banking –

  • Customer Account data e.g. Names, Demographics, Linked Accounts etc
  • Core Banking Data
  • Transaction Data which captures the low level details of every transaction (e.g debit, credit, transfer, credit card usage etc)
  • Wire & Payment Data
  • Trade & Position Data
  • General Ledger Data e.g AP (accounts payable), AR (accounts receivable), cash management & purchasing information etc.
  • Data from other systems supporting banking reporting functions.

Industries have changed around us since the advent of relational databases & enterprise data warehouses. Relational Databases (RDBMS) & Enterprise Data Warehouses (EDW) were built with very different purposes in mind. RDBMS systems excel at online transaction processing (OLTP) use cases where massive volumes of structured data need to be processed quickly. EDWs, on the other hand, perform online analytical processing (OLAP) functions, where data extracts are taken from OLTP systems, loaded & sliced in different ways to support reporting and analysis. Neither kind of system is well suited to handle both immense volumes of data and highly variable structures of data.


Let us consider the main reasons why legacy data storage & processing techniques are unsuited to new business realities of today.

  • Legacy data technology enforces a vertical scaling method that is sorely unsuited to handling massive volumes of data in a scale up/scale down manner
  • The structure of the data needs to be modeled in a paradigm called ’schema on write’ which sorely inhibits time to market for new business projects
  • Traditional data systems suffer bottlenecks when large amounts of high variety data are processed using them 
  • Limits on the types of analytics that can be performed. In industries like Retail, Financial Services & Telecommunications, enterprises need to build detailed models of customer accounts to predict their overall service level satisfaction in realtime. These models are predictive in nature and use data science techniques as an integral component. The higher volumes of data, along with the attribute richness that can be provided to them (e.g. transaction data, social network data, transcribed customer call data), ensure that the models are highly accurate & can provide an enormous amount of value to the business. Legacy systems are not a great fit here.

Given all of the above data complexity and the need to adopt agile analytical methods – what is the first step that enterprises must take?

The answer is the adoption of the Data Lake as an overarching data architecture pattern. Let’s define the term first. A data lake is two things – a data storage repository (small or massive) and a data processing engine. A data lake provides “massive storage for any kind of data, enormous processing power and the ability to handle virtually limitless concurrent tasks or jobs“.[1] Data Lakes are created to ingest, transform, process, analyze & finally archive large amounts of data of any kind – structured, semi-structured and unstructured.


                                  Illustration – The Data Lake Architecture Pattern

What Big Data brings to the equation, beyond its strength in data ingest & processing, is a unified architecture. For instance, MapReduce is the original framework for writing applications that process large amounts of structured and unstructured data stored in the Hadoop Distributed File System (HDFS). Apache Hadoop YARN opened Hadoop to other data processing engines (e.g. Apache Spark/Storm) that can now run alongside existing MapReduce jobs to process data in many different ways at the same time. The result is that ANY kind of application processing can be run inside a Hadoop runtime – batch, realtime, interactive or streaming.
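
As a small illustration of this multi-engine model, the PySpark sketch below (which would run under YARN on a real cluster) interactively aggregates the same HDFS-resident data that legacy batch MapReduce jobs might consume. The HDFS path and column names are hypothetical.

```python
# A minimal PySpark sketch: query data in HDFS with Spark, alongside whatever
# MapReduce jobs already run on the same cluster. Path and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (SparkSession.builder
         .appName("lake-interactive-query")
         .getOrCreate())          # on a cluster this would be submitted with --master yarn

transactions = spark.read.parquet("hdfs:///lake/raw/transactions")

# An aggregate that a reporting team might otherwise compute in a nightly batch job.
daily_volume = (transactions
                .groupBy("transaction_date", "channel")
                .agg(F.sum("amount").alias("total_amount"),
                     F.count("amount").alias("txn_count")))

daily_volume.show(20)
```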

Visualization – As noted earlier, mobile applications forced enterprises to support multiple channels of interaction with their consumers, and the average enterprise user is also familiar with BYOD in the age of self service. The Digital Mesh only exacerbates this gap in user experiences as information consumers navigate applications across a mesh that is both multi-channel and expected to provide a Customer 360 view across all these engagement points. While information management technology has grown at a blistering pace, the human ability to process and comprehend numerical data has not. Applications being developed in 2016 are beginning to adopt intelligent visualization approaches that are easy to use, highly interactive and enable the user to manipulate corporate & business data with their fingertips – much like an iPad app. Tools such as intelligent dashboards, scorecards and mashups are helping change visualization paradigms that were based on histograms, pie charts and tons of numbers. Big Data improvements in data lineage & quality are greatly helping the visualization space.

The Final Word

Specifically, the Data Lake architectural pattern provides the following benefits – 

The ability to store enormous amounts of data with a high degree of agility & low cost: The Schema On Read architecture makes it trivial to ingest any kind of raw data into Hadoop in a manner that preserves its structure. Business analysts can then explore this data and define a schema to suit the needs of their particular application.
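
A minimal schema-on-read sketch in PySpark, assuming a hypothetical clickstream landing directory: the raw JSON is stored untouched, and each consuming application imposes only the schema it needs at read time.

```python
# Schema on read: land raw JSON as-is, apply a schema only when an application reads it.
# The HDFS path and field names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("schema-on-read").getOrCreate()

# Exploration: let Spark infer whatever structure the raw files happen to have.
raw = spark.read.json("hdfs:///lake/raw/clickstream/")
raw.printSchema()

# Application-specific view: impose only the fields this particular use case cares about.
click_schema = StructType([
    StructField("user_id",  StringType()),
    StructField("page",     StringType()),
    StructField("duration", DoubleType()),
    StructField("event_ts", TimestampType()),
])
clicks = spark.read.schema(click_schema).json("hdfs:///lake/raw/clickstream/")
clicks.show(5)
```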

The ability to run any kind of Analytics on the data: Hadoop supports multiple access methods (batch, real-time, streaming, in-memory, etc.) to a common data set.  You are only restricted by your use case.

The ability to analyze, process & archive data while dramatically cutting cost: Since Hadoop was designed to work on low-cost commodity servers with direct attached storage, it helps dramatically lower the overall cost of storage. Thus enterprises are able to retain source data for long periods, providing business applications with far greater historical context.

The ability to augment & optimize Data Warehouses: Data lakes & Hadoop technology are not a ‘rip & replace’ proposition. While they provide a much lower cost environment than data warehouses, they can also be used as the compute layer to augment these systems. Data can be stored, extracted and transformed in Hadoop, and then a subset of the data – the results – is loaded into the data warehouse. This frees the EDW’s compute cycles and storage to perform truly high-value analytics.

The next post of the series will dive deeper into the architectural choices one needs to make while creating a high fidelity & business centric enterprise data lake.

References – 

[1] https://en.wikipedia.org/wiki/Data_lake

The Data Science Continuum in Financial Services..(3/3)

“In God we trust. All others must bring data.” – Dr. W. Edwards Deming, statistician, professor, author, lecturer, and consultant.

The first post in this three part series described key ways in which innovative applications of data science are slowly changing a somewhat insular banking & financial services industry. The second post then delineated key business use cases enabled by a data driven or data native approach. This final post will examine foundational Data Science tasks & techniques that are commonly employed to get value from data, with financial industry examples. We will round off the discussion with recommendations for industry CXOs.


The Need for Data Science –

It is no surprise that Big Data approaches were first invented & then refined in web scale businesses such as Google, Yahoo, eBay, Facebook and Amazon. These web properties offer highly user friendly, contextual & mobile native application platforms which produce a large amount of complex and multi-varied data from consumers, sensors and other telemetry devices. All this data is constantly analyzed to drive higher rates of application adoption, thus driving a virtuous cycle. We have discussed the (now) augmented capability of financial organizations to acquire, store and process large volumes of data by leveraging HDFS (the Hadoop Distributed Filesystem) running on commodity (x86) hardware.

One of the chief reasons that these webscale shops adopted Big Data is the ability to store the entire data set in Hadoop to build more accurate predictive models. The ability to store thousands of attributes at a much finer grain over long periods of history, instead of just depending on a statistically significant sample, is a major gain over legacy data technology.

Every year Moore’s Law keeps driving the costs of raw data storage down. At the same time, compute technologies such as MapReduce, Tez, Storm and Spark have enabled the organization and analysis of Big Data at scale. The convergence of cost effective storage and scalable processing allows us to extract richer insights from data. These insights then need to be operationalized at scale to provide business value, as the use cases highlighted in the last post (http://www.vamsitalkstech.com/?p=1582).

The differences between Descriptive & Predictive Analytics –

Business intelligence (BI) is a traditional & well established analytical domain that essentially takes a retrospective look at business data in systems of record. The goal of BI is primarily to look for macro or aggregate business trends across different aspects or dimensions such as time, product lines, business units & operating geographies.

BI is primarily concerned with “What happened and what trends exist in the business based on historical data?“. The typical use cases for BI include budgeting, business forecasts, reporting & key performance indicators (KPI).

On the other hand, Predictive Analytics (a subset of Data Science) augments & builds on the BI paradigm by adding a “What could happen” dimension to the data in terms of –

  • being able to probabilistically predict different business scenarios across thousands of variables
  • suggesting specific business actions based on the above outcomes

Predictive Analytics does not intend to nor will it replace the BI domain but only adds significant business capabilities that lead to overall business success. It is not uncommon to find real world business projects leveraging both these analytical approaches.

Data Science  –

So, what exactly is Data Science ?

Data Science is an umbrella concept that refers to the process of extracting business patterns from large volumes of structured, semi-structured and unstructured data. Data Science is the key ingredient in enabling a predictive approach to the business.

Some of the key aspects that follow are  –

  1. Data Science is not just about applying analytics to massive volumes of data. It is also about exploring the patterns, associations & interrelationships of thousands of variables within the data. It does so by adopting an algorithmic approach to gleaning the business insights that are embedded in the data.
  2. Data Science is a standalone discipline that has spawned its own set of platforms, tools and processes across its lifecycle.
  3. Data science also aids in the construction of software applications & platforms to utilize such insights in a business context.  This involves the art of discovering data insights combined with the science of operationalizing them at scale. The word ‘scale’ is key. Any algorithm, model or deployment paradigm should support an expanding number of users without the need for unreasonable manual intervention as time goes on.
  4. A data scientist uses a combination of machine learning, statistics, visualization, and computer science to extract valuable business insights hiding in data and builds operational systems to deliver that value.
  5. The machine learning components are classified into two categories: ‘supervised’ and ‘unsupervised’ learning. In supervised learning, the constructed model defines the effect of one set of inputs on the outputs through a causal chain. In unsupervised learning, the outputs are affected by so-called latent variables. It is also possible to have a hybrid approach to certain types of mining tasks.
  6. Strategic business projects typically begin leveraging a Data Science based approach to derive business value. This approach then becomes integral and eventually core to the design and architecture of such a business system.
  7. Contrary to what some of the above may imply, Data Science is a cross-functional discipline and not just the domain of PhDs. A data scientist is part statistician, part developer and part business strategist.
  8. Working in small self-sufficient teams, the Data Scientist collaborates with extended areas which include visualization specialists, developers, business analysts, data engineers, applied scientists, architects, LOB owners and DevOps. The success of data science projects often relies on the communication, collaboration, and interaction that takes place with the extended team, both internally and possibly externally to the organization.
  9. It needs to be clarified that not every business project is a fit for a Data Science approach. The criteria that must be employed to decide whether such an advanced approach is called for include: whether the business initiative needs knowledge-based decisions (beyond classical rules engine / expert systems approaches), deals with large volumes of relevant data, faces a rapidly changing business climate, & finally requires scale beyond what can be supplied by human analysts.
  10. Indeed any project where hugely improved access to information & realtime analytics for customers, analysts (and other stakeholders) is a must for the business – is fertile ground for Data Science.

Algorithms & Models

The word ‘model‘ is highly overloaded and means different things to different IT specialities e.g. RDBMS models imply data schemas, statistical models are built by statisticians etc. However, it can safely be said that models are representations of a business construct or a business situation.

Data mining algorithms are used to create models from data.

To create a data science model, the data mining algorithm looks for key patterns in the data provided. The results of this analysis are used to define the best parameters for the model. Once identified, these parameters are applied across the entire data set to extract actionable patterns and detailed statistics.

The model itself can take various forms ranging from a set of customers across clusters, a revenue forecasting model, a set of fraud detection rules for credit cards or a decision tree that predicts outcomes based on specific criteria.

Common Data Mining Tasks –

There are many different kinds of data mining algorithms, but all of these address a few fundamental types of tasks. The popular ones are listed below along with relevant examples:

  • Classification & Class Probability Estimation – For a given set of data, predict for each individual in a population a discrete set of classes that this individual belongs to. An example classification is – “For all wealth management clients in a given population, who are most likely to respond to an offer to move to a higher segment?”. Common techniques used in classification include decision trees, Bayesian models, k-nearest neighbors, induction rules etc. Class Probability Estimation (CPE) is a closely related concept in which a scoring model is created to predict the likelihood that an individual would belong to that class.
  • Clustering is an unsupervised technique used to find classes or segments of populations within a larger dataset without being driven by any specific purpose. For example – “What are the natural groups our customers fall into?”. The most popular use of clustering techniques is to identify clusters to use in activities like market segmentation. A common algorithm used here is k-means clustering (see the short sketch after this list).
  • Market basket analysis  is commonly used to find out associations between entities based on transactions that involve them. E.g Recommendation engines which use affinity grouping.
  • Regression algorithms estimate or predict, for each individual, the numerical value of some variable (e.g. the expected balance or spend on an account).
  • Profiling algorithms characterize the normal or typical behavior of an individual or group within a larger population. Profiling is frequently used in anomaly detection systems such as those that detect AML (Anti Money Laundering) violations and credit card fraud.
  • Causal Modeling algorithms attempt to find out what business events influence others.

There is no reason that one should be limited to one of the above techniques while forming a solution. An example is to use one algorithm (say clustering) to determine the natural groups in the data, and then to apply regression to predict a specific outcome based on that data. Another example is to use multiple algorithms within a single business project to perform related but separate tasks. Ex – Using regression to create financial reporting forecasts, and then using a neural network algorithm to perform a deep analysis of the factors that influence product adoption.

The Data Science Process  –

A general process framework for a typical Data Science project is depicted below. The flow suggests a sequential waterfall but allows for Agile/DevOps loops in the core analysis & feedback phases. The process is also not a one-way pipeline; it allows for continuous improvement.


                                           Illustration: The Data Science Process 

  1. The central pool of data that hosts all the tiers of data processing in the above illustration is called the Data Lake. The Data Lake enables two key advantages – the ability to collect cross business unit data so that it can be sampled/explored at will & the ability to perform any kind of data access pattern across a shared data infrastructure: batch,  interactive, search, in-memory and custom etc.
  2. The Data Science process begins with a clear and contextual understanding of the granular business questions that need to be answered from the real-world dataset. The Data Scientist needs to be trained in the nuances of the business to achieve the appropriate outcome. E.g. detecting fraudulent transactions in the credit card space, or predicting which customers in the Retail Bank are likely to churn over the next few months based on their usage patterns.
  3. Once this is known, relevant data needs to be collected from the real world. These sources in Banking range from –
    1. Customer Account data e.g. Names, Demographics, Linked Accounts etc
    2. Transaction Data which captures the low level details of every transaction (e.g debit, credit, transfer, credit card usage etc),
    3. Wire & Payment Data,
    4. Trade & Position Data,
    5. General Ledger Data and Data from other systems supporting core banking functions.
    6. Unstructured data. E.g social media feeds, server logs, clickstream data & mobile application data etc.
  4. Following the planning stage, Data Acquisition follows an iterative process of acquiring data from the actual sources by creating appropriate loaders and choosing appropriate technology components. E.g. Apache NiFi, Kafka, Sqoop, Flume, the HDFS API, Java etc.
  5. The next step is to perform Data Cleansing. Here the goal is to look for gaps in the data (given the business context), ensuring that the dataset is valid with no missing values, consistent in layout and as fresh as possible from a temporal standpoint. This phase also involves fixing any obvious quality problems involving range or table driven data. The purpose at this stage is also to facilitate & perform appropriate data governance. (A minimal cleansing & exploration sketch follows this list.)
  6. Exploratory Data Analysis (EDA) helps with trial & error analysis of data. This is a phase where plots and graphs are used to systematically go through the data. The importance of this cannot be overstated as it provides the Data Scientist and the business with a feel for the data.
  7. Data Analysis: Generation of features or attributes that will be part of the model. This is the step of the process where actual data mining takes place leveraging models built using the above algorithms.
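
Here is a minimal sketch of the cleansing and exploration steps (5 and 6) above, using pandas. The file name, columns and the 90-day freshness window are hypothetical.

```python
# Cleansing + quick exploratory analysis of a hypothetical card transactions extract.
import pandas as pd

txns = pd.read_csv("card_transactions.csv", parse_dates=["txn_ts"])

# Data cleansing: drop records with missing keys, fix obvious range problems,
# and keep only a recent, temporally fresh window of data.
txns = txns.dropna(subset=["account_id", "amount"])
txns = txns[txns["amount"] > 0]
txns = txns[txns["txn_ts"] >= txns["txn_ts"].max() - pd.Timedelta(days=90)]

# Exploratory data analysis: distributions and group-level summaries.
print(txns["amount"].describe())
print(txns.groupby("merchant_category")["amount"].agg(["count", "mean", "max"]))
```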

Within each of the above there exist further iterative steps within the Data Cleansing and Data Analysis stages.

Once the models have been tested and refined to the satisfaction of the business and their performance been put through a rigorous performance test phase, they are deployed into production. Once deployed, these are constantly refined based on end user and system feedback.

The Big Data ecosystem (consisting of tools such as Pig, Scalding, Hive, Spark and MapReduce) enables sea changes of improvement across the entire Data Science lifecycle, from data acquisition to data processing to data analysis. Big Data/Hadoop unifies all data storage in one place, which renders data more accessible for modeling. Hadoop also scales up machine learning analysis due to its inbuilt parallelism, which adds a tremendous amount of value, for example by training multiple candidate models in parallel to improve their efficacy (a sketch follows). The ability to work with all of the data, as opposed to small samples, also helps greatly.
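
As one hedged illustration of that inbuilt parallelism, the spark.ml sketch below evaluates a grid of candidate churn models concurrently via cross-validation. The input path, feature columns and label are hypothetical assumptions.

```python
# Fit a grid of candidate models in parallel on the cluster with spark.ml cross-validation.
# The training table and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.evaluation import BinaryClassificationEvaluator
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder

spark = SparkSession.builder.appName("parallel-model-training").getOrCreate()
df = spark.read.parquet("hdfs:///lake/features/churn_training")

assembler = VectorAssembler(inputCols=["tenure", "balance", "num_products"], outputCol="features")
lr = LogisticRegression(labelCol="churned", featuresCol="features")

grid = (ParamGridBuilder()
        .addGrid(lr.regParam, [0.01, 0.1, 1.0])
        .addGrid(lr.elasticNetParam, [0.0, 0.5])
        .build())

cv = CrossValidator(estimator=Pipeline(stages=[assembler, lr]),
                    estimatorParamMaps=grid,
                    evaluator=BinaryClassificationEvaluator(labelCol="churned"),
                    numFolds=3,
                    parallelism=4)   # fit candidate models concurrently across the cluster
best_model = cv.fit(df).bestModel
```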

Recommendations – 

Developing a strategic mindset to Data science and predictive analytics should be a board level concern. This entails

  • To begin with – ensuring buy in & commitment in the form of funding at a Senior Management level. This support needs to extend across the entire lifecycle depicted above (from identifying business use cases).
  • Extensive but realistic ROI (Return On Investment) models built during due diligence with periodic updates for executive stakeholders
  • On a similar note, ensuring buy-in using a strategy of co-opting & alignment with Quants and different high potential areas of the business (as covered in the use cases in the last blog)
  • Identifying leaders within the organization who can not only lead important projects but also create compelling content to evangelize the use of predictive analytics
  • Begin to tactically bake in or embed data science capabilities across different lines of business and horizontal IT
  • Slowly moving adoption to the Risk, Fraud, Cybersecurity and Compliance teams as part of the second wave. This is critical in ensuring that analysts across these areas move from a spreadsheet intensive model to adopting advanced statistical techniques
  • Creating a Predictive Analytics COE (Center of Excellence) that enable cross pollination of ideas across the fields of statistical modeling, data mining, text analytics, and Big Data technology
  • Informing the regulatory authorities of one’s intentions to leverage data science across the spectrum of operations
  • Ensuring that issues related to data privacy, audit & compliance have been given a great deal of forethought
  • Identifying & developing human skills in toolsets (across open source and closed source) that facilitate adapting to data lake based architectures. A large part of this is to organically grow the talent pool by instituting a college recruitment process

While this ends the current series on Data Science in financial services, it is my intention to explore each of the above Data Mining techniques to a greater degree of depth as applied to specific business situations in 2016 & beyond. This being said, we will take a look at another pressing business & strategic concern – Cybersecurity in Banking – in the next series.

Big Data Drives Disruption In Wealth Management..(2/3)

The first post in this three part series brought to the fore critical strategic trends in the Wealth & Asset Management (WM) space – the most lucrative portion of Banking. This second post will describe an innovation framework for a forward looking WM institution. We will do this by means of concrete & granular use cases across the entire WM business lifecycle. The final post will cover technology architecture and business strategy recommendations for WM CXOs.

Introduction:

The ability to sign up wealthy individuals & families, and then retain them over the years by offering engaging, bespoke & contextual services, will largely drive growth in the Wealth Management industry in 2016 and beyond. However, WM as an industry sector has lagged other areas within banking from a technology & digitization standpoint. Multiple business forces – increased regulatory & compliance demands, technology-savvy customers and new age FinTechs – have led firms to slowly begin a makeover process.

So all of this begs the question – what do WM firms need to do to grow their client base and ultimately revenues? I contend that there are four strategic goals that firms need to operate across – 

  1. Increase Client Loyalty by Digitizing Client Interactions –  WM Clients who use services like Uber, Zillow, Amazon etc in their daily lives are now very vocal in demanding a seamless experience across all WM services using digital channels. The vast majority of WM applications still lag the innovation cycle, are archaic & are still separately managed. The net issue is that the client is faced with distinct user experiences ranging from client onboarding to servicing to transaction management. There is a crying need for IT infrastructure modernization across the industry, ranging from Cloud Computing to Big Data to microservices to agile cultures promoting techniques such as a DevOps approach to building out these architectures. Such applications need to provide anticipatory or predictive capabilities at scale while understanding the specific customer’s lifestyle, financial needs & behavioral preferences. 
  2. Generate Optimal Client Experiences –  In most existing WM systems, siloed functions have led to brittle data architectures operating on custom built legacy applications. This problem is firstly compounded by inflexible core banking systems and secondly exacerbated by a gross lack of standardization in the application stacks underlying ancillary capabilities. These factors inhibit deployment flexibility across a range of platforms, thus leading to extremely high IT costs and technical debt. The consequence is that client facing applications are inhibited from using data in a manner that constantly & positively impacts the client experience. There is clearly a need to provide an integrated digital experience across a global customer base, and then to offer more intelligent functions based on existing data assets. Current players do possess a huge first mover advantage as they offer highly established financial products across their large (and largely loyal & sticky) customer bases, a wide network of physical locations, and rich troves of data that pertain to customer accounts & demographic information. However, it is not enough to just possess the data. They must be able to drive change through legacy thinking and infrastructures as the entire industry changes around them and struggles to adapt to a major new segment – the millennials – who increasingly use mobile devices and demand more contextual services as well as a seamless, highly analytics-driven & unified banking experience – akin to what they commonly experience via the internet at web properties like Facebook, Amazon, Google or Yahoo.
  3. Automate Back & Mid Office Processes Across the WM Value Chain – The need to forge a closer banker/client experience is not just driving demand around data silos & streams themselves but also forcing players to move away from paper based models to a seamless, digital & highly automated model, reworking a ton of existing back & front office processes – the weakest link in the chain. These processes range from risk data aggregation, supranational compliance (AML, KYC, CRS & FATCA), financial reporting across a range of global regions & Cyber Security. Can the data architectures & the IT systems that leverage them be created in such a way that they permit agility while constantly learning & optimizing their behaviors across national regulations, InfoSec & compliance requirements? Can every piece of actionable data be aggregated, secured, transformed and reported on in such a way that its quality across the entire lifecycle is guaranteed? 
  4. Tune existing business models based on client tastes and feedback – While Automation 1.0 focuses on digitizing processes, rules & workflow as stated above; Automation 2.0 implies strong predictive modeling capabilities working at large scale – systems that constantly learn and optimize products & services based on client needs & preferences. The clear ongoing theme in the WM space is constant innovation. Firms need to ask themselves if they are offering the right products that cater to an increasingly affluent yet dynamic clientele. This is the area where firms need to show that they can compete with the FinTechs (Wealthfront, Nutmeg, Fodor Bank et al) to attract younger customers.

Having set the stage for the capabilities that need to be added or augmented, let us examine what the WM firm of the future can look like.


                            Illustration – Technology Driven Wealth Management

Improve the Client Experience

The ability of clients to view their holistic portfolio, banking, bill pay data & advisor interactions in one intuitive user interface is a must have. All this information needs to be available across multiple channels of banking & across all accounts the client owns with multiple FIs (Financial Institutions). Further, pulling in data from relevant social media properties like Twitter, Facebook etc can help clients gauge the popularity of certain products across their networks, thus helping them make targeted, real-time decisions that increase market share. Easy access to investment advice, portfolio analytics and DIY (Do It Yourself) “what if” scenarios based on the client’s investment profile, past financial behavior & family commitments are highly desirable and encourage client loyalty.

Help the Advisor –

On the other side of the coin, most WM advisors lack a comprehensive view of their customers. Due to legacy IT constraints, their interactions with clients across multiple channels take up a lot of their work time and also result in limited collaboration within the bank when servicing client needs.

Other “must have” capabilities –

  • Predicting Customer Attrition & Churn across both a single client as well as an advisor’s entire client base
  • Portfolio Rebalancing & risk modeling across multiple dimensions
  • Single View of Customer Segments across multiple product offerings
  • Basket Analysis based on criteria like investment preferences, asset allocation etc – i.e “what products are typically purchased in tandem”
  • Run in place analytics on customer lifetime value (CLV) and yield per customer
  • Suggest Next Best Action for a given client and across a pool of managed clients
  • Provide multiple levels of dashboards ranging from the Descriptive (Business Intelligence) to the Prescriptive (business simulation as well as optimization)

Digitize Business Processes –

Since a high degree of WM technology still lives in the legacy age, it should not be a surprise that a lot of backend processes result in client dissatisfaction as well as an inability to provide lean & efficient operations. Strategic investments in Business Process Management (BPM) systems, Big Data architectures & processing techniques, Digital Signature systems & augmenting tactical document management systems can result in a high degree of digitization. This leads to seamless business interoperability, efficient client operations and an ability to turn around compliance information quickly & more efficiently over to regulatory authorities.

Invest In Technology to Drive the Business –

Strategic deployment of technology assets will be the differentiator in the WM business going forward. The technology investments that WM firms need to make are in three broad areas – Big Data & Predictive Analytics, Cloud Computing & in a DevOps based approach to building out these capabilities.

Big Data & Hadoop provide the foundation for an intelligent approach to unifying data (ingesting, mining & linking micro feeds with existing core banking data) and then fostering a deep analytical approach based on predictive analytics and machine learning.

So what kind of new age business capabilities can WM firms build on a Big Data & Advanced Analytics based foundation?

  • New Client Acquisition by creating client profiles and helping develop targeted leads across a population of individuals
  • Instrument and understand Risk at multiple levels (customer churn, client risk etc) in real time
  • Advanced Portfolio Analytics
  • Performance Management Metrics for the business across client segments, advisors and specific geographies
  • Better Client Advice based on portfolio optimization which takes client life journey details into account as opposed to static age based rebalancing
  • Promoting client’s ability to self service their accounts thus reducing load on advisors for mundane tasks
  • The biggest (and perhaps the most famous) capability is providing Robo Advisor functionality with advanced visualization capabilities. One of the goals here is to compete with FinTechs, which are automating their customer account servicing.
  • Help with Compliance and other reporting functions

Big Data and Hadoop seem to be emerging as the platform of choice for many reasons – the ability to handle any kind of data at scale, cost, the heavy computing power that techniques like deep learning need (which Hadoop can provide via parallelization), and integration with SAS, Python and R. A high degree of data preprocessing can be done via advanced MapReduce techniques. Finally, additive to all of this is an agile infrastructure based on cloud computing principles, which calls for a microservices-based approach to building out software architectures, and mobile platforms that accelerate customers’ ability to bank from anywhere. DevOps dictates an increased focus on automation, from business processes to software delivery, and fosters a culture of risk taking & a “fail fast” approach.

The final post in this series will cover a high level technology architecture and then specific recommendations to WM CXO’s.

Big Data #2 – The velocity of Big Data & CEP vs Stream Processing

The term ‘Data In Motion’ is widely used to represent the speed at which large volumes of data are moving into corporate applications in a broad range of industries. Big Data also needs to be Fast Data. Fast in terms of processing potentially millions of messages per second while also accommodating multiple data formats across all these sources.

Stream Processing refers to the ability to process these messages (typically sensor or machine data) as soon as they hit the wire with a high event throughput to query ratio.

Some common use-cases are processing set-top data from converged media products in the Cable & Telco space to analyze device performance or outage scenarios; fraud detection in the credit card industry; stock ticker & market feed data regarding other financial instruments that must be analyzed in split-second time; smart meter data in the Utilities space; and driver monitoring in transportation. The data stream pattern could be batch or real-time depending on the nature of the business event.
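
To make the pattern concrete, here is a minimal stream-processing sketch. It uses Spark Structured Streaming (an engine in the same family as the Storm technology discussed below) purely as an illustration; the built-in "rate" source stands in for a real sensor, meter or ticker feed.

```python
# A minimal streaming sketch: windowed aggregates computed as events arrive.
# The built-in "rate" source mimics machine telemetry; a real deployment would
# read from a broker such as Kafka.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("stream-sketch").getOrCreate()

# Each generated row carries a timestamp and a value, standing in for sensor readings.
events = spark.readStream.format("rate").option("rowsPerSecond", 100).load()

# React to data as it arrives: 10-second windowed counts and averages.
windowed = (events
            .withWatermark("timestamp", "30 seconds")
            .groupBy(F.window("timestamp", "10 seconds"))
            .agg(F.count("value").alias("events"),
                 F.avg("value").alias("avg_value")))

query = (windowed.writeStream
         .outputMode("update")
         .format("console")
         .start())
query.awaitTermination()
```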

While the structure and the ingress velocity of all this data varies depending on the industry & the use case, what stays common is that these data streams must all be analyzed & reacted upon as soon as possible, with an eye towards business advantage.

The leading Big Data Stream processing technology is Storm. More on its architecture and design in subsequent posts. If you currently do not have a stream based processing solution in place, Storm is a great choice – one that is robust, proven in the field and well supported as part of the Hortonworks HDP.


A range of reference usecases listed here –

http://storm.apache.org/documentation/Powered-By.html

Now, on the surface a lot of this sounds awfully close to Complex Event Processing (CEP). For those of you who have been in the IT industry for a while, CEP is a mature application domain & is defined by Wikipedia as follows.

“Complex Event Processing, or CEP, is primarily an event processing concept that deals with the task of processing multiple events with the goal of identifying the meaningful events within the event cloud. CEP employs techniques such as detection of complex patterns of many events, event correlation and abstraction, event hierarchies, and relationships between events such as causality, membership, and timing, and event-driven processes.”

However, these two technologies differ along the following lines –

  1. Target use cases
    CEP as a technology & product category has a very valid use-case when one can define analytics (based on simple business rules) on pre-modeled discrete (rather than continuous) events. Big Data Stream processing, on the other hand, enables one to apply a range of complex analytics that can infer across multiple aspects of every message.
  2. Scale – Compared to Big Data Stream Processing, CEP supports small to medium scale. You are typically talking a few tens of thousands of messages while supporting latencies on the order of seconds to sub-seconds (based on the CEP product). Big Data stream processing, on the other hand, operates across hundreds of thousands of messages every second while supporting a range of latencies. For instance, Hortonworks has benchmarked Storm as processing one million 100-byte messages per second per node. Thus, Stream processing technologies have been proven at web scale versus CEP, which is still a niche and high-end capability depending upon the vertical (low latency trading) or the availability of enlightened architects & developers who get the concept.
  3. Variety of data -CEP operates on structured data types (as opposed to Stream Processing which is agnostic to the arriving data format) where the schema of the arriving events has been predefined into some kind of a fact model. E.g. a Java POJO.
    The business logic is then defined using a vendor provided API.
  4. Type of analytics – CEP’s strong suit is its ability to take atomic events and correlate them into compound events (via the API). Most of the commercially viable CEP engines thus provide a rules based language to define these patterns – patterns which are temporal and spatial in nature, e.g. JBoss Drools DRL. Big Data Stream Processing, on the other hand, supports a superset of such analytical capabilities with map-reduce, predictive analytics, machine learning and advanced visualization.
  5. Scalability – CEP engines are typically small multi-node installs with vertical scalability being the core model of scaling clusters in production. Stream processing, on the other hand, provides scalable and auto-healing clusters across typical 2-CPU dual-core boxes; the model of scalability & resilience is horizontal. Having said all of this, it is important to note that these two technologies (the Hadoop ecosystem & CEP) can also co-exist. Many customers are looking to build “live” data marts where data is kept in memory as it streams in, a range of queries are continuously applied, and realtime views are then shown to end users. Microsoft has combined their StreamInsight CEP engine with Apache Hadoop, where MapReduce is used to run scaled-out reducers performing complex event processing over the data partitions.

    Source – http://blogs.msdn.com/b/microsoft_business_intelligence1/archive/2012/02/22/big-data-hadoop-and-streaminsight.aspx