How Big Data & Predictive Analytics transform AML Compliance in Banking & Payments..(2/2)

The first blog in this two-part series (Deter Financial Crime by Creating an effective AML Program) described how the Money Laundering (ML) techniques employed by nefarious actors (e.g. drug cartels, corrupt public figures & terrorist organizations) have grown more sophisticated over the years. Global and regional banks are falling short of their compliance goals despite huge technology and process investments. Banks that fail to maintain effective compliance are typically fined hundreds of millions of dollars. In this second & final post, we will examine why Big Data Analytics, as a second-generation effort, can become critical to shutting down the flow of illicit funds across the globe and to keeping financial organizations compliant with anti-money-laundering regulations.

Where current enterprise-wide AML programs fall short..

As discussed in various posts and in the first blog in the series (below), the Money Laundering (ML) rings of today have a highly sophisticated understanding of business specifics across the domains of Banking – Capital Markets, Retail & Commercial Banking. They are also very well versed in the complex rules that govern global trade finance.

Deter Financial Crime by Creating an Effective Anti Money Laundering (AML) Program…(1/2)

Further, the more complex and geographically diverse a financial institution is, the higher its risk of AML (Anti Money Laundering) compliance violations. Other factors, such as an enormous volume of transactions across multiple distribution channels and geographies between thousands of counterparties, further increase money laundering risk.

Thus, current AML programs fall short in five specific areas –

  1. Manual Data Collection & Risk Scoring – Banks' response to AML statutes has been to bring in more staff, typically in the hundreds at large banks. These staff perform rote but key AML processes such as Customer Due Diligence (CDD) and Know Your Customer (KYC). They extensively scour external sources like LexisNexis, Thomson Reuters, D&B etc. to manually score risky client entities, often pairing these with internal bank data. They also use AML watch-lists to verify individuals and business customers so that AML Case Managers can review the findings before filing Suspicious Activity Reports (SAR). On average, about 50% of the cost of AML programs is incurred on these large headcount requirements. At large global banks with more than 100 million customer accounts, the data volumes grow very large very quickly, causing all kinds of headaches for AML programs from a data aggregation, storage, processing and accuracy standpoint. There is a crying need to automate AML programs end to end, not only to perform accurate risk scoring but also to keep costs down.
  2. Social Graph Analysis in areas such as Trade Finance helps model the complex transactions occurring between thousands of entities. Each of these entities may have a complex holding structure with accounts that have been created using forged documents. Most fraud also happens in networks. An inability to dynamically understand the topology of the financial relationships among thousands of entities implies that AML programs need to develop graph-based analysis capabilities (a minimal sketch of this kind of analysis follows this list).
  3. AML programs extensively deploy rule-based systems or Transaction Monitoring Systems (TMS), which allow an expert-system based approach to setting up new rules. These rules span areas like monetary thresholds, specific patterns that connote money laundering & business scenarios that may violate these patterns. However, fraudster rings now learn (or already know) these rules quickly & change their methods constantly to avoid detection. Thus there is a significant need to reduce the heavy dependence on traditional TMS – which are slow to adapt to the dynamic nature of money laundering.
  4. The need to perform extensive Behavioral Modeling & Customer Segmentation to discover transaction behavior, with a view to identifying behavioral patterns of entities & outlier behaviors that connote potential laundering.
  5. Real time transaction monitoring in areas like Payment Cards presents unique challenges where money laundering is hidden within mountains of transaction data. Every piece of data produced as a result of bank operations needs to be commingled with historical data sets (for customers under suspicion) spanning years in making a judgment call about filing a SAR (Suspicious Activity Report).
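To make point 2 concrete, below is a minimal sketch of graph-based analysis using the open-source networkx library. The entity names, transfer amounts and red-flag heuristics (cycles in the payment graph, high betweenness centrality) are purely illustrative assumptions, not a production AML detection method.

```python
import networkx as nx

# Hypothetical wire transfers between counterparties: (sender, receiver, amount)
transfers = [
    ("ShellCo A", "Importer B", 950_000),
    ("Importer B", "Holding C", 940_000),
    ("Holding C", "ShellCo A", 930_000),   # funds loop back to the originator
    ("Importer B", "Retailer D", 12_000),
]

g = nx.DiGraph()
for sender, receiver, amount in transfers:
    g.add_edge(sender, receiver, amount=amount)

# Cycles in the payment graph are a classic layering red flag
suspicious_cycles = list(nx.simple_cycles(g))

# Betweenness centrality highlights entities sitting in the middle of many flows
centrality = nx.betweenness_centrality(g)

print("Cycles:", suspicious_cycles)
print("Most central entities:", sorted(centrality, key=centrality.get, reverse=True)[:3])
```

In a real AML program the graph would be built over the data lake from millions of transactions and combined with entity resolution, but the same topological questions apply.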

How Big Data & Predictive Analytics can help across all these areas..

Illustration: AML & Predictive Analytics

  1. The first area where Big Data & Predictive Analytics have a massive impact is around Due Diligence & KYC (Know Your Customer) data. All of the data scraping from various sources discussed above can easily be automated by using tools in a Big Data stack to ingest information automatically. This is done by sending requests to data providers (the exact same ones that banking institutions are currently using) via an API. Once this data is obtained, real-time processing tools (such as Apache Storm and Apache Spark) can apply sophisticated algorithms to the collected data to calculate a Risk Score or Rating. In Trade Finance, Text Analytics can be used to process a range of documents like invoices, bills of lading, certificates of shipping etc. to enable banks to inspect a complex process spanning hundreds of entities operating across countries. This approach enables banks to process massive amounts of diverse data quickly (even in seconds) and synthesize it into accurate risk scores. Implementing Big Data in this very important workstream can help increase efficiency and reduce costs.
  2. The second area where Big Data shines is in helping create a Single View of a Customer as depicted below. This is made possible by doing advanced entity matching with the establishment and adoption of a lightweight entity ID service. This service will consist of entity assignment and batch reconciliation. The goal here is to get each business system to propagate the Entity ID back into their Core Banking, loan and payment systems; transaction data then flows into the lake with this ID attached, providing a way to build a Customer 360 view.
  3. To be clear, we are advocating for a mix of both business rules and Data Science. Machine Learning is recommended as it enables a range of business analytics across AML programs, overcoming the limitations of a TMS. The first use case is exploratory, hypothesis-driven Data Science: bring all transactions, all Case Management files, all customer data and all external data into one place. The goal is to uncover areas of risk that were previously missed, to find areas that were less risky than previously thought so their risk scores can be lowered, and to constantly refine the real risk profile that the institution bears. E.g. downgrading investment in Trade Finance on finding a lot of scrap-metal based fraudulent transactions.
  4. The other important value driver in deploying Data Science is to perform Advanced Transaction Monitoring Intelligence. The core idea is to get years' worth of banking data into one location (the data lake) & then apply unsupervised learning to glean patterns in those transactions. The goal is to identify profiles of actors with the intent of feeding them into downstream surveillance & TM systems (a minimal clustering sketch follows this list). This knowledge can then be used to –
  • Constantly learn the transaction behavior of similar customers, which is very important in detecting laundering in areas like payment cards. It is very common to have retail businesses set up with the sole purpose of laundering money.
  • Discover transaction activity of trade finance customers with similar traits (types of businesses, nature of transfers, areas of operations etc.)
  • Segment customers by similar transaction behaviors
  • Understand common money laundering typologies and identify specific risks from a temporal and spatial/geographic standpoint
  • Improve and learn correlations between alert accuracy and suspicious activity report (SAR) filings
  • Keep the noise level down by weeding out false positives
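Below is a minimal sketch of the kind of unsupervised profiling described above, using scikit-learn's KMeans. The behavioral features, sample values and choice of two clusters are illustrative assumptions; a real program would engineer features from the data lake and validate clusters with investigators.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical per-customer behavioral features derived from transaction history:
# [monthly txn count, average amount, share of cross-border transfers, share of cash deposits]
features = np.array([
    [40,   120.0, 0.02, 0.05],
    [35,   150.0, 0.01, 0.03],
    [500, 9800.0, 0.85, 0.60],   # high-intensity, cash-heavy, cross-border profile
    [42,   110.0, 0.03, 0.04],
    [480, 9500.0, 0.80, 0.55],
])

scaled = StandardScaler().fit_transform(features)
model = KMeans(n_clusters=2, n_init=10, random_state=42).fit(scaled)

# Customers in the small, high-intensity cluster become candidates for deeper review
# and feed alerts into the downstream transaction monitoring systems.
print(model.labels_)
```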

Benefits of a forward-looking approach..

We believe that we have a fresh approach that can help Banks with the following value drivers & metrics –

  • Detect AML violations on a proactive basis thus reducing the probability of massive fines
  • Save on staffing expenses for Customer Due Diligence (CDD)
  • Increase accurate production of suspicious activity reports (SAR)
  • Decrease the percent of corporate customers with AML-related account closures in the past year by customer risk level and reason – thus reducing loss of revenue
  • Decrease the overall KYC profile update backlog across geographies
  • Help create Customer 360 views that can help accelerate CLV (Customer Lifetime Value) as well as Customer Segmentation from a cross-sell/up-sell perspective

Big Data shines in all the above areas..

Conclusion…

The AML landscape will rapidly change over the next few years to accommodate the business requirements highlighted above. Regulatory authorities should also lead the way in adopting a Hadoop/ML/Predictive Analytics based approach over the next few years. There is no other way to tackle large & medium AML programs in a lower-cost and highly automated manner.

Design and Architecture of A Robo-Advisor Platform..(3/3)

This three-part series explores the automated investment management or “Robo-advisor” (RA) movement. The first post in this series – http://www.vamsitalkstech.com/?p=2329 – discussed how Wealth Management has been an area largely untouched by automation as far as the front office is concerned. Automated investment vehicles have begun changing that trend and are helping create a variety of business models in the industry, especially those catering to the Millennial mass-affluent segment. The second post – http://www.vamsitalkstech.com/?p=2418 – focused on the overall business model & main functions of a Robo-Advisor (RA). This third and final post covers a generic technology architecture for an RA platform.

Business Requirements for a Robo-Advisor (RA) Platform…

Some of the key business requirements of an RA platform that give it advantages over the manual/human driven style of investing are:

  • Collect Individual Client Data – RA Platforms need to offer a high degree of customization from the standpoint of an individual investor. This means an ability to provide a preferably mobile and web interface to capture detailed customer financial background, existing investments as well as any historical data regarding customer segments etc.
  • Client Segmentation – Clients are to be segmented  across granular segments as opposed to the traditional asset based methodology (e.g mass affluent, high net worth, ultra high net worth etc).
  • Algorithm Based Investment Allocation – Once the client data is collected,  normalized & segmented –  a variety of algorithms are applied to the data to classify the client’s overall risk profile and an investment portfolio is allocated based on those requirements. Appropriate securities are purchased as we will discuss in the below sections.
  • Portfolio Rebalancing  – The client’s portfolio is rebalanced appropriately depending on life event changes and market movements.
  • Tax Loss Harvesting – Tax-loss harvesting is the mechanism of selling securities that have a loss associated with them. By taking a loss, the idea is that the client can offset taxes on both gains and income. The sold securities are replaced by similar securities by the RA platform, thus maintaining the optimal investment mix (a minimal harvesting sketch follows this list).
  • A Single View of a Client’s Financial History- From the WM firm’s standpoint, it would be very useful to have a single view capability for a RA client that shows all of their accounts, interactions & preferences in one view.
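Below is a minimal sketch of the tax-loss harvesting rule described in the list above. The tickers, cost bases, loss threshold and replacement mapping are hypothetical illustrations, not recommendations or the logic of any particular RA platform.

```python
# Hypothetical open positions: ticker -> (cost basis, current price, shares)
positions = {
    "VTI": (210.0, 195.0, 100),   # sitting at a loss
    "VEA": (48.0, 52.0, 300),
    "BND": (80.0, 76.0, 150),     # sitting at a loss
}

# Hypothetical map of "similar but not identical" replacement securities
replacements = {"VTI": "SCHB", "BND": "AGG"}

def harvest_candidates(positions, threshold=-500.0):
    """Return (sell, buy, realized_loss) trades that lock in losses to offset gains and income."""
    trades = []
    for ticker, (basis, price, shares) in positions.items():
        unrealized = (price - basis) * shares
        if unrealized <= threshold and ticker in replacements:
            trades.append((ticker, replacements[ticker], unrealized))
    return trades

for sell, buy, loss in harvest_candidates(positions):
    print(f"Sell {sell}, buy {buy}, realize {loss:.2f} in losses; portfolio mix stays intact")
```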

User Interface Requirements for a Robo-Advisor (RA) Platform…

Once a customer logs in using any of the supported digital channels (e.g. Mobile, eBanking, Phone etc.), they are presented with a single view of all their accounts. This view has a few critical areas – a Summary View (showing an aggregated view of their financial picture) and a Transfer View (allowing one to transfer funds across accounts with other providers).

The Summary View lists the below

  • Demographic info: Customer name, address, age
  • Relationships: customer rating influence, connections, associations across client groups
  • Current activity: financial products, account interactions, any burning customer issues, payments missed etc
  • Customer Journey Graph: which products or services they have been associated with since they first became a customer etc.

Depending on the client's risk tolerance and investment horizon, the weighted allocation of investments across these categories will vary. To illustrate this, a Model Portfolio and an example are shown below.

Algorithms for a Robo-Advisor (RA) Platform…

There are a variety of algorithmic approaches that could be taken to building out an RA platform. However the common feature of all of these is to –

  • Leverage data science & statistical modeling to automatically allocate client wealth across different asset classes (such as domestic/foreign stocks, bonds & real estate related securities) and to automatically rebalance portfolio positions based on changing market conditions or client preferences. These investment decisions are also made based on a detailed behavioral understanding of a client's financial journey metrics – age, risk appetite & other related information.
  • A mixture of different algorithms can be used, such as Modern Portfolio Theory (MPT), the Capital Asset Pricing Model (CAPM), the Black-Litterman model, the Fama-French model etc. These are used to allocate assets as well as to adjust positions based on market movements and conditions (a minimal MPT-style allocation sketch follows this list).
  • RA platforms also provide 24×7 tracking of market movements to use that to track rebalancing decisions from not just a portfolio standpoint but also from a taxation standpoint.
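Below is a minimal sketch of the Modern Portfolio Theory style allocation mentioned above, computing the classic maximum-Sharpe (tangency) weights with numpy. The expected returns, covariances and risk-free rate are hypothetical inputs; a real RA platform would estimate them from market data and add constraints (long-only, concentration limits, client risk score tilts).

```python
import numpy as np

# Hypothetical annual expected returns for [US stocks, foreign stocks, bonds, real estate]
mu = np.array([0.07, 0.08, 0.03, 0.05])
rf = 0.01  # hypothetical risk-free rate

# Hypothetical covariance matrix of annual asset returns
cov = np.array([
    [0.040, 0.018, 0.002, 0.010],
    [0.018, 0.060, 0.003, 0.012],
    [0.002, 0.003, 0.005, 0.002],
    [0.010, 0.012, 0.002, 0.030],
])

# MPT tangency portfolio: weights proportional to inv(Cov) @ (mu - rf), normalized to sum to 1
raw = np.linalg.solve(cov, mu - rf)
weights = raw / raw.sum()

print(dict(zip(["US stocks", "Foreign stocks", "Bonds", "Real estate"], weights.round(3))))
```

Rebalancing then becomes a matter of recomputing these target weights as inputs change and trading back toward them when drift exceeds a tolerance band.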

Model Portfolios…

  1. Equity

             A) US Domestic Stock – Large Cap, Medium Cap, Small Cap, Dividend Stocks

             B) Foreign Stock – Emerging Markets, Developed Markets

  2. Fixed Income

             A) Developed Market Bonds

             B) US Bonds

             C) International Bonds

             D) Emerging Markets Bonds

  3. Other

             A) Real Estate

             B) Currencies

             C) Gold and Precious Metals

             D) Commodities

  4. Cash

Sample Portfolios – for an aggressive investor…

  1. Equity – 85%

             A) US Domestic Stock (50%) – Large Cap – 30%, Medium Cap – 10%, Small Cap – 10%, Dividend Stocks – 0%

             B) Foreign Stock (35%) – Emerging Markets – 18%, Developed Markets – 17%

  2. Fixed Income – 5%

             A) Developed Market Bonds – 2%

             B) US Bonds – 1%

             C) International Bonds – 1%

             D) Emerging Markets Bonds – 1%

  3. Other – 5%

             A) Real Estate – 3%

             B) Currencies – 0%

             C) Gold and Precious Metals – 0%

             D) Commodities – 2%

  4. Cash – 5%
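As a small illustration, the aggressive sample portfolio above could be represented and sanity-checked in the platform roughly as follows. The nested-dictionary layout and the validation rule (weights must sum to exactly 100%) are assumptions about how one might encode a model portfolio, not a prescribed schema.

```python
# The aggressive sample portfolio above, expressed as nested weights (percent)
aggressive_portfolio = {
    "Equity": {"US Large Cap": 30, "US Medium Cap": 10, "US Small Cap": 10,
               "Dividend Stocks": 0, "Emerging Markets": 18, "Developed Markets": 17},
    "Fixed Income": {"Developed Market Bonds": 2, "US Bonds": 1,
                     "International Bonds": 1, "Emerging Markets Bonds": 1},
    "Other": {"Real Estate": 3, "Currencies": 0, "Gold and Precious Metals": 0, "Commodities": 2},
    "Cash": {"Cash": 5},
}

def validate(portfolio):
    """A model portfolio must allocate exactly 100% across all sleeves."""
    total = sum(sum(sleeve.values()) for sleeve in portfolio.values())
    assert total == 100, f"Allocation sums to {total}%, expected 100%"
    return {name: sum(sleeve.values()) for name, sleeve in portfolio.items()}

print(validate(aggressive_portfolio))  # {'Equity': 85, 'Fixed Income': 5, 'Other': 5, 'Cash': 5}
```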

Technology Requirements for a Robo-Advisor (RA) Platform…

An intelligent RA platform has a few core technology requirements (based on the above business requirements).

  1. A Single Data Repository – A shared data repository called a Data Lake is created that can capture every bit of client data (explained in more detail below) as well as external data. The RA data lake provides more visibility into all data to a variety of different stakeholders. Wealth Advisors access processed data to view client accounts etc. Clients can access their own detailed positions, account balances etc. The Risk group accesses this shared data lake to process position, execution and balance data. Data Scientists (or Quants) who develop models for the RA platform also access this data to perform analysis on fresh data (from the current workday) or on historical data. All historical data is available for at least five years – much longer than before. Moreover, the Hadoop platform enables ingest of data across a range of systems despite their having disparate data definitions and infrastructures. All the data that pertains to trade decisions and their lifecycle needs to be made resident in a general enterprise storage pool that is run on HDFS (the Hadoop Distributed Filesystem) or a similar Cloud based filesystem. This repository is augmented by incremental feeds of intra-day trading activity data that will be streamed in using technologies like Sqoop, Kafka and Storm.
  2. Customer Data Collection – Existing financial data across the below categories is collected & aggregated into the data lake. This data ranges from Customer Data, Reference Data, Market Data & other client communications. All of this data can be ingested using an API or pulled into the lake from a relational system using connectors supplied in the RA Data Platform. Examples of data collected include the customer's existing brokerage accounts, the customer's savings accounts, behavioral finance surveys and questionnaires etc. The RA Data Lake stores all internal & external data.
  3. Algorithms – The core of the RA Platform is its data science algorithms. Whatever algorithms are used, a few critical workflows are common to them. The first, Asset Allocation, takes the customer's input in the “ADVICE” tab for each type of account and tailors the portfolio based on that input. The others include Portfolio Rebalancing and Tax Loss Harvesting.
  4. The RA platform should be able to store market data across years, both from a macro and from an individual portfolio standpoint, so that several key risk measures such as volatility (e.g. position risk, any residual risk and market risk), Beta, and R-Squared can be calculated at multiple levels. This applies to individual securities, a specified index, and the client portfolio as a whole (a minimal computation sketch follows this list).
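Below is a minimal sketch of how the risk measures in item 4 (volatility, Beta, R-squared) could be computed with numpy once portfolio and index return series are available from the data lake. The return series here are randomly generated stand-ins, not market data.

```python
import numpy as np

# Hypothetical daily return series for a client portfolio and a benchmark index
rng = np.random.default_rng(7)
index_returns = rng.normal(0.0004, 0.010, 250)
portfolio_returns = 0.9 * index_returns + rng.normal(0.0001, 0.004, 250)

# Annualized volatility of the portfolio (roughly 252 trading days per year)
volatility = portfolio_returns.std(ddof=1) * np.sqrt(252)

# Beta: covariance with the index divided by the index variance
cov_matrix = np.cov(portfolio_returns, index_returns)
beta = cov_matrix[0, 1] / cov_matrix[1, 1]

# R-squared: how much of the portfolio's movement the index explains
r_squared = np.corrcoef(portfolio_returns, index_returns)[0, 1] ** 2

print(f"Volatility: {volatility:.2%}, Beta: {beta:.2f}, R-squared: {r_squared:.2f}")
```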


                      Illustration: Architecture of a Robo-Advisor (RA) Platform 

The overall logical flow of data in the system –

  • Information sources are depicted at the left. These encompass a variety of institutional, system and human actors potentially sending thousands of real time messages per hour or by sending over batch feeds.
  • A highly scalable messaging system helps bring these feeds into the RA Platform architecture, normalizes them and sends them in for further processing. Apache Kafka is a good choice for this tier. Real-time data is published by a range of systems over Kafka queues. Each of the transactions could potentially include hundreds of attributes that can be analyzed in real time to detect business patterns. We leverage Kafka's integration with Apache Storm to read one value at a time and persist the data into an HBase cluster. In a modern data architecture built on Apache Hadoop, Kafka (a fast, scalable and durable message broker) works in combination with Storm, HBase (and Spark) for real-time analysis and rendering of streaming data (a minimal consumer sketch follows this list).
  • Trade data is thus streamed into the platform (on a T+1 basis), which ingests, collects, transforms and analyzes core information in real time. The analysis can be both simple and complex event processing & based on pre-existing rules that can be defined in a rules engine, which is invoked with Apache Storm. A Complex Event Processing (CEP) tier can process these feeds at scale to understand relationships among them; the relationships among these events are defined by business owners in a non-technical language or by developers in a technical one. Apache Storm integrates with Kafka to process incoming data.
  • For real-time or batch analytics, Apache HBase provides near real-time, random read and write access to tables (or ‘maps’) storing billions of rows and millions of columns. Once we store this rapidly and continuously growing dataset from the information producers, we are able to perform very fast lookups for analytics irrespective of the data size.
  • Data that has analytic relevance and needs to be kept for offline or batch processing can be stored using the Hadoop Distributed Filesystem (HDFS) or an equivalent filesystem such as Amazon S3, EMC Isilon or Red Hat Gluster. The idea is to deploy Hadoop-oriented workloads (MapReduce, or Machine Learning) directly on the data layer. This is done to perform analytics on small, medium or massive data volumes over a period of time. Historical data can be fed into the Machine Learning models created above and commingled with streaming data as discussed in step 1.
  • Horizontal scale-out (read Cloud based IaaS) is preferred as a deployment approach as this helps the architecture scale linearly as the loads placed on the system increase over time. This approach enables the Market Surveillance engine to distribute the load dynamically across a cluster of cloud based servers based on trade data volumes.
  • It is recommended to take an incremental approach to building the RA platform: once all data resides in a general enterprise storage pool, it becomes accessible to many analytical workloads including Trade Surveillance, Risk, Compliance, etc. A shared data repository across multiple lines of business provides more visibility into all intra-day trading activities. Data can also be fed into downstream systems in a seamless manner using technologies like Sqoop, Kafka and Storm. The results of the processing and queries can be exported in various data formats – simple CSV/txt, more optimized binary formats, JSON, or custom formats via a pluggable SerDe. Additionally, with Hive or HBase, data within HDFS can be queried via standard SQL using JDBC or ODBC. The results will be in the form of standard relational DB data types (e.g. String, Date, Numeric, Boolean). Finally, REST APIs in HDP natively support both JSON and XML output by default.
  • Operational data across a range of asset classes, risk types and geographies is thus available to investment analysts during the entire trading window while markets are still open, enabling them to reduce the risk of that day's trading activities. The specific advantages of this approach are two-fold: existing architectures typically are only able to hold a limited set of asset classes within a given system, which means the data is only assembled for risk processing at the end of the day; in addition, historical data is often not available in sufficient detail. Hadoop accelerates a firm's speed-to-analytics and also extends its data retention timeline.
  • Apache Atlas is used to provide Data Governance capabilities in the platform that use both prescriptive and forensic models, which are enriched by a given business's data taxonomy and metadata. This allows for tagging of trade data between the different businesses' data views, which is a key requirement for good data governance and reporting. Atlas also provides audit trail management as data is processed in a pipeline in the lake.
  • Another important capability that Big Data/Hadoop can provide is the establishment and adoption of a Lightweight Entity ID service – which aids dramatically in the holistic viewing & audit tracking of trades. The service will consist of entity assignment for both institutional and individual traders. The goal here is to get each target institution to propagate the Entity ID back into their trade booking and execution systems, then transaction data will flow into the lake with this ID attached providing a way to do Client 360.
  • Output data elements can be written out to HDFS, and managed by HBase. From here, reports and visualizations can easily be constructed. One can optionally layer in search and/or workflow engines to present the right data to the right business user at the right time.  
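As a small illustration of the streaming ingest tier above, here is a minimal Kafka consumer sketch using the kafka-python client. The topic name, broker address and message fields are hypothetical; a production pipeline would hand these records to the Storm/CEP tier and persist them to HBase/HDFS rather than print them.

```python
import json
from kafka import KafkaConsumer

# Hypothetical topic carrying normalized trade/position events from upstream systems
consumer = KafkaConsumer(
    "ra-trade-events",                      # hypothetical topic name
    bootstrap_servers=["localhost:9092"],   # hypothetical broker address
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
    group_id="ra-platform-ingest",
)

for message in consumer:
    event = message.value
    # In the real pipeline this record would be enriched, evaluated against rules/CEP,
    # and written to the data lake; here we simply inspect a few hypothetical fields.
    print(event.get("account_id"), event.get("symbol"), event.get("quantity"))
```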

Conclusion…

As one can clearly see, though automated investing methods are still in the early stages of maturity, they hold out a tremendous amount of promise. As they are unmistakably the next big trend in the WM industry, industry players should begin developing such capabilities.

The Three Core Competencies of Digital – Cloud, Big Data & Intelligent Middleware

“Ultimately, the cloud is the latest example of Schumpeterian creative destruction: creating wealth for those who exploit it; and leading to the demise of those that don’t.” – Joe Weinman, author of Cloudonomics: The Business Value of Cloud Computing


The Cloud As a Venue for Digital Workloads…

As 2016 draws to a close, it can safely be said that no industry leader questions the existence of the new Digital Economy and the fact that every firm out there needs to create a digital strategy. Myriad organizations are taking serious business steps to make their platforms highly customer-centric via a renewed focus on operational metrics. They are also working on creating new business models using their Analytics investments. Examples of these verticals include Banking, Insurance, Telecom, Healthcare, Energy etc.

As a general trend, the Digital Economy brings immense opportunities while exposing firms to risks as well. Customers are now demanding highly contextual products, services and experiences – all accessible via easy-to-use APIs (Application Programming Interfaces).

Big Data Analytics (BDA) software revenues will grow from nearly $122B in 2015 to more than $187B in 2019 – according to Forbes [1]. At the same time, it is clear that exploding data generation across the global economy has become a clear & present business phenomenon. Data volumes are rapidly expanding across industries. Not only has the production of data itself increased, it is also driving the need for organizations to derive business value from it. As IT leaders know well, digital capabilities need low cost yet massively scalable & agile information delivery platforms – which only Cloud Computing can provide.

For a more detailed technical overview, please visit the link below.

http://www.vamsitalkstech.com/?p=1833

Big Data & Big Data Analytics drive consumer interactions.. 

The onset of Digital Architectures in enterprise businesses implies the ability to drive continuous online interactions with global consumers/customers/clients or patients. The goal is not just to provide engaging visualization but also to personalize services clients care about across multiple channels of interaction. The only way to attain digital success is to understand your customers at a micro level while constantly making strategic decisions on your offerings to the market. Big Data has become the catalyst in this massive disruption as it can help businesses in any vertical solve their need to understand their customers better & perceive trends before the competition does. Big Data thus provides the foundational platform for successful digital business platforms.

The three key areas where Big Data & Cloud Computing intersect are – 

  • Data Science and Exploration
  • ETL, Data Backups and Data Preparation
  • Analytics and Reporting

Big Data drives business usecases in Digital in myriad ways – key examples include  –  

  1. Obtaining a realtime Single View of an entity (typically a customer across multiple channels, product silos & geographies)
  2. Customer Segmentation by helping businesses understand their customers down to the individual micro level as well as at a segment level
  3. Customer sentiment analysis by combining internal organizational data, clickstream data, sentiment analysis with structured sales history to provide a clear view into consumer behavior.
  4. Product Recommendation engines which provide compelling personal product recommendations by mining realtime consumer sentiment, product affinity information with historical data.
  5. Market Basket Analysis, observing consumer purchase history and enriching this data with social media, web activity, and community sentiment regarding past purchase and future buying trends.

Further, Digital implies the need for sophisticated, multifactor business analytics that need to be performed in near real time on gigantic data volumes. The only deployment paradigm capable of handling such needs is Cloud Computing – whether public or private. Cloud was initially touted as a platform to rapidly provision compute resources. Now, with the advent of Digital technologies, the Cloud & Big Data will combine to process & store all this information. According to IDC, by 2020 spending on Cloud based Big Data Analytics will outpace on-premise spending by a factor of 4.5. [2]

Intelligent Middleware provides Digital Agility.. 

Digital applications are modular, flexible and responsive to a variety of access methods – mobile & non-mobile. These applications are also highly process driven and support the highest degree of automation. The need of the hour is to provide enterprise architecture capabilities around designing flexible digital platforms that are built around efficient use of data, speed, agility and a service oriented architecture. The choice of open source is key as it allows for a modular and flexible architecture that can be modified and adopted in a phased manner – as you will shortly see.

The intention in adopting a SOA (or even a microservices) architecture for Digital capabilities is to allow lines of business an ability to incrementally plug in lightweight business services like customer on-boarding, electronic patient records, performance measurement, trade surveillance, risk analytics, claims management etc.

Intelligent Middleware adds significant value in six specific areas –

  1. Supports a high degree of Process Automation & Orchestration thus enabling the rapid conversion of paper based business processes to a true digital form in a manner that lends itself to continuous improvement & optimization
  2. Business Rules help by adding a high degree of business flexibility & responsiveness
  3. Native Mobile Applications enable platforms to support a range of devices & consumer behavior across those front ends
  4. Platforms As a Service engines which enable rapid application & business capability development across a range of runtimes and container paradigms
  5. Business Process Integration engines which enable rapid application & business capability development
  6. Middleware brings the notion of DevOps into the equation. Digital projects bring several technology & culture challenges which can be solved by a greater degree of collaboration, continuous development cycles & new toolchains without giving up proven integration with existing (or legacy) systems.

Intelligent Middleware not only enables Automation & Orchestration but also provides an assembly environment to string different (micro)services together. Finally, it also enables less technical analysts to drive application lifecycle as much as possible.

Further, Digital business projects call for mobile native applications – which a forward-looking middleware stack will support. Middleware is a key component for driving innovation and improving operational efficiency.

Five Key Business Drivers for combining Big Data, Intelligent Middleware & the Cloud…

The key benefits of combining the above paradigms to create new Digital Applications are –

  • Enable Elastic Scalability Across the Digital Stack
    Cloud computing can handle the storage and processing of any amount and any kind of data. This calls for the collection & curation of data from dynamic and highly distributed sources such as consumer transactions, B2B interactions, machines such as ATMs & geo-location devices, click streams, social media feeds, server & application log files and multimedia content such as videos. It needs to be noted that these data volumes consist of multi-varied formats, differing schemas, transport protocols and velocities. Cloud computing provides the underlying elastic foundation to analyze these datasets.
  • Support Polyglot Development, Data Science & Visualization
    Cloud technologies are polyglot in nature. Developers can choose from a range of programming languages (Java, Python, R, Scala and C# etc) and development frameworks (such as Spark and Storm). Cloud offerings also enable data visualization using a range of tools from Excel to BI Platforms.
  • Reduce Time to Market for Digital Business Capabilities
    Enterprises can avoid time-consuming installation, setup & other upfront procedures. They can deploy Hadoop in the cloud without buying new hardware or incurring other up-front costs. In the same vein, big data analytics should be able to support self service across the lifecycle – from data acquisition, preparation, analysis & visualization.
  • Support a multitude of Deployment Options – Private/Public/Hybrid Cloud 
    A range of scenarios for product development, testing, deployment, backup or cloudbursting are efficiently supported in pursuit of cost & flexibility goals.
  • Fill the Talent Gap
    Open Source technology is the common thread across Cloud, Big Data and Middleware. The hope is that the ubiquity of open source will serve as a critical lever in closing the IT-business skills gap.

As opposed to building standalone or one-off business applications, a ‘Digital Platform Mindset’ is a more holistic approach capable of producing higher rates of adoption & thus revenues. Platforms abound in the web-scale world at shops like Apple, Facebook & Google. Digital applications are constructed like Lego blocks and reuse customer & interaction data to drive cross-sell and up-sell among different product lines. The key is to ensure that one starts off with products with high customer attachment & retention. While increasing brand value, it is also key to ensure that customers & partners can collaborate in improving the various applications hosted on top of the platform.

References

[1] Forbes Roundup of Big Data Analytics (BDA) Report

http://www.forbes.com/sites/louiscolumbus/2016/08/20/roundup-of-analytics-big-data-bi-forecasts-and-market-estimates-2016/#b49033b49c5f

[2] IDC FutureScape: Worldwide Big Data and Analytics 2016 Predictions

Five Areas Where Big Data Drives Innovation in the Bill Pay Industry..

As the Bill Pay Industry Motors On…

The traditional model of service providers relying on call centers and face-to-face interactions with their customers to gauge their satisfaction is long past. With the advent of PSD2, the regulatory authorities themselves may be more open to new business models in the Bill Pay space.

With the explosion of data being collected from mobile applications, location based devices & social media, Bill Pay providers can monetize their years of historical data by opportunistically combining the above and providing analytics in the below five strategic areas –

  1. Ensuring the best possible & timely Customer Payment Experience – Younger customers are typically very happy to leverage online channels like mobile phones and web applications to make their payments instead of using paper based mailing. Using online channels to process payments also results in higher degrees of both end customer and service provider satisfaction, as it is quicker in terms of funds transfer and availability and is also less error prone. Leveraging Big Data to understand which of your customers prefer mobile channels (based on lifestyle & behavioral preferences) and helping them download service provider mobile applications can accelerate mobile payment adoption. Another key use case is to understand which customers typically pay just before or after the deadline, thus incurring late fees – another source of customer dissatisfaction. Again, understanding customer payment modes & trends can help increase customer satisfaction here. The ability to reach out to a customer via the mode that they prefer (a mobile app, a text message, or a phone call) can also help address customer dissatisfaction with services.
  2. Providing a Unified View of the Customer Across Multiple Service Accounts – Creating a single customer profile or view across multiple household services & interactions, and the payment history across those, can provide an ability for Service Providers to understand the total Customer Lifetime Value (CLV) of a single customer. Creating this profile can also help drive business value in the following areas.
  • What mode of contact do they prefer? And at what time? Can Customers be better targeted at these channels at those preferred times?
  • What is the overall Customer Lifetime Value (CLV), i.e. how much profit we are able to generate from this customer over their total lifetime (a minimal calculation sketch follows this list)?
  • By understanding CLV across populations, can Service Providers leverage that to increase spend on marketing & sales for products that are resulting in higher customer value?
  • Which of my customers are targets for promoting Green Services and Products?
  • What Features are customers currently missing?
  • How can Service Providers increase cross-sell and up-sell of products & services?
  • Does this customer fall into a certain natural segment and if so, how can we acquire most customers like them?
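Below is a minimal sketch of the kind of Customer Lifetime Value calculation referred to above, using a simple discounted margin model. The margin, retention and discount figures are hypothetical, and a real implementation would estimate them per customer from the unified profile data.

```python
def customer_lifetime_value(annual_margin, retention_rate, discount_rate, years=10):
    """Discounted sum of expected yearly margin, weighted by the chance the customer is still active."""
    return sum(
        annual_margin * (retention_rate ** year) / ((1 + discount_rate) ** year)
        for year in range(1, years + 1)
    )

# Hypothetical bill-pay customer: $120/year margin, 85% yearly retention, 8% discount rate
print(round(customer_lifetime_value(120, 0.85, 0.08), 2))
```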

 


           Five Ways for Bill Pay Providers to Monetize their Data Assets

  3. Improving Customer Satisfaction – The single customer profile discussed above can also help drive business value in areas such as Customer Satisfaction, Customer NPS (Net Promoter Score), Customer Mood & Willingness to adopt new services, Customer Retention etc.
  4. Analytics As A Service to interested 3rd Parties

The ability of consumers to make their household services payments can serve as a reliable indicator of household economic health as well as a sign of their willingness to adopt new products and services. This data can be anonymized at an individual consumer level, analyzed using machine learning and be provided as a service to various stakeholders – Other businesses like Retailers, the Government & the Regulatory Authorities.

Concrete examples include –

  • Combining social data and demographic data with bill pay data & other credit data can help the Government gauge the direction of the economy. Obviously, the more data that can be merged into this model (e.g. mortgage payment data), the better its overall accuracy
  • Allowing Retailers to analyze consumer mobile usage data, bill pay data and credit records, as well as use external data (social media etc.), to predict what products consumers may like and to target promotions & card offers

A final note on the overall scope of Predictive Analytics in this use case –

  • Obtaining a real-time Single View of the Customer (typically a customer across multiple channels, product silos & geographies) across years of account history
  • Customer Segmentation by helping businesses understand customer segments down to the individual level as well as at a segment level
  • Performing Customer sentiment analysis by combining internal organizational data, clickstream data, sentiment analysis with structured sales history to provide a clear view into consumer behavior.
  • Product Recommendation engines which provide compelling personal product recommendations by mining realtime consumer sentiment, product affinity information with historical data etc.
  • Market Basket Analysis, observing consumer purchase history and enriching this data with social media, web activity, and community sentiment regarding past purchase and future buying trends.

5. Service Provider Analytics

Service Providers can themselves access this data to help with the various areas of their operations –

  • Improve new Consumer Acquisition by creating client profiles and helping develop targeted leads across a population of individuals
  • Instrument and understand Risk at multiple levels (customer churn, client risk etc) in real time
  • Financial risk modeling across multiple dimensions
  • For Providers with multiple products & services (e.g Cable, Voice and Internet), Basket Analysis based on criteria like behavioral preferences, asset allocation etc – i.e “what products & services are typically purchased in tandem”
  • Run in-place analytics on customer lifetime value (CLV) and yield per customer
  • Suggest Next Best Action for a given client and across a pool of customers
  • Provide multiple levels of dashboards ranging from the Descriptive (Business Intelligence) to the Prescriptive (business simulation as well as optimization)
  • Help with Compliance and other reporting functions

Conclusion…

Bill Pay is a specialized area of the payments industry. However, the massive amounts of historical customer & service data that players possess can be advantageously leveraged to provide value added services and ultimately drive new business models.