Payment Providers – How Big Data Analytics Provides New Opportunities in 2017

                                                         Image Credit – JDL Group

Payments Industry in 2017..

The last post in this blog (handy link below) discussed my predictions for the payments market in 2017. The payments industry is large and diverse from a capabilities standpoint, and lucrative from a revenue standpoint.

My Last Post for the Year – Predictions for the Global Payments Industry in 2017

Why is that?

First, payments are an essential daily function for consumers and corporates alike, which means constant annual growth in transaction volumes. Volumes are the very lifeblood of the industry.

Second, thanks to the explosion of technology capabilities – especially around Smartphones & Smart Apps – the number of avenues that consumers can use to make payments has surged.

Thirdly, an increasing number of developing economies such as China, India and Brazil are steadily moving massive consumer populations over to digital payments from previously all-cash economies.

Finally, in developed economies, the increased regulatory push in the form of standards like PSD2 (Payment Services Directive 2) has begun blurring boundaries between traditional players and the new upstarts.

All of these factors have the Payments industry growing at a faster clip than most other areas of finance. No wonder payments startups occupy pride of place in the FinTech boom.

The net of all this is that payments will continue to offer a steady and attractive stream of investment opportunities for players in this area.

Big Data Driven Analytics in the Payments Industry..

Much like the other areas of finance, the payments industry can benefit tremendously from adopting the latest techniques in data storage and analysis. Let us consider the most important ways in which providers can leverage the diverse and extensive data assets they possess –

  1. Integrating all the complex & disparate functions of Payments Platforms
    Most payment providers offer a variety of services – e.g. credit cards, debit cards and corporate payments. Big Data platforms make it practical to integrate the different payment types – credit cards, debit cards, checks, wire transfers etc. – into one centralized payment platform. This helps not only with internal efficiencies (e.g. collapsing redundant functions such as fraud, risk scoring, reconciliation and reporting into one platform) but also with external services offered to merchants (e.g. forecasting, analytics etc).
  2. Detect Payments Fraud
    Big Data is dramatically changing the traditional, rules-only approach with advanced analytic solutions that are powerful and fast enough to detect fraud in real time, while also building models based on historical data (and deep learning) to proactively identify risks.

    http://www.vamsitalkstech.com/?p=1098

  3. Risk Scoring of Payments in Realtime & Batch 
    Payment Providers assess the risk score of transactions in realtime depending upon various attributes (e.g. the consumer's country of origin, IP address etc). Big Data allows these attributes to become far more granular by supporting advanced statistical techniques that incorporate behavioral signals (e.g. a transaction that falls outside a consumer's normal buying pattern) as well as temporal and spatial ones. A minimal sketch of such scoring appears after this list.
  4. Detect Payments Money Laundering (AML)
    A range of Big Data techniques are being deployed  to detect money laundering disguised as legitimate payments.

    http://www.vamsitalkstech.com/?p=2559

  5. Understand Your Customers Better
    Payment providers can create a single view of a Cardholder across multiple accounts & channels of usage. Doing this enables cross sell/upsell and better customer segmentation. The linked post below illustrates this.

    http://www.vamsitalkstech.com/?p=2517

  6. Merchant Analytics 
    Payment providers have been sitting on petabytes of customer data and have only now begun waking up to the possibilities of monetizing this data. An area of increasing interest is providing sophisticated analytics to merchants as a way of driving merchant rewards programs. Retailers, airlines and other online merchants need to understand which segments their customers fall into as well as what the best avenues are to market to each of them – e.g. web app, desktop or tablet. Using all of the payment data available to them, payment providers can help merchants understand their customers better as well as improve their loyalty programs.
  7. Cross Sell & Up Sell New Payment & Banking Products & Services
    Most payment service providers are also morphing into online banks. Big Data based Data Lakes support the integration of regular banking capabilities such as bill payment, person-to-person payments and account-to-account transfers to streamline the payments experience beyond the point of sale. Consumers can then move and manage money at the time they choose: instantly, same-day, next-day or on a scheduled date in the future.
  8. Delivering the best possible highly personalized Payments Experience
    Mobile Wallets offer the consumer tremendous convenience. The same Data Lakes that integrate capabilities such as bill payment, person-to-person payments and account-to-account transfers can also be used to personalize the payments experience beyond the point of sale – consumers can move and manage money in the way and at the time they choose: instantly, same-day, next-day or on a scheduled date in the future.
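
To make #3 above concrete, below is a minimal, illustrative Python sketch of attribute-based risk scoring. The field names, weights and thresholds are assumptions made purely for this example, not those of any particular provider; a real system would derive them statistically from historical data.

```python
# Minimal, illustrative attribute-based risk scoring (see #3 above).
# Field names, weights and thresholds are assumptions for this example.
from datetime import datetime, timedelta

def score_transaction(txn, profile):
    """Return a 0-100 risk score from a few spatial/behavioral/temporal attributes."""
    score = 0

    # Spatial: transaction country differs from the cardholder's home country
    if txn["country"] != profile["home_country"]:
        score += 30

    # Behavioral: amount is far outside the consumer's normal buying pattern
    if txn["amount"] > 3 * profile["avg_amount"]:
        score += 40

    # Temporal: several prior transactions within a short window (velocity)
    recent = [t for t in profile["recent_times"]
              if txn["time"] - t < timedelta(minutes=10)]
    if len(recent) >= 3:
        score += 30

    return min(score, 100)

profile = {"home_country": "US", "avg_amount": 80.0,
           "recent_times": [datetime(2017, 1, 5, 10, 1), datetime(2017, 1, 5, 10, 4)]}
txn = {"country": "RO", "amount": 450.0, "time": datetime(2017, 1, 5, 10, 6)}
print(score_transaction(txn, profile))   # 70 -> route for further review
```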

Conclusion..

As we have discussed in previous posts in this blog, the payments industry is at the cusp (if not already, in the midst) of a massive disruption. Business strategies will continue to be driven by technology especially Big Data Analytics. Whether this is in Defense (cut costs, optimize IT, defend against financial crimes or augment existing cyber security) or playing Offense (signing up new customers, better cross sell and data monetization) – Big Data will continue to be a key capability in the industry.

My Last Post for the Year – Predictions for the Global Payments Industry in 2017

The Global Payments Industry enters 2017..

It is not without reason that the payments industry lies at the heart of the FinTech boom. Across the globe, hundreds of startups are vying to provide digital services across the complex payments spectrum. Smartphone manufacturers, FinTechs, Banks and Retailers are all players in this ecosystem. Added to this, the payments landscape across the globe is experiencing massive change driven by technology and regulatory mandates. Please find my take on the top five trends in this dynamic industry as we begin 2017, which truly promises to be a watershed year for the industry.

                                                         Image Credit – MasterCard

Trend #1 Mobile Payments volume to surge in 2017

By 2019, global consumer mobile payment volumes are expected to surpass 1 trillion US dollars [1] – this is a massive increase from just 450 billion US dollars in 2017.

The growing popularity of alternative payment modes like Mobile Wallets (e.g. Apple Pay, Chase Pay and Android Pay) is driving increased volumes across both open loop and closed loop payments. Couple this with in-app payments (e.g. Uber) and banking providers stepping up their game with their own Digital Wallets, and adoption will only accelerate further.

Retailers like Walmart, Nordstrom and Tesco have already started offering more convenient in store payments. At the same time, mobile commerce has matured as smartphone manufacturers have started to create devices with larger screens and payment companies have added one-click buy buttons in order to remove the friction of shopping on mobile phones.

This trend is being clearly seen across all forms of consumer and merchant payments, owing to the convenience of making these payments often at the click of a button. It will only continue to accelerate in 2017 as smartphone manufacturers continue to make devices with more onscreen real estate, driving more mobile commerce. With IoT technology taking center stage, the day is not far off when connected devices (e.g. wearables) make their own payments.

A smoother and frictionless consumer payment experience is what is driving adoption across all of these modes. It is not just about convenience (with a range of stored user preferences) but also comfort (a range of analytics that provide integration with the user's other banking products), providing a more timely and integrated experience.

Trend #2 Payment Providers will rethink their business models

Across the globe, national governments and regulatory authorities are beginning to take note of the fact that they need to unshackle banking data from the incumbents and provide access to other service providers. The intention is to change archaic business models. Access to customer information and transaction data will enable the creation of new business services, much as the FinTechs have done.

On the institutional side, provisions permitting cross-border acquiring as well as capping of interchange fees have been passed. This will enable third party processors to access consumer account information, thus enabling them to create new products – e.g. banking services, contextual offers etc. This will lead to banks and payment providers building products that provide value added services on the data that they already own.

With the passage of the second revision of the pathbreaking Payment Services Directive (PSD2), the European Parliament has adopted the legal foundation for the creation of an EU-wide single payments area (SEPA). While the goal of the PSD is to establish a set of modern, digital industry rules for all payment services in the European Union, it has significant ramifications for the financial services industry as it will surely disrupt current business models & foster new areas of competition. The key message from a regulatory standpoint is that consumer data can be opened up to other players in the payment value chain. This will lead to a clamor by players to own more of the customer's data with a view to selling business services (e.g. accurate credit scoring, access to mortgages & other consumer loans, mutual funds etc) on that information.

Trend #3 Global Payment infrastructure moves to Real Time

The demand for fast payments from both consumers and corporates has led to more than 40 countries implementing immediate payment infrastructures that are highly secure yet speedy. Europe leads the way with Denmark (Realtime 24/7), Norway and the UK (FPS) implementing such systems. The US and Canada have also begun making moves on this front.

The implications of this are twofold. One, this will drive down the already decreasing percentage of cash payments in the system while – two – increasing the ability of providers and non banks to provide value added services on transaction data that is now more readily available. At the same time, expect more regulatory focus on moving fraud and compliance programs into a real time mode.

Trend #4 Payment Providers will begin resembling Bay area startups

This one is a little provocative but the high tech trend towards digitization is clear. Payment technology is the eye of the storm – from a FinTech standpoint. This trend will accelerate in 2017 with the easy availability of open source technology in four critical areas – Big Data, Cloud Computing, Predictive Analytics & Blockchain.

Big Data will be heavily leveraged (on Private or Public Cloud based infrastructure) to perform real-time Predictive analytics on payments data in motion as well as at rest. Critical capabilities such as a Single View of Customer/Payment/Fraud & Customer Journey Management will all depend on Big Data. Blockchain technology (and its attractiveness in terms of removing the middleman while providing transparency & security) will continue to be prototyped across many different areas.

Trend #5 Cyber Security, Risk and Compliance will remain board level concerns

If there is one common thread across the entire payments value chain – Merchants, Acquirers, Gateways, Schemes, Banks, Corporates etc – it is the risk of cyber attacks. Though EMV based chip technology has reduced point-of-sale fraud, cyberattacks are only on the rise. Techniques like tokenization have been developed to help both the schemes and the providers of digital wallets reduce their compliance risk. As corporate payments and other B2C payments become increasingly cross-border, the focus on Anti Money Laundering and Fraud detection will only increase. The need of the hour is to deploy realtime analytics and Big Data techniques to tackle these at an application level.

Big Data Counters Payment Card Fraud (1/3)…

Conclusion..

As payments services firms begin 2017, they will need to transition to a customer oriented mindset. They will be pushed to share data through open standards, become highly digitized in interacting with consumers, and begin leveraging their vast internal data (about customers, their transaction histories, financial preferences, operational insights etc) to create new products or services and to enhance the product experience.

References..

[1]  Mobile Payment Volumes Forecast – https://www.statista.com/statistics/226530/mobile-payment-transaction-volume-forecast/

Five Areas Where Big Data Drives Innovation in the Bill Pay Industry..

As the Bill Pay Industry Motors On…

The traditional model of service providers relying on call centers and face-to-face interactions with their customers to gauge their satisfaction is long past. With the advent of PSD2, the regulatory authorities themselves may be more open to new business models in the Bill Pay space.

With the explosion of data being collected from mobile applications, location based devices & social media, Bill Pay providers can monetize their years of historical data by opportunistically combining the above sources and providing analytics in the five strategic areas below –

  1. Ensuring the best possible & timely Customer Payment Experience – Younger customers are typically happy to use online channels like mobile phones and web applications to make their payments instead of relying on paper-based mail. Using online channels to process payments also results in higher satisfaction for both the end customer and the service provider, as funds transfer and availability are quicker and the process is less error prone. Leveraging Big Data to understand which customers prefer mobile channels (based on lifestyle & behavioral preferences), and helping them download the service provider's mobile application, can accelerate adoption of mobile payment modes. Another key use case is understanding which customers typically pay just before or after the deadline, thus incurring late fees – another source of customer dissatisfaction. The ability to reach out to a customer through the channel they prefer (mobile app, text message or phone call) can also help address dissatisfaction with services.
  2. Providing a Unified View of the Customer Across Multiple Service Accounts – Creating a single customer profile or view across multiple household services & interactions, along with the payment history across them, gives Service Providers the ability to understand the total Customer Lifetime Value (CLV) of a single customer. Creating this profile can also help drive business value in the following areas (a back-of-the-envelope CLV sketch follows these bullets).
  • What mode of contact do they prefer? And at what time? Can Customers be better targeted at these channels at those preferred times?
  • What is the overall Customer Lifetime Value (CLV), i.e. how much profit can be generated from this customer over their total lifetime?
  • By understanding CLV across populations, can Service Providers leverage that to increase spend on marketing & sales for products that are resulting in higher customer value?
  • Which of my customers are targets for promoting Green Services and Products?
  • What Features are customers currently missing?
  • How can Service Providers increase cross sell and up-sell of products & services?
  • Does this customer fall into a certain natural segment and if so, how can we acquire most customers like them?
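
As a concrete illustration of the CLV questions above, here is a back-of-the-envelope sketch. The margin, retention and discount figures are purely illustrative assumptions; real models would segment these by customer, product and channel.

```python
# Back-of-the-envelope Customer Lifetime Value (CLV), as referenced in the
# bullets above. The margin, retention and discount figures are assumptions.
def customer_lifetime_value(annual_margin, retention_rate, discount_rate, years=10):
    """Discounted sum of expected annual margins over a fixed horizon."""
    clv = 0.0
    for year in range(1, years + 1):
        survival = retention_rate ** year            # probability the customer is still active
        clv += annual_margin * survival / ((1 + discount_rate) ** year)
    return clv

print(round(customer_lifetime_value(annual_margin=120.0,
                                    retention_rate=0.85,
                                    discount_rate=0.10), 2))
```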

 


           Five Ways for Bill Pay Providers to Monetize their Data Assets

  3. Improving Customer Satisfaction – Creating a single customer profile or view across multiple household services & interactions can provide an ability for Service Providers to understand the total Customer Lifetime Value (CLV) of a single customer. Creating this profile can also help drive the business value in the following areas – Customer Satisfaction, Customer NPS (Net Promoter Score), Customer Mood & Willingness to adopt new services, Customer Retention etc.
  4. Analytics as a Service to interested 3rd Parties

The ability of consumers to make their household services payments can serve as a reliable indicator of household economic health as well as a sign of their willingness to adopt new products and services. This data can be anonymized at an individual consumer level, analyzed using machine learning and be provided as a service to various stakeholders – Other businesses like Retailers, the Government & the Regulatory Authorities.

Concrete examples include –

  • Combining social data and demographic data with bill pay data & other credit data can help the Government gauge the direction of the economy. Obviously, the more data that can be merged into this model (e.g. mortgage payment data), the better its overall accuracy
  • Allowing Retailers to analyze consumer mobile usage data, bill pay data and credit records, as well as external data (social media etc), to predict what products consumers may like and to target promotions & card offers accordingly

A final note on the overall scope of Predictive Analytics in this use case –

  • Obtaining a real-time Single View of the Customer (typically a customer across multiple channels, product silos & geographies) across years of account history
  • Customer Segmentation by helping businesses understand customer segments down to the individual level as well as at a segment level
  • Performing Customer sentiment analysis by combining internal organizational data, clickstream data and social media sentiment with structured sales history to provide a clear view into consumer behavior.
  • Product Recommendation engines which provide compelling personal product recommendations by mining realtime consumer sentiment and product affinity information along with historical data (a minimal co-occurrence sketch follows this list).
  • Market Basket Analysis, observing consumer purchase history and enriching this data with social media, web activity, and community sentiment regarding past purchase and future buying trends.
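
As a toy illustration of the recommendation and market basket ideas above, the sketch below computes simple product co-occurrence counts. The baskets and product names are made-up examples; real engines would work over far larger purchase histories and richer signals.

```python
# Minimal item co-occurrence sketch for the recommendation / market basket
# ideas above. Baskets and product names are made-up examples.
from collections import Counter
from itertools import combinations

baskets = [
    {"card_upgrade", "travel_insurance"},
    {"card_upgrade", "travel_insurance", "fx_wallet"},
    {"fx_wallet", "travel_insurance"},
    {"card_upgrade", "fx_wallet"},
]

# Count how often each pair of products is purchased together
pair_counts = Counter()
for basket in baskets:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

def recommend(product, top_n=2):
    """Products most frequently co-purchased with the given product."""
    scores = Counter()
    for (a, b), count in pair_counts.items():
        if a == product:
            scores[b] += count
        elif b == product:
            scores[a] += count
    return [item for item, _ in scores.most_common(top_n)]

print(recommend("card_upgrade"))   # e.g. ['travel_insurance', 'fx_wallet']
```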

5. Service Provider Analytics

Service Providers can themselves access this data to help with the various areas of their operations –

  • Improve new Consumer Acquisition by creating client profiles and helping develop targeted leads across a population of individuals
  • Instrument and understand Risk at multiple levels (customer churn, client risk etc) in real time
  • Financial risk modeling across multiple dimensions
  • For Providers with multiple products & services (e.g Cable, Voice and Internet), Basket Analysis based on criteria like behavioral preferences, asset allocation etc – i.e “what products & services are typically purchased in tandem”
  • Run in place analytics on customer lifetime value (CLV) and yield per customer
  • Suggest Next Best Action for a given client and across a pool of customers
  • Provide multiple levels of dashboards ranging from the Descriptive (Business Intelligence) to the Prescriptive (business simulation as well as optimization)
  • Help with Compliance and other reporting functions

Conclusion..

Bill Pay is a specialized area of the payments industry. However, the massive amounts of historical customer & service data that players possess can be advantageously leveraged to provide value added services and ultimately drive new business models.

Hadoop counters Credit Card Fraud..(2/3)

This article is the second installment in a three part series that covers one of the most critical issues facing the financial industry – Payment Card Fraud. While the first (and previous) post discussed the global scope of the problem & the business ramifications –  this post will discuss a candidate Big Data Architecture that can help financial institutions turn the tables on Fraudster Networks. The final post will cover the evolving business landscape in this sector – in the context of disruptive technology innovation (predictive & streaming analytics) and will make specific recommendations from a thought leadership standpoint.

Traditional Approach to Fraud Monitoring & Detection – 

Traditional Fraud detection systems have been focused on looking for factors such as known bad IP addresses or unusual login times based on Business Rules and Events. Advanced fraud detection systems augment the above approach with building models of customer behavior at the macro level. Then they would use these models to detect anomalous transactions and flag them as potentially being fraudulent. However, the scammers have also learnt to stay ahead of the scammed and are leveraging computing advances to come up with ever new ways of cheating the banks.

Case in point [1] –

In 2008 and 2009, PayPal tested several fraud detection packages, finding that none could provide correct analysis fast enough, Dr. Wang (head of Fraud Risk Sciences – PayPal) said. She declined to name the packages but said that the sheer amount of data PayPal must analyze slowed those systems down.

Why Big Data and Hadoop for Fraud Detection?

Big Data is dramatically changing that approach with advanced analytic solutions that are not only powerful and fast enough to detect fraud in real time but can also build models based on historical data (and deep learning) to proactively identify risks.

The business reasons why Hadoop is emerging as the best choice for fraud detection are –

  1. Real time insights –  Hadoop can be used to generate insights at a latency of a few milliseconds  that can assist Banks in detecting fraud as soon as it happens
  2. A Single View of Customer/Transaction & Fraud enabled by Hadoop
  3. Loosely coupled yet Cloud Ready Architecture
  4. Highly Scalable yet Cost effective

The technology reasons why Hadoop is emerging as the best choice for fraud detection are:

  1. Hadoop (Gen 2) is not just a data processing platform. It has multiple personas – a real time, streaming data, interactive platform for any kind of data processing (batch, analytical, in memory & graph based) along with search, messaging & governance capabilities built in – all of which support fraud detection architecture patterns
  2. Hadoop provides not just massive data storage capabilities but also multiple frameworks to process the data, resulting in response times of milliseconds with the utmost reliability – whether that is realtime data or historical processing of backend data
  3. Hadoop can ingest billions of events at scale thus supporting the most mission critical analytics irrespective of data size
  4. From a component perspective Hadoop supports multiple ways of running models and algorithms that are used to find patterns of fraud and anomalies in the data to predict customer behavior. Examples include Bayesian filters, Clustering, Regression Analysis, Neural Networks etc. Data Scientists & Business Analysts have a choice of MapReduce, Spark (via Java, Python or R), Storm etc and SAS, to name a few, to create these models (a minimal PySpark sketch follows this list). Fraud model development, testing and deployment on fresh & historical data become very straightforward to implement on Hadoop
  5. Hadoop is not all about highly scalable filesystems and processing engines. It also provides native integration with highly scalable NoSQL options including a database called HBase. HBase has been proven to support near real-time ingest of billions of data streams. HBase provides near real-time, random read and write access to tables containing billions of rows and millions of columns
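
As an illustration of point 4 above, here is a minimal PySpark sketch that trains a logistic regression fraud model over historical transactions stored on HDFS. The HDFS path, feature columns and the assumption of a numeric 0/1 "is_fraud" label are illustrative, not prescriptive.

```python
# Minimal PySpark sketch: train a logistic regression fraud model on
# historical card transactions stored on HDFS. The HDFS path, feature
# columns and the numeric 0/1 "is_fraud" label are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("fraud-model-sketch").getOrCreate()

# Labeled historical transactions (assumed layout)
txns = spark.read.parquet("hdfs:///data/cards/historical_transactions")

assembler = VectorAssembler(
    inputCols=["amount", "distance_from_home_km", "txns_last_hour", "merchant_risk"],
    outputCol="features")

train, test = assembler.transform(txns).randomSplit([0.8, 0.2], seed=42)

lr = LogisticRegression(featuresCol="features", labelCol="is_fraud")
model = lr.fit(train)

# Evaluate on held-out data, then persist the model for the scoring tier
print("AUC:", model.evaluate(test).areaUnderROC)
model.save("hdfs:///models/fraud/logistic_v1")
```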

Again, from [1] –

PayPal processes more than 1.1 petabytes of data for 169 million active customer accounts, according to James Barrese, PayPal’s chief technology officer. During customer transactions, subsets of the data are analyzed in real-time.

Since 2009, PayPal has been building and modifying its fraud analytics systems, incorporating new open-source technologies as they have evolved. For example, the company uses Hadoop to store data, and related analytics tools, such as the Kraken. A data warehouse from Teradata Corp. stores structured data. The fraud analysis systems run on both grid and cloud computing infrastructures.

Several kinds of algorithms analyze thousands of data points in real-time, such as IP address, buying history, recent activity at the merchant’s site or at PayPal’s site and information stored in cookies. Results are compared with external data from identity authentication providers. Each transaction is scored for likely fraud, with suspicious activity flagged for further automated and human scrutiny, Mr. Barrese said.

After implementing multiple large real time data processing applications using Big Data related technologies in financial services, we present an architectural pattern & technology stack that has been proven in very large production deployments. The key goal is to process millions of events per second, tens of billions of events per day and tens of terabytes of financial data per day – as is to be expected in a large Payment Processor or Bank.

Business Requirements

  1. Ingest (& cleanse) real time Card usage data to get complete view of every transaction with a view to detecting potential fraud
  2. Support multiple ways of ingest across a pub-sub messaging paradigm, clickstreams, logfile aggregation and batch data – at a minimum
  3. Allow business users to specify 1000’s of rules that signal fraud, e.g. when the same credit card is used from multiple IP addresses within a very short span of time (a minimal sketch of such a rule appears after this list)
  4. Support batch oriented analytics that provide predictive and historical models of performance
  5. As much as possible, eliminate false positives as these cause inconvenience to customers and also inhibit transaction volumes
  6. Support a very high degree of scalability – 10’s of millions of transactions a day and 100’s of TB of historical information
  7. Predict cardholder behavior (using a 360 degree view) to provide better customer service
  8. Help target customer transactions for personalized communications on transactions that raise security flags
  9. Deliver alerts the ways customers want — web, text, email and mail etc
  10. Track these events end to end from a strategic perspective across dashboards and predictive models
  11. Help provide a complete picture of high value customers to help drive loyalty programs
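
To make requirement 3 concrete, below is a minimal sketch of a sliding-window rule that flags a card seen from several distinct IP addresses in a short period. The window size and threshold are illustrative assumptions; in production such rules would be authored by business users and evaluated in the streaming tier.

```python
# Sketch of requirement 3: flag a card used from several distinct IP
# addresses within a short window. Window size and threshold are
# illustrative assumptions.
from collections import defaultdict, deque
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)
MAX_DISTINCT_IPS = 3

recent_by_card = defaultdict(deque)   # card_id -> deque of (timestamp, ip)

def too_many_ips(card_id, timestamp, ip):
    """Return True if this card has been seen from too many IPs recently."""
    events = recent_by_card[card_id]
    events.append((timestamp, ip))

    # Drop events that have aged out of the sliding window
    while events and timestamp - events[0][0] > WINDOW:
        events.popleft()

    return len({seen_ip for _, seen_ip in events}) >= MAX_DISTINCT_IPS

print(too_many_ips("card-1", datetime(2017, 1, 5, 10, 0), "203.0.113.1"))   # False
print(too_many_ips("card-1", datetime(2017, 1, 5, 10, 1), "198.51.100.2"))  # False
print(too_many_ips("card-1", datetime(2017, 1, 5, 10, 2), "192.0.2.3"))     # True
```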

Design and Architecture

The architecture thus needs to consider two broad data paradigms — data in motion and data at rest.

Data in motion is defined as streaming data that is being sent into an information architecture in real time. Examples of data in motion include credit card swipes, e-commerce tickets, web-based interactions and social media feeds that are a result of purchases or feedback about services. The challenge in this area is to assimilate a huge volume of data, filter it, derive reason from it and send it to downstream systems such as a business process management (BPM) system or partner systems. Managing the event data to make sure changing business rules/regulations are consistently integrated with the data is another key facet in this area.

Data at rest is defined as data that has been collected and ingested in a form that conforms to enterprise data architecture and governance specifications. This data needs to be assimilated or federated with pre-existing sources so that the business can query it in a read/write manner from a strategic and long-term perspective.

A Quick Note on Data Science and its applicability to Fraud Monitoring & Detection – 

Various posts in this blog have discussed the augmented capability of financial organizations to acquire, store and process large volumes of data using commodity (x86) hardware. At the same time, technologies such as Hadoop and Spark have enabled the collection, organization and analysis of Big Data at scale. The convergence of cost effective storage and scalable processing allows us to extract richer insights from data. These insights can then be operationalized to provide commercial and social value.

Data science refers to the process of extracting meaningful insights from large volumes of structured and unstructured data – the scientific exploration of data to extract meaning or insight, and the construction of software systems to utilize such insights in a business context. This involves the art of discovering data insights combined with the science of operationalizing them. A data scientist uses a combination of machine learning, statistics, visualization and computer science to extract valuable business insights hiding in data, and builds operational systems to deliver that value.

Data Science based approaches are core to the design and architecture of a Fraud Detection System. Data Mining techniques range from clustering and classification to finding patterns and associations among a large group of data. The machine learning components are classified into two categories: ‘supervised’ and ‘unsupervised’ learning. These methods look for accounts, customers, suppliers, etc. that behave ‘unusually’ in order to output suspicion scores, rules or visual anomalies, depending on the method. (Ref – Wikipedia)

It needs to be kept in mind that Data science is a cross-functional discipline. A data scientist is part statistician, part developer and part business strategist. The Data Science team collaborates with an extended umbrella team which includes visualization specialists, developers, business analysts, data engineers, applied scientists, architects, LOB owners and DevOps (ref – Hortonworks). The success of data science projects often relies on the communication, collaboration, and interaction that takes place with the extended team, both internally and possibly externally to their organizations.
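
To make the supervised/unsupervised distinction above concrete, here is a minimal unsupervised sketch that scores transactions for "unusualness" using scikit-learn's IsolationForest. The feature columns and values are made-up examples, not a recommendation of a particular model.

```python
# Minimal unsupervised anomaly scoring sketch with scikit-learn's
# IsolationForest. Features and values are made-up examples.
import numpy as np
from sklearn.ensemble import IsolationForest

# Columns: amount, distance_from_home_km, transactions_in_last_hour
history = np.array([
    [25.0,  2.0, 1],
    [60.0,  5.0, 2],
    [40.0,  1.0, 1],
    [80.0, 12.0, 2],
    [30.0,  3.0, 1],
])

model = IsolationForest(n_estimators=100, contamination=0.1, random_state=42)
model.fit(history)

new_txns = np.array([
    [45.0,    4.0, 1],    # looks like normal behavior
    [5000.0, 900.0, 8],   # large amount, far from home, high velocity
])

print(model.predict(new_txns))            # 1 = normal, -1 = anomaly
print(model.decision_function(new_txns))  # lower scores = more anomalous
```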

Reference Architecture


Illustration 1:  Candidate Architecture Pattern for a Fraud Detection Application 

 

The key technology components of the above reference architecture stack include:

  1. Information sources are depicted at the left. These encompass a variety of machine and human actors transmitting potentially thousands of real time messages per second. These are your typical Credit Card Swipes, Online transactions, Fraud databases and other core Banking data.
  2. A highly scalable messaging system to help bring these feeds into the architecture as well as normalize them and send them in for further processing. Apache Kafka is chosen for this tier. Realtime data is published by Payment Processing systems over Kafka queues. Each of these transactions has hundreds of attributes that can be analyzed in real time to detect patterns of usage. We leverage Kafka's integration with Apache Storm to read the transactions one at a time and persist them into an HBase cluster. In a modern data architecture built on Apache Hadoop, Kafka (a fast, scalable and durable message broker) works in combination with Storm, HBase (and Spark) for real-time analysis and rendering of streaming data. Kafka has been used to message everything from geospatial data from a fleet of long-haul trucks to financial data to sensor data from HVAC systems in office buildings.
  3. A Complex Event Processing tier that can process these feeds at scale to understand relationships among them, where the relationships among these events are defined by business owners in a non-technical language or by developers in a technical one. Apache Storm integrates with Kafka to process incoming data. Storm architecture is covered briefly in the section below.
  4. Once the machine learning models are defined, incoming data received from the Storm/Spark tier is run through the models to predict outlier transactions or potential fraud. When specific patterns indicating potential fraud are met, business process workflows are created that follow a process predefined and modeled by the business. For example (a code sketch of this lookup follows Illustration 2 below) –
    • Credit card transaction data comes as stream (typically through Kafka)
    • An external system has information about the credit card holder’s recent location (collected from GPS on mobile device and/or from mobile towers)
    • Each credit card transaction is looked up against user’s current location
    • If the geographic distance between the credit card transaction location and the user’s recent known location is significant (say 100 miles), the credit card transaction is flagged as potential fraud

Illustration 2: External Lookup Pattern for a Fraud Detection Application (Sheetal Dolas – Hortonworks)
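
A minimal sketch of this external lookup pattern is shown below, using the kafka-python client and a haversine distance check. The topic name, message fields, the stand-in location service and the 100-mile threshold are all illustrative assumptions.

```python
# Sketch of the external-lookup pattern in Illustration 2: compare each
# card transaction's location against the cardholder's last known device
# location and flag large gaps. Topic name, fields, the stand-in lookup
# and the 100-mile threshold are illustrative assumptions.
import json
from math import radians, sin, cos, asin, sqrt
from kafka import KafkaConsumer   # pip install kafka-python

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in miles."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 3959 * 2 * asin(sqrt(a))

def last_known_location(card_id):
    # Stand-in for the external lookup (mobile GPS / cell-tower service)
    return {"lat": 40.7128, "lon": -74.0060}

consumer = KafkaConsumer("card-transactions",
                         bootstrap_servers="localhost:9092",
                         value_deserializer=lambda m: json.loads(m.decode("utf-8")))

for message in consumer:
    txn = message.value
    loc = last_known_location(txn["card_id"])
    distance = haversine_miles(txn["lat"], txn["lon"], loc["lat"], loc["lon"])
    if distance > 100:
        print("Potential fraud:", txn["card_id"], round(distance, 1), "miles apart")
```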

  5. Data that has business relevance and needs to be kept for offline or batch processing can be handled using the storage platform based on the Hadoop Distributed Filesystem (HDFS). The idea is to deploy Hadoop oriented workloads (MapReduce, or, Machine Learning) to understand fraud patterns as they occur over a period of time. Historical data can be fed into the Machine Learning models created earlier and commingled with streaming data as discussed above.
  6. Horizontal scale-out is preferred as a deployment approach as this helps the architecture scale linearly as the loads placed on the system increase over time
  7. Output data elements can be written out to HDFS, and managed by HBase. From here, reports and visualizations can easily be constructed.
  8. One can optionally layer in search and/or workflow engines to present the right data to the right business user at the right time.


Messaging Broker Tier

The messaging broker tier (based on Apache Kafka) is the first point of entry in a system. It fundamentally hosts a set of message queues. The broker tier needs to be highly scalable while supporting a variety of cross language clients and protocols from Java, C, C++, C#, Ruby, Perl, Python and PHP. Using various messaging patterns to support real-time messaging, this tier integrates application, endpoints and devices quickly and efficiently. The architecture of this tier needs to be flexible so as to allow it to be deployed in various configurations to connect to customized solutions at every endpoint, payment outlet, partner, or device.
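
For illustration, here is a minimal sketch of publishing payment events into this broker tier with the kafka-python client. The broker address, topic name and event fields are assumptions made for the example.

```python
# Minimal sketch of publishing payment events into the broker tier,
# using the kafka-python client. Broker address, topic name and event
# fields are illustrative assumptions.
import json
from kafka import KafkaProducer   # pip install kafka-python

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"))

event = {"card_id": "card-1", "amount": 42.50, "merchant": "m-1001",
         "ip": "203.0.113.7", "lat": 51.5074, "lon": -0.1278}

# Keying by card id keeps a card's transactions on the same partition,
# which preserves per-card ordering for downstream consumers
producer.send("card-transactions", key=b"card-1", value=event)
producer.flush()
```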

Pipeline

Illustration 3: Multistage Data Refinery Pipeline for a Fraud Detection Application

Apache Storm is an Open Source distributed, reliable, fault-tolerant system for real time processing of large volumes of data. Spout and Bolt are the two main components in Storm, which work together to process streams of data (a conceptual sketch follows the bullets below).

  • Spout: Works on the source of data streams. In this use case, Spout will read realtime transaction data from Kafka topics.
  • Bolt: Spout passes streams of data to Bolt which processes and passes it to either a data store or another Bolt.
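
The sketch below is a conceptual, plain-Python illustration of that Spout-to-Bolt flow (it deliberately does not use the actual Storm API): the spout emits transactions it would normally read from a Kafka topic, and the bolt scores each one before handing it onward.

```python
# Conceptual sketch (plain Python, NOT the Storm API) of the Spout -> Bolt
# flow described above.
class TransactionSpout:
    """Stream source; in Storm this would read from a Kafka topic."""
    def __init__(self, transactions):
        self._pending = list(transactions)

    def next_tuple(self):
        return self._pending.pop(0) if self._pending else None

class ScoringBolt:
    """Processes each tuple; attaches a score before emitting it downstream."""
    def process(self, txn):
        txn["fraud_score"] = 90 if txn["amount"] > 1000 else 10
        return txn   # in Storm, this would be emitted to the next bolt or a store

spout = TransactionSpout([{"card_id": "card-1", "amount": 2500.0},
                          {"card_id": "card-2", "amount": 35.0}])
bolt = ScoringBolt()

txn = spout.next_tuple()
while txn is not None:
    print(bolt.process(txn))
    txn = spout.next_tuple()
```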


                                                        Illustration 4:  Kafka-Storm integration

Storage Tier

There are broad needs for two distinct data tiers that can be identified based on business requirements.

  1. Some data needs to be pulled in near realtime, accessed in a low latency pattern and have calculations performed on it. The design principle here needs to be “Write Many and Read Many” with an ability to scale out tiers of servers (a minimal sketch of writing to such a store follows this list)
  2. In memory technology based on Spark is very suitable for this use case as it not only supports a very high write rate but also gives users the ability to store, access, modify and transfer extremely large amounts of distributed data. A key advantage here is that Hadoop based architectures can pool memory and can scale out across a cluster of servers in a horizontal manner. Further, computation can be pushed into the tiers of servers running the data grid as opposed to pulling data into the computation tier.
  3. As the data volumes increase in size, compute can scale linearly to accommodate them. The standard means of doing so is through techniques such as data distribution and replication. Replicas are nothing but copies of the same segment or piece of data that are stored across (aka distributed) a cluster of servers for purposes of fault tolerance and speedy access. Smart clients can retrieve data from a subset of servers by understanding the topology of the grid. This speeds up query performance for tools like business intelligence dashboards and web portals that serve the business community.
  4. The second data access pattern that needs to be supported is storage for data that is older. This is typically large scale historical data. The primary data access principle here is “Write Once, Read Many.” This layer contains the immutable, constantly growing master dataset stored on a distributed file system like HDFS. Besides being a storage mechanism, the data stored in this layer can be formatted in a manner suitable for consumption from any tool within the Apache Hadoop ecosystem like Hive or Pig or Mahout.
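
To illustrate the "Write Many and Read Many" tier, here is a minimal sketch of persisting and reading back scored transactions in HBase via the happybase client. The Thrift host, table, column families and row-key scheme are illustrative assumptions, not a prescribed schema.

```python
# Sketch of persisting scored transactions into the low-latency
# "write many / read many" tier, illustrated with HBase via happybase.
# Host, table, column families and row-key scheme are assumptions.
import happybase   # pip install happybase

connection = happybase.Connection("hbase-thrift-host", port=9090)
table = connection.table("card_transactions")

# Row key: card id + reversed timestamp is a common pattern so that a
# card's most recent activity sorts first on scans
row_key = b"card-1|9999999999-1483610760"

table.put(row_key, {
    b"txn:amount": b"2500.00",
    b"txn:merchant": b"m-1001",
    b"score:fraud": b"90",
})

# Low-latency read-back of the card's recent activity
for key, data in table.scan(row_prefix=b"card-1|", limit=10):
    print(key, data)
```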

The final word [1] – 

Since 2009, PayPal has been building and modifying its fraud analytics systems, incorporating new open-source technologies as they have evolved. For example, the company uses Hadoop to store data, and related analytics tools, such as the Kraken. A data warehouse from Teradata Corp. stores structured data. The fraud analysis systems run on both grid and cloud computing infrastructures.

Several kinds of algorithms analyze thousands of data points in real-time, such as IP address, buying history, recent activity at the merchant’s site or at PayPal’s site and information stored in cookies. Results are compared with external data from identity authentication providers. Each transaction is scored for likely fraud, with suspicious activity flagged for further automated and human scrutiny, Mr. Barrese said.

For example, “a very bad sign” is when one account shows IP addresses from 10 parts of the world, Dr. Wang said, because it suggests the account might have been hacked.

The system tags the account for review by human experts, she said. “They might discover that the IP addresses are at airports and this guy is a pilot,” she said. Once verified, that intelligence is fed back into PayPal’s systems. Humans don’t make the system faster, but they make real-time decisions as a check against, and supplement to, the algorithms, she said.

The combination of open-source technology, online caching, algorithms and “human detectives,” she said, “gives us the best analytical advantage.”

References – 

[1] “PayPal fights Fraud With Machine Learning and Human Detectives” – From WSJ.com

http://blogs.wsj.com/cio/2015/08/25/paypal-fights-fraud-with-machine-learning-and-human-detectives/