The Definitive Reference Architecture for Market Surveillance (CAT, UMIR and MiFID II) in Capital Markets..

We have discussed the topic of market surveillance reporting in some depth in previous blogs, e.g. http://www.vamsitalkstech.com/?p=2984. Over the last decade, global financial markets have embraced the high speed of electronic trading. This trend has only accelerated with the concomitant explosion in trading volumes. The diverse range of instruments & the proliferation of trading venues pose massive regulatory challenges in the area of market conduct supervision and abuse prevention. Banks, broker-dealers, exchanges and other market participants across the globe are now shelling out millions of dollars in fines for failure to accurately report on market abuse violations. In response to this complex world of high volume & low touch electronic trading, capital markets regulators have been hard at work across different jurisdictions & global hubs, e.g. FINRA in the US, IIROC in Canada and ESMA in the European Union. Regulators have created extensive reporting regimes for surveillance with a view to detecting suspicious patterns of trade behavior (e.g. dumping, quote stuffing & non-bona fide fake orders). The intent is to increase market transparency on both the buy and the sell side. Given the scrutiny Capital Markets players are under, a Big Data Analytics based architecture has become a “must-have” to ensure timely & accurate compliance with these mandates. This blog attempts to discuss such a reference architecture.

Business Technology Requirements for Market Surveillance..

The business requirements for the surveillance architecture are covered in more detail at the link below, but are reproduced here in a concise fashion.

A POV on European Banking Regulation.. MAR, MiFID II et al

Some of the key business requirements that can be distilled from regulatory mandates include the below:

  • Store heterogeneous data – Both MiFID II and MAR mandate the need to perform trade monitoring & analysis on not just real time data but also on historical data spanning a few years. Among others, this will include data feeds from a range of business systems – trade data, eComms, aComms, valuation & position data, order management systems, position management systems, reference data, rates, market data, client data, front, middle & back office data, voice, chat & other internal communications etc. To sum up, the ability to store a range of cross asset (almost all kinds of instruments), cross format (structured & unstructured including voice), cross venue (exchange, OTC etc) trading data with a higher degree of granularity is key.
  • Data Auditing – Such stored data needs to be fully auditable for 5 years. This implies not just being able to store it but also putting capabilities in place to ensure strict governance & audit trail capabilities.
  • Manage a huge increase in data storage volumes (5+ years of data) due to extensive record-keeping requirements
  • Perform Realtime Surveillance & Monitoring of data – Once data is collected, normalized & segmented, the system will need to support realtime monitoring of data (around 5 seconds) to ensure that every trade can be tracked through its lifecycle. Detecting patterns of market abuse and monitoring for best execution are key.
  • Business Rules – Core logic that deals with identifying some of the above trade patterns is created using business rules. Business Rules have been covered in various areas in the blog but they primarily work based on an IF..THEN..ELSE construct, as illustrated in the sketch after this list.
  • Machine Learning & Predictive Analytics – A variety of supervised and unsupervised learning approaches can be used to perform extensive Behavioral Modeling & Segmentation to discover transaction behavior, with a view to identifying behavioral patterns of traders & any outlier behaviors that connote potential regulatory violations.
  • A Single View of an Institutional Client – From the firm’s standpoint, it would be very useful to have a single view capability for clients that shows all of their positions across multiple desks, risk positions, KYC scores etc.
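
To make the IF..THEN..ELSE construct concrete, below is a minimal, illustrative rule sketch in Python. The field names, thresholds and the quote-stuffing heuristic are hypothetical; a production system would typically externalize such logic into a dedicated rules engine.

```python
# A minimal, illustrative IF..THEN..ELSE style surveillance rule.
# Field names and thresholds are hypothetical, not taken from any regulation.
def evaluate_order_rate_rule(order_count: int,
                             cancel_count: int,
                             window_seconds: int = 5) -> str:
    """Flag a potential quote-stuffing pattern when a trader submits a burst
    of orders within a short window and cancels almost all of them."""
    cancel_ratio = cancel_count / order_count if order_count else 0.0
    if order_count > 500 and cancel_ratio > 0.95:
        return "ALERT: possible quote stuffing - route to compliance queue"
    elif order_count > 500:
        return "WARN: elevated order rate - keep monitoring"
    else:
        return "OK"

print(evaluate_order_rate_rule(order_count=800, cancel_count=790))  # -> ALERT
```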

A Reference Architecture for Market Surveillance..

This reference architecture aims to provide generic guidance to banking Business IT Architects building solutions in the realm of Market & Trade Surveillance. It supports a host of hugely important global reg reporting mandates – CAT, MiFID II, MAR etc – that Capital Markets players need to comply with. While the concepts discussed in this solution architecture are definitely Big Data oriented, they are largely agnostic to any cloud implementation – private, public or hybrid.

A Market Surveillance system needs to include both real time surveillance of trading activity as well as a retrospective (batch oriented) analysis component. The real time component includes the ability to perform realtime calculations (concerning thresholds, breached limits etc) and real time queries with the goal of triggering alerts. Both these kinds of analytics span structured and unstructured data sources. For the batch component, the analytics range from data queries and simple to advanced statistics (min, max, avg, std deviation, sorting, binning, segmentation) to running data science models involving text analysis & search etc.

The system needs to process tens of millions to billions of events in a trading window while providing the highest uptime guarantees. Batch analysis is always running in the background.
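
As an illustration of the real time component, the sketch below uses Spark Structured Streaming to watch a Kafka topic of trade events and flag traders whose notional over a short sliding window breaches a limit. The broker address, topic name, schema and threshold are all assumptions made for the example.

```python
# Minimal sketch: real-time threshold surveillance with Spark Structured Streaming.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("SurveillanceAlerts").getOrCreate()

# Assumed shape of a trade event published on the (hypothetical) "trade-events" topic.
schema = StructType([
    StructField("trader_id", StringType()),
    StructField("symbol", StringType()),
    StructField("notional", DoubleType()),
    StructField("event_time", TimestampType()),
])

trades = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker1:9092")
          .option("subscribe", "trade-events")
          .load()
          .select(F.from_json(F.col("value").cast("string"), schema).alias("t"))
          .select("t.*"))

# Aggregate notional per trader over 5-second windows and flag breaches.
alerts = (trades
          .withWatermark("event_time", "30 seconds")
          .groupBy(F.window("event_time", "5 seconds"), "trader_id")
          .agg(F.sum("notional").alias("window_notional"))
          .filter(F.col("window_notional") > 10_000_000))  # illustrative limit

query = alerts.writeStream.outputMode("update").format("console").start()
query.awaitTermination()
```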

A Hadoop distribution that includes components such as Kafka and HBase along with near real time components such as Storm & Spark Streaming provides a good fit for a responsive architecture. Apache NiFi, with its ability to ingest data from a range of sources, is preferred for its support of complex data routing, transformation, and system mediation logic in a complex event processing architecture. The capabilities of Hortonworks Data Flow (the enterprise version of Apache NiFi) are covered in much detail in the blogpost below.

Use Hortonworks Data Flow (HDF) To Connect The Dots In Financial Services..(3/3)

A Quick Note on Data Ingestion..

Data volumes in the area of regulatory reporting can range from huge to massive. For instance, at large banks, they can go up to 100s of millions of transactions a day. At market venues such as stock exchanges, they easily run into hundreds of billions of messages every trading day. However, the data itself is extremely powerful & is really business gold in terms of allowing banks to not just file mundane reg reports but also to perform critical line of business processes such as Single View of Customer, Order Book Analysis, TCA (Transaction Cost Analysis), Algo Backtesting, Price Creation Analysis etc. The architecture thus needs to support multiple modes of storage, analysis and reporting, ranging from compliance reporting to data science to business intelligence.

Real time processing in this proposed architecture is powered by Apache NiFi. There are five important reasons for this decision –

  • First of all, complex rules can be defined in NiFi in a very flexible manner. As an example, one can execute SQL queries in processor A against incoming data from any source (data that isn’t from relational databases but JSON, Avro etc.) and then route different results to different downstream processors based on the needs for processing, while enriching it. E.g. processor A could be event driven, and if any data is being routed there, a field can be added or an alert sent to XYZ. Essentially this can be very complex, equivalent to a nested rules engine so to speak.
  • From a throughput standpoint, a single NiFi node can typically handle somewhere between 50 MB/s and 150 MB/s depending on your hardware spec and data structure. Assuming average message sizes of 100-500 KB and a target throughput of 600 MB/s, the architecture can be sized to about 5-10 NiFi nodes. It is important to note that the latency of inbound message processing depends on the network and can be extremely small. Under the hood, you are sending data from the source to a NiFi node (disk), extracting some attributes in memory to process, and delivering to the target system.
  • Data quality can be handled via the aforementioned “nested rules engine” approach, consisting of multiple NiFi processors. One can even embed an entire rules engine into a single processor. Similarly, you can define simple authentication rules at the event level. For instance, if Field A = English, route the message to an “authenticated” relationship; otherwise send it to an “unauthenticated” relationship (a minimal scripted illustration of this tagging follows this list).

  • One of the cornerstones of NiFi is “Data Provenance”, allowing you to have end to end traceability. Not only can the event lifecycle of trade data be traced, but you can also track the time at which a change happened, the user role that made the change and metadata around why it happened.

  • Security – NiFi enables authentication at ingest. One can authenticate data via the rules defined in NiFi, or leverage target system authentication which is implemented at processor level. For example, the PutHDFS processor supports kerberized HDFS, the same applies for Solr and so on.
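
To illustrate the event level tagging described above, here is a hypothetical Jython body for NiFi’s ExecuteScript processor. It assumes an upstream processor (e.g. EvaluateJsonPath) has already promoted the relevant field to a FlowFile attribute, and that a downstream RouteOnAttribute processor performs the actual routing on the tag.

```python
# Illustrative Jython script body for a NiFi ExecuteScript processor.
# Assumes the field "language" was promoted to a FlowFile attribute upstream.
flowFile = session.get()
if flowFile is not None:
    language = flowFile.getAttribute('language')
    # Tag the event; a downstream RouteOnAttribute processor can then route
    # tagged events to an "authenticated" or "unauthenticated" path.
    if language == 'English':
        flowFile = session.putAttribute(flowFile, 'auth.status', 'authenticated')
    else:
        flowFile = session.putAttribute(flowFile, 'auth.status', 'unauthenticated')
    session.transfer(flowFile, REL_SUCCESS)
```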

Overall Processing flow..

The below illustration shows the high-level conceptual architecture. The architecture is composed of core platform services and application-level components to facilitate the processing needs across three major areas of a typical surveillance reporting solution:

  • Connectivity to a range of trade data sources
  • Data processing, transformation & analytics
  • Visualization and business connectivity
Reference Architecture for Market Surveillance Reg Reporting – CAT, MAR, MiFID II et al

The overall processing of data follows the order shown below, as depicted in the illustration above –

  1. Data Production – Data related to trades and their lifecycle is produced from a range of business systems. These feeds include (but are not limited to) trade data, valuation & position data, order management systems, position management systems, reference data, rates, market data, client data, front, middle & back office data, voice, chat & other internal communications etc.
  2. Data Ingestion – Data produced from the above layer is ingested using Apache NiFi from the range of sources described above. Data can also be filtered, and alerts can be set up based on complex event logic. For time series data support, HBase can be leveraged along with OpenTSDB. For CEP requirements, such as sliding windows and complex operators, NiFi can be leveraged along with a Kafka and Storm pipeline. Using NiFi makes it easier to load data into the data lake while applying guarantees around the delivery itself. Data can be streamed in real time as it is created in the feeder systems. Data is also loaded at the end of the trading day based on the P&L sign off and the end of day close processes. The majority of the data will be fed in from Book of Record Trading systems as well as from market data providers.
  3. As trade and other data is ingested into the data lake, it is important to note that the route in which certain streams are processed will differ from how other streams are processed. Thus the ingest architecture needs to support multiple types of processing, ranging from in memory processing to intermediate transformations on certain data streams that produce a different representation of the stream. This is where NiFi adds critical support in not just handling a huge transaction throughput but also enabling “on the fly processing” of data in pipelines. As mentioned, NiFi does this via the concept of “processors”.
  4. The core data processing platform is then based on a data lake pattern which has been covered in this blog before. It includes the following pattern of processing.
    1. Data is ingested in real time into an HBase database (which uses HDFS as the underlying storage layer). Tables are designed in HBase to store the profile of a trade and its lifecycle.
    2. Producers are authenticated at the point of ingest.
    3. Once the data has been ingested into HDFS, it is taken through a pipeline of processing (L0 to L3) as depicted in the below blogpost.

      http://www.vamsitalkstech.com/?p=667

    4. Historical data (defined as T+1) once in the HDFS tier is taken through layers of processing as discussed above. One of the key areas of processing is to run machine learning on the data to discover any hidden patterns in the trades themselves – patterns that can connote a range of suspicious behavior. Most surveillance applications are based on a search for data that breaches thresholds and seek to match sell & buy orders. The idea is that when these rules are breached, alerts are generated for compliance officers to conduct further investigation. However, this method falls short with complex types of market abuse. A range of supervised learning techniques can then be applied on the data, such as creating a behavioral profile of different kinds of traders (for instance junior and senior) by classifying & then scoring them based on their likelihood to commit fraud. Thus a range of Surveillance Analytics can be performed on the data. Apache Spark is highly recommended for near realtime processing, not only due to its high performance characteristics but also due to its native support for graph analytics and machine learning – both of which are critical to surveillance reporting. For a deeper look at data science, I recommend the post below.

      http://www.vamsitalkstech.com/?p=1846

    5. The other important value driver in deploying Data Science is to perform Advanced Transaction Monitoring Intelligence. The core idea is to get years’ worth of trade data in one location (i.e. the data lake) & then apply unsupervised learning to glean patterns in those transactions. The goal is then to identify profiles of actors with the intent of feeding them into existing downstream surveillance & TM systems.
    6. This knowledge can then be used to continuously learn transaction behavior for similar traders. This can be a very important capability in detecting fraud in traders, customer accounts and instruments (a minimal clustering sketch appears at the end of this list). Some of the use cases are –
      • Profile trading activity of individuals with similar traits (types of customers, trading desks & instruments, geographical areas of operations etc.) to perform Know Your Trader
      • Segment traders by similar experience levels and behavior
      • Understand common fraudulent behavior typologies (e.g. spoofing) and cluster such (malicious) trading activities by trader, instrument, volume etc. The goal is to raise appropriate cases in the downstream investigation case management system
      • Using advanced data processing techniques like Natural Language Processing, constantly analyze electronic communications and join them up with trade data sources to both detect under-the-radar activity and keep the false positive rate low.
    7. Graph Database – Given that most kinds of trading fraud happen in groups of actors – traders acting in collusion with verification & compliance staff – the ability to view complex relationships of interactions and the strength of those interactions can be a significant monitoring capability.
    8. Grid Layer – To improve performance, I propose the usage of a distributed in memory data fabric like JBoss Data Grid or Pivotal GemFire. This can aid in two ways –

      a. Help with fast lookup of data elements by the visualization layer
      b. Help perform fast computations by overlaying a framework like Spark or MapReduce directly onto a stateful data fabric.

      The choice of tools here is dependent on the language choices that have been made in building the pricing and risk analytic libraries across the Bank. If multiple language bindings are required (e.g. C# & Java) then the data fabric will typically be a different product than the Grid.
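
As a concrete illustration of the unsupervised learning step referenced above, the sketch below clusters traders into behavioral peer groups using Spark MLlib. The feature names, the data lake path and the cluster count are assumptions for the example, not a prescribed model; outliers relative to their cluster then become candidates for compliance review.

```python
# Minimal sketch: clustering trader behavior profiles with Spark MLlib KMeans.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler, StandardScaler
from pyspark.ml.clustering import KMeans

spark = SparkSession.builder.appName("TraderBehaviorClustering").getOrCreate()

# Hypothetical per-trader features aggregated from the processed (L3) zone.
features = spark.read.parquet("/datalake/l3/trader_behavior_features")

assembler = VectorAssembler(
    inputCols=["order_count", "cancel_ratio", "avg_order_size", "after_hours_pct"],
    outputCol="raw_features")
scaler = StandardScaler(inputCol="raw_features", outputCol="features")

assembled = assembler.transform(features)
scaled = scaler.fit(assembled).transform(assembled)

# Cluster traders into behavioral peer groups.
model = KMeans(k=8, seed=42, featuresCol="features").fit(scaled)
clustered = model.transform(scaled)   # adds a "prediction" (cluster id) column
clustered.groupBy("prediction").count().show()
```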

Data Visualization…

The visualization solution chosen should enable the quick creation of interactive dashboards that provide KPIs and other important business metrics from a process monitoring standpoint. Dashboards need to be created at various levels, ranging from compliance officer toolboxes to executive dashboards that help identify trends and discover valuable insights.

Compliance Officer Toolbox (Courtesy: Arcadia Data)

Additionally, the visualization layer shall provide

a) A single view of Trader or Trade or Instrument or Entity

b) Investigative workbench with Case Management capability

c) The ability to follow the lifecycle of a trade

d) The ability to perform ad hoc queries over multiple attributes

e) Activity correlation across historical and current data sets

f) Alerting on specific metrics and KPIs

To Sum Up…

The solution architecture described in this blogpost is designed with peaceful enterprise co-existence in mind, in the sense that it interacts and is integrated with a range of BORT systems and other enterprise systems such as ERP, CRM and legacy surveillance systems, as well as any other line of business solutions that typically exist as shared enterprise resources.

Design and Architecture of A Robo-Advisor Platform..(3/3)

This three part series explores the automated investment management or “Robo-advisor” (RA) movement. The first post in this series (http://www.vamsitalkstech.com/?p=2329) discussed how Wealth Management has been an area largely untouched by automation as far as the front office is concerned. As a result, automated investment vehicles have begun changing that trend and are helping create a variety of business models in the industry, especially those catering to the Millennial mass affluent segment. The second post (http://www.vamsitalkstech.com/?p=2418) focused on the overall business model & main functions of a Robo-Advisor (RA). This third and final post covers a generic technology architecture for an RA platform.

Business Requirements for a Robo-Advisor (RA) Platform…

Some of the key business requirements of a RA platform that confer it advantages as compared to the manual/human driven style of investing are:

  • Collect Individual Client Data – RA Platforms need to offer a high degree of customization from the standpoint of an individual investor. This means an ability to provide a preferably mobile and web interface to capture detailed customer financial background, existing investments as well as any historical data regarding customer segments etc.
  • Client Segmentation – Clients are to be segmented across more granular dimensions as opposed to the traditional asset based methodology (e.g. mass affluent, high net worth, ultra high net worth etc).
  • Algorithm Based Investment Allocation – Once the client data is collected, normalized & segmented, a variety of algorithms are applied to the data to classify the client’s overall risk profile and an investment portfolio is allocated based on those requirements. Appropriate securities are purchased as we will discuss in the sections below.
  • Portfolio Rebalancing – The client’s portfolio is rebalanced appropriately depending on life event changes and market movements (a minimal rebalancing sketch follows this list).
  • Tax Loss Harvesting – Tax-loss harvesting is the mechanism of selling securities that have a loss associated with them. By taking a loss, the idea is that the client can offset taxes on both gains and income. The sold securities are replaced by similar securities by the RA platform, thus maintaining the optimal investment mix.
  • A Single View of a Client’s Financial History- From the WM firm’s standpoint, it would be very useful to have a single view capability for a RA client that shows all of their accounts, interactions & preferences in one view.
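
To make the rebalancing workflow concrete, the sketch below computes the buy/sell amounts needed to bring a drifted portfolio back to its target weights. The asset classes, drift threshold and holdings are hypothetical, and real platforms layer in tax, trading cost and fractional share considerations.

```python
# A minimal, illustrative drift-based rebalancing calculation.
from typing import Dict

def rebalance(holdings: Dict[str, float],
              target_weights: Dict[str, float],
              drift_threshold: float = 0.05) -> Dict[str, float]:
    """Return the dollar amount to buy (+) or sell (-) per asset class
    whenever its current weight drifts beyond the threshold from target."""
    total = sum(holdings.values())
    trades = {}
    for asset, target in target_weights.items():
        current_value = holdings.get(asset, 0.0)
        current_weight = current_value / total if total else 0.0
        if abs(current_weight - target) > drift_threshold:
            trades[asset] = round(target * total - current_value, 2)
    return trades

# Example: equity has drifted above its 60% target after a market rally.
holdings = {"equity": 68_000.0, "fixed_income": 24_000.0, "cash": 8_000.0}
targets = {"equity": 0.60, "fixed_income": 0.30, "cash": 0.10}
print(rebalance(holdings, targets))
# -> {'equity': -8000.0, 'fixed_income': 6000.0}
```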

User Interface Requirements for a Robo-Advisor (RA) Platform…

Once a customer logs in using any of the digital channels supported (e.g. Mobile, eBanking, Phone etc), they are presented with a single view of all their accounts. This view has a few critical areas – a Summary View (showing an aggregated view of their financial picture) and a Transfer View (allowing one to transfer funds across accounts with other providers).

The Summary View lists the following:

  • Demographic info: Customer name, address, age
  • Relationships: customer rating influence, connections, associations across client groups
  • Current activity: financial products, account interactions, any burning customer issues, payments missed etc
  • Customer Journey Graph: which products or services they have been associated with since they first became a customer etc.

Depending on the client’s risk tolerance and investment horizon, the weighted allocation of investments across these categories will vary. To illustrate this, a Model Portfolio and an example are shown below.

Algorithms for a Robo-Advisor (RA) Platform…

There are a variety of algorithmic approaches that could be taken to building out an RA platform. However, the common features of all of these are to –

  • Leverage data science & statistical modeling to automatically allocate client wealth across different asset classes (such as domestic/foreign stocks, bonds & real estate related securities) and to automatically rebalance portfolio positions based on changing market conditions or client preferences. These investment decisions are also made based on a detailed behavioral understanding of a client’s financial journey metrics – age, risk appetite & other related information.
  • A mixture of different algorithms can be used, such as Modern Portfolio Theory (MPT), the Capital Asset Pricing Model (CAPM), the Black-Litterman model, the Fama-French model etc. These are used to allocate assets as well as to adjust positions based on market movements and conditions (a small CAPM illustration follows this list).
  • RA platforms also provide 24×7 tracking of market movements and use that to drive rebalancing decisions from not just a portfolio standpoint but also from a taxation standpoint.
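
As a small worked illustration of the CAPM mentioned above, the function below computes an expected return for an asset. The numbers are purely hypothetical and only show how such a formula can feed the allocation logic.

```python
# Illustrative CAPM calculation: E[R_i] = R_f + beta_i * (E[R_m] - R_f)
def capm_expected_return(risk_free_rate: float,
                         beta: float,
                         expected_market_return: float) -> float:
    return risk_free_rate + beta * (expected_market_return - risk_free_rate)

# Example: 2% risk-free rate, a beta of 1.2, 8% expected market return.
print(f"{capm_expected_return(0.02, 1.2, 0.08):.3f}")   # -> 0.092, i.e. 9.2%
```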

Model Portfolios…

  1. Equity
     A) US Domestic Stock – Large Cap, Medium Cap, Small Cap, Dividend Stocks
     B) Foreign Stock – Emerging Markets, Developed Markets
  2. Fixed Income
     A) Developed Market Bonds
     B) US Bonds
     C) International Bonds
     D) Emerging Markets Bonds
  3. Other
     A) Real Estate
     B) Currencies
     C) Gold and Precious Metals
     D) Commodities
  4. Cash

Sample Portfolios – for an aggressive investor…

  1. Equity – 85%
     A) US Domestic Stock (50%) – Large Cap – 30%, Medium Cap – 10%, Small Cap – 10%, Dividend Stocks – 0%
     B) Foreign Stock (35%) – Emerging Markets – 18%, Developed Markets – 17%
  2. Fixed Income – 5%
     A) Developed Market Bonds – 2%
     B) US Bonds – 1%
     C) International Bonds – 1%
     D) Emerging Markets Bonds – 1%
  3. Other – 5%
     A) Real Estate – 3%
     B) Currencies – 0%
     C) Gold and Precious Metals – 0%
     D) Commodities – 2%
  4. Cash – 5%
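
For illustration, the sample allocation above can be expressed as a simple target-weights data structure that rebalancing logic (like the sketch shown earlier) can consume. The keys and nesting below are just one possible layout, not a prescribed schema.

```python
# The aggressive sample portfolio above expressed as target weights (fractions of 1.0).
aggressive_portfolio = {
    "equity": {
        "us_large_cap": 0.30, "us_medium_cap": 0.10, "us_small_cap": 0.10,
        "us_dividend": 0.00, "foreign_emerging": 0.18, "foreign_developed": 0.17,
    },
    "fixed_income": {
        "developed_market_bonds": 0.02, "us_bonds": 0.01,
        "international_bonds": 0.01, "emerging_market_bonds": 0.01,
    },
    "other": {
        "real_estate": 0.03, "currencies": 0.00,
        "gold_precious_metals": 0.00, "commodities": 0.02,
    },
    "cash": {"cash": 0.05},
}

# Sanity check: the weights across all sleeves must sum to 100%.
total = sum(w for sleeve in aggressive_portfolio.values() for w in sleeve.values())
assert abs(total - 1.0) < 1e-9, total
```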

Technology Requirements for a Robo-Advisor (RA) Platform…

An intelligent RA platform has a few core technology requirements (based on the above business requirements).

  1. A Single Data Repository – A shared data repository called a Data Lake is created that can capture every bit of client data (explained in more detail below) as well as external data. The RA data lake provides more visibility into all data to a variety of different stakeholders. Wealth Advisors access processed data to view client accounts etc. Clients can access their own detailed positions, account balances etc. The Risk group accesses this shared data lake to process position, execution and balance data. Data Scientists (or Quants) who develop models for the RA platform also access this data to perform analysis on fresh data (from the current workday) or on historical data. All historical data is available for at least five years – much longer than before. Moreover, the Hadoop platform enables ingest of data across a range of systems despite their having disparate data definitions and infrastructures. All the data that pertains to trade decisions and lifecycle needs to be made resident in a general enterprise storage pool that is run on HDFS (the Hadoop Distributed Filesystem) or a similar cloud based filesystem. This repository is augmented by incremental feeds of intra-day trading activity data that will be streamed in using technologies like Sqoop, Kafka and Storm.
  2. Customer Data Collection – Existing financial data across the below categories is collected & aggregated into the data lake. This data ranges from Customer Data and Reference Data to Market Data & other client communications. All of this data can be ingested using an API or pulled into the lake from a relational system using connectors supplied in the RA Data Platform. Examples of data collected include the customer’s existing brokerage accounts, the customer’s savings accounts, behavioral finance surveys and questionnaires etc. The RA Data Lake stores all internal & external data.
  3. Algorithms – The core of the RA Platform is its data science algorithms. Whatever algorithms are used, a few critical workflows are common to them. The first, Asset Allocation, takes the customer’s input in the “ADVICE” tab for each type of account and tailors the portfolio based on that input. The others include Portfolio Rebalancing and Tax Loss Harvesting.
  4. The RA platform should be able to store market data across years, both from a macro and from an individual portfolio standpoint, so that several key risk measures such as volatility (e.g. position risk, any residual risk and market risk), Beta, and R-Squared can be calculated at multiple levels. This applies to individual securities, a specified index, and the client portfolio as a whole, as sketched below.
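
As a minimal illustration of those portfolio risk measures, the snippet below computes Beta, R-squared and annualized volatility from daily return series. The return arrays are made-up sample values, and the 252-day annualization factor is a common convention rather than a requirement.

```python
# Minimal sketch: Beta, R-squared and volatility of a portfolio vs. a benchmark index.
import numpy as np

portfolio_returns = np.array([0.012, -0.004, 0.007, 0.003, -0.009, 0.011])
index_returns     = np.array([0.010, -0.005, 0.006, 0.002, -0.008, 0.009])

# Beta: covariance of portfolio vs. index divided by the index variance.
cov = np.cov(portfolio_returns, index_returns, ddof=1)
beta = cov[0, 1] / cov[1, 1]

# R-squared: squared correlation between portfolio and index returns.
r_squared = np.corrcoef(portfolio_returns, index_returns)[0, 1] ** 2

# Annualized volatility from daily returns (~252 trading days per year).
volatility = portfolio_returns.std(ddof=1) * np.sqrt(252)

print(f"beta={beta:.2f}, r_squared={r_squared:.2f}, annualized_vol={volatility:.2%}")
```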

roboadvisor_design_arch

                      Illustration: Architecture of a Robo-Advisor (RA) Platform 

The overall logical flow of data in the system –

  • Information sources are depicted at the left. These encompass a variety of institutional, system and human actors potentially sending thousands of real time messages per hour or by sending over batch feeds.
  • A highly scalable messaging system helps bring these feeds into the RA platform architecture as well as normalize them and send them on for further processing. Apache Kafka is a good choice for this tier. Realtime data is published by a range of systems over Kafka queues (a minimal producer sketch appears after this list). Each of the transactions could potentially include 100s of attributes that can be analyzed in real time to detect business patterns. We leverage Kafka integration with Apache Storm to read one value at a time and persist the data into an HBase cluster. In a modern data architecture built on Apache Hadoop, Kafka (a fast, scalable and durable message broker) works in combination with Storm, HBase (and Spark) for real-time analysis and rendering of streaming data.
  • Trade data is thus streamed into the platform (on a T+1 basis), which ingests, collects, transforms and analyzes core information in real time. The analysis can be both simple and complex event processing, based on pre-existing rules that can be defined in a rules engine, which is invoked with Apache Storm. A Complex Event Processing (CEP) tier can process these feeds at scale to understand relationships among them, where the relationships among these events are defined by business owners in a non technical language or by developers in a technical one. Apache Storm integrates with Kafka to process incoming data.
  • For real time or batch analytics, Apache HBase provides near real-time, random read and write access to tables (or ‘maps’) storing billions of rows and millions of columns. In this case, once we store this rapidly and continuously growing dataset from the information producers, we are able to perform super fast lookups for analytics irrespective of the data size.
  • Data that has analytic relevance and needs to be kept for offline or batch processing can be stored using the Hadoop Distributed Filesystem (HDFS) or an equivalent filesystem such as Amazon S3, EMC Isilon or Red Hat Gluster. The idea is to deploy Hadoop oriented workloads (MapReduce or Machine Learning) directly on the data layer. This is done to perform analytics on small, medium or massive data volumes over a period of time. Historical data can be fed into the Machine Learning models created above and commingled with streaming data as discussed in step 1.
  • Horizontal scale-out (read cloud based IaaS) is preferred as a deployment approach, as this helps the architecture scale linearly as the loads placed on the system increase over time. This approach enables the platform to distribute the load dynamically across a cluster of cloud based servers based on trade data volumes.
  • It is recommended to take an incremental approach to building the RA platform: once all data resides in a general enterprise storage pool, it becomes accessible to many analytical workloads including Trade Surveillance, Risk, Compliance, etc. A shared data repository across multiple lines of business provides more visibility into all intra-day trading activities. Data can also be fed into downstream systems in a seamless manner using technologies like Sqoop, Kafka and Storm. The results of the processing and queries can be exported in various data formats – simple CSV/txt, more optimized binary formats or JSON – or a custom SerDe can be plugged in for custom formats. Additionally, with Hive or HBase, data within HDFS can be queried via standard SQL using JDBC or ODBC. The results will be in the form of standard relational DB data types (e.g. String, Date, Numeric, Boolean). Finally, REST APIs in HDP natively support both JSON and XML output by default.
  • Operational data across a range of asset classes, risk types and geographies is thus available to investment analysts during the entire trading window while markets are still open, enabling them to reduce the risk of that day’s trading activities. The specific advantages of this approach are two-fold: existing architectures are typically only able to hold a limited set of asset classes within a given system, which means that the data is only assembled for risk processing at the end of the day; in addition, historical data is often not available in sufficient detail. Hadoop accelerates a firm’s speed-to-analytics and also extends its data retention timeline.
  • Apache Atlas is used to provide Data Governance capabilities in the platform that use both prescriptive and forensic models, which are enriched by a given business’s data taxonomy and metadata. This allows for tagging of trade data between the different business data views, which is a key requirement for good data governance and reporting. Atlas also provides audit trail management as data is processed in a pipeline in the lake.
  • Another important capability that Big Data/Hadoop can provide is the establishment and adoption of a lightweight Entity ID service – which aids dramatically in the holistic viewing & audit tracking of trades. The service will consist of entity assignment for both institutional and individual traders. The goal here is to get each target institution to propagate the Entity ID back into their trade booking and execution systems, so that transaction data flows into the lake with this ID attached, providing a way to do Client 360.
  • Output data elements can be written out to HDFS, and managed by HBase. From here, reports and visualizations can easily be constructed. One can optionally layer in search and/or workflow engines to present the right data to the right business user at the right time.  
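
To illustrate the Kafka ingest tier referenced above, here is a minimal producer sketch using the kafka-python client. The broker address, topic name and event fields are assumptions made for the example.

```python
# Minimal sketch: publishing a trade event onto a Kafka topic for the ingest tier.
import json
from kafka import KafkaProducer  # pip install kafka-python

producer = KafkaProducer(
    bootstrap_servers="broker1:9092",
    value_serializer=lambda event: json.dumps(event).encode("utf-8"),
)

trade_event = {
    "trade_id": "T-100234",
    "account_id": "ACC-55821",
    "symbol": "AAPL",
    "side": "BUY",
    "quantity": 250,
    "price": 187.42,
    "event_time": "2016-11-18T14:32:05Z",
}

# Downstream Storm/Spark consumers read from this topic for real-time analysis.
producer.send("trade-events", value=trade_event)
producer.flush()
```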

Conclusion…

As one can clearly see, though automated investing methods are still in the early stages of maturity, they hold out a tremendous amount of promise. As they are unmistakably the next big trend in the WM industry, industry players should begin developing such capabilities.

The Three Core Competencies of Digital – Cloud, Big Data & Intelligent Middleware

“Ultimately, the cloud is the latest example of Schumpeterian creative destruction: creating wealth for those who exploit it; and leading to the demise of those that don’t.” – Joe Weinman, author of Cloudonomics: The Business Value of Cloud Computing


The Cloud As a Venue for Digital Workloads…

As 2016 draws to a close, it can safely be said that no industry leader questions the existence of the new Digital Economy and the fact that every firm out there needs to create a digital strategy. Myriad organizations are taking serious business steps to make their platforms highly customer-centric via a renewed operational metrics focus. They are also working on creating new business models using their Analytics investments. Examples of these verticals include Banking, Insurance, Telecom, Healthcare, Energy etc.

As a general trend, the Digital Economy brings immense opportunities while exposing firms to risks as well. Customers are now demanding highly contextual products, services and experiences – all accessible via an easy API (Application Programming Interface).

Big Data Analytics (BDA) software revenues will grow from nearly $122B in 2015 to more than $187B in 2019, according to Forbes [1]. At the same time, it is clear that exploding data generation across the global economy has become a clear & present business phenomenon. Data volumes are rapidly expanding across industries. However, it is not just the production of data itself that has increased; it is also driving the need for organizations to derive business value from it. As IT leaders know well, digital capabilities need low cost yet massively scalable & agile information delivery platforms – which only Cloud Computing can provide.

For a more detailed technical overview, please visit the link below.

http://www.vamsitalkstech.com/?p=1833

Big Data & Big Data Analytics drive consumer interactions.. 

The onset of Digital Architectures in enterprise businesses implies the ability to drive continuous online interactions with global consumers/customers/clients or patients. The goal is not just to provide engaging visualization but also to personalize services clients care about across multiple channels of interaction. The only way to attain digital success is to understand your customers at a micro level while constantly making strategic decisions on your offerings to the market. Big Data has become the catalyst in this massive disruption as it can help businesses in any vertical solve their need to understand their customers better & perceive trends before the competition does. Big Data thus provides the foundational platform for successful business platforms.

The three key areas where Big Data & Cloud Computing intersect are – 

  • Data Science and Exploration
  • ETL, Data Backups and Data Preparation
  • Analytics and Reporting

Big Data drives business use cases in Digital in myriad ways – key examples include –

  1. Obtaining a realtime Single View of an entity (typically a customer across multiple channels, product silos & geographies)
  2. Customer Segmentation by helping businesses understand their customers down to the individual micro level as well as at a segment level
  3. Customer sentiment analysis by combining internal organizational data, clickstream data, sentiment analysis with structured sales history to provide a clear view into consumer behavior.
  4. Product Recommendation engines which provide compelling personal product recommendations by mining realtime consumer sentiment, product affinity information with historical data.
  5. Market Basket Analysis, observing consumer purchase history and enriching this data with social media, web activity, and community sentiment regarding past purchase and future buying trends.

Further, Digital implies the need for sophisticated, multifactor business analytics that need to be performed in near real time on gigantic data volumes. The only deployment paradigm capable of handling such needs is Cloud Computing – whether public or private. Cloud was initially touted as a platform to rapidly provision compute resources. Now, with the advent of Digital technologies, the Cloud & Big Data will combine to process & store all this information. According to IDC, by 2020 spending on Cloud based Big Data Analytics will outpace on-premise by a factor of 4.5. [2]

Intelligent Middleware provides Digital Agility.. 

Digital applications are modular, flexible and responsive to a variety of access methods – mobile & non mobile. These applications are also highly process driven and support the highest degree of automation. The need of the hour is to provide enterprise architecture capabilities around designing flexible digital platforms that are built around efficient use of data, speed, agility and a service oriented architecture. The choice of open source is key as it allows for a modular and flexible architecture that can be modified and adopted in a phased manner – as you will shortly see.

The intention in adopting a SOA (or even a microservices) architecture for Digital capabilities is to allow lines of business an ability to incrementally plug in lightweight business services like customer on-boarding, electronic patient records, performance measurement, trade surveillance, risk analytics, claims management etc.

Intelligent Middleware adds significant value in six specific areas –

  1. Supports a high degree of Process Automation & Orchestration thus enabling the rapid conversion of paper based business processes to a true digital form in a manner that lends itself to continuous improvement & optimization
  2. Business Rules help by adding a high degree of business flexibility & responsiveness
  3. Native Mobile Applications enable platforms to support a range of devices & consumer behavior across those front ends
  4. Platforms As a Service engines which enable rapid application & business capability development across a range of runtimes and container paradigms
  5. Business Process Integration engines which enable rapid application & business capability development
  6. Middleware brings the notion of DevOps into the equation. Digital projects bring several technology & culture challenges which can be solved by a greater degree of collaboration, continuous development cycles & new toolchains, without giving up proven integration with existing (or legacy) systems.

Intelligent Middleware not only enables Automation & Orchestration but also provides an assembly environment to string different (micro)services together. Finally, it also enables less technical analysts to drive application lifecycle as much as possible.

Further, Digital business projects call out for mobile native applications – which a forward looking middleware stack will support. Middleware is a key component for driving innovation and improving operational efficiency.

Five Key Business Drivers for combining Big Data, Intelligent Middleware & the Cloud…

The key benefits of combining the above paradigms to create new Digital Applications are –

  • Enable Elastic Scalability Across the Digital Stack
    Cloud computing can handle the storage and processing of any amount and any kind of data. This calls for the collection & curation of data from dynamic and highly distributed sources such as consumer transactions, B2B interactions, machines such as ATMs & geo location devices, click streams, social media feeds, server & application log files and multimedia content such as videos etc. It needs to be noted that these data volumes consist of multi-varied formats, differing schemas, transport protocols and velocities. Cloud computing provides the underlying elastic foundation to analyze these datasets.
  • Support Polyglot Development, Data Science & Visualization
    Cloud technologies are polyglot in nature. Developers can choose from a range of programming languages (Java, Python, R, Scala and C# etc) and development frameworks (such as Spark and Storm). Cloud offerings also enable data visualization using a range of tools from Excel to BI Platforms.
  • Reduce Time to Market for Digital Business Capabilities
    Enterprises can avoid time consuming installation, setup & other upfront procedures. They can deploy Hadoop in the cloud without buying new hardware or incurring other up-front costs. In the same vein, big data analytics should be able to support self service across the lifecycle – from data acquisition, preparation, analysis & visualization.
  • Support a multitude of Deployment Options – Private/Public/Hybrid Cloud 
    A range of scenarios for product development, testing, deployment, backup or cloudbursting are efficiently supported in pursuit of cost & flexibility goals.
  • Fill the Talent Gap
    Open Source technology is the common thread across Cloud, Big Data and Middleware. The hope is that the ubiquity of open source will serve as a critical lever in closing the IT-business skills scarcity gap.

As opposed to building standalone or one-off business applications, a ‘Digital Platform Mindset’ is a more holistic approach capable of producing higher rates of adoption & thus revenues. Platforms abound in the web-scale world at shops like Apple, Facebook & Google. Digital Applications are constructed like lego blocks and they reuse customer & interaction data to drive cross sell and up sell among different product lines. The key here is to ensure that one starts off with products with high customer attachment & retention. While increasing brand value, it is also key to ensure that customers & partners can collaborate in the improvements of the various applications hosted on top of the platform.

References

[1] Forbes Roundup of Big Data Analytics (BDA) Report

http://www.forbes.com/sites/louiscolumbus/2016/08/20/roundup-of-analytics-big-data-bi-forecasts-and-market-estimates-2016/#b49033b49c5f

[2] IDC FutureScape: Worldwide Big Data and Analytics 2016 Predictions

Can Your CIO Do Digital?

“Business model innovation is the new contribution of IT” — Werner Boeing, CIO, Roche Diagnostics

Digital Is Changing the Role of the Industry CIO…

A motley crew of somewhat interrelated technologies – Cloud Computing, Big Data Platforms, Predictive Analytics & Mobile Applications – is changing the enterprise IT landscape. The common paradigm that captures all of them is Digital. The immense business value of Digital technology is no longer in question, both from a customer as well as an enterprise standpoint. However, the Digital space calls for strong and visionary leadership both from a business & IT standpoint.

Business Boards and CXOs are now concerned about their organization’s overall level and maturity of digital investments – not just the tangible business value in existing business operations (e.g. increasing sales & customer satisfaction, detecting fraud, driving down business & IT costs etc) but also the role of Digital paradigms in helping fine-tune or create new business models. It is thus an increasingly accurate argument that smart applications & ecosystems built around Digitization will dictate enterprise success.

The onset of Digital Architectures in enterprise businesses implies the ability to drive continuous micro level interactions with global consumers/customers/clients/stockholders or patients depending on the vertical you operate in. Initially enterprises viewed Digital as a bolt-on or a fresh color of paint on an existing IT operation.

How did that change over the last five years?

Mobile applications first began forcing enterprises to support multiple channels of interaction with their consumers. We have seen how exploding data generation across the global economy has become a clear & present business & IT phenomenon. Data volumes are rapidly expanding across industries. However, it is not just the production of data by mobile applications that has increased; it is also driving the need for organizations to derive business value from it, using advanced techniques such as Data Science and Machine Learning. As a first step, this calls for the collection & curation of data from dynamic and highly distributed sources such as consumer transactions, B2B interactions, machines such as ATMs & geo location devices, click streams, social media feeds, server & application log files and multimedia content such as videos etc – using Big Data. Often these workloads are run on servers hosted on an agile infrastructure such as a Public or Private Cloud.

As one can understand from the above paragraph, the Digital Age calls for a diverse set of fresh skills – both from IT leadership and the rank & file. The role of the Chief Information Officer (CIO) is thus metamorphosing from being an infrastructure service provider to being the overall organizational thought leader in the Digital Age.

The question is – Can Industry CIOs adapt?

The Classic CIO is a provider of IT Infrastructure services.. 

what_cios_think

                      Illustration: The Concerns of a CIO..

So what do CIOs typically think about nowadays?

  1. Keep the core stable and running so IT delivers minimal services to the business and disarm external competition
  2. Are parts of my business really startups and should they be treated as such and should they be kept away from the shackles of inflexible legacy IT? Do I need a digital strategy?
  3. What does the emergence of the 3rd platform (Cloud, Mobility, Social and Big Data) imply?
  4. Where can I show the value of expertise and IT to the money making lines of business?
  5. How can one do all the above while keeping track of Corporate and IT security?

CIOs who do not adapt are on the road to Irrelevance…

Where CIOs are perceived as managing complex legacy systems, the new role of Chief Digital Officer (CDO) has gained currency. The idea is that a parallel & more agile IT organization can be created and run to foster an ecosystem of innovation, & that the office of the CDO is the right place to drive these innovative applications.

Why is that?

  1. CIOs that cannot create innovation through IT, or that seem disengaged from doing so, are headed the way of the dodo. At the enterprise officer – CIO/CTO – level, it becomes very obvious that more than ever “IT is not just a complementary function or a supplementary service but IT is the Business”. If that was merely something that we all paid lip-service to in the past, it is hard reality now. So it is not a case of which company can make the best widgets or has the fastest trading platforms or efficient electronic health records. It is whose enterprise IT can provide the best possible results within a given cost that will win. It’s up to the CIOs to deliver, and deliver in such a way that large established organizations can compete with upstarts who do not have the same kind of enterprise constraints & shackles.
  2. Innovation & information now follow an “outside in” model, as opposed to data and value being generated by internal functions (sales, engineering, customer fulfillment, core business processes etc). Enterprise customers are beginning to operate in what I like to think of as the new normal: entropy. It’s these conditions that make it imperative for IT Leadership to reconsider their core business applications at the enterprise level. Does internal IT infrastructure need to look more like those of the internet giants?
  3. As a result of the above trends, CIOs are clearly now business level stakeholders more than ever. This means that they need to engage & understand their business at a deep level from an ecosystem and competitive standpoint. Those that cannot do it are neither very effective nor in those positions for long.
  4. Also, it is not merely enough to be a passive stakeholder; CIOs have to deliver on two very broad fronts. The first is to deliver core services (aka standardized functions) on time and at a reasonable cost. These are things like core banking systems, email, data backups etc – ensuring the smooth operation of transactional systems like ERP/business processing systems in manufacturing, decision support systems, classic IT infrastructure, claims management systems in Insurance and billing systems in Healthcare. These are the systems that need to run to keep the business operational. The focus here is to deliver on these on time and within SLAs to increasingly demanding internal customers. It is like running the NYC subway – no one praises you for keeping things humming day in and day out, but all hell breaks loose when the trains are nonoperational for any reason. A thankless task, but one essentially needed to win credibility with lines of business.
  5. The advent of the public cloud means that internal IT no longer has a monopoly and a captive internal customer base, even with core services. If one cannot compete with the likes of Amazon AWS or any of the SaaS based clouds that are mushrooming on a quarterly basis, you will find that soon enough you have to co-exist with Not-So-Shadow IT. The industry has seen enough back-office CIOs who are perceived by their organizations as having a largely irrelevant role in the evolution of the larger enterprise.
  6. Despite the continued focus on running a strong core as the price of CIO admission to the internal strategic dances, transformation is starting to emerge as a key business driver and is making its way into the larger industry. It is no longer the province of Wall St trading shops or a Google or a Facebook. Innovation as in “adopt this strategy and reinvent your IT and change the business”. The operative word here is incremental rather than disruptive innovation. More on this key point later.
  7. Most rank and file IT personnel in general cannot really keep up with all the nomenclature of technology. For instance, a majority do not really understand umbrella concepts like Cloud, Mobility and Big Data. They know what these mean at a high level, but the complex technology underpinnings, the various projects & the finer nuances are largely lost on them. There are two stark choices from a time perspective that face overworked IT personnel – a) Do you want to increase your value to your corporation by learning to speak the lingua franca of your business and by investing in those skills away from a traditional IT employee mindset? b) Do you want to increase your IT depth in your area of expertise? The first makes one a valued collaborator and paves your way up within the chain; the second may definitely increase your marketability in the industry, but it is not that easy to keep up. We find that an increasing number of employees choose the first path, which creates interesting openings and arbitrage opportunities for other groups in the organization. The CIO needs to step up and be the internal change agent.

CONCLUSION…

Enterprise wide business innovation will continue to be designed around the four key technologies (Big Data, Cloud Computing, Technology & Platforms). Business platforms created leveraging these technologies will create immense operational efficiency, better business models, increased relevance to customers and ultimately drive revenues. Such platforms will separate the visionaries and leaders from the laggards in the years to come. As often noticed, the keyword accompanying transformation is often digital. This means a renewed focus on making IT services appealing to millennials or the self service generation – be they customers, employees or partners. This really touches all areas of enterprise IT while leaving behind a significant impact on organizational culture.

This is the age of IT with no boundaries – the question is whether the role of the CIO will largely remain unscathed in the years to come.

A POV on the FRTB (Fundamental Review of the Trading Book)…

Regulatory Risk Management evolves…

The Basel Committee, a body of supranational supervision, was put in place to ensure the stability of the financial system. The Basel Accords are the frameworks that essentially govern the risk taking actions of a bank. To that end, minimum regulatory capital standards are introduced that banks must adhere to. The Bank for International Settlements (BIS), established in 1930, is the world’s oldest international financial consortium, with 60+ member central banks representing countries from around the world that together make up about 95% of world GDP. BIS stewards and maintains the Basel standards in conjunction with member banks.

The goal of the Basel Committee and the Financial Stability Board (FSB) guidelines is to strengthen the regulation, supervision and risk management of the banking sector by improving risk management and governance. These have taken on an increased focus to ensure that a repeat of the 2008 financial crisis does not come to pass again. Basel III (building upon Basel I and Basel II) also sets new criteria for financial transparency and disclosure by banking institutions.

Basel III – the last prominent version of the Basel standards, published in 2012 (named for the town of Basel in Switzerland where the committee meets) – prescribes enhanced measures for capital & liquidity adequacy and was developed by the Basel Committee on Banking Supervision with voluntary worldwide applicability. Basel III covers credit, market, and operational risks as well as liquidity risks. As is known, the BCBS 239 guidelines do not just apply to the G-SIBs (Globally Systemically Important Banks) but also to the D-SIBs (Domestic Systemically Important Banks). Any important financial institution deemed “too big to fail” needs to work with the regulators to develop a “set of supervisory expectations” that would guide risk data aggregation and reporting.

Basel III & other Risk Management topics were covered in these previous posts – http://www.vamsitalkstech.com/?p=191 & http://www.vamsitalkstech.com/?p=667

Enter the FRTB (Fundamental Review of the Trading Book)…

In May 2012, the Basel Committee on Banking Supervision (BCBS) again issued a consultative document with the intention of revising the way capital was calculated for the trading book. These guidelines, which can be found here in their final form [1], were repeatedly refined based on comments from various stakeholders & quantitative studies. In Jan 2016, a final version of this paper was released. These guidelines are now termed the Fundamental Review of the Trading Book (FRTB) or, unofficially as some industry watchers have termed it, Basel IV.

What is new with the FRTB …

The main changes the BCBS has made with the FRTB are – 

  1. Changed Measure of Market Risk – The FRTB proposes a fundamental change to the measure of market risk. Market Risk will now be calculated and reported via Expected Shortfall (ES) as the new standard measure, as opposed to the venerated (& long standing) Value at Risk (VaR). As opposed to the older method of VaR with a 99% confidence level, expected shortfall (ES) with a 97.5% confidence level is proposed. It is to be noted that for normal distributions the two metrics should be roughly the same; however, ES is much superior at measuring the long tail. This is a recognition that in times of extreme economic stress, there is a tendency for multiple asset classes to move in unison. Consequently, under the ES method capital requirements are anticipated to be much higher (a small numerical illustration follows this list).
  2. Model Creation & Approval – The FRTB also changes how models are approved & governed. Banks that want to use the IMA (Internal Model Approach) need to pass a set of rigorous tests so that they are not forced to use the Standard Rules approach (SA) for capital calculations. The fear is that the SA will increase capital requirements. The old IMA approach has now been revised and made more rigorous in a way that enables supervisors to remove internal modeling permission for individual trading desks. This approach now enforces more consistent identification of material risk factors across banks, and constraints on hedging and diversification. All of this is now going to be done at a desk level instead of the entity level. FRTB moves the responsibility of showing compliant models, significant backtesting & PnL attribution to the desk level.
  3. Boundaries between the Regulatory Books – The FRTB also assigns explicit boundaries between the trading book (the instruments the bank intends to trade) and the banking book (the instruments held to maturity). These rules have been redefined in such a way that banks now have to contend with stringent rules for internal transfers between both. The regulatory motivation is to eliminate a given bank’s ability to designate individual positions as belonging to either book. Given the different accounting treatment for both, there is a feeling that banks were resorting to capital arbitrage with the goal of minimizing regulatory capital reserves. The FRTB also introduces more stringent reporting and data governance requirements for both books, in conjunction with the well defined boundary between them. All of these changes should lead to a much better regulatory framework & also a reevaluation of the structure of trading desks.
  4. Increased Data Sufficiency and Quality – The FRTB regulation also introduces Non-Modellable Risk Factors (NMRF). Risk factors are non-modellable when the availability and sufficiency of the underlying data are an issue. With the NMRF, banks now face increased data sufficiency and quality requirements for the data that feeds the model itself. This is a key point, the ramifications of which we will discuss in the next section.
  5. A Revised Standardized Approach – The FRTB also upgrades the standardized approach with a new sensitivities based approach (SBA) which is more sensitive to the various risk factors across different asset classes than the Basel II SA. Regulators now prescribe the sensitivities used in the calculation, and approvals are granted at the desk level rather than at the entity level. The revised SA should provide a consistent way to measure risk across geographies and regions, giving regulators a better way to compare and aggregate systemic risk. The sensitivities based approach should also allow banks to share a common infrastructure between the IMA and the SA. There are a set of buckets and risk factors prescribed by the regulator to which instruments can then be mapped.
  6. Models must be seeded with real and live transaction data – Fresh & current transactions will now need to enter the calculation of capital requirements as of the date on which they were conducted. Moreover, though reporting will take place at regular intervals, banks are now expected to manage market risk on a continuous, almost daily basis.
  7. Time Horizons for Calculation – There are also enhanced requirements for data granularity depending on the kind of asset. The FRTB replaces the generic 10-day time horizon for market variables in Basel II with time periods based on the liquidity of these assets. It proposes five different horizons – 10, 20, 60, 120 and 250 days.
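
To make the switch from VaR to Expected Shortfall in point 1 concrete, below is a minimal sketch (in Python, in a historical simulation style) of how the two measures differ. The P&L series, function names and layout here are illustrative assumptions only; real FRTB calculations add liquidity horizon scaling, stressed calibration and desk-level aggregation that this toy omits.

```python
import numpy as np

def value_at_risk(pnl, confidence=0.99):
    """VaR: the loss threshold exceeded with probability (1 - confidence)."""
    losses = -np.asarray(pnl)                      # losses are negative P&L
    return np.percentile(losses, confidence * 100)

def expected_shortfall(pnl, confidence=0.975):
    """ES: the average loss in the tail beyond the VaR threshold."""
    losses = -np.asarray(pnl)
    var = np.percentile(losses, confidence * 100)
    return losses[losses >= var].mean()

# Hypothetical daily P&L vector, e.g. 250 days of historical simulation output
rng = np.random.default_rng(42)
pnl = rng.normal(loc=0.0, scale=1_000_000, size=250)

print(f"VaR @ 99%  : {value_at_risk(pnl, 0.99):,.0f}")
print(f"ES  @ 97.5%: {expected_shortfall(pnl, 0.975):,.0f}")
```

For a roughly normal P&L distribution the two numbers land close together; the heavier the tail of the simulated P&L, the further ES pulls away from VaR, which is precisely the behaviour the regulation is after.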

FRTB_Horizons

                                 Illustration: FRTB designated horizons for market variables (src – [1])

To Sum Up the FRTB… 

The FRTB rules are now clear and they will have a profound effect on how market risk exposures are calculated. The FRTB clearly calls out which instruments sit in the trading book vs the banking book. The switch to Expected Shortfall (ES) at a 97.5% confidence level from VaR at 99% should increase reserve requirements. Furthermore, the ES calculations will take into account the liquidity of the underlying instruments, using a historical simulation approach over horizons ranging from 10 days to 250 days of stressed market conditions. Banks that use a pure IMA approach will now have to run the SA method alongside the IMA.

The FRTB compels Banks to create unified teams from various departments – especially Risk, Finance, the Front Office (where trading desks sit) and Technology – to address the significant challenges the regulation poses.

From a technology capabilities standpoint, the FRTB presents banks with a data volume, velocity and analysis challenge. Let us now examine the technology ramifications.

Technology Ramifications around the FRTB… 

The FRTB rules herald a clear shift in how IT architectures work across the Risk area and the Back office in general.

  1. The FRTB calls for a single source of data that pulls data across silos of the front office, trade data repositories, a range of BORT (Book of Record Transaction) systems etc. With the FRTB, source data needs to be centralized and available in one location where every application that feeds off it can trust its quality.
  2. With both the IMA and the SBA in the FRTB, many more detailed & granular data inputs (across desks & departments) need to be fed into the ES (Expected Shortfall) calculations from varying asset classes (Equity, Fixed Income, Forex, Commodities etc) across multiple scenarios. The calculator frameworks developed or enhanced for the FRTB will need ready & easy access to realtime data feeds in addition to historical data. At the firm level, the data requirements and the calculation complexity will be even higher, as the calculation needs to include the entire position book.

  3. The various time horizons called out also increase the need to run a full spectrum of analytics across many buckets. The analytics themselves will be more complex than before, with multiple teams working on all of these areas. This calls for standardization of the calculations themselves across the firm.

  4. Banks will also have to provide complete audit trails, both for the data and for the processes that worked on the data to produce these risk exposures. Data lineage, audit and tagging will be critical.

  5. The number of runs required for regulatory risk exposure calculations will go up dramatically under the new regime. The FRTB requires that each risk class be calculated separately from the whole set. Coupled with the increased calculation windows discussed in #3 above, this means more compute processing power and vectorization will be needed (a small sketch of this combinatorial increase follows this list).

  6. From an analytics standpoint, the FRTB also implies running a large number of scenarios on a large volume of data. Most Banks will need to standardize their analytics libraries across the house. If Banks do not move to a Big Data architecture, they will incur tens of millions of dollars in hardware spend.
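
To give a feel for the combinatorial increase in run counts discussed in points 3 and 5 above, the sketch below simply loops an Expected Shortfall calculation over risk classes and liquidity horizons. Every name and number here is a stand-in: in practice each cell would be a distributed job over the full position book rather than a random vector.

```python
import numpy as np
from itertools import product

RISK_CLASSES = ["equity", "rates", "fx", "commodity", "credit"]
LIQUIDITY_HORIZONS = [10, 20, 60, 120, 250]   # days, per the horizons listed earlier

def expected_shortfall(losses, confidence=0.975):
    var = np.percentile(losses, confidence * 100)
    return losses[losses >= var].mean()

rng = np.random.default_rng(7)
results = {}
for risk_class, horizon in product(RISK_CLASSES, LIQUIDITY_HORIZONS):
    # Stand-in for a historical-simulation loss vector scaled to the liquidity horizon
    losses = np.abs(rng.standard_t(df=4, size=2_000)) * np.sqrt(horizon / 10)
    results[(risk_class, horizon)] = expected_shortfall(losses)

print(f"{len(results)} separate ES runs produced")   # 5 risk classes x 5 horizons = 25 runs
```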

The FRTB is the most pressing in a long list of Data Challenges facing Banks… 

The FRTB is yet another regulatory mandate that lays bare the data challenges facing every Bank. Current Regulatory Risk Architectures are based on traditional relational database (RDBMS) architectures with tens of feeds from Core Banking Systems, Loan Data, Book of Record Transaction Systems (BORTS) such as Trade & Position Data (e.g. Equities, Fixed Income, Forex, Commodities, Options etc), Wire Data, Payment Data, Transaction Data etc.

These data feeds are then tactically placed in memory caches or in enterprise data warehouses (EDW). Once the data has been extracted, it is transformed using a series of batch jobs which prepare it for the Calculator Frameworks that run the risk models on it.

All of the above applications need access to medium to large amounts of data at the individual transaction level. The Corporate Finance function within the Bank then makes end of day adjustments to reconcile all of this data, and these adjustments need to be cascaded back to the source systems down to the individual transaction or transaction-class level.

These applications are then typically deployed on clusters of bare metal servers that are not particularly suited to portability, automated provisioning, patching & management – in short, nothing that can be moved over automatically at a moment’s notice. These applications also run on legacy proprietary technology platforms that do not lend themselves to a flexible, DevOps style of development.

Finally, there is always a need for statistical frameworks to make adjustments to customer transactions that somehow need to be reflected back in the source systems. All of these frameworks need access to, and an ability to work with, terabytes (TBs) of data.

Each of the above mentioned risk work streams has corresponding data sets, schemas & event flows that it needs to work with, and different temporal needs for reporting: some need to be run a few times a day (e.g. Traded Credit Risk), some daily (e.g. Market Risk) and some at the end of the week (e.g. Enterprise Credit Risk).

One of the chief areas of concern is that the FRTB may require a complete rewrite of analytics libraries. Under the FRTB, front office libraries will need to perform enterprise-level risk calculations – a large number of analytics on a vast amount of data. Front office models cannot make all the assumptions that enterprise risk can and still price a portfolio accurately; front office systems run a limited number of scenarios, thus trading off timeliness for accuracy – as opposed to enterprise risk.

Most banks have stringent model vetting processes in place, and all the rewritten analytic assets will need to pass through them. Every aspect of the mathematics of the analytics needs to go through this rigorous process. All of this will add to compliance costs, as the vetting process typically costs multiples of the rewrite itself. The FRTB has put in place stringent model validation standards, along with hypothetical portfolios against which to benchmark models.

The FRTB also requires data lineage and audit capabilities for the data. Banks will need to establish a visual representation of the overall process as data flows from the BORT systems to the reporting applications. All data assets have to be catalogued and a thorough metadata management process instituted.
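
One lightweight way to think about the lineage requirement is to attach a catalogue entry to every dataset as it moves from the BORT feed to the reporting application. The sketch below is purely illustrative of the kind of fields such an entry might carry; it is not a reference to any particular metadata product, and all names are hypothetical.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from typing import List

@dataclass
class LineageRecord:
    """Minimal catalogue entry tracking where a dataset came from and what touched it."""
    dataset_name: str
    source_system: str                      # e.g. a BORT feed such as a trade capture system
    ingested_at: str
    transformations: List[str] = field(default_factory=list)
    downstream_reports: List[str] = field(default_factory=list)

record = LineageRecord(
    dataset_name="fx_positions_eod",
    source_system="trade_capture_feed_A",   # hypothetical feed name
    ingested_at=datetime.now(timezone.utc).isoformat(),
)
record.transformations.append("normalise_currency_codes_v2")
record.downstream_reports.append("frtb_es_desk_level")

print(asdict(record))
```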

What Must Bank IT Do… 

Given all of the above data complexity and the need to adopt agile analytical methods  – what is the first step that enterprises must adopt?

There is a need for Banks to build a unified data architecture – one which can serve as a cross organizational repository of all desk level, department level and firm level data.

The Data Lake is an overarching data architecture pattern. Let’s define the term first. A data lake is two things – a data storage repository (small or massive) and a data processing engine. A data lake provides “massive storage for any kind of data, enormous processing power and the ability to handle virtually limitless concurrent tasks or jobs“. Data Lakes are created to ingest, transform, process, analyze & finally archive large amounts of any kind of data – structured, semistructured and unstructured.

The Data Lake is not just a data storage layer but one that allows different users (traders, risk managers, compliance etc) to plug in calculators that work on data spanning intraday activity as well as data across years. Calculators can then be designed to work on this data, with multiple runs to calculate Risk Weighted Assets (RWAs) across multiple calibration windows.

The illustration below depicts the goal: a cross-company data lake containing all asset data, with compute applied to that data.

RDA_Vamsi

                              Illustration – Data Lake Architecture for FRTB Calculations

1) Data Ingestion: This encompasses creation of the L1 loaders to take in Trade, Position, Market, Loan, Securities Master, Netting and Wire Transfer data etc across trading desks. Developing the ingestion portion will be the first step to realizing the overall architecture, as timely data ingestion is a large part of the problem at most institutions. Part of this process includes understanding a) data ingestion from the highest priority systems and b) how to apply the correct governance rules to the data. The goal is to create these loaders for specific versions of the different source systems (e.g. Calypso 9.x) and to maintain them as part of the platform moving forward. The first step is to understand the range of Book of Record transaction systems (lending, payments and transactions) and the feeds they send out. The loaders then map these feeds onto a release of an enterprise grade open source Big Data platform, e.g. HDP (Hortonworks Data Platform), so that they can be maintained going forward.

2) Data Governance: These are the L2 loaders that apply rules to the critical fields for Risk and Compliance. The goal here is to look for gaps in the data and any obvious quality problems involving range or table driven data. The purpose is to facilitate data governance reporting (a small illustrative check is sketched after this list of steps).

3) Entity Identification: This step is the establishment and adoption of a lightweight entity ID service. The service will consist of entity assignment and batch reconciliation.

4) Developing L3 loaders: This phase will involve defining the transformation rules that are required in each risk, finance and compliance area to prepare the data for their specific processing.

5) Analytic Definition: Running the analytics that are to be used for FRTB.

6) Report Definition: Defining the reports that are to be issued for each risk and compliance area.
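
To illustrate the kind of rule an L2 governance loader in step 2 might apply, here is a minimal sketch using pandas that flags range and reference-data problems in a trade feed. The field names, valid values and rules are assumptions made purely for illustration.

```python
import pandas as pd

# Hypothetical extract from an L1-loaded trade feed
trades = pd.DataFrame({
    "trade_id": ["T1", "T2", "T3"],
    "notional": [1_000_000, -50_000, 250_000],   # a negative notional is suspect
    "currency": ["USD", "EUR", "XXX"],           # "XXX" is not in the reference list below
    "desk":     ["rates", "fx", None],           # a missing desk breaks desk-level reporting
})

VALID_CURRENCIES = {"USD", "EUR", "GBP", "JPY"}

issues = pd.concat([
    trades[trades["notional"] <= 0].assign(issue="non-positive notional"),
    trades[~trades["currency"].isin(VALID_CURRENCIES)].assign(issue="unknown currency"),
    trades[trades["desk"].isna()].assign(issue="missing desk"),
])

# The resulting exception report is what feeds the data governance dashboards
print(issues[["trade_id", "issue"]])
```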

References..

[1] https://www.bis.org/bcbs/publ/d352.pdf

A Reference Architecture for The Open Banking Standard..

This is the second in a series of four posts on the Open Banking Standard (OBS) in the UK. This second post will briefly look at the strategic drivers for banks while proposing an architectural style or approach for incumbents to drive change in their platforms to achieve OBS Compliance. We will examine the overall data layer implications in the next post. The final post will look at key strategic levers and possible business models that the standard could help banks to drive innovation towards. 

Introduction…

The Open Banking Standard will steward the development of layers of guidelines (API interoperability standards, data security & privacy, and governance) which primarily deal with data sharing in banking. The belief is that this regulation will ultimately spur open competition and unlock innovation. For years, the industry has grappled with fundamental platform issues that are native to every domain of banking. Some of these include systems that are siloed by function and platforms that are inflexible in responding to rapidly changing market conditions & consumer tastes. Bank IT is also perceived by the business to be glacially slow in responding to its needs.

The Open Banking Standard (OBS) represents a vast opportunity for banking organizations in multiple ways. First off, Bank IT can use the regulatory mandate to gradually re-architect hitherto inflexible and siloed business systems. Secondly, doing so will enable Banks to significantly monetize their vast data resources in several key business areas.

This will need to change with the introduction of the Open Banking Standard. Banks that do not change will not be able to derive and sustain a competitive advantage. PSD2 (the second Payment Services Directive) compliance – mandated by the EU – is one of the first layers in the OBS. Further layers will include API standards definitions for business processes (e.g. View Account, Transfer Funds, Chargebacks, Dispute Handling etc).

The OBWG (Open Banking Working Group) standards include the following key constituencies & their requirements [1] – 

 1. Customers: defined as account holders & businesses who agree to share their data, & any publishers who share open datasets

2. Data attribute providers: defined as banks & other financial services providers whose customers produce data as part of daily banking activities 

3. Third parties: Interested developers, financial services startups aka FinTechs, and any organisations (e.g  Retail Merchants) who can leverage the data to provide new views & products  

It naturally follows from the above, that the key technical requirements of the framework will include:

1. A set of Data elements, API definitions and Security Standards to provide both data security and a set of access restrictions 

2. A Governance model, a body which will develop & oversee the standards 

3. Developer resources, which will enable third parties to discover, educate and experiment.

The Four Strategic Drivers in the Open Bank Standard …

Clearly, how intelligently a firm harnesses technology (in pursuit of OBS compliance goals) will determine its overall competitive advantage. This is important to note since a range of players across the value chain (the Third Parties designated by the standard) can now obtain seamless access to a variety of data. Once obtained, the data can be reimagined by these Third Parties in manifold ways. For example, they can help consumers make better personal financial decisions – at the expense of the Banks that own the data. FinTechs have generally been able to make more productive use of client data: they provide clients with intuitive access to cross asset data, tailor algorithms based on behavioral characteristics, and give clients a more engaging and unified experience.

So, these are the four strategic business goals that OBS compliant architectures need to address in the long run –

  1. Digitize The Customer Journey – Bank clients who use services like Uber, Zillow, Amazon etc in their daily lives are now very vocal in demanding a seamless experience across all of their banking services using digital channels. The vast majority of Bank applications still lag the innovation cycle, are archaic & are separately managed. The net issue is that the client is faced with distinct user experiences ranging from client onboarding to servicing to transaction management. Such applications need to provide anticipatory or predictive capabilities at scale while understanding each customer’s lifestyle, financial needs & behavioral preferences.
  2. Provide Improved Access to Personal Financial Management & Improved Lending Processes  –  Provide consumers with a single aggregated picture of all their accounts. Also improve lending systems by providing more efficient access to loans by incorporating a large amount of contextual data in the process.
  3. Automate Back & Mid Office Processes Across Lending, Risk, Compliance & Fraud – The need to forge a closer banker/client relationship is not just driving demand around data silos & streams themselves but also forcing players to move away from paper based models to a seamless, digital & highly automated model, reworking a ton of existing back & front office processes. These processes range from risk data aggregation, supranational compliance (AML, KYC, CRS & FATCA) and financial reporting across a range of global regions to Cyber Security. Can the data architectures & the IT systems that leverage them be created in such a way that they permit agility while constantly learning & optimizing their behaviors across national regulations, InfoSec & compliance requirements? Can every piece of actionable data be aggregated, secured, transformed and reported on in such a way that its quality across the entire lifecycle is guaranteed?
  4. Tune Existing Business Models Based on Client Tastes and Feedback – While the initial build out of the core architecture may seem to focus on digitizing interactions and exposing data via APIs, what follows fast is strong predictive modeling capability working at large scale, where systems constantly learn and optimize their interactions, responsiveness & services based on client needs & preferences.

The Key System Architecture Tenets…

The design and architecture of a solution as large and complex as a reference architecture for Open Banking is a multidimensional challenge and it will vary at every institution based on their existing investments, vendor products & overall culture. 

The OBS calls out the following areas of data as being in scope – customer transaction data, customer reference data, aggregated data and sensitive commercial data. A thorough review of the OBWG standard leads one to suggest the logical reference architecture called out below.

Based on all the above, the Open Bank Architecture shall – 

  • Support an API based model to invoke any business process or data element, subject to appropriate security, by a third party – e.g. a client, an advisor or a business partner
  • Support the development and deployment of an application that encourages a DevOps based approach
  • Support the easy creation of scalable business processes (e.g. client onboarding, KYC, payment dispute checks etc) that natively emit business metrics from the time they’re instantiated throughout their lifecycle
  • Support automated application delivery, configuration management & deployment
  • Support a high degree of data agility and data intelligence – the end goal being that every customer click, discussion & preference shall drive an analytics infused interaction between the Bank and the client
  • Support algorithmic capabilities that enable the creation of new services like automated (or Robo) advisors
  • Support a very high degree of scale across large numbers of users, interactions & omni-channel transactions while working across global infrastructure
  • Shall support deployment across cost efficient platforms like a public or private cloud. In short, the design of the application shall not constrain the available deployment options – which may vary because of cost considerations. The infrastructure options supported shall range from virtual machines to docker based containers – whether running on a public cloud, private cloud or in a hybrid cloud
  • Support small, incremental changes to business services & data elements based on changing business requirements 
  • Support standardization across application stacks, toolsets for development & data technology to a high degree
  • Shall support the creation of a user interface that is highly visual and feature rich from a content standpoint when accessed across any device


Reference Architecture…

Now that we have covered the business bases, what foundational technology choices satisfy the above? Let’s examine that first at a higher level and then in more detail.

Given the above list of requirements – the application architecture that is a “best fit” is shown below.

Open Banking Architecture Diagram

                   Illustration – Candidate Reference Architecture for the Open Bank Standard

Let’s examine each of the tiers starting from the lowest –

Infrastructure Layer…

Cloud Computing across its three main delivery models (IaaS, PaaS & SaaS) is largely a mainstream endeavor in financial services and no longer an esoteric adventure only for brave innovators. A range of institutions are either deploying or testing cloud-based solutions that span the full range of cloud delivery models. These capabilities include –

IaaS (infrastructure-as-a-service) to provision compute, network & storage, PaaS (platform-as-a-service) to develop applications, and SaaS (software-as-a-service) to expose business services via APIs.

Choosing cloud based infrastructure – whether a secure public cloud (Amazon AWS or Microsoft Azure), an internal private cloud (OpenStack etc) or even a hybrid approach – is a safe and sound bet for these applications. Business innovation and transformation are best enabled by a cloud based infrastructure, whether public or private.


Data Layer…

Banking data tiers are usually composed of different technologies like RDBMS, EDW (Enterprise Data Warehouses), CMS (Content Management Systems) & Big Data. My recommendation for the OBWG target state is a data tier largely dominated by a Big Data platform powered by Hadoop technology. The vast majority of initial applications recommended by the OBWG call for predictive analytics to create tailored customer journeys, and Big Data is a natural fit as it is fast emerging as the platform of choice for analytic applications.

Financial services firms specifically deal with manifold data types ranging from Customer Account data, Transaction Data, Wire Data, Trade Data, Customer Relationship Management (CRM), General Ledger and other systems supporting core banking functions. When one factors in social media feeds, mobile clients & other non traditional data types, the challenge is not just one of data volumes but also variety and the need to draw conclusions from fast moving data streams by commingling them with years of historical data.

The reasons for choosing Big Data as the dominant technology in the data tier are the below – 

  1. Hadoop’s ability to ingest and work with all the above kinds of data & more (using the schema on read method) has been proven at massive scale (a brief sketch follows this list). Operational data stores are being built on Hadoop at a fraction of the cost & effort involved with older types of data technology (RDBMS & EDW)
  2. The ability to perform multiple types of processing on a given data set. This processing varies across batch, streaming, in memory and realtime which greatly opens up the ability to create, test & deploy closed loop analytics quicker than ever before
  3. The DAS (Direct Attached Storage) model that Hadoop provides fits neatly with the horizontal scale out model that the services, UX and business process tiers leverage. This keeps Capital Expenditure to a bare minimum.
  4. The ability to retain data for long periods of time provides banking applications with predictive models that can reason over historical data
  5. Hadoop’s ability to run a massive volume of models in a very short amount of time helps with modeling automation
  6. Due to its parallel processing nature, Hadoop can run calculations (pricing, risk, portfolio, reporting etc) in minutes versus the hours they took using older technology
  7. Hadoop can work with existing data investments and augment them by handling data ingestion & transformation, leaving EDWs to perform the complex analytics that they excel at – a huge bonus.
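
As a brief sketch of the schema-on-read point above (point 1): raw events are landed in the lake untouched, and structure is applied only when an application reads them. The paths and field names below are hypothetical, and the snippet assumes a working Spark installation.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("schema-on-read-sketch").getOrCreate()

# The raw JSON payment events were dumped into the lake as-is; a schema is inferred at read time
payments = spark.read.json("hdfs:///landing/payments/2016/03/*.json")   # hypothetical path

settled_by_currency = (
    payments
    .filter(F.col("status") == "SETTLED")
    .groupBy("currency")
    .agg(F.sum("amount").alias("settled_amount"))
)
settled_by_currency.show()
```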

Services Layer…

The overall goal of the OBWG services tier is to help design, develop, modify and deploy business components in such a way that overall application delivery follows a continuous integration/continuous delivery (CI/CD) paradigm. Given that banking platforms are some of the most complex financial applications out there, this also has the ancillary benefit of allowing different teams – digital channels, client onboarding, bill pay, transaction management & mid/back office – to develop and update their components largely independently of one another. Thus a large monolithic enterprise platform is decomposed into its constituent services, which are loosely coupled and each focused on one independent & autonomous business task only – the word ‘task’ here referring to a business capability that has tangible business value.

A highly scalable, open source & industry leading platform as a service (PaaS) is recommended as the way of building out and hosting banking business applications at this layer. Microservices have moved from the webscale world to fast becoming the standard for building mission critical applications in many industries. Leveraging a PaaS such as OpenShift provides a way to help cut the “technical debt” that has plagued both developers and IT Ops. OpenShift provides the right level of abstraction to encapsulate microservices via its native support for Docker containers. This also has the concomitant advantage of standardizing application stacks and streamlining deployment pipelines, thus leading the charge to a DevOps style of building applications.

Further, I recommend that service designers take the approach that their microservices can be deployed in a SaaS application format going forward – which usually implies taking an API based approach.

Now, the services tier has the following global responsibilities – 

  1. Promote a Microservices/SOA style of application development
  2. Support component endpoint invocation via standards based REST APIs (a minimal sketch follows this list)
  3. Promote a cloud, OS & development language agnostic style of application development
  4. Promote Horizontal scaling and resilience
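
As a minimal sketch of point 2 above, here is one way a single “view account” business capability could be exposed as a stateless REST endpoint. The route, payload fields and token check are assumptions for illustration and not the OBWG specification; a production service would sit behind the PaaS router with proper OAuth based security.

```python
from flask import Flask, jsonify, request, abort

app = Flask(__name__)

# Stand-in for the microservice's own data store
FAKE_ACCOUNTS = {"12345": {"account_id": "12345", "currency": "GBP", "balance": 1042.17}}

@app.route("/api/v1/accounts/<account_id>", methods=["GET"])
def view_account(account_id):
    # A real implementation would validate an OAuth token issued under the OBS security standards
    if request.headers.get("Authorization") is None:
        abort(401)
    account = FAKE_ACCOUNTS.get(account_id)
    if account is None:
        abort(404)
    return jsonify(account)

if __name__ == "__main__":
    app.run(port=8080)
```

Because the endpoint keeps no session state, any number of identical container instances can sit behind a load balancer, which is what makes the horizontal scaling in point 4 straightforward.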

Predictive Analytics & Business Process Layer…

Though segments of the banking industry have historically been early adopters of analytics, the areas being targeted by the OBWG – Retail lines of business & Payments – have generally been laggards. However, the large datasets that are prevalent in the Open Banking Standard world, as well as the need to drive customer interactions & journeys, risk & compliance reporting, fraud detection etc, call for a strategic relook at this space.

Techniques like Machine Learning, Data Science & AI feed into core business processes thus improving them. For instance, Machine Learning techniques support the creation of self improving algorithms which get better with data thus making accurate business predictions. Thus, the overarching goal of the analytics tier should be to support a higher degree of automation by working with the business process and the services tier. Predictive Analytics can be leveraged across the value chain of the Open Bank Standard – ranging from new customer acquisition to customer journey to the back office. More recently these techniques have found increased rates of adoption with enterprise concerns from cyber security to telemetry data processing.

Another area is improved automation via light weight business process management (BPM). Though most large banks do have pockets of BPM implementations that are adding or beginning to add significant business value, an enterprise-wide re-look at the core revenue-producing activities is called for, as is a deeper examination of driving competitive advantage. BPM now has evolved into more than just pure process management. Meanwhile, other disciplines have been added to BPM — which has now become an umbrella term. These include business rules management, event processing, and business resource planning.

Financial Services firms in general are fertile ground for business process automation, since most of their lines of business are simply a collection of core and differentiated processes. Examples are private banking (with processes including onboarding customers, collecting deposits, conducting business via multiple channels, and compliance with regulatory mandates such as KYC and AML); investment banking (including straight-through processing, trading platforms, prime brokerage, and compliance with regulation); payment services; and portfolio management (including modeling portfolio positions and providing complete transparency across the end-to-end life cycle). The key takeaway is that driving automation can result not just in better business visibility and accountability on behalf of various actors; it can also drive revenue and contribute significantly to the bottom line.

A business process system should allow an IT analyst, a customer or an advisor to convey a business process by describing the steps that need to be executed in order to achieve the goal (and the order of those steps, typically using a flow chart). This greatly improves the visibility of business logic, resulting in higher-level and domain-specific representations (tailored to finance) that can be understood by business users and are easier for management to monitor. Again, leveraging a PaaS such as OpenShift in conjunction with an industry leading open source BPMS (Business Process Management System) such as JBoss BPMS provides an integrated BPM capability that can create cloud ready and horizontally scalable business processes.

API & UX Layer…

The API & UX (User Experience) tier fronts humans – clients, business partners, regulators, internal management and other business users – across omnichannel touchpoints. A standards based API tier is provided for partner applications and other non-human actors to interact with the business service tier. Once the OBWG defines the exact protocols, data standards & formats, this should be straightforward to implement.

The API/UX tier has the following global responsibilities  – 

  1. Provide a seamless experience across all channels (mobile, eBanking, tablet etc) in a way that is continuous and non-siloed. The implication is that clients should be able to begin a business transaction in channel A and continue it in channel B where that makes business sense.
  2. Understand client personas and integrate with the business & predictive analytic tier in such a way that the API is loosely yet logically integrated with the overall information architecture
  3. Provide advanced visualization (wireframes, process control, social media collaboration) and cross partner authentication & single sign on
  4. Both the API & UX shall also be designed in such a manner that their design, development & ongoing enhancement lend themselves to an Agile & DevOps methodology.

It can all come together…

In most existing Banking systems, siloed functions have led to brittle data architectures operating on custom built legacy applications. This problem is compounded by inflexible core banking systems and exacerbated by a gross lack of standardization in the application stacks underlying capabilities like customer journey, improved lending & fraud detection. These factors inhibit deployment flexibility across a range of platforms, leading to extremely high IT costs and technical debt. The consequence is that client facing applications are prevented from using data in a manner that constantly & positively impacts the client experience. There is clearly a need to provide an integrated digital experience across a global customer base, and then to offer more intelligent functions based on existing data assets. Incumbent players do possess a huge first mover advantage: they offer highly established financial products across large (and largely loyal & sticky) customer bases, wide networks of physical locations, and rich troves of data pertaining to customer accounts & demographics. However, it is not enough to just possess the data. They must be able to drive change through legacy thinking and infrastructures as the entire industry struggles to adapt to a major new segment – the millennials – who increasingly use mobile devices and demand contextual services as well as a seamless, highly analytics driven & unified banking experience, akin to what they commonly experience via web properties like Facebook, Amazon, Google or Yahoo.

Summary

Technology platforms designed around the four key business needs will create immense operational efficiency, better business models, increased relevance and ultimately drive revenues. These will separate the visionaries and leaders from the laggards in the years to come. The Open Banking Standard will be a catalyst in this immense disruption.

REFERENCES…

[1] The Open Banking Standard –
https://theodi.org/open-banking-standard

The Five Deadly Sins of Financial Services IT..

THE STATE OF GLOBAL FINANCIAL SERVICES IT ARCHITECTURE…

This blog has time & again discussed how Global, Domestic and Regional banks need to be innovative with their IT platforms to constantly evolve their product offerings & services. This is imperative due to various business realities – increased competition from the FinTechs, web scale players delivering exciting services, & sharply increasing regulatory compliance pressures. However, systems and software architecture has been a huge issue at nearly every large bank across the globe.

Regulation is also afoot in parts of the globe that will give non traditional players access to hitherto locked customer data, e.g. PSD2 in the European Union. Further, banking licenses are being granted more easily to non-banks that are primarily technology pioneers, e.g. PayPal.

It’s 2016 and Banks are waking up to the fact that IT Architecture is a critical strategic differentiator. Players that have agile & efficient architecture platforms and practices can not only add new service offerings but are also able to experiment across a range of analytics led offerings that create & support multi-channel experiences. These digital services can now be found abundantly in areas ranging from Retail Banking, Capital Markets, Payments & Wealth Management, especially at the FinTechs.

So, How did we get here…

The Financial Services IT landscape – no matter which segment one picks across the spectrum, be it Capital Markets, Retail & Consumer Banking, Payment Networks & Cards or Asset Management – is largely predicated on a few legacy anti-patterns. These anti-patterns have evolved over the years from a systems architecture, data architecture & middleware standpoint.

These anti-patterns have resulted in a mishmash of organically developed & shrink wrapped systems that do everything from running critical Core Banking applications to Trade Lifecycle to Securities Settlement to Financial Reporting. Each of these systems operates in an application, workflow and data silo with its own view of the enterprise. These are all kept in sync largely via data replication & stovepiped process integration.

If this sounds too abstract, let us take an example &  a rather topical one at that. One of the most critical back office functions every financial services organization needs to perform is Risk Data Aggregation & Regulatory Reporting (RDARR). This spans areas from Credit Risk, Market Risk, Operational Risk , Basel III, Solvency II etc..the list goes on.

The basic idea in any risk calculation is to gather a whole range of quality data in one place and to run computations to generate risk measures for reporting.

So, how are various risk measures calculated currently? 

Current Risk Architectures are based on traditional relational database (RDBMS) architectures with tens of feeds from Core Banking Systems, Loan Data, Book of Record Transaction Systems (BORTS) such as Trade & Position Data (e.g. Equities, Fixed Income, Forex, Commodities, Options etc), Wire Data, Payment Data, Transaction Data etc.

These data feeds are then tactically placed in memory caches or in enterprise data warehouses (EDW). Once the data has been extracted, it is transformed using a series of batch jobs which prepare it for the Calculator Frameworks that run the risk models on it.

All of the above need access to large amounts of data at the individual transaction level. The Corporate Finance function within the Bank then makes end of day adjustments to reconcile all of this data, and these adjustments need to be cascaded back to the source systems down to the individual transaction or transaction-class level.

These applications are then typically deployed on clusters of bare metal servers that are not particularly suited to portability, automated provisioning, patching & management – in short, nothing that can be moved over automatically at a moment’s notice. These applications also run on legacy proprietary technology platforms that do not lend themselves to a flexible, DevOps style of development.

Finally, there is always a need for statistical frameworks to make adjustments to customer transactions that somehow need to be reflected back in the source systems. All of these frameworks need access to, and an ability to work with, terabytes (TBs) of data.

Each of the above mentioned risk work streams has corresponding data sets, schemas & event flows that it needs to work with, and different temporal needs for reporting: some need to be run a few times a day (e.g. Traded Credit Risk), some daily (e.g. Market Risk) and some at the end of the week (e.g. Enterprise Credit Risk).

Five_Deadly_Sins_Banking_Arch

                          Illustration – The Five Deadly Sins of Financial IT Architectures

Let us examine why this is the case, in the context of the anti-patterns proposed below –

THE FIVE DEADLY SINS…

The key challenges with current architectures –

  1. Utter, total and complete lack of centralized data leading to repeated data duplication – In the typical Risk Data Aggregation application, a massive amount of data is duplicated from system to system, leading to multiple inconsistencies at the summary as well as transaction levels. Because different groups perform different risk reporting functions (e.g. Credit and Market Risk), the feeds, the ingestion and the calculators end up being duplicated as well. A huge mess, any way one looks at it.
  2. Analytic applications which are not designed for throughput – Traditional risk algorithms cannot scale with this explosion of data as well as the heterogeneity inherent in reporting across multiple kinds of risks. E.g. certain kinds of Credit Risk calculations need access to around 200 days of historical data to assess the probability of a counterparty defaulting & to obtain a statistical measure of the same. Such calculations are highly computationally intensive and can run for days.
  3. Lack of Application Blueprint, Analytic Model & Data Standardization – There is nothing that is either SOA or microservices-like, which precludes best practice development & deployment and leads to maintenance headaches. Cloud Computing enforces standards across the stack. Areas like risk model and analytic development need to be standardized to reflect realities post BCBS 239. The Volcker Rule aims to ban prop trading activity on the part of the Banks; banks must now report on seven key metrics across tens of different data feeds spanning petabytes of data. Most cannot do that without undertaking a large development and change management effort.
  4. Lack of Scalability – It must be possible to operate the platform as a central system that can scale to carry the full load of the organization and operate with hundreds of applications built by disparate teams all plugged into the same central nervous system. One other factor to consider is the role of cloud computing in customer retention efforts. The analytical computational power required to derive insights from gigantic data sets is costly to maintain on an individual basis. The traditional owned data center will probably not disappear, but banks need to be able to leverage the power of the cloud to perform big data analysis in a cost-effective manner.
  5. A Lack of Deployment Flexibility – The application & data requirements dictate the deployment platforms. This massive anti-pattern leads to silos and legacy OSs that cannot easily be moved to containers like Docker & instantiated by a modular cloud OS like OpenStack.

THE BUSINESS VALUE DRIVERS OF EFFICIENT ARCHITECTURES …

Doing IT architecture right, and in a manner responsive to the business, results in the following critical value drivers being met & exceeded through this transformation –

  1. Effective compliance with increased regulatory risk mandates ranging from Basel III, FRTB and Liquidity Risk – which demand flexibility across all the traditional IT tiers.
  2. An ability to detect and deter fraud – Anti Money Laundering (AML) and Retail/Payment Card Fraud etc
  3. Fend off competition from the FinTechs
  4. Exist & evolve in a multichannel world dominated by the millennial generation
  5. Reduced costs to satisfy pressure on the Cost to Income Ratio (CIR)
  6. The ability to open up data & services that operate on the customer data to other institutions

A uniform architecture that works across all of these various workloads would seem a commonsense requirement. However, this is a major problem for most banks. Forward looking approaches that draw heavily on microservices based application development, Big Data enabled data & processing layers, the adoption of Message Oriented Middleware (MOM) & a cloud native approach to developing applications (PaaS) & deployment (IaaS) are the solution to the vexing problem of inflexible IT.

The question is whether banks can change before they see a perceptible drop in revenues over the years.

How Robo-Advisors work..(2/3)

“Millennials want finance at their fingertips.. they want to be able to email and text the financial advisors and talk to them on a real-time basis.” – Greg Fleming, Ex-Morgan Stanley
The first post in this series on Robo-advisors touched on the fact that Wealth Management has been an area largely untouched by automation as far as the front office is concerned. Automated investment vehicles have begun changing that trend and are helping create a variety of business models in the industry. The first post in this three part series explored the automated “Robo-advisor” movement; this second post will focus on the overall business model & main functions of a Robo-advisor.
Introduction
FinTechs led by Wealthfront and Betterment have pioneered the somewhat revolutionary concept of Robo-advisors. To define the term – a Robo-advisor is an algorithm based automated investment advisor that can provide a range of Wealth Management services tailored to a variety of life situations.
Robo-advisors offer completely automated financial planning services. We have seen how the engine of the Wealth Management business is new customer acquisition. The industry is focused on acquiring the millennial or post millennial HNWI (High Net Worth Investor) generation, whose technology friendliness makes them the primary target market for automated investment advice. And not just the millennials – anyone who is comfortable using technology and wants lower cost services can benefit from automated investment planning. Leaders in the space such as Wealthfront & Betterment have disclosed that their average investor age is around 35 years. [1]
Robo-advisors provide algorithm-based portfolio management around investment creation, automatic portfolio rebalancing and value added services like tax-loss harvesting, as we will see. The chief investment vehicle of choice seems to be low-cost, passive exchange-traded funds (ETFs).

What are the main WM business models?

Currently there are a few different business models that are being adopted by WM firms.

  1. Full service online Robo-advisor that is a 100% automated without any human element
  2. Hybrid Robo-advisor model being pioneered by firms like Vanguard & Charles Schwab
  3. Pure online advisor that is primarily human in nature

What do Robo-advisors typically do?

The Robo-advisor can optionally be augmented & supervised by a human adviser. At the moment, owing to the popularity of Robo-advisors among younger high net worth investors (HNWI), a range of established players like Vanguard and Charles Schwab, as well as a number of FinTech start-ups, have developed these automated online investment tools or have acquired FinTechs in this space (e.g. BlackRock). The Robo-advisor is typically offered as a free service (below a certain minimum), typically invests in low cost ETFs, and is built using digital techniques such as data science & Big Data.

Robo_Process

                                  Illustration: Essential functions of a Robo-advisor

The major business areas & client offerings in the Wealth & Asset Management space have been covered in the first post in this series at http://www.vamsitalkstech.com/?p=2329

Automated advisors only cover a subset of all of the above at the moment. The major use cases are as below –

  1. Determine individual client profiles & preferences – e.g. for a given client profile, determine financial goals, expectations of investment return, diversification needs etc
  2. Identify appropriate financial products that can be offered either as pre-packaged portfolios or custom investments based on the client profile identified in the first step
  3. Establish the correct investment mix for the client’s profile – these can include, but are not limited to, equities, bonds, ETFs & other securities in the firm’s portfolios. For instance, placing tax-inefficient assets in retirement accounts like IRAs and tax efficient municipal bonds in taxable accounts.
  4. Using an algorithmic approach, choose the appropriate securities for each client account
  5. Continuously monitor the portfolio & the transactions within it to tune performance, lower transaction costs, manage tax impacts etc based on how the markets are doing. Also ensure that a client’s preferences are being incorporated so that appropriate diversification and risk mitigation is performed (a simple rebalancing sketch follows this list)
  6. Provide value added services like Tax loss harvesting to ensure that the client is taking tax benefits into account as they rebalance portfolios or accrue dividends.
  7. Finally, ensure the best user experience by handling a whole range of financial services – trading, account administration, loans, bill pay, cash transfers, tax reporting and statements – in one intuitive user interface.
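
To make step 5 a little more concrete, below is a toy drift check against a target allocation. The ETF tickers, target weights and threshold are invented for illustration; a real Robo-advisor layers tax-lot selection, trading costs and risk constraints on top of a check like this.

```python
TARGET_ALLOCATION = {"VTI": 0.45, "VXUS": 0.25, "BND": 0.20, "VNQ": 0.10}   # hypothetical ETF mix
DRIFT_THRESHOLD = 0.05   # rebalance a sleeve once it drifts more than 5 percentage points

def rebalancing_orders(holdings_value):
    """Return the buy (+) / sell (-) amounts needed to bring drifted sleeves back to target."""
    total = sum(holdings_value.values())
    orders = {}
    for ticker, target_weight in TARGET_ALLOCATION.items():
        current_weight = holdings_value.get(ticker, 0.0) / total
        if abs(current_weight - target_weight) > DRIFT_THRESHOLD:
            orders[ticker] = round(target_weight * total - holdings_value.get(ticker, 0.0), 2)
    return orders

portfolio = {"VTI": 60_000, "VXUS": 20_000, "BND": 15_000, "VNQ": 5_000}
print(rebalancing_orders(portfolio))   # the overweight equity sleeve generates a sell order
```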

000-graph

                             Illustration: Betterment user interface. Source – Joe Jansen

To illustrate these concepts in action, leaders like Wealthfront & Betterment are increasingly adding features where highly relevant, data-driven advice is provided based on existing data as well as aggregated data from other providers. Wealthfront now provides recommendations on diversification, taxes and fees that are personalized not only to the specific investments in a client’s account but also tailored to their specific financial profile and risk tolerance. For instance, is enough cash being set aside in the emergency fund? Is a customer holding too much stock in their employer? [1]

The final post will look at a technology & architectural approach to building out a Robo-advisor. We will also discuss best practices from a WM & industry standpoint in the context of Robo-advisors.

References:

  1. Wealthfront Blog – “Introducing the new Dashboard”

Data Lakes power the future of Industrial Analytics..(1/4)

The first post in this four part series on Data lakes will focus on the business reasons to create one. The second post will delve deeper into the technology considerations & choices around data ingest & processing in the lake to satisfy myriad business requirements. The third will tackle the critical topic of metadata management, data cleanliness & governance. The fourth & final post in the series will focus on the business justification to build out a Big Data Center of Excellence (COE).

Business owners at the C level are saying, ‘Hey guys, look. It’s no longer inordinately expensive for us to store all of our data. I want all of you to make copies. OK, your systems are busy. Find the time, get an extract, and dump it in Hadoop.’”- Mike Lang, CEO of Revelytix

The onset of Digital Architectures in enterprise businesses implies the ability to drive continuous online interactions with global consumers/customers/clients or patients. The goal is not just to provide engaging visualization but also to personalize services clients care about across multiple modes of interaction. Mobile applications first began forcing enterprises to support multiple channels of interaction with their consumers. For example, Banking now requires an ability to engage consumers in a seamless experience across an average of four to five channels – Mobile, eBanking, Call Center, Kiosk etc. Healthcare is a close second, where caregivers expect patient, medication & disease data at their fingertips with a few finger swipes on an iPad app.

Big Data has been the chief catalyst in this disruption. The Data Lake architectural & deployment pattern makes it possible to first store all this data & then enables the panoply of Hadoop ecosystem projects & technologies to operate on it to produce business results.

Let us consider a few of the major industry verticals and the sheer data variety that players in these areas commonly possess – 

The Healthcare & Life Sciences industry possess some of the most diverse data across the spectrum ranging from – 

  • Structured Clinical data e.g. Patient ADT information
  • Free hand notes
  • Patient Insurance information
  • Device Telemetry 
  • Medication data
  • Patient Trial Data
  • Medical Images – e.g. CAT Scans, MRIs, CT images etc

The Manufacturing industry players are leveraging the below datasets and many others to derive new insights in a highly process oriented industry-

  • Supply chain data
  • Demand data
  • Pricing data
  • Operational data from the shop floor 
  • Sensor & telemetry data 
  • Sales campaign data

Data In Banking – Corporate IT organizations in the financial industry have for many years been tackling data challenges due to strict silo based approaches that inhibit data agility.
Consider some of the traditional sources of data in banking –

  • Customer Account data e.g. Names, Demographics, Linked Accounts etc
  • Core Banking Data
  • Transaction Data which captures the low level details of every transaction (e.g debit, credit, transfer, credit card usage etc)
  • Wire & Payment Data
  • Trade & Position Data
  • General Ledger Data e.g AP (accounts payable), AR (accounts receivable), cash management & purchasing information etc.
  • Data from other systems supporting banking reporting functions.

Industries have changed around us since the advent of relational databases & enterprise data warehouses. Relational Databases (RDBMS) & Enterprise Data Warehouses (EDW) were built with very different purposes in mind. RDBMS systems excel at online transaction processing (OLTP) use cases, where massive volumes of structured data need to be processed quickly. EDWs, on the other hand, perform online analytical processing (OLAP) functions, where data extracts are taken from OLTP systems, loaded & sliced in different ways to support reporting and analysis. Neither kind of system is well suited to handling both immense volumes of data and highly variable structures of data.

awesome-lake

Let us consider the main reasons why legacy data storage & processing techniques are unsuited to new business realities of today.

  • Legacy data technology enforces a vertical scaling method that is sorely unsuited to handling massive volumes of data in a scale up/scale down manner
  • The structure of the data needs to be modeled in a paradigm called ’schema on write’ which sorely inhibits time to market for new business projects
  • Traditional data systems suffer bottlenecks when large amounts of high variety data are processed using them 
  • Limits in the types of analytics that can be performed. In industries like Retail, Financial Services & Telecommunications, enterprises need to build detailed models of customer accounts to predict their overall service level satisfaction in realtime. These models are predictive in nature and use data science techniques as an integral component. The higher volumes of data, along with the attribute richness that can be provided to them (e.g. transaction data, social network data, transcribed customer call data), ensure that the models are highly accurate & can provide an enormous amount of value to the business. Legacy systems are not a great fit here.

Given all of the above data complexity and the need to adopt agile analytical methods  – what is the first step that enterprises must adopt? 

The answer is the adoption of the Data Lake as an overarching data architecture pattern. Let’s define the term first. A data lake is two things – a data storage repository (small or massive) and a data processing engine. A data lake provides “massive storage for any kind of data, enormous processing power and the ability to handle virtually limitless concurrent tasks or jobs“.[1] Data Lakes are created to ingest, transform, process, analyze & finally archive large amounts of any kind of data – structured, semistructured and unstructured.

DL_1

                                  Illustration – The Data Lake Architecture Pattern

What Big Data brings to the equation, beyond its strength in data ingest & processing, is a unified architecture. For instance, MapReduce is the original framework for writing applications that process large amounts of structured and unstructured data stored in the Hadoop Distributed File System (HDFS). Apache Hadoop YARN opened Hadoop to other data processing engines (e.g. Apache Spark, Apache Storm) that can now run alongside existing MapReduce jobs to process data in many different ways at the same time. The result is that virtually any kind of application processing can run inside a Hadoop runtime – batch, realtime, interactive or streaming.

Visualization – Mobile applications first began forcing enterprises to support multiple channels of interaction with their consumers. For example, Banking now requires an ability to engage consumers in a seamless experience across an average of four to five channels – Mobile, eBanking, Call Center, Kiosk etc. The average enterprise user is also familiar with BYOD in the age of self service. The Digital Mesh only exacerbates this gap in user experiences as information consumers navigate applications and consume services across a mesh that is both multi-channel and expected to provide a Customer 360 view across all these engagement points. While information management technology has grown at a blistering pace, the human ability to process and comprehend numerical data has not. Applications being developed in 2016 are beginning to adopt intelligent visualization approaches that are easy to use, highly interactive and enable the user to manipulate corporate & business data with their fingertips – much like an iPad app. Tools such as intelligent dashboards, scorecards and mashups are helping change visualization paradigms that were based on histograms, pie charts and tons of numbers. Big Data improvements in data lineage and quality are greatly helping the visualization space.

The Final Word

Specifically, the Data Lake architectural pattern provides the following benefits – 

The ability to store enormous amounts of data with a high degree of agility & low cost: The Schema On Read architecture makes it trivial to ingest any kind of raw data into Hadoop in a manner that preserves its structure. Business analysts can then explore this data and define a schema to suit the needs of their particular application.
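A minimal schema-on-read sketch, assuming a hypothetical directory of raw trade JSON: the data is ingested untouched, and the analyst imposes only the structure their application needs at the moment of reading.

```python
# Illustrative schema-on-read sketch: raw files are stored as-is; a schema is
# applied only when the data is read for a specific application. Fields are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("schema-on-read-example").getOrCreate()

# The analyst defines only the fields their application cares about
trade_schema = StructType([
    StructField("trade_id",   StringType()),
    StructField("instrument", StringType()),
    StructField("price",      DoubleType()),
    StructField("event_time", TimestampType()),
])

# Structure is imposed here, at read time, not at ingest time
trades = spark.read.schema(trade_schema).json("hdfs:///data/raw/trades/")
trades.createOrReplaceTempView("trades")
spark.sql("SELECT instrument, avg(price) AS avg_price FROM trades GROUP BY instrument").show()
```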

The ability to run any kind of Analytics on the data: Hadoop supports multiple access methods (batch, real-time, streaming, in-memory, etc.) to a common data set.  You are only restricted by your use case.

The ability to analyze, process & archive data while dramatically cutting cost: Since Hadoop was designed to work on low-cost commodity servers with direct attached storage, it helps dramatically lower the overall cost of storage. Enterprises are thus able to retain source data for long periods, providing business applications with far greater historical context.

The ability to augment & optimize Data Warehouses: Data lakes & Hadoop technology are not a 'rip & replace' proposition. While they provide a much lower cost environment than data warehouses, they can also be used as the compute layer to augment these systems. Data can be stored, extracted and transformed in Hadoop, and then a subset of the data – i.e. the results – is loaded into the data warehouse. This frees the EDW's compute cycles and storage for truly high value analytics.
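A hedged sketch of this augmentation pattern, assuming a hypothetical positions dataset and a generic JDBC-accessible warehouse: the heavy aggregation runs on the cluster, and only the summarized result set is loaded into the EDW.

```python
# Illustrative offload pattern: expensive transformation runs on Hadoop; only a
# summarized subset is pushed to the warehouse. Connection details are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("edw-augmentation-example").getOrCreate()

raw = spark.read.parquet("hdfs:///data/raw/positions/")

# The heavy aggregation happens on the cluster, not in the warehouse
summary = (raw
           .groupBy("desk", "instrument")
           .agg(F.sum("notional").alias("total_notional")))

# Only the result set is loaded into the EDW over JDBC
(summary.write
 .format("jdbc")
 .option("url", "jdbc:postgresql://edw-host:5432/warehouse")   # hypothetical EDW endpoint
 .option("dbtable", "analytics.position_summary")
 .option("user", "etl_user")
 .option("password", "***")
 .mode("overwrite")
 .save())
```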

The next post of the series will dive deeper into the architectural choices one needs to make while creating a high fidelity & business centric enterprise data lake.

References – 

[1] https://en.wikipedia.org/wiki/Data_lake

Next Gen Wealth Management Architecture..(3/3)

The previous two posts have covered the business & strategic need for Wealth Management IT applications to reimagine themselves to better support their clients. How is this to be accomplished and what does a candidate architectural design pattern look like? What are the key enterprise wide IT concerns? This third & final post (3/3) tackles these questions. A follow-up post will return to the business end with strategic recommendations to industry CXOs.

The Four Key Business Tenets – 

How well a WM firm harnesses technology determines its overall competitive advantage. When advisors get seamless access to a variety of data, it helps them in manifold ways. For example, it helps them make better decisions for their clients and make productive use of their day by having the right client data at their fingertips – at the push of a button or via an intuitive user interface. Similarly, greater access to their portfolios gives clients a more engaging and unified experience.

So, to recap the four strategic goals that WM firms need to operate towards – 

  1. Increase Client Loyalty by Digitizing Client Interactions – WM Clients who use services like Uber, Zillow, Amazon etc in their daily lives are now very vocal in demanding a seamless experience across all WM services using digital channels. The vast majority of WM applications still lag the innovation cycle, are archaic & are still separately managed. The net issue is that the client is faced with distinct user experiences ranging from client onboarding to servicing to transaction management. There is a crying need for IT infrastructure modernization across the industry, ranging from Cloud Computing to Big Data to microservices to agile cultures promoting techniques such as a DevOps approach to building out these architectures. Such applications need to provide anticipatory or predictive capabilities at scale while understanding each customer's lifestyle, financial needs & behavioral preferences.
  2. Generate Optimal Client Experiences – In most existing WM systems, siloed functions have led to brittle data architectures operating on custom built legacy applications. This problem is firstly compounded by inflexible core banking systems and secondly exacerbated by a gross lack of standardization in the application stacks underlying ancillary capabilities. These factors inhibit deployment flexibility across a range of platforms, leading to extremely high IT costs and technical debt. The consequence is that client facing applications are inhibited from using data in a manner that constantly & positively impacts the client experience. There is clearly a need to provide an integrated digital experience across a global customer base, and then to offer more intelligent functions based on existing data assets. Current players do possess a huge first mover advantage as they offer highly established financial products across their large (and largely loyal & sticky) customer bases, wide networks of physical locations, and rich troves of data on customer accounts & demographics. However, it is not enough to just possess the data. They must be able to drive change through legacy thinking and infrastructures as the entire industry struggles to adapt to a major new segment – the millennials – who increasingly use mobile devices and demand more contextual services as well as a seamless, highly analytics-driven & unified banking experience, akin to what they commonly experience on the web at properties like Facebook, Amazon, Google or Yahoo.
  3. Automate Back & Mid Office Processes Across the WM Value Chain – The need to forge a closer banker/client experience is not just driving demand around data silos & streams themselves but also forcing players to move away from paper based models to a seamless, digital & highly automated model – reworking a ton of existing back & front office processes, which remain the weakest link in the chain. These processes range from risk data aggregation and supranational compliance (AML, KYC, CRS & FATCA) to financial reporting across a range of global regions & Cyber Security. Can the data architectures & the IT systems that leverage them be created in such a way that they permit agility while constantly learning & optimizing their behaviors across national regulations, InfoSec & compliance requirements? Can every piece of actionable data be aggregated, secured, transformed and reported on in such a way that its quality across the entire lifecycle is guaranteed?
  4. Tune existing business models based on client tastes and feedback – While Automation 1.0 focuses on digitizing processes, rules & workflow as stated above; Automation 2.0 implies strong predictive modeling capabilities working at large scale – systems that constantly learn and optimize products & services based on client needs & preferences. The clear ongoing theme in the WM space is constant innovation. Firms need to ask themselves if they are offering the right products that cater to an increasingly affluent yet dynamic clientele. This is the area where firms need to show that they can compete with the FinTechs (Wealthfront, Nutmeg, Fodor Bank et al) to attract younger customers.

Now that we have covered the business bases, what foundational technology choices satisfy the above? Let's examine that first at a high level and then in more detail.

Key Overall System Architecture Tenets – 

The design and architecture of a solution as large and complex as a WM enterprise is a multidimensional challenge. The below illustration catalogs the foundational capabilities of such a system – Omnichannel, Mobile Native Experiences, Massive Data processing capabilities, Cloud Computing & Predictive Analytics – all operating at scale.

NextGen_WM

                            Illustration – Top Level Architectural Components 

Here are some of the key global design characteristics for a common architecture framework:

  • The Architecture shall support automated application delivery, configuration management & deployment
  • The Architecture shall support a high degree of data agility and data intelligence. The end goal being that every customer click, discussion & preference shall drive an analytics infused interaction between the advisor and the client
  • The Architecture shall support algorithmic capabilities that enable the creation of new services like automated (or Robo) advisors
  • The Architecture shall support a very high degree of scale across large numbers of users, interactions & omni-channel transactions while working across global infrastructure
  • The Architecture shall support deployment across cost efficient platforms like a public or private cloud. In short, the design of the application shall not constrain the available deployment options – which may vary because of cost considerations. The infrastructure options supported shall range from virtual machines to docker based containers – whether running on a public cloud, private cloud or in a hybrid cloud
  • The Architecture shall support small, incremental changes to business services & data elements based on changing business requirements 
  • The Architecture shall support standardization across application stacks, toolsets for development & data technology to a high degree
  • The Architecture shall support the creation of a user interface that is highly visual and feature rich from a content standpoint when accessed across any device
  • The Architecture shall support an API based model to invoke any interaction – by a client or an advisor or a business partner
  • The Architecture shall support the development and deployment of an application that encourages a DevOps based approach
  • The Architecture shall support the easy creation of scalable business processes that natively emit business metrics from the time they are instantiated and throughout their lifecycle (a minimal metrics-emitting sketch follows this list)
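To illustrate the last tenet, here is a minimal sketch of a process step that emits business metrics as it runs, using the Prometheus Python client as one possible instrumentation approach. The metric names, labels and the onboarding logic are hypothetical.

```python
# Illustrative sketch of a business process step that natively emits metrics,
# using the Prometheus Python client. Metric names and process logic are hypothetical.
from prometheus_client import Counter, Histogram, start_http_server
import time, random

ONBOARDING_STARTED = Counter("onboarding_started_total", "Client onboarding instances started")
ONBOARDING_DURATION = Histogram("onboarding_duration_seconds", "End-to-end onboarding time")

def onboard_client(client_id: str) -> None:
    ONBOARDING_STARTED.inc()
    with ONBOARDING_DURATION.time():
        # ... KYC checks, document capture, account setup would run here ...
        time.sleep(random.uniform(0.1, 0.5))   # stand-in for real work

if __name__ == "__main__":
    start_http_server(8000)        # metrics exposed for scraping
    onboard_client("client-123")
    # in a real service the process would keep running and serving metrics
```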

Given the above list of requirements – the application architecture that is a “best fit” is shown below.

WM_Arch

                   Illustration – Target State Architecture for Digital Wealth Management 

Lets examine each of the tiers starting from the lowest –

Infrastructure Tier –

Cloud Computing across its three main delivery models (IaaS, PaaS & SaaS) is largely a mainstream endeavor in financial services and no longer an esoteric adventure only for brave innovators. A range of institutions are either deploying or testing cloud-based solutions that span the full range of cloud delivery models. These capabilities include –

IaaS (infrastructure-as-a-service) to provision compute, network & storage, PaaS (platform-as-a-service) to develop applications, and SaaS (software-as-a-service) to expose business services via APIs.

Choosing Cloud based infrastructure – whether that is a secure public cloud (Amazon AWS or Microsoft Azure), an internal private cloud (OpenStack etc) or even a hybrid approach – is a safe and sound bet for WM applications. Business innovation and transformation are best enabled by a cloud based infrastructure.
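As a small illustration of programmatic IaaS provisioning on one of the public clouds named above, the sketch below uses the AWS SDK for Python (boto3). The AMI ID, instance type and tags are placeholders rather than recommendations.

```python
# Illustrative IaaS provisioning sketch using boto3; the AMI ID, instance type
# and tags are hypothetical placeholders, not a prescribed configuration.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",    # hypothetical hardened base image
    InstanceType="t3.large",
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "app", "Value": "wm-digital-platform"}],
    }],
)
print("Launched instance:", response["Instances"][0]["InstanceId"])
```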

Data Tier – 

While banking data tiers are usually composed of different technologies like RDBMS, EDW (Enterprise Data Warehouses), CMS (Content Management Systems) & Big Data, my recommendation for the target state is largely dominated by a Big Data Platform powered by Hadoop. Given the focus of the digital Wealth Manager on leveraging algorithmic asset management and providing predictive analytics to create tailored & managed portfolios for their clients, Hadoop is a natural fit as it is fast emerging as the platform of choice for analytic applications.

Financial services in general and Wealth Management specifically deal with manifold data types ranging from Customer Account data, Transaction Data, Wire Data, Trade Data, Customer Relationship Management (CRM), General Ledger and other systems supporting core banking functions. When one factors in social media feeds, mobile clients & other non traditional data types, the challenge of storing, governing & analyzing all of this data only multiplies.

The reasons for choosing Hadoop as the dominant technology in the data tier are as follows – 

  1. Hadoop’s ability to ingest and work with all the above kinds of data & more (using the schema on read method) has been proven at massive scale. Operational data stores are being built on Hadoop at a fraction of the cost & effort involved with older types of data technology (RDBMS & EDW)
  2. The ability to perform multiple types of processing on a given data set. This processing varies across batch, streaming, in memory and realtime which greatly opens up the ability to create, test & deploy closed loop analytics quicker than ever before
  3. The DAS (Direct Attached Storage) model that Hadoop provides fits neatly with the horizontal scale out model that the services, UX and business process tiers leverage. This keeps Capital Expenditure to a bare minimum.
  4. The ability to retain data for long periods of time thus providing WM applications with predictive models that can reason on historical data
  5. Hadoop's ability to run massive volumes of models in a very short amount of time helps with modeling automation
  6. Due to it’s parallel processing nature, Hadoop can run calculations (pricing, risk, portfolio, reporting etc) in minutes versus the hours it took using older technology
  7. Hadoop works with existing data investments and augments them by taking on data ingestion & transformation, leaving EDWs to perform the complex analytics that they excel at – a huge bonus.
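To illustrate point 6, here is a hedged PySpark sketch that revalues a hypothetical portfolio dataset in parallel across the cluster. The pricing logic is deliberately trivial and the paths and columns are assumptions for illustration.

```python
# Illustrative parallel revaluation sketch: a simple pricing calculation is applied
# across a large portfolio partitioned over the cluster. Paths/columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("portfolio-revaluation-example").getOrCreate()

positions = spark.read.parquet("hdfs:///data/curated/positions/")       # hypothetical path
market    = spark.read.parquet("hdfs:///data/curated/market_prices/")   # hypothetical path

# Each partition is revalued independently, so the job scales out horizontally
revalued = (positions
            .join(market, on="instrument")
            .withColumn("market_value", F.col("quantity") * F.col("price")))

portfolio_value = (revalued
                   .groupBy("portfolio_id")
                   .agg(F.sum("market_value").alias("nav")))
portfolio_value.write.mode("overwrite").parquet("hdfs:///data/curated/portfolio_nav/")
```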

Services Tier –

The overall goals of the services tier are to help design, develop, modify and deploy business components in such a way that overall WM application delivery follows a continuous integration/continuous delivery (CI/CD) paradigm. Given that WM platforms are some of the most complex financial applications out there, this also has the ancillary benefit of allowing different teams – digital channels, client onboarding, bill pay, transaction management & mid/back office – to develop and update their components largely independently of other teams. Thus a large monolithic WM enterprise platform is decomposed into its constituent services, which are loosely coupled and each focused on a single independent & autonomous business task. The word 'task' here refers to a business capability that has tangible business value.

A highly scalable, open source & industry leading platform as a service (PaaS) like Red Hat's OpenShift is recommended as the way of building out and hosting this tier. Microservices have moved from the webscale world to fast becoming the standard for building mission critical applications in many industries. Leveraging a PaaS such as OpenShift provides a way to help cut the “technical debt” that has plagued both developers and IT Ops. OpenShift provides the right level of abstraction to encapsulate microservices via its native support for Docker Containers. This also has the concomitant advantage of standardizing application stacks and streamlining deployment pipelines, thus leading the charge to a DevOps style of building applications.

Further, I recommend that service designers take the approach that their microservices can be deployed in a SaaS application format going forward – which usually implies taking an API based approach.
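A minimal sketch of one such service follows, assuming a hypothetical portfolio-summary capability exposed over REST with Flask. The framework, endpoint and data are illustrative only; in practice such a service would be containerized and deployed on the PaaS described above.

```python
# Illustrative microservice sketch: a single, autonomous business capability
# (portfolio summary lookup) exposed over REST. Framework, endpoint and data
# are examples only, not a prescribed stack.
from flask import Flask, jsonify

app = Flask(__name__)

# In a real service this would call the data tier; here it is stubbed
PORTFOLIOS = {"client-123": {"cash": 25000.0, "equities": 180000.0, "bonds": 95000.0}}

@app.route("/api/v1/portfolios/<client_id>", methods=["GET"])
def get_portfolio(client_id):
    portfolio = PORTFOLIOS.get(client_id)
    if portfolio is None:
        return jsonify({"error": "not found"}), 404
    return jsonify({"client_id": client_id, "holdings": portfolio})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```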

Now, the services tier has the following global responsibilities – 

  1. Promote a SOA style of application development
  2. Support component endpoint invocation via standards based REST APIs
  3. Promote a cloud, OS & development language agnostic style of application development
  4. Promote Horizontal scaling and resilience


Predictive Analytics & Business Process Tier – 

Though segments of the banking industry have historically been early adopters of analytics, the wealth management space has largely been a laggard. However, the large datasets prevalent in WM, as well as the need to drive customer interactions & journeys, risk & compliance reporting, fraud detection etc, call for a strategic relook at this space.

Techniques like Machine Learning, Data Science & AI feed into core business processes, thus improving them. For instance, Machine Learning techniques support the creation of self improving algorithms which get better with data, making increasingly accurate business predictions. Thus, the overarching goal of the analytics tier should be to support a higher degree of automation by working with the business process and services tiers. Predictive Analytics can be leveraged across the WM value chain – ranging from new customer acquisition to the customer journey to the back office. More recently these techniques have found increased rates of adoption in enterprise concerns from cyber security to telemetry data processing.
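A hedged sketch of this feedback loop using scikit-learn: a propensity model is periodically retrained as new labeled client interactions arrive, so its predictions improve with data. All feature, label and file names are hypothetical.

```python
# Illustrative predictive-analytics sketch: a model is retrained as new labeled
# client data arrives, so predictions improve over time. Features are hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

def retrain(history: pd.DataFrame) -> GradientBoostingClassifier:
    X = history[["aum", "trades_per_month", "advisor_meetings", "digital_logins"]]
    y = history["accepted_recommendation"]     # 0/1 outcome of past product offers
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
    model = GradientBoostingClassifier().fit(X_train, y_train)
    print("Holdout accuracy:", model.score(X_test, y_test))
    return model

# Each retraining run sees more history than the last, closing the feedback loop
history = pd.read_parquet("client_interactions.parquet")   # hypothetical extract
model = retrain(history)
```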

Though most large banks do have pockets of BPM implementations that are adding or beginning to add significant business value, an enterprise-wide re-look at the core revenue-producing activities is called for, as is a deeper examination of driving competitive advantage. BPM now has evolved into more than just pure process management. Meanwhile, other disciplines have been added to BPM — which has now become an umbrella term. These include business rules management, event processing, and business resource planning.

WM firms are fertile ground for business process automation, since most of their various lines of business are simply collections of core and differentiated processes. Examples are private banking (with processes including onboarding customers, collecting deposits, conducting business via multiple channels, and compliance with regulatory mandates such as KYC and AML); investment banking (including straight-through-processing, trading platforms, prime brokerage, and compliance with regulation); payment services; and portfolio management (including managing model portfolio positions and providing complete transparency across the end-to-end life cycle). The key takeaway is that driving automation can result not just in better business visibility and accountability on behalf of various actors. It can also drive revenue and contribute significantly to the bottom line.

A business process system should allow an IT analyst, customer or advisor to convey their business process by describing the steps that need to be executed in order to achieve the goal (and the order of those steps, typically using a flow chart). This greatly improves the visibility of business logic, resulting in higher-level and domain-specific representations (tailored to finance) that can be understood by business users and more easily monitored by management. Again, leveraging a PaaS such as OpenShift in conjunction with an industry leading open source BPMS (Business Process Management System) such as JBOSS BPMS provides an integrated BPM capability that can create cloud ready and horizontally scalable business processes.
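The sketch below is a framework-neutral illustration of that idea – a process expressed as an ordered list of steps with visibility into each one. A real BPMS such as the one mentioned above would express this as a BPMN flow instead; the step names and logic here are hypothetical.

```python
# Framework-neutral sketch of a business process as ordered steps, mirroring the
# flow-chart idea above. A real BPMS/BPMN engine would replace this; steps are hypothetical.
from typing import Callable, Dict, List

def capture_documents(case: Dict) -> Dict:
    case["documents_ok"] = True
    return case

def run_kyc_checks(case: Dict) -> Dict:
    case["kyc_passed"] = case.get("documents_ok", False)
    return case

def open_account(case: Dict) -> Dict:
    case["account_opened"] = case.get("kyc_passed", False)
    return case

# The process definition is simply the ordered list of steps
ONBOARDING_PROCESS: List[Callable[[Dict], Dict]] = [capture_documents, run_kyc_checks, open_account]

def execute(process: List[Callable[[Dict], Dict]], case: Dict) -> Dict:
    for step in process:
        case = step(case)
        print(f"completed step: {step.__name__}")   # visibility into each step
    return case

result = execute(ONBOARDING_PROCESS, {"client_id": "client-123"})
print(result)
```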

User Experience Tier – 

The UX (User Experience) tier fronts humans – client, advisor, regulator, management and other business users – across all touchpoints. An API tier is provided for partner applications and other non-human actors to interact with the business service tier.

The UX tier has the following global responsibilities  – 

  1. Provide a consistent user experience across all channels (mobile, eBanking, tablet etc) in a way that presents a seamless and non-siloed view. The implication is that clients should be able to begin a business transaction in channel A and continue it in channel B where that makes business sense.
  2. Understand client personas and integrate with the business & predictive analytic tier in such a way that the UX is deeply integrated with the overall information architecture
  3. Provide advanced visualization (wireframes, process control, social media collaboration) and cross partner authentication & single sign on
  4. The UX shall also be designed in such a manner that its design, development & ongoing enhancement follow an agile & DevOps method.

Putting it all together- 

How do all of the above foundational technologies (Big Data, UX,Cloud, BPM & Predictive Analytics) help encourage a virtuous cycle?

  1. WM Applications that are omnichannel, truly digital and thus highly engaging  have been proven to drive higher rates of customer interaction
  2. Higher and more long-lived  customer interactions (across channels) drives increased product uptake & increased revenue per client while constantly producing more valuable data
  3. Increased & relevant data volumes in turn help improve predictive capabilities of customer models as they can constantly be harnessed to drive higher insight and visibility into a range of areas – client tastes, product fit & business strategy
  4. These in turn provide valuable insights to drive improvements in products & services
  5. Rinse and Repeat – constantly optimize and learn on the go

This cycle needs to be accelerated, helping create a learning organization that can outlast the competition through a culture of unafraid experimentation and innovation.

Summary

New Age technology platforms designed around the four key business needs (Client experience, Advisor productivity, a highly Automated backoffice & a culture of constant innovation) will create immense operational efficiency, better business models and increased relevance, and ultimately drive revenues. These will separate the visionaries & leaders from the laggards in the years to come.