Can Your CIO Do Digital?

“Business model innovation is the new contribution of IT” — Werner Boeing, CIO, Roche Diagnostics

Digital Is Changing the Role of the Industry CIO…

A motley crew of somewhat interrelated technologies – Cloud Computing, Big Data Platforms, Predictive Analytics & Mobile Applications – is changing the enterprise IT landscape. The common paradigm that captures all of them is Digital. The immense business value of Digital technology is no longer in question, from either a customer or an enterprise standpoint. However, the Digital space calls for strong and visionary leadership from both a business & IT standpoint.

Business Boards and CXOs are now concerned not only with their organization's overall level and maturity of digital investment – and the tangible business value it delivers in existing operations (e.g. increasing sales & customer satisfaction, detecting fraud, driving down business & IT costs) – but also with how Digital paradigms can help fine-tune or create new business models. It is thus an increasingly accurate argument that smart applications & ecosystems built around Digitization will dictate enterprise success.

The onset of Digital Architectures in enterprise businesses implies the ability to drive continuous, micro-level interactions with global consumers/customers/clients/stakeholders or patients, depending on the vertical you operate in. Initially, enterprises viewed Digital as a bolt-on or a fresh coat of paint on an existing IT operation.

How did that change over the last five years?

Mobile applications first began forcing enterprises to support multiple channels of interaction with their consumers. We have also seen how exploding data generation across the global economy has become a clear & present business & IT phenomenon, with data volumes expanding rapidly across industries. It is not just that Mobile Applications have increased the production of data; organizations now need to derive business value from that data using advanced techniques such as Data Science and Machine Learning. As a first step, this calls for the collection & curation of data from dynamic and highly distributed sources such as consumer transactions, B2B interactions, machines such as ATMs & geolocation devices, clickstreams, social media feeds, server & application log files and multimedia content such as videos – using Big Data. These workloads are often run on servers hosted on agile infrastructure such as a Public or Private Cloud.

As one can understand from the above paragraph, the Digital Age calls for a diverse set of fresh skills – both from IT leadership and the rank & file. The role of the Chief Information Officer (CIO) is thus metamorphosing from being an infrastructure service provider to being the overall organizational thought leader in the Digital Age.

The question is – Can Industry CIOs adapt?

The Classic CIO is a provider of IT Infrastructure services…

what_cios_think

                               Illustration: The Concerns of a CIO..

So what do CIOs typically think about nowadays?

  1. Keep the core stable and running so that IT delivers essential services to the business and helps fend off external competition
  2. Are parts of my business really startups and should they be treated as such and should they be kept away from the shackles of inflexible legacy IT? Do I need a digital strategy?
  3. What does the emergence of the 3rd platform (Cloud, Mobility, Social and Big Data) imply?
  4. Where can I show the value of expertise and IT to the money making lines of business?
  5. How can one do all the above while keeping track of Corporate and IT security?

CIOs who do not adapt are on the road to Irrelevance…

Where CIOs are perceived as merely managing complex legacy systems, the new role of Chief Digital Officer (CDO) has gained currency. The idea is that a parallel & more agile IT organization can be created and run to foster an ecosystem of innovation, and that the office of the CDO is the right place to drive these innovative applications.

Why is that?

  1. CIOs that cannot create innovation through IT, or that seem disengaged from doing so, are headed the way of the dodo. At the enterprise officer (CIO/CTO) level, it is more obvious than ever that “IT is not just a complementary function or a supplementary service – IT is the Business”. If that was merely something we all paid lip service to in the past, it is hard reality now. So it is not a case of which company can make the best widgets or has the fastest trading platforms or the most efficient electronic health records; it is whose enterprise IT can provide the best possible results within a given cost that will win. It is up to CIOs to deliver, and to deliver in such a way that large established organizations can compete with upstarts who do not carry the same enterprise constraints & shackles.
  2. Innovation & information now follow an “outside in” model, as opposed to data and value being generated solely by internal functions (sales, engineering, customer fulfillment, core business processes etc.). Enterprise customers are beginning to operate in what I like to think of as the new normal: entropy. It is these conditions that make it imperative for IT leadership to reconsider their core business applications at the enterprise level. Does internal IT infrastructure need to look more like that of the internet giants?
  3. As a result of the above trends, CIOs are clearly now business level stakeholders more than ever. This means that they need to engage & understand their business at a deep level from an ecosystem and competitive standpoint. Those that cannot do it are neither very effective nor in those positions for long.
  4. Also, it is not enough to be a passive stakeholder; CIOs have to deliver on two very broad fronts. The first is to deliver core services (aka standardized functions) on time and at a reasonable cost – core banking systems, email, data backups and the smooth operation of transactional systems such as ERP/business processing systems in manufacturing, decision support systems, classic IT infrastructure, claims management systems in Insurance and billing systems in Healthcare. These are the systems that need to run to keep business operations going, and the focus is to deliver them on time and within SLAs to increasingly demanding internal customers. It is like running the NYC subway – no one praises you for keeping things humming day in and day out, but all hell breaks loose when the trains stop running for any reason. A thankless task, but one essential to winning credibility with the lines of business.
  5. The advent of the public cloud means that internal IT no longer has a monopoly or a captive internal customer base, even for core services. If one cannot compete with the likes of Amazon AWS or any of the SaaS clouds that are mushrooming on a quarterly basis, you will soon find that you have to co-exist with Not-So-Shadow IT. The industry has seen enough back-office CIOs who are perceived by their organizations as having a largely irrelevant role in the evolution of the larger enterprise.
  6. Despite the continued focus on running a strong core as the price of CIO admission to internal strategic discussions, transformation is starting to emerge as a key business driver and is making its way into the larger industry. It is no longer the province of Wall St trading shops or a Google or a Facebook. Innovation here means “adopt this strategy, reinvent your IT and change the business”. The operative word is incremental rather than disruptive innovation. More on this key point later.
  7. Most rank-and-file IT personnel cannot really keep up with all the nomenclature of technology. For instance, a majority do not really understand umbrella concepts like Cloud, Mobility and Big Data. They know what these mean at a high level, but the complex technology underpinnings, the various projects & the finer nuances are largely lost on them. Overworked IT personnel face two stark choices from a time perspective: a) increase your value to your corporation by learning to speak the lingua franca of your business and investing in those skills, moving away from a traditional IT employee mindset; or b) increase your IT depth in your area of expertise. The first makes one a valued collaborator and paves your way up the chain; the second may well increase your marketability in the industry, but it is not easy to keep up. We find that an increasing number of employees choose the first path, which creates interesting openings and arbitrage opportunities for other groups in the organization. The CIO needs to step up and be the internal change agent.

CONCLUSION…

Enterprise-wide business innovation will continue to be designed around four key technologies – Big Data, Cloud Computing, Predictive Analytics & Mobile Applications. Business platforms created by leveraging these technologies will create immense operational efficiency, better business models, increased relevance to customers and, ultimately, higher revenues. Such platforms will separate the visionaries and leaders from the laggards in the years to come. As often noted, the keyword accompanying transformation is digital. This means a renewed focus on making IT services appealing to millennials, the self-service generation – be they customers, employees or partners. This touches all areas of enterprise IT while leaving a significant impact on organizational culture.

This is the age of IT without boundaries – the question is whether the role of the CIO will remain unscathed in the years to come.

Why the Insurance Industry Needs to Learn from Banking’s Risk Management Nightmares..

risk_management_montage

(Image Credit – ENC Consulting)

Why Systemic Financial Crises Are a Broad Failure of Risk Management…

Various posts in this blog have catalogued the practice of risk management in the financial services industry. To recap briefly, the Great Financial Crisis (GFC) of 2008 was a systemic failure that brought about large-scale banking losses across the globe. Considered by many economists to be the worst economic crisis since the Great Depression [1], it not only precipitated the collapse of large financial institutions across the globe but also triggered the onset of sovereign debt crises across Greece, Iceland et al.

Years of deregulation & securitization (a form of risk transfer) combined with expansionary monetary policy during the Greenspan years, in the United States, led to the unprecedented availability of easy consumer credit in lines such as mortgages, credit cards and auto. The loosening of lending standards led to the rise of Subprime Mortgages which were often underwritten using fraudulent practices. Investment Banks were only too happy to create mortgage backed securities (MBS) which were repackaged and sold across the globe to willing institutional investors. Misplaced financial incentives in banking were also a key cause of this mindless financial innovation.

The health of global high finance thus rested on the ability of the US consumer to make regular payments on their debt obligations – especially on their mortgages. However, as artificially inflated housing prices began to decline in 2004 and the rate of refinancing dropped, foreclosures assumed mammoth proportions. Global investors thus began to suffer significant losses. The crisis took the form of a severe liquidity crunch, leading to a crisis of confidence among counterparties in the financial system.

Global & National Regulatory Authorities had to step in to conduct massive bailouts of banks. Yet stock markets suffered severe losses as housing markets collapsed, causing a large crisis of confidence. Central Banks & Federal Governments responded with massive monetary & fiscal policy stimulus, thus yet again crossing the line of Moral Hazard. Risk Management practices in 2008 were clearly inadequate at multiple levels, ranging from the department to the firm to the regulator. The point is well made that while the risks individual banks ran were seemingly rational at an individual level, taken as a whole the collective position was irrational & unsustainable. This failure to account for the complex global financial system was reflected across the chain of risk data aggregation, modeling & measurement.

 The Experience Shows That Risk Management Is A Complex Business & Technology Undertaking…

What makes Risk Management a complex job is the nature of Global Banking circa 2016.

Banks today are complex entities engaged in many kinds of activities. The major ones include –

  • Retail Banking – Providing cookie-cutter financial services ranging from collecting customer deposits and providing consumer loans to issuing credit cards. A POV on Retail Banking is at – http://www.vamsitalkstech.com/?p=2323
  • Commercial Banking – Banks provide companies with a range of products, from business loans and depository services to other financial investments.
  • Capital Markets – Capital Markets groups provide underwriting services & trading services that engineer custom derivative trades for institutional clients (typically Hedge Funds, Mutual Funds, Corporations, Governments, high net worth individuals and Trusts) as well as for their own treasury group. They may also do proprietary trading on the bank's own behalf for a profit – although it is this type of trading that the Volcker Rule is seeking to eliminate. A POV on Capital Markets is at – http://www.vamsitalkstech.com/?p=2175
  • Wealth Management – Wealth Management groups provide personal investment management, financial advisory and planning services directly for the benefit of high-net-worth (HNWI) clients. A POV on Wealth Management is at – http://www.vamsitalkstech.com/?p=1447

Firstly, Banks have huge loan portfolios across all of the above areas (each with varying default rates), such as home mortgages, credit loans and commercial loans. In the Capital Markets space, a Bank's book of financial assets gets more complex due to the web of counterparties across the globe and the range of complex assets, such as derivatives, involved. Complex assets mean complex mathematical models that calculate risk exposures across many kinds of risk, and for the most part these models did not take tail risk and wider systemic risk into account.

Secondly, markets tend to turn in unison during periods of (downward) volatility, which ends up endangering the entire system. Finally, complex and poorly understood financial instruments in the derivatives market made it easy for Banks to take on highly leveraged positions that placed their own firms & counterparties at downside risk. These models were entirely dependent on predictable historical data and never modeled “black swan” events. In other words, while the math may have been complex, it never took sophisticated scenario analysis into account.

Regulatory Guidelines ranging from Basel III to Dodd-Frank to MiFID II to the FRTB (the new kid on the regulatory block) have been put in place by international and national regulators post 2008. The overarching goal is to prevent a repeat of the GFC, where taxpayers funded bailouts for managers of firms who profit immensely on the upside.

These Regulatory mandates & pressures have begun driving up Risk and Compliance expenditures to unprecedented levels. The Basel Committee guidelines on risk data reporting & aggregation (RDA), Dodd Frank, Volcker Rule as well as regulatory capital adequacy legislation such as CCAR are causing a retooling of existing risk regimens. The Volcker Rule prohibits banks from trading on their own account (proprietary trading) & greatly curtails their investments in hedge funds. The regulatory intent is to avoid banker speculation with retail funds which are insured by the FDIC. Banks have to thus certify across their large portfolios of positions as to which trades have been entered for speculative purposes versus hedging purposes.

The impact of the Volcker Rule has been to shrink margins in the Capital Markets space as business moves to a flow-based trading model that relies less on proprietary trading and more on managing trading for clients. At the same time, risk management is becoming more realtime in key areas such as market, credit and liquidity risk.

A POV on FRTB is at the below link.

A POV on the FRTB (Fundamental Review of the Trading Book)…

Interestingly enough, one of the key players in the GFC was AIG – an insurance company with a division, FP (Financial Products), that really operated like a Hedge Fund by looking to insure downside risk it never thought it would need to pay out on.

Which Leads Us to the Insurance Industry…

For most of their long existence, insurance companies were relatively boring – they essentially provided protection against adverse events such as loss of property, life & health risks. The consumer of insurance products is a policyholder who makes regular payments, called premiums, to cover themselves. The major lines of insurance business can be classified into life insurance, non-life insurance and health insurance. Non-life insurance is also termed P&C (Property and Casualty) Insurance. While insurers collect premiums, they invest these funds in relatively safer assets such as corporate bonds.

Risks In the Insurance Industry & Solvency II…

While the business model in insurance is essentially inverted & more predictable as compared to banking, insurers have to grapple with the risk of ensuring that enough reserves have been set aside for payouts to policyholder claims.  It is very important for them to have a diversified investment portfolio as well as ensure that profitability does not suffer due to defaults on these investments. Thus firms need to ensure that their investments are diverse – both from a sector as well as from a geographical exposure standpoint.

Firms thus need to constantly calculate and monitor their liquidity positions & risks. Further, insurers are constantly entering into agreements with banks and reinsurance companies – which also exposes them to counterparty credit risk.

From a global standpoint, it is interesting that US-based insurance firms are largely regulated at the state level, while non-US firms are typically regulated at the national level. The point is well made that insurance firms have had a culture of running a range of departmentalized analytics as compared to the larger-scale analytics that the Banks described above need to run.

In the European Union, all 28 member countries (including the United Kingdom) are expected to adhere to Solvency II [2] from 2016. Solvency II replaced the long-standing Solvency I regime.

Whereas Solvency I calculates capital only for underwriting risks, Solvency II imposes guidelines for insurers to calculate investment as well as operational risks, and is in that sense quite similar to Basel II – discussed below.

Towards better Risk Management..The Three Pillars of Solvency II

There are three pillars to Solvency II [2].

  • Pillar 1 sets out quantitative rules and is concerned with the calculation of capital requirements and the types of capital that are eligible.
  • Pillar 2 is concerned with the requirements for the overall insurer supervisory review process &  governance.
  • Pillar 3 focuses on disclosure and transparency requirements.

The three pillars are therefore analogous to the three pillars of Basel II.
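
To make Pillar 1 more concrete, the sketch below shows the correlation-based aggregation that the Solvency II standard formula uses to roll stand-alone risk-module capital charges up into a single figure. The module names mirror the standard formula, but the capital amounts and the correlation matrix here are illustrative assumptions, not the official calibration prescribed by the regulation.

```python
import numpy as np

# Illustrative stand-alone capital charges per risk module (in millions).
# These figures are invented for demonstration; real values come out of the
# insurer's own Pillar 1 calculations.
modules = ["market", "counterparty_default", "life", "health", "non_life"]
scr = np.array([120.0, 30.0, 80.0, 25.0, 60.0])

# Illustrative correlation matrix between the risk modules above (NOT the
# official Solvency II calibration, which is prescribed by the regulation).
corr = np.array([
    [1.00, 0.25, 0.25, 0.25, 0.25],
    [0.25, 1.00, 0.25, 0.25, 0.50],
    [0.25, 0.25, 1.00, 0.25, 0.00],
    [0.25, 0.25, 0.25, 1.00, 0.00],
    [0.25, 0.50, 0.00, 0.00, 1.00],
])

# Basic SCR = sqrt( sum_ij Corr[i,j] * SCR_i * SCR_j ). The gap between the
# simple sum of charges and this figure is the diversification benefit.
basic_scr = float(np.sqrt(scr @ corr @ scr))
diversification_benefit = scr.sum() - basic_scr

print(f"Sum of stand-alone charges: {scr.sum():.1f}")
print(f"Aggregated Basic SCR:       {basic_scr:.1f}")
print(f"Diversification benefit:    {diversification_benefit:.1f}")
```

The same pattern (aggregate with a prescribed correlation matrix, then report the diversification benefit) repeats at the sub-module level of the standard formula.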

Why Bad Data Practices will mean Poor Risk Management & higher Capital Requirements under Solvency II..

While a detailed discussion of Solvency II will follow in a later post, it imposes new data aggregation, governance and measurement criteria on insurers –

  1. The need to identify, measure and offset risks across the enterprise and often in realtime
  2. Better governance of risks across not just historical data but also fresh data
  3. Running simulations that take in a wider scope of measures as opposed to a narrow spectrum of risks
  4. Timely and accurate Data Reporting

The same issues that hobble banks in the Data Landscape are sadly to be found in insurance as well.

The key challenges with current architectures –

  1. A high degree of data is duplicated from system to system, leading to multiple inconsistencies at the summary as well as transaction levels. Because different groups perform different risk reporting functions (e.g. Credit and Market Risk), the feeds, the ingestion and the calculators end up being duplicated as well.
  2. Traditional Risk algorithms cannot scale with this explosion of data, nor with the heterogeneity inherent in reporting across the multiple kinds of risk needed for Solvency II. E.g. certain kinds of Credit Risk need access to years of historical data to assess the probability of a counterparty defaulting & to obtain a statistical measure of the same. All of these analytics are highly computationally intensive.
  3. Risk Model and Analytic development needs to be standardized to reflect realities post Solvency II. Solvency II also implies that, from an analytics standpoint, a large number of scenarios need to be run on a large volume of data (a simple simulation sketch follows this list). Most Insurers will need to standardize their analytic libraries across their various LOBs. If insurers do not look to move to an optimized data architecture, they will incur tens of millions of dollars in additional hardware spend.
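
To illustrate the kind of wider simulation called out above (and in the Solvency II criteria earlier), here is a toy Monte Carlo that asks a Solvency-style question: how often do claims plus investment losses exceed the reserves set aside, and how much extra capital would a 99.5% (1-in-200) confidence level require? Every distribution and parameter below is an invented assumption; a real internal model would be calibrated to the insurer's own book and to many more risk drivers.

```python
import numpy as np

rng = np.random.default_rng(42)

n_scenarios = 1_000_000       # large scenario counts are what make this compute-heavy
reserves = 950.0              # assumed reserves set aside (arbitrary units)

# Toy risk drivers (all parameters are illustrative assumptions):
# aggregate claims of roughly 800, plus the return on a 1,000 investment portfolio.
claims = rng.lognormal(mean=np.log(800.0), sigma=0.15, size=n_scenarios)
investment_pnl = 1_000.0 * rng.normal(loc=0.02, scale=0.06, size=n_scenarios)

# A scenario "breaches" when claims net of investment gains exceed reserves.
net_outgo = claims - investment_pnl
breach_probability = np.mean(net_outgo > reserves)

# Additional capital needed to cover net outgo at a 99.5% confidence level
# (the Solvency II-style 1-in-200 standard), over and above current reserves.
required_at_99_5 = np.quantile(net_outgo, 0.995) - reserves

print(f"Probability reserves are breached:  {breach_probability:.3%}")
print(f"Additional capital for 99.5% cover: {required_at_99_5:.1f}")
```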

Summary

We have briefly covered the origins of regulatory risk management in both banking and insurance. Though the respective business models vary across the two verticals, there is a good degree of harmonization in the regulatory progression. The question is whether insurers can learn from the bumpy experiences of their banking counterparts in the areas of risk data aggregation and measurement.

References..

[1] https://en.wikipedia.org/wiki/Financial_crisis_of_2007%E2%80%9308

[2] https://en.wikipedia.org/wiki/Solvency_II_Directive_2009

A POV on the FRTB (Fundamental Review of the Trading Book)…

Regulatory Risk Management evolves…

The Basel Committee on Banking Supervision was put in place to provide supranational supervision and ensure the stability of the financial system. The Basel Accords are the frameworks that essentially govern the risk-taking actions of a bank; to that end, they introduce minimum regulatory capital standards that banks must adhere to. The Bank for International Settlements (BIS), established in 1930, is the world's oldest international financial consortium, with 60+ member central banks representing countries that together make up about 95% of world GDP. The BIS stewards and maintains the Basel standards in conjunction with member banks.

The goal of the Basel Committee and Financial Stability Board (FSB) guidelines is to strengthen the regulation, supervision, risk management and governance of the banking sector. These have taken on an increased focus to ensure that a repeat of the 2008 financial crisis never comes to pass again. Basel III (building upon Basel I and Basel II) also sets new criteria for financial transparency and disclosure by banking institutions.

Basel III – the latest prominent version of the Basel standards, published in 2012 and named for the town of Basel in Switzerland where the committee meets – prescribes enhanced measures for capital & liquidity adequacy and was developed by the Basel Committee on Banking Supervision with voluntary worldwide applicability. Basel III covers credit, market and operational risks as well as liquidity risks. Relatedly, the BCBS 239 guidelines on risk data aggregation do not just apply to the G-SIBs (Globally Systemically Important Banks) but also to the D-SIBs (Domestic Systemically Important Banks). Any financial institution deemed “too big to fail” needs to work with the regulators to develop a “set of supervisory expectations” that guide risk data aggregation and reporting.

Basel III & other Risk Management topics were covered in these previous posts – http://www.vamsitalkstech.com/?p=191 and http://www.vamsitalkstech.com/?p=667

Enter the FRTB (Fundamental Review of the Trading Book)…

In May 2012, the Basel Committee on Banking Supervision (BCBS) issued a consultative document with the intention of revising the way capital is calculated for the trading book. These guidelines, which can be found here in their final form [1], were repeatedly refined based on comments from various stakeholders & quantitative studies. In January 2016, the final version of the paper was released. These guidelines are now termed the Fundamental Review of the Trading Book (FRTB) or, unofficially, as some industry watchers have it, Basel IV.

What is new with the FRTB…

The main changes the BCBS has made with the FRTB are – 

  1. Changed Measure of Market Risk – The FRTB proposes a fundamental change to the measure of market risk. Market risk will now be calculated and reported via Expected Shortfall (ES) as the new standard measure, as opposed to the venerated (& long-standing) Value at Risk (VaR). In place of VaR at a 99% confidence level, ES at a 97.5% confidence level is proposed. For normal distributions the two metrics are broadly similar, but ES is much better at measuring the long tail – a recognition that in times of extreme economic stress there is a tendency for multiple asset classes to move in unison. Consequently, capital requirements under the ES method are anticipated to be much higher (a short calculation sketch follows this list).
  2. Model Creation & Approval – The FRTB also changes how models are approved & governed. Banks that want to use the IMA (Internal Models Approach) need to pass a set of rigorous tests or be forced to use the Standardised Approach (SA) for capital calculations; the fear is that the SA will increase capital requirements. The old IMA approach has been revised and made more rigorous in a way that enables supervisors to remove internal-modeling permission for individual trading desks. It also enforces more consistent identification of material risk factors across banks, and constraints on hedging and diversification. All of this is now done at the desk level instead of the entity level: the FRTB moves the responsibility of demonstrating compliant models, significant backtesting & P&L attribution to the desk.
  3. Boundaries between the Regulatory Books – The FRTB also assigns explicit boundaries between the trading book (the instruments the bank intends to trade) and the banking book (the instruments held to maturity), and banks now have to contend with stringent rules for internal transfers between the two. The regulatory motivation is to eliminate a bank's ability to arbitrarily designate individual positions as belonging to either book; given the different capital treatment of each, there is a feeling that banks were resorting to capital arbitrage with the goal of minimizing regulatory capital reserves. The FRTB also introduces more stringent reporting and data governance requirements for both books which, in conjunction with the well-defined boundary between them, should lead to a much better regulatory framework & also a re-evaluation of the structure of trading desks.
  4. Increased Data Sufficiency and Quality – The FRTB also introduces Non-Modellable Risk Factors (NMRF). Risk factors are non-modellable if the availability and sufficiency of the underlying data are an issue. With the NMRF, banks now face increased data sufficiency and quality requirements for the data that goes into the model itself. This is a key point, the ramifications of which we will discuss in the next section.
  5. A revised standardised approach – The FRTB also upgrades the standardised approach with a new Sensitivities-Based Approach (SBA), which is more sensitive to various risk factors across different asset classes than the Basel II SA. Regulators now prescribe the sensitivities in the data, and approvals are granted at the desk level rather than the entity level. The revised SA should provide a consistent way to measure risk across geographies and regions, giving regulators a better way to compare and aggregate systemic risk. The sensitivities-based approach should also allow banks to share a common infrastructure between the IMA and the SA. There is a set of buckets and risk factors prescribed by the regulator to which instruments can then be mapped.
  6. Models must be seeded with real and live transaction data – Fresh & current transactions will now need to enter the calculation of capital requirements as of the date on which they were conducted. Moreover, though reporting will take place at regular intervals, banks are now expected to manage market risks on a continuous, almost daily, basis.
  7. Time Horizons for Calculation – There are also enhanced requirements for data granularity depending on the kind of asset. The FRTB replaces the generic 10-day time horizon for market variables in Basel II with time periods based on the liquidity of these assets. It proposes five different liquidity horizons – 10, 20, 60, 120 and 250 days.
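
To make the switch in item 1 concrete, the snippet below computes a historical-simulation VaR at 99% and Expected Shortfall at 97.5% from the same P&L series. The P&L data is randomly generated with a deliberately fat-tailed distribution purely for illustration; in practice the inputs would be desk-level P&L vectors over the prescribed stress periods and liquidity horizons.

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative daily P&L series (fat-tailed via Student's t) standing in for
# a desk's historical-simulation P&L vector.
pnl = 1_000_000 * 0.01 * rng.standard_t(df=4, size=2_500)

def value_at_risk(pnl, confidence):
    """Historical-simulation VaR: the loss not exceeded with the given confidence."""
    return -np.quantile(pnl, 1.0 - confidence)

def expected_shortfall(pnl, confidence):
    """Average loss in the tail beyond the quantile at the given confidence."""
    threshold = np.quantile(pnl, 1.0 - confidence)
    return -pnl[pnl <= threshold].mean()

var_99 = value_at_risk(pnl, 0.99)
es_975 = expected_shortfall(pnl, 0.975)

# For a normal distribution the two figures are close; fat tails push ES above VaR,
# which is exactly the behaviour the FRTB wants the capital measure to capture.
print(f"VaR @ 99.0%: {var_99:,.0f}")
print(f"ES  @ 97.5%: {es_975:,.0f}")
```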

FRTB_Horizons

                                 Illustration: FRTB designated horizons for market variables (src – [1])

To Sum Up the FRTB… 

The FRTB rules are now clear and they will have a profound effect on how market risk exposures are calculated. The FRTB clearly calls out which instruments belong in the trading book versus the banking book. The switch from VaR at a 99% confidence level to Expected Shortfall (ES) at 97.5% should lead to increased reserve requirements. Furthermore, the ES calculations will take the liquidity of the underlying instruments into account, using a historical simulation approach with horizons ranging from 10 days to 250 days of stressed market conditions. Banks that use a pure IMA approach will now have to run the SA alongside the IMA.

The FRTB compels Banks to create unified teams from various departments – especially Risk, Finance, the Front Office (where trading desks sit) and Technology to address all of the above significant challenges of the regulation.

From a technology capabilities standpoint, the FRTB presents banks with a data volume, velocity and analysis challenge. Let us now examine the technology ramifications.

Technology Ramifications around the FRTB… 

The FRTB rules herald a clear shift in how IT architectures work across the Risk area and the Back office in general.

  1. The FRTB calls for a single source of data that pulls data across silos of the front office, trade data repositories, a range of BORT (Book of Record Transaction) systems etc. With the FRTB, source data needs to be centralized and available in one location where every application it feeds can trust its quality.
  2. With both the IMA and the SBA under the FRTB, many more detailed & granular data inputs (across desks & departments) need to be fed into the ES (Expected Shortfall) calculations from varying asset classes (Equity, Fixed Income, Forex, Commodities etc.) across multiple scenarios. The calculator frameworks developed or enhanced for the FRTB will need ready & easy access to realtime data feeds in addition to historical data. At the firm level, the data requirements and the calculation complexity are even higher, as the calculation needs to include the entire position book.

  3. The various time horizons called out also increase the need to run a full spectrum of analytics across many buckets. The analytics themselves will be more complex than before, with multiple teams working on these areas, which calls for standardization of the calculations themselves across the firm.

  4. Banks will have to also provide complete audit trails both for the data and the processes that worked on the data to provide these risk exposures. Data lineage, audit and tagging will be critical.

  5. The number of runs required for regulatory risk exposure calculations will go up dramatically under the new regime. The FRTB requires that each risk class be calculated separately from the whole set. Couple this with the increased calculation windows discussed in #3 above, and more compute processing power and vectorization are needed (see the vectorized sketch after this list).

  6. The FRTB also implies that, from an analytics standpoint, a large number of scenarios need to be run on a large volume of data. Most Banks will need to standardize their analytic libraries across the house. If Banks do not look to move to a Big Data Architecture, they will incur tens of millions of dollars in hardware spend.
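
Reading points 5 and 6 together: the same standardized calculator should run, vectorized, across every desk, risk class and liquidity bucket rather than being re-implemented team by team. The sketch below is a minimal illustration of that idea in NumPy; the desk names, scenario counts and P&L inputs are invented, and the square-root-of-time scaling at the end is a simplification for illustration, not the FRTB's actual liquidity-horizon cascade.

```python
import numpy as np

rng = np.random.default_rng(11)

desks = ["rates", "fx", "equities", "credit", "commodities"]
n_scenarios = 5_000                            # historical / stressed scenarios per desk
horizons = np.array([10, 20, 60, 120, 250])    # FRTB liquidity horizons in days

# Simulated 10-day base P&L per desk and scenario (illustrative only).
pnl = rng.normal(loc=0.0, scale=1.0e6, size=(len(desks), n_scenarios))

def expected_shortfall(pnl_matrix, confidence=0.975):
    """Vectorized ES: one pass over a (desks x scenarios) matrix, no per-desk loops."""
    k = int((1.0 - confidence) * pnl_matrix.shape[1])
    worst = np.partition(pnl_matrix, k, axis=1)[:, :k]   # k worst scenarios per desk
    return -worst.mean(axis=1)

base_es = expected_shortfall(pnl)                        # shape: (n_desks,)

# Fan the single base run out across the liquidity horizons (simplified scaling).
scaled = base_es[:, None] * np.sqrt(horizons / 10.0)

for desk, row in zip(desks, scaled):
    print(f"{desk:12s} ES per horizon (mm): {np.round(row / 1e6, 2)}")
```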

The FRTB is the most pressing in a long list of Data Challenges facing Banks… 

The FRTB is yet another regulatory mandate that lays bare the data challenges facing every Bank. Current Regulatory Risk architectures are based on traditional relational database (RDBMS) architectures with tens of feeds from Core Banking Systems, Loan Data, Book of Record Transaction Systems (BORTS) like Trade & Position Data (e.g. Equities, Fixed Income, Forex, Commodities, Options etc.), Wire Data, Payment Data, Transaction Data etc.

These data feeds are then tactically placed in memory caches or in enterprise data warehouses (EDW). Once the data has been extracted, it is transformed using a series of batch jobs which prepare it for the Calculator Frameworks that run the risk models on it.

All of the above applications need access to medium to large amounts of data at the individual transaction level. The Corporate Finance function within the Bank then makes end-of-day adjustments to reconcile all of this data, and these adjustments need to be cascaded back to the source systems, down to the individual transaction or transaction-class level.

These applications are then typically deployed on clusters of bare metal servers that are not particularly suited to portability, automated provisioning, patching & management – in short, nothing that can automatically be moved over at a moment's notice. These applications also run on legacy proprietary technology platforms that do not lend themselves to a flexible, DevOps style of development.

Finally, there is always a need for statistical frameworks to make adjustments to customer transactions that somehow need to get reflected back in the source systems. All of these frameworks need access to, and the ability to work with, terabytes (TBs) of data.

Each of the above-mentioned risk work streams has corresponding data sets, schemas & event flows that it needs to work with, with different temporal needs for reporting: some need to be run a few times a day (e.g. Traded Credit Risk), some daily (e.g. Market Risk) and some at the end of the week (e.g. Enterprise Credit Risk).

One of the chief areas of concern is that the FRTB may require a complete rewrite of analytics libraries. Under the FRTB, front office libraries will need to do Enterprise Risk – a large number of analytics on a vast amount of data. Front office models cannot make all the assumptions that enterprise risk can to price a portfolio accurately; front office systems run a limited number of scenarios, trading accuracy for timeliness – as opposed to enterprise risk.

Most banks have stringent model vetting processes in place, and all the rewritten analytic assets will need to pass through them. Every aspect of the math behind the analytics needs to go through this rigorous process. All of this will add to compliance costs, as vetting typically costs a multiple of the rewrite itself. The FRTB has put in place stringent model validation standards along with hypothetical portfolios to benchmark against.

The FRTB also requires data lineage and audit capabilities for the data. Banks will need to establish visual representation of the overall process as data flows from the BORT systems to the reporting application. All data assets have to be catalogued and a thorough metadata management process instituted.

What Must Bank IT Do… 

Given all of the above data complexity and the need to adopt agile analytical methods – what is the first step that enterprises must take?

There is a need for Banks to build a unified data architecture – one which can serve as a cross organizational repository of all desk level, department level and firm level data.

The Data Lake is an overarching data architecture pattern, so let's define the term first. A data lake is two things – a data storage repository (modest or massive) and a data processing engine. A data lake provides “massive storage for any kind of data, enormous processing power and the ability to handle virtually limitless concurrent tasks or jobs“. Data Lakes are created to ingest, transform, process, analyze & finally archive large amounts of any kind of data – structured, semi-structured and unstructured.

The Data Lake is not just a data storage layer but one that allows different users (traders, risk managers, compliance etc.) to plug in calculators that work on data spanning intraday activity as well as data across years. Calculators can then be designed to work on this data across multiple runs to calculate Risk Weighted Assets (RWAs) across multiple calibration windows.

The illustration below depicts the goal: a cross-company data lake containing all asset data and the compute applied to that data.

RDA_Vamsi

                              Illustration – Data Lake Architecture for FRTB Calculations

1) Data Ingestion: This encompasses creation of the L1 loaders to take in Trade, Position, Market, Loan, Securities Master, Netting and Wire Transfer data etc. across trading desks (a combined sketch of steps 1 to 3 follows after step 6). Developing the ingestion portion will be the first step to realizing the overall architecture, as timely data ingestion is a large part of the problem at most institutions. Part of this process includes a) ingesting data from the highest-priority systems first and b) applying the correct governance rules to the data. The goal is to create these loaders for specific versions of the different source systems (e.g. Calypso 9.x) and to maintain them as part of the platform moving forward. The first step is to understand the range of Book of Record transaction systems (lending, payments and transactions) and the feeds they send out. The goal would then be to map those feeds, via the loaders, to a release of an enterprise-grade open source Big Data platform, e.g. HDP (Hortonworks Data Platform), so they can be maintained going forward.

2) Data Governance: These are the L2 loaders that apply the rules to the critical fields for Risk and Compliance. The goal here is to look for gaps in the data and any obvious quality problems involving range or table driven data. The purpose is to facilitate data governance reporting.

3) Entity Identification: This step is the establishment and adoption of a lightweight entity ID service. The service will consist of entity assignment and batch reconciliation.

4) Developing L3 loaders: This phase will involve defining the transformation rules that are required in each risk, finance and compliance area to prep the data for their specific processing.

5) Analytic Definition: Running the analytics that are to be used for FRTB.

6) Report Definition: Defining the reports that are to be issued for each risk and compliance area.
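
A minimal PySpark sketch of steps 1 to 3 above is shown below: an L1 loader that lands a hypothetical trade feed in the lake, L2 governance checks that flag obvious quality gaps in critical fields, and a lightweight, deterministic entity ID derived from counterparty attributes. The file paths, column names and rules are all assumptions for illustration; real loaders would be generated per source system and version (e.g. a specific Calypso release) and reconciled in batch against the golden-source entity master.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("l1-l2-loader-sketch").getOrCreate()

# --- L1 loader: ingest a raw trade feed into the lake (paths and columns are hypothetical).
raw = (spark.read
       .option("header", "true")
       .option("inferSchema", "true")
       .csv("/landing/trading/desk_feeds/2016-06-30/*.csv"))

# --- L2 governance rules: flag gaps and obvious range problems in critical fields.
checked = (raw
           .withColumn("dq_missing_notional", F.col("notional").isNull())
           .withColumn("dq_future_trade_date", F.col("trade_date") > F.current_date())
           .withColumn("dq_passed",
                       ~F.col("dq_missing_notional") & ~F.col("dq_future_trade_date")))

# --- Entity identification: a deterministic ID from normalized counterparty attributes.
with_entity = checked.withColumn(
    "entity_id",
    F.sha2(F.concat_ws("|",
                       F.upper(F.trim(F.col("counterparty_name"))),
                       F.upper(F.trim(F.col("counterparty_country")))), 256))

# Persist curated data partitioned by business date for the downstream L3 transformations.
(with_entity.write
 .mode("append")
 .partitionBy("trade_date")
 .parquet("/lake/curated/trades"))

# A simple governance report: how many records failed which rule.
with_entity.groupBy("dq_missing_notional", "dq_future_trade_date").count().show()
```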

References..

[1] https://www.bis.org/bcbs/publ/d352.pdf

A Reference Architecture for The Open Banking Standard..

This is the second in a series of four posts on the Open Banking Standard (OBS) in the UK. This post briefly looks at the strategic drivers for banks and proposes an architectural style or approach that incumbents can use to drive change in their platforms and achieve OBS compliance. We will examine the overall data layer implications in the next post. The final post will look at key strategic levers and possible business models that the standard could help banks drive innovation towards.

Introduction…

The Open Banking Standard will steward the development of layers of guidelines (API interoperability standards, data security & privacy, and governance) which primarily deal with data sharing in banking. The belief is that this regulation will ultimately spur open competition and unlock innovation. For years, the industry has grappled with fundamental platform issues that are native to every domain of banking: systems siloed by function and platforms that are inflexible in responding to rapidly changing market conditions & consumer tastes. Bank IT is perceived by the business to be glacially slow in responding to its needs.

The Open Banking Standard (OBS) represents a vast opportunity for banking organizations in multiple ways. First, Bank IT can use the regulatory mandate to gradually re-architect hitherto inflexible and siloed business systems. Second, doing so will enable Banks to significantly monetize their vast data resources in several key business areas.

The status quo will need to change with the introduction of the Open Banking Standard, and banks that do not change will not be able to derive and sustain a competitive advantage. PSD2 (the second Payment Services Directive) compliance, which will be mandated by the EU, is one of the first layers in the OBS. Further layers will include API standards definitions for business processes (e.g. View Account, Transfer Funds, Chargebacks, Dispute Handling etc.).

The OBWG (Open Banking Working Group) standards include the following key constituencies & their requirements [1] – 

1. Customers: defined as account holders & businesses who agree to share their data, and any publishers who share open datasets

2. Data attribute providers: defined as banks & other financial services providers whose customers produce data as part of daily banking activities 

3. Third parties: Interested developers, financial services startups aka FinTechs, and any organisations (e.g  Retail Merchants) who can leverage the data to provide new views & products  

It naturally follows from the above, that the key technical requirements of the framework will include:

1. A set of Data elements, API definitions and Security Standards to provide both data security and a set of access restrictions 

2. A Governance model, a body which will develop & oversee the standards 

3. Developer resources, which will enable third parties to discover, educate and experiment.

The Four Strategic Drivers in the Open Bank Standard …

Clearly, how intelligently a firm harnesses technology (in pursuit of OBS compliance goals) will determine its overall competitive advantage. This is important to note since a range of players across the value chain (the Third Parties designated by the standard) can now obtain seamless access to a variety of data. Once obtained, the data can be reimagined in manifold ways – for example, to help consumers make better personal financial decisions, at the expense of the Banks that own the data. FinTechs have generally been able to make more productive use of client data: they provide clients with intuitive access to cross-asset data, tailor algorithms based on behavioral characteristics, and deliver a more engaging and unified experience.

So, these are the four strategic business goals that OBS-compliant architectures need to address in the long run –

  1. Digitize The Customer Journey – Bank clients who use services like Uber, Zillow and Amazon in their daily lives are now very vocal in demanding a seamless experience across all of their banking services using digital channels. The vast majority of Bank applications still lag the innovation cycle, are archaic & are separately managed, so the client is faced with distinct user experiences ranging from client onboarding to servicing to transaction management. Such applications instead need to provide anticipatory or predictive capabilities at scale while understanding each customer's lifestyle, financial needs & behavioral preferences.
  2. Provide Improved Access to Personal Financial Management & Improved Lending Processes – Provide consumers with a single aggregated picture of all their accounts, and improve lending systems by providing more efficient access to loans that incorporates a large amount of contextual data in the process.
  3. Automate Back & Mid Office Processes Across Lending, Risk, Compliance & Fraud – The need to forge a closer banker/client experience is not just driving demand around data silos & streams themselves but also forcing players to move away from paper-based models to a seamless, digital & highly automated model, reworking a ton of existing back & front office processes. These processes range from risk data aggregation and supranational compliance (AML, KYC, CRS & FATCA) to financial reporting across a range of global regions & Cyber Security. Can the data architectures & the IT systems that leverage them be created in such a way that they permit agility while constantly learning & optimizing their behaviors across national regulations, InfoSec & compliance requirements? Can every piece of actionable data be aggregated, secured, transformed and reported on in such a way that its quality across the entire lifecycle is guaranteed?
  4. Tune Existing Business Models Based on Client Tastes and Feedback – While the initial build-out of the core architecture may seem to focus on digitizing interactions and exposing data via APIs, what follows fast is strong predictive modeling capability at large scale, where systems constantly learn and optimize their interactions, responsiveness & services based on client needs & preferences.

The Key System Architecture Tenets…

The design and architecture of a solution as large and complex as a reference architecture for Open Banking is a multidimensional challenge and it will vary at every institution based on their existing investments, vendor products & overall culture. 

The OBS calls out the following areas of data as being in scope – customer transaction data, customer reference data, aggregated data and sensitive commercial data. A thorough review of the OBWG standard leads one to suggest the logical reference architecture called out below.

Based on all the above, the Open Bank Architecture shall – 

  • Support an API based model to invoke any business process or data element, subject to appropriate security, by a third party, e.g. a client, an advisor or a business partner
  • Support the development and deployment of an application that encourages a DevOps based approach
  • Support the easy creation of scalable business processes (e.g. client on boarding, KYC, Payment dispute check etc) that natively emit business metrics from the time they’re instantiated to throughout their lifecycle
  • Support automated application delivery, configuration management & deployment
  • Support a high degree of data agility and data intelligence. The end goal being that every customer click, discussion & preference shall drive an analytics-infused interaction between the Bank and the client
  • Support algorithmic capabilities that enable the creation of new services like automated (or Robo) advisors
  • Support a very high degree of scale across many numbers of users, interactions & omni-channel transactions while working across global infrastructure
  • Shall support deployment across cost efficient platforms like a public or private cloud. In short, the design of the application shall not constrain the available deployment options – which may vary because of cost considerations. The infrastructure options supported shall range from virtual machines to docker based containers – whether running on a public cloud, private cloud or in a hybrid cloud
  • Support small, incremental changes to business services & data elements based on changing business requirements 
  • Support standardization across application stacks, toolsets for development & data technology to a high degree
  • Shall support the creation of a user interface that is highly visual and feature rich from a content standpoint when accessed across any device

 

Reference Architecture…

Now that we have covered the business bases, what foundational technology choices satisfy the above? Let's examine that first at a high level and then in more detail.

Given the above list of requirements – the application architecture that is a “best fit” is shown below.

Open Banking Architecture Diagram

                   Illustration – Candidate Reference Architecture for the Open Bank Standard

Let's examine each of the tiers, starting from the lowest –

Infrastructure Layer…

Cloud Computing across its three main delivery models (IaaS, PaaS & SaaS) is largely a mainstream endeavor in financial services and no longer an esoteric adventure only for brave innovators. A range of institutions are either deploying or testing cloud-based solutions that span the full range of cloud delivery models. These capabilities include –

IaaS (infrastructure-as-a-service) to provision compute, network & storage, PaaS (platform-as-a-service) to develop applications, and the exposure of business services as SaaS (software-as-a-service) via APIs.

Choosing cloud-based infrastructure – whether a secure public cloud (Amazon AWS or Microsoft Azure), an internal private cloud (OpenStack etc.) or even a hybrid approach – is a safe and sound bet for these applications. Business innovation and transformation are best enabled by a cloud-based infrastructure, whether public or private.

 

Data Layer…

While banking data tiers are usually composed of different technologies like RDBMS, EDW (Enterprise Data Warehouses), CMS (Content Management Systems) & Big Data platforms, my recommendation for the OBWG target state is a data tier largely dominated by a Big Data platform powered by Hadoop technology. The vast majority of initial applications recommended by the OBWG call for predictive analytics to create tailored customer journeys, and Big Data is a natural fit as it is fast emerging as the platform of choice for analytic applications.

Financial services firms specifically deal with manifold data types ranging from Customer Account data, Transaction Data, Wire Data, Trade Data, Customer Relationship Management (CRM), General Ledger and other systems supporting core banking functions. When one factors in social media feeds, mobile clients & other non traditional data types, the challenge is not just one of data volumes but also variety and the need to draw conclusions from fast moving data streams by commingling them with years of historical data.

The reasons for choosing Big Data as the dominant technology in the data tier are as below –

  1. Hadoop’s ability to ingest and work with all the above kinds of data & more (using the schema on read method) has been proven at massive scale. Operational data stores are being built on Hadoop at a fraction of the cost & effort involved with older types of data technology (RDBMS & EDW)
  2. The ability to perform multiple types of processing on a given data set. This processing varies across batch, streaming, in memory and realtime which greatly opens up the ability to create, test & deploy closed loop analytics quicker than ever before
  3. The DAS (Direct Attached Storage) model that Hadoop provides fits neatly with the horizontal scale-out model that the services, UX and business process tiers leverage. This keeps Capital Expenditure to a bare minimum.
  4. The ability to retain data for long periods of time, thus providing banking applications with predictive models that can reason on historical data
  5. Hadoop provides the ability to run massive volumes of models in a very short amount of time, which helps with modeling automation
  6. Due to its parallel processing nature, Hadoop can run calculations (pricing, risk, portfolio, reporting etc.) in minutes versus the hours it took using older technology
  7. Hadoop can work alongside existing data investments and augment them with data ingestion & transformation, leaving EDWs to perform the complex analytics that they excel at – a huge bonus.

Services Layer…

The overall goal of the OBWG services tier is to help design, develop, modify and deploy business components in such a way that overall application delivery follows a continuous delivery/deployment (CI/CD) paradigm. Given that banking platforms are some of the most complex financial applications out there, this also has the ancillary benefit of letting different teams – digital channels, client onboarding, bill pay, transaction management & mid/back office – develop and update their components largely independently of other teams. Thus a large monolithic enterprise platform is decomposed into its constituent services, which are loosely coupled and each focused on one independent & autonomous business task. The word 'task' here refers to a business capability that has tangible business value.

A highly scalable, open source & industry-leading platform-as-a-service (PaaS) is recommended as the way of building out and hosting banking business applications at this layer. Microservices have moved from the webscale world to fast becoming the standard for building mission-critical applications in many industries. Leveraging a PaaS such as OpenShift provides a way to help cut the “technical debt” that has plagued both developers and IT Ops. OpenShift provides the right level of abstraction to encapsulate microservices via its native support for Docker containers. This also has the concomitant advantage of standardizing application stacks and streamlining deployment pipelines, thus leading the charge to a DevOps style of building applications.

Further, I recommend that service designers take the approach that their microservices can be deployed in a SaaS application format going forward – which usually implies taking an API-based approach.

Now, the services tier has the following global responsibilities – 

  1. Promote a Microservices/SOA style of application development
  2. Support component endpoint invocation via standards based REST APIs (a minimal sketch follows this list)
  3. Promote a cloud, OS & development-language agnostic style of application development
  4. Promote Horizontal scaling and resilience
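
As an illustration of responsibilities 1 and 2 above, here is a minimal sketch of one such service endpoint using Flask. The resource path, payload fields and the choice of Flask itself are assumptions made for illustration; the OBWG has yet to fix the exact protocols and data formats, and a production service would add OAuth-based consent checks, entitlements and API versioning.

```python
from flask import Flask, jsonify, abort

app = Flask(__name__)

# Hypothetical in-memory store standing in for calls into core banking systems.
ACCOUNTS = {
    "acc-001": {"account_id": "acc-001", "currency": "GBP", "balance": 1250.75},
    "acc-002": {"account_id": "acc-002", "currency": "EUR", "balance": 310.00},
}

@app.route("/open-banking/v1/accounts/<account_id>", methods=["GET"])
def view_account(account_id):
    """A 'View Account' style business capability exposed as a REST resource."""
    account = ACCOUNTS.get(account_id)
    if account is None:
        abort(404)
    # In a real service the response shape would follow the OBWG data standard and
    # the caller's consent scope would be verified before any data is returned.
    return jsonify(account)

@app.route("/health", methods=["GET"])
def health():
    """Liveness probe so the hosting PaaS (e.g. OpenShift) can manage the container."""
    return jsonify({"status": "ok"})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```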

Predictive Analytics & Business Process Layer…

Though segments of the banking industry have historically been early adopters of analytics, the areas being targeted by the OBWG – retail lines of business & payments – have generally been laggards. However, the large datasets that are prevalent in the Open Banking Standard world, as well as the need to drive customer interaction & journeys, risk & compliance reporting, fraud detection etc., call for a strategic relook at this space.

Techniques like Machine Learning, Data Science & AI feed into core business processes, thereby improving them. For instance, Machine Learning techniques support the creation of self-improving algorithms which get better with data, thus making more accurate business predictions. The overarching goal of the analytics tier should therefore be to support a higher degree of automation by working with the business process and services tiers. Predictive analytics can be leveraged across the value chain of the Open Banking Standard – ranging from new customer acquisition to the customer journey to the back office. More recently, these techniques have found increased adoption for enterprise concerns ranging from cyber security to telemetry data processing.
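
To ground the claim about self-improving algorithms, here is a minimal, hedged sketch: a scikit-learn logistic regression trained on synthetic transaction features to score potentially fraudulent payments. The features, labels and parameters are entirely synthetic; a production model would be trained on the bank's own labelled history and retrained as new outcomes arrive, which is what "getting better with data" means in practice.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)

# Synthetic transaction features: amount, hour of day, distance from usual location (km).
n = 20_000
X = np.column_stack([
    rng.lognormal(mean=3.0, sigma=1.0, size=n),
    rng.integers(0, 24, size=n),
    rng.exponential(scale=20.0, size=n),
])

# Synthetic labels: fraud is made more likely for large amounts, odd hours, long distances.
logit = 0.002 * X[:, 0] + 0.4 * (X[:, 1] < 5) + 0.02 * X[:, 2] - 4.0
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Score held-out transactions; in a closed loop, confirmed outcomes feed the next retrain.
scores = model.predict_proba(X_test)[:, 1]
print(f"Hold-out ROC AUC: {roc_auc_score(y_test, scores):.3f}")
```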

Another area is improved automation via light weight business process management (BPM). Though most large banks do have pockets of BPM implementations that are adding or beginning to add significant business value, an enterprise-wide re-look at the core revenue-producing activities is called for, as is a deeper examination of driving competitive advantage. BPM now has evolved into more than just pure process management. Meanwhile, other disciplines have been added to BPM — which has now become an umbrella term. These include business rules management, event processing, and business resource planning.

Financial services firms in general are fertile ground for business process automation, since most of their various lines of business are simply collections of core and differentiated processes. Examples are private banking (with processes including onboarding customers, collecting deposits, conducting business via multiple channels, and compliance with regulatory mandates such as KYC and AML); investment banking (including straight-through processing, trading platforms, prime brokerage, and compliance with regulation); payment services; and portfolio management (including modeling portfolio positions and providing complete transparency across the end-to-end life cycle). The key takeaway is that driving automation can result not just in better business visibility and accountability on behalf of various actors; it can also drive revenue and contribute significantly to the bottom line.

A business process system should allow an IT analyst, customer or advisor to convey their business process by describing the steps that need to be executed in order to achieve the goal (and the order of those steps, typically using a flow chart). This greatly improves the visibility of business logic, resulting in higher-level and domain-specific representations (tailored to finance) that can be understood by business users and are easier for management to monitor. Again, leveraging a PaaS such as OpenShift in conjunction with an industry leading open source BPMS (Business Process Management System) such as JBoss BPMS provides an integrated BPM capability that can create cloud-ready and horizontally scalable business processes.
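To make the idea of expressing a process as ordered, monitorable steps concrete, here is a small plain-Python stand-in; this is not the JBoss BPMS API, and the client onboarding steps shown are hypothetical.

```python
# An illustrative sketch of describing a business process as discrete,
# monitorable steps - a toy stand-in for a real BPMS engine.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class ProcessStep:
    name: str
    action: Callable[[dict], dict]

@dataclass
class BusinessProcess:
    name: str
    steps: List[ProcessStep] = field(default_factory=list)

    def run(self, case: dict) -> dict:
        # Execute each step in order, logging progress so the flow is visible.
        for step in self.steps:
            print(f"[{self.name}] executing step: {step.name}")
            case = step.action(case)
        return case

onboarding = BusinessProcess("Client Onboarding", [
    ProcessStep("Collect KYC documents", lambda c: {**c, "kyc": "collected"}),
    ProcessStep("Screen against AML lists", lambda c: {**c, "aml": "clear"}),
    ProcessStep("Open account", lambda c: {**c, "account": "opened"}),
])

print(onboarding.run({"client": "ACME Ltd"}))
```

A real BPMS adds the flow-chart authoring, business rules, human task management and monitoring dashboards on top of this basic idea of ordered, observable steps.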

API & UX Layer…

The API & UX (User Experience) tier fronts humans – clients, business partners, regulators, internal management and other business users – across omnichannel touchpoints. A standards based API tier is provided for partner applications and other non-human actors to interact with the business services tier. Once the OBSWG defines the exact protocols, data standards & formats, this should be straightforward to implement.

The API/UX tier has the following global responsibilities  – 

  1. Provide a seamless experience across all channels (mobile, eBanking, tablet etc) in a way that is continuous and non-siloed. The implication is that clients should be able to begin a business transaction in channel A and continue it in channel B where it makes business sense.
  2. Understand client personas and integrate with the business & predictive analytic tier in such a way that the API is loosely yet logically integrated with the overall information architecture
  3. Provide advanced visualization (wireframes, process control, social media collaboration) and cross partner authentication & single sign on
  4. Both the API & UX shall also be designed in such a manner that their design, development & ongoing enhancement lend themselves to an Agile & DevOps methodology.

It can all come together…

In most existing Banking systems, siloed functions have led to brittle data architectures operating on custom built legacy applications. This problem is firstly compounded by inflexible core banking systems and secondly exacerbated by a gross lack of standardization in the application stacks underlying capabilities like customer journey, improved lending & fraud detection. These factors inhibit deployment flexibility across a range of platforms, leading to extremely high IT costs and technical debt. The consequence is that client facing applications cannot use data in a manner that constantly & positively impacts the client experience. There is clearly a need to provide an integrated digital experience across a global customer base, and then to offer more intelligent functions based on existing data assets. Current players do possess a huge first mover advantage: they offer highly established financial products across their large (and largely loyal & sticky) customer bases, wide networks of physical locations, and rich troves of data pertaining to customer accounts & demographic information. However, it is not enough to just possess the data. They must be able to drive change through legacy thinking and infrastructures as the entire industry struggles to adapt to a major new segment – the millennials – who increasingly use mobile devices and demand more contextual services as well as a seamless, highly analytics-driven & unified banking experience – akin to what they commonly experience via the internet at web properties like Facebook, Amazon, Google or Yahoo.

Summary

Technology platforms designed around the four key business needs will create immense operational efficiency, better business models, increased relevance and ultimately drive revenues. These will separate the visionaries & leaders from the laggards in the years to come. The Open Bank Standard will be a catalyst in this immense disruption.


The Open Banking Standard – The Five Major Implications for UK Banks..

“Banking as a service has long sat at the heart of our economy. In our digitally enabled world, the need to seamlessly and efficiently connect different economic agents who are buying and selling goods and services is critical. The Open Banking Standard is a framework for making banking data work better: for customers; for businesses; and for the economy as a whole.” – OBWG (Open Bank Working Group) co-chair and Barclays executive Matt Hammerstein

Introducing Open Banking Standards…

On a global basis, both the Financial Services and the Insurance industries are facing an unprecedented amount of change driven by factors like changing client preferences and the emergence of new technology—the Internet, mobility, social media, etc. These changes are immensely profound, especially with the arrival of the “FinTechs”—technology-driven applications that are upending long-standing business models across all sectors from retail banking to wealth management & capital markets. Complement this with members of a major new segment, Millennials, who increasingly use mobile devices, demand more contextual services and expect a seamless, unified banking experience—something akin to what they experience on web properties like Facebook, Amazon, Uber, Google or Yahoo. Web scale players are expanding their wallet share of client revenues by offering contextual products tailored to individual client profiles. Their savvy use of segmentation data and predictive analytics enables the delivery of bundles of tailored products across multiple delivery channels (web, mobile, call center banking, point of sale, ATM/kiosk etc.).

Supra national authorities and national governments in Europe have taken note of the need for erstwhile protected industries like Banking to stay competitive in this brave new world.

With the passage of the second revision of the groundbreaking Payment Services Directive (PSD-2), the European Parliament has adopted the legal foundation for the creation of an EU-wide single payments area (SEPA)[1]. While the goal of the PSD is to establish a set of modern, digital industry rules for all payment services in the European Union, it has significant ramifications for the financial services industry as it will surely impact current business models & foster new areas of competition. While PSD-2 has gotten the lion’s share of press interest, the UK government has quietly been working on an initiative to create a standard around allowing Banking organizations to share their customer & transactional data with certified third parties via an open API. The outgoing PM David Cameron’s government had in fact outlined these plans in the 2015 national budget.


The EU and the UK governments have recognized that in order for Europe to move toward the vision of one Digital Market, the current system of banking calls for change, and they foresee this change being driven by digital technology. This shakeup will happen via the increased competition that results as various financial services are unbundled by innovative developers. To that end, by 2019 all banks should make customer data – their true crown jewels – openly accessible via an open standards based API.

The Open Banking Standard API…

The U.K. has been working on an open standard for its financial system for nearly a year. The Open Bank Working Group (OBWG) was created to set standards for how banking data should be created and accessed openly. This initiative covers the following broad areas – Data Standards, API Standards & Security Standards – to protect consumers while spurring innovation via open competition.


Illustration: Components of the Open Banking Standard (ref – OBWG)

Under the Open Banking Standard – expected to be legal reality over the next 2-3 years, any banking customer or authorized 3rd party provider can leverage APIs to gain access to their data and transactions across a whole range of areas ranging from Retail Banking to Business Banking to Commercial Banking.

Open Standards can actually help banks by letting them source data from external providers. For instance, the Customer Journey problem has been an age-old issue in banking, and it has gotten exponentially more complicated over the last five years as the staggering rise of mobile technology and the Internet of Things (IoT) has vastly increased the number of enterprise touch points through which customers can discover & purchase new products/services. In an OmniChannel world, an increasing number of transactions are being conducted online; in verticals like Retail and Banking, online transactions approach an average of 40%. Adding to the problem, more and more consumers are posting product reviews and feedback online. Banks thus need to react in realtime to piece together the source of consumer dissatisfaction. Open Standards will increase the ability of banks to pull in data from external sources to enrich their limited view of customers.
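As a sketch of how a certified third party might consume such an open API once the protocols are finalized, consider the following; the token and account endpoint URLs, scopes and field names are purely hypothetical, not the published standard.

```python
# A minimal sketch of a third-party provider reading account data over a
# hypothetical Open Banking style REST API secured with OAuth2.
import requests

TOKEN_URL = "https://auth.examplebank.co.uk/oauth2/token"                 # hypothetical
ACCOUNTS_URL = "https://api.examplebank.co.uk/open-banking/v1/accounts"   # hypothetical

# Obtain an OAuth2 access token using credentials issued to the registered TPP.
token = requests.post(TOKEN_URL, data={
    "grant_type": "client_credentials",
    "client_id": "my-registered-tpp",
    "client_secret": "REPLACE_ME",
    "scope": "accounts",
}).json()["access_token"]

# Call the accounts resource with the bearer token.
resp = requests.get(ACCOUNTS_URL, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()
for account in resp.json().get("accounts", []):
    print(account.get("accountId"), account.get("currency"), account.get("balance"))
```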

The Implications of Open Bank Standard…

The five implications of Open Bank Project –

  1. Banks will be focused on building platforms that can drive ecosystems of applications around them. Banks have thus far been largely focused on delivering commodity financial services using well understood distribution strategies. Most global banks have armies of software developers, but their productivity around delivering innovation has been close to zero. Open APIs will primarily force more thinking around how banking products are delivered to the end consumer. The standards for this initiative are primarily open source in origin and widely accepted across the globe – REST, OAuth etc.
  2. However, it is not a zero-sum game; Banks can themselves benefit by building business models around monetizing their data assets as their distribution channels go global & cost structures change around Open Bank. To that end, existing Digital efforts should be brought in line with the Open Bank Standard. The best retail banks will not only seek to learn from, but sometimes partner with, emerging fintech players to integrate new digital solutions and deliver exceptional customer experience. To cooperate with and take advantage of fintechs, banks will require new partnering capabilities. To heighten their understanding of customers’ needs and to deliver products and services that customers truly value, banks will need new capabilities in data management and analytics. Using Open Bank APIs, developers across the world can create applications that offer new services (in conjunction with retailers, for example), aggregate financial information or even help in financial planning. Banks will have interesting choices to make between acting as a Data Producer, Consumer, Aggregator or even a Distributor based on specific business situations.
  3. Regulators will also benefit substantially by using Partner APIs to both access real time reports  & share data across a range of areas. The lack of realtime data access across a range of risk, compliance and cyber areas has been a long standing problem that can be solved by an open standards based API framework [2].  E.g.  Market/Credit/Basel Risk Based Reporting, AML watch list data and Trade Surveillance etc.
  4. Data Architectures are key to the Open Bank Standard – Currently most industry players are woeful at putting together a comprehensive Single View of their Customers (SVC). Due to operational data silos, each department possesses a siloed & limited view of the customer across multiple channels. These views are typically inconsistent, lack synchronization with other departments & miss a high amount of potential cross-sell and up-sell opportunities. Data lakes and realtime data processing techniques will be critical to meeting this immense regulatory requirement.
  5. Despite the promise, large gaps still remain in the Open Bank Project. Critical areas like project governance and Service Level Agreements (SLAs) for API users – in terms of uptime and quality of service – are still left unaddressed.

 Open Banking Standard will spur immense changes..

Even prior to the Open Banking Standard, Banks recognized the need to move to a predominantly online model by providing consumers with highly interactive, engaging and contextual experiences that span multiple channels—branch banking, eBanking, POS, ATM, etc. The business goals are engagement & increasing profitability per customer for both micro and macro customer populations, with the ultimate goal of increasing customer lifetime value (CLV). The Open Banking Standard brings technology to the fore as a strategic differentiator. Banks need to move to a fresh business, data and process approach as a way of staying competitive and relevant. Done right, the Open Bank Standard will help the leaders cement their market position.

REFERENCES…

[1] The Open Banking Standard –
https://theodi.org/open-banking-standard

[2] Big Data – Banking’s New Weapon Against Financial Crime – http://www.vamsitalkstech.com/?p=806

The Five Deadly Sins of Financial Services IT..

THE STATE OF GLOBAL FINANCIAL SERVICES IT ARCHITECTURE…

This blog has time & again discussed how Global, Domestic and Regional banks need to be innovative with their IT platform to constantly evolve their product offerings & services. This is imperative due to various business realities –  the increased competition by way of the FinTechs, web scale players delivering exciting services & sharply increasing regulatory compliance pressures. However, systems and software architecture has been a huge issue at nearly every large bank across the globe.

Regulation is also afoot in parts of the globe which will give non-traditional banks access to hitherto locked customer data, e.g. PSD-2 in the European Union. Further, banking licenses have been more easily granted to non-banks which are primarily technology pioneers, e.g. PayPal.

It’s 2016 and Banks are waking up to the fact that IT Architecture is a critical strategic differentiator. Players that have agile & efficient architecture platforms and practices can not only add new service offerings but are also able to experiment across a range of analytics-led offerings that create & support multi-channel capabilities. These digital services can now be found abundantly in areas ranging from Retail Banking, Capital Markets, Payments & Wealth Management, especially at the FinTechs.

So, How did we get here…

The Financial Services IT landscape – no matter which segment one picks across the spectrum (Capital Markets, Retail & Consumer Banking, Payment Networks & Cards, Asset Management etc) – is largely predicated on a few legacy anti-patterns. These anti-patterns have evolved over the years from a systems architecture, data architecture & middleware standpoint.

These anti-patterns have resulted in a mishmash of organically developed & shrink wrapped systems that do everything from running critical Core Banking Applications to Trade Lifecycle to Securities Settlement to Financial Reporting etc. Each of these systems operates in an application, workflow and data silo with its own view of the enterprise. These are all kept in sync largely via data replication & stove-piped process integration.

If this sounds too abstract, let us take an example &  a rather topical one at that. One of the most critical back office functions every financial services organization needs to perform is Risk Data Aggregation & Regulatory Reporting (RDARR). This spans areas from Credit Risk, Market Risk, Operational Risk , Basel III, Solvency II etc..the list goes on.

The basic idea in any risk calculation is to gather a whole range of quality data in one place and to run computations to generate risk measures for reporting.

So, how are various risk measures calculated currently? 

Current Risk Architectures are based on traditional relational database (RDBMS) architectures with tens of feeds from Core Banking Systems, Loan Data, Book Of Record Transaction Systems (BORTS) like Trade & Position Data (e.g. Equities, Fixed Income, Forex, Commodities, Options etc), Wire Data, Payment Data, Transaction Data etc.

These data feeds are then tactically placed in memory caches or in enterprise data warehouses (EDW). Once the data has been extracted, it is transformed using a series of batch jobs which prepare the data for the Calculator Frameworks that then run the risk models on it.

All of the above need access to large amounts of data at the individual transaction level. The Corporate Finance function within the Bank then makes end-of-day adjustments to reconcile all of this data, and these adjustments need to be cascaded back to the source systems, down to the individual transaction or classes of transactions.

These applications are then typically deployed on clusters of bare metal servers that are not particularly suited to portability, automated provisioning, patching & management – in short, nothing that can automatically be moved over at a moment’s notice. These applications also run on legacy proprietary technology platforms that do not lend themselves to a flexible, DevOps style of development.

Finally, there is always a need for statistical frameworks to make adjustments to customer transactions that somehow need to be reflected back in the source systems. All of these frameworks need access to, and an ability to work with, terabytes (TBs) of data.

Each of the above mentioned risk work streams has corresponding data sets, schemas & event flows that it needs to work with, with different temporal needs for reporting – some need to be run a few times a day (e.g. Traded Credit Risk), some daily (e.g. Market Risk) and some at the end of the week (e.g. Enterprise Credit Risk).
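A highly simplified sketch of the aggregation step described above is shown below; the feed file names, columns and the naive exposure roll-up are illustrative assumptions, not a regulatory calculation.

```python
# A minimal sketch of risk data aggregation across hypothetical position and
# counterparty extracts, producing the kind of roll-up a credit risk report
# would start from. Illustrative only.
import pandas as pd

positions = pd.read_csv("positions_eod.csv")        # trade_id, counterparty_id, asset_class, market_value
counterparties = pd.read_csv("counterparties.csv")  # counterparty_id, rating, country

book = positions.merge(counterparties, on="counterparty_id", how="left")

# Aggregate gross exposure by counterparty and asset class.
exposure = (book.groupby(["counterparty_id", "rating", "asset_class"])["market_value"]
                .sum()
                .reset_index(name="gross_exposure"))

print(exposure.sort_values("gross_exposure", ascending=False).head(10))
```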


                          Illustration – The Five Deadly Sins of Financial IT Architectures

Let us examine why this is, in the context of the anti-patterns proposed below –

THE FIVE DEADLY SINS…

The key challenges with current architectures –

  1. Utter, total and complete lack of centralized Data leading to repeated data duplication  – In the typical Risk Data Aggregation application – a massive degree of Data is duplicated from system to system leading to multiple inconsistencies at the summary as well as transaction levels. Because different groups perform different risk reporting functions (e.g Credit and Market Risk) – the feeds, the ingestion, the calculators end up being duplicated as well. A huge mess, any way one looks at it. 
  2. Analytic applications which are not designed for throughput – Traditional Risk algorithms cannot scale with this explosion of data as well as the heterogeneity inherent in reporting across multiple kinds of risks. E.g. certain kinds of Credit Risk calculations need access to around 200 days of historical data to estimate the probability of the counterparty defaulting & to obtain a statistical measure of the same. The latter are highly computationally intensive and can run for days. 
  3. Lack of Application Blueprint, Analytic Model & Data Standardization – There is nothing that is either SOA or microservices-like, and that precludes best practice development & deployment. This only leads to maintenance headaches. Cloud Computing enforces standards across the stack. Areas like Risk Model and Analytic development need to be standardized to reflect realities post BCBS 239. The Volcker Rule aims to ban prop trading activity on the part of the Banks. Banks must now report on seven key metrics across tens of different data feeds spanning petabytes (PBs) of data. Most cannot do that without undertaking a large development and change management headache.
  4. Lack of Scalability – It must be possible to operate it as a central system that can scale to carry the full load of the organization and operate with hundreds of applications built by disparate teams all plugged into the same central nervous system. One other factor to consider is the role of cloud computing in customer retention efforts. The analytical computational power required to understand insights from gigantic data sets is costly to maintain on an individual basis. The traditional owned data center will probably not disappear, but banks need to be able to leverage the power of the cloud to perform big data analysis in a cost-effective manner.
  5. A Lack of Deployment Flexibility – The application & data requirements dictate the deployment platforms. This massive anti-pattern leads to silos and legacy OS’s that cannot easily be moved to Containers like Docker & instantiated by a modular Cloud OS like OpenStack.

THE BUSINESS VALUE DRIVERS OF EFFICIENT ARCHITECTURES …

Doing IT Architecture right, and in a manner responsive to the business, results in the following critical value drivers being met & exceeded through this transformation –

  1. Effective Compliance with increased Regulatory Risk mandates ranging from Basel III, FRTB & Liquidity Risk – which demand flexibility across all the different traditional IT tiers.
  2. An ability to detect and deter fraud – Anti Money Laundering (AML) and Retail/Payment Card Fraud etc
  3. Fend off competition from the FinTechs
  4. Exist & evolve in a multichannel world dominated by the millennial generation
  5. Reduced costs to satisfy pressure on the Cost to Income Ratio (CIR)
  6. The ability to open up data & services that operate on the customer data to other institutions

A uniform architecture that works across all of these various types of workloads would seem a commonsense requirement. However, this is a major problem for most banks. Forward looking approaches that draw heavily from microservices based application development, Big Data enabled data & processing layers, the adoption of Message Oriented Middleware (MOM) & a cloud native approach to developing applications (PaaS) & deployment (IaaS) are the solution to the vexing problem of inflexible IT.

The question is whether banks can change before they see a perceptible drop in revenues over the years.

Deter Financial Crime by Creating an Effective Anti Money Laundering (AML) Program…(1/2)

THE AML CHALLENGE CONTINUES UNABATED…

As this blog has repeatedly catalogued over the last year here[1], here[2] and here[3], Money Laundering is a massive global headache and one of the biggest crimes against humanity. Not a month goes by when we do not hear of billions of dollars in ill gotten funds being stolen from developing economies via corruption as well as from the proceeds of nefarious activities – whether it is the Panama Papers or banks unwittingly helping drug cartels launder money.

I have seen annual estimates of global money laundering flows ranging anywhere from $1 trillion to $2 trillion – roughly 2 to 5% of global GDP. Almost all of this is laundered via Retail & Merchant Banks, Payment Networks, Securities & Futures firms, Casino Services & Clubs etc – which explains why annual AML related fines on Banking organizations run into the billions and are increasing every year. However, the number of SARs (Suspicious Activity Reports) filed by banking institutions is much higher as a category compared to the numbers filed by these other businesses.

The definition of Financial Crime is fairly broad & encompasses a wide range of activity – traditional money laundering, financial fraud like identity theft/check fraud/wire fraud, terrorist financing, tax evasion, securities market manipulation, insider trading and other kinds of securities fraud. Financial institutions across the spectrum of the market now need to comply with the regulatory mandate at both the global as well as the local market level.

What makes AML such a hard subject for Global Banks which should be innovating quite easily?

The issues which bedevil smooth AML programs include –

  • the complex nature of banking across retail, commercial, wealth management & capital markets; global banks now derive around 40% of revenue from outside their traditional markets (North America & Western Europe)
  • the scale of customer activity – customer bases ranging from 5 to 50 million at the large global banks
  • patchwork of local regulations, risk and compliance reporting requirements. E.g. Stringent compliance requirements in the US & UK but softer requirements elsewhere
  • tens of distribution channels
  • growing volumes of transactions causing requirements for complex analytics
  • the need to constantly integrate 3rd party information of lists of politically exposed persons of interest (PEPs) using manual means
  • technology – while ensuring the availability of banking services to millions of underserved people – also makes it easy for launderers to conduct & mask their activities

The challenges are hard but the costs of non-compliance are severe. Banks have been fined billions of dollars, compliance officers face potential liability & institutional reputation takes a massive hit. Supra national authorities like the United Nations (UN) and the European Union (EU) can also impose sanctions when they perceive that AML violations threaten human rights & the rule of law.

TECHNOLOGY IS THE ANSWER…

Many Banks have already put in place rules, policies & procedures to detect AML violations and have also invested in substantial teams staffed by money laundering reporting officers (MLROs) & headed by compliance officers. These rules work on thresholds and flag patterns that breach such criteria. The issue with this is that money launderers are themselves almost a class of statisticians and constantly devise new techniques to hide their tracks.
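The following is a minimal sketch of the kind of threshold & pattern rules referred to above, run over a hypothetical transactions file; real transaction-monitoring rule sets are far richer, and the thresholds and column names shown are assumptions.

```python
# A toy example of threshold and "structuring" rules over a hypothetical
# transactions.csv (customer_id, amount, country, ts). Illustrative only.
import pandas as pd

txns = pd.read_csv("transactions.csv", parse_dates=["ts"])

# Rule 1: single transactions at or above a reporting threshold.
large = txns[txns["amount"] >= 10_000]

# Rule 2: "structuring" - several just-below-threshold transactions per customer per day.
txns["day"] = txns["ts"].dt.date
near_threshold = txns[(txns["amount"] >= 9_000) & (txns["amount"] < 10_000)]
structuring = (near_threshold.groupby(["customer_id", "day"])
                             .size()
                             .reset_index(name="count"))
structuring = structuring[structuring["count"] >= 3]

print(f"{len(large)} large transactions, {len(structuring)} possible structuring patterns")
```

The weakness described above is visible even in this sketch: anyone who knows the thresholds can stay just outside them, which is why rules need to be complemented with learning-based techniques.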

The various elements that make up the risk to banks and financial institutions and the technology they use to detect these can be broken down into five main areas & work streams as shown below.


                                Illustration: The Five Workstreams of AML programs

  1. Customer Due Diligence  – this involves gathering information from the client as well as on-boarding data from external sources to verify these details and to establish a proper KYC (Know Your Customer) program.
  2. Entity Analysis – identifying relationships between institutional clients as well as retail clients to understand the true social graph. Bank compliance officers have now gone beyond KYC (Know Your Customer) to know their customer’s customer, or KYCC [4]. (A small graph sketch of this idea follows this list.)
  3. Downstream Analytics – detecting advanced patterns of behavior among clients & the inter-web of transactions with a view to uncovering hidden patterns of money laundering. This also involves assessing client risk at specific points in the banking lifecycle, such as account opening or transactions above a certain monetary value. These data points could signal potentially illegitimate activity based on any number of features associated with such transactions. Any transaction could also lead to the filing of a suspicious activity report (SAR).
  4. Ongoing Monitoring – aggregating customer transactions across multiple geographies for pattern detection and reporting purposes. This involves creating a corporate taxonomy of rules that capture a natural language description of the conditions & patterns denoting various types of financial crimes – terrorist financing, mafia laundering, drug trafficking, identity theft etc.
  5. SAR Investigation Lifecycle – These rules trigger downstream workflows to allow human investigation on such transactions
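As promised under the Entity Analysis work stream (point 2 above), here is a small graph sketch using networkx; the customers, addresses and phone numbers are made up and purely illustrative.

```python
# An illustrative sketch of entity analysis: link customers through shared
# attributes and surface connected groups worth a KYCC-style review.
import networkx as nx

G = nx.Graph()
# Edges connect customers to shared addresses, phone numbers, beneficial owners etc.
G.add_edge("Customer:A", "Address:12 High St")
G.add_edge("Customer:B", "Address:12 High St")
G.add_edge("Customer:B", "Phone:+44-20-5550-0000")
G.add_edge("Customer:C", "Phone:+44-20-5550-0000")
G.add_edge("Customer:D", "Owner:Shell Co Ltd")

# Customers in the same connected component share at least one attribute chain.
for component in nx.connected_components(G):
    customers = sorted(n for n in component if n.startswith("Customer:"))
    if len(customers) > 1:
        print("Linked customers:", customers)
```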

QUANTIFIABLE BENEFITS FROM DOING IT WELL…

Financial institutions that leverage new Age technology (Big Data, Predictive Analytics, Workflow) in these five areas will be able to effectively analyze financial data and deter potential money launderers before they are able to proceed, providing the institution with protection in the form of full compliance with the regulations.

The business benefits include –

  • Detect AML violations on a proactive basis thus reducing the probability of massive fines 
  • Save on staffing expenses for Customer Due Diligence (CDD)
  • Increase accurate production of suspicious activity reports (SAR)
  • Decrease the percent of corporate customers with AML-related account closures in the past year by customer risk level and reason – thus reducing loss of revenue
  • Decrease the overall KYC profile backlog across geographies
  • Help create Customer 360 views that can help accelerate CLV (Customer Lifetime Value) as well as Customer Segmentation from a cross-sell/up-sell perspective

CONCLUSION…

Virtually every leading banking institution, securities firm, payment provider understands that they need to enhance their AML capabilities by a few notches and also need to constantly evolve them as fraud itself morphs.

The question is whether they can form a true picture of their clients (both retail and institutional) on a real time basis, monitor every banking interaction while understanding its true context when merged with historical data, and detect unusual behavior. Further, creating systems that learn from these patterns truly helps minimize money laundering.

The next and final post in this two part series will examine how Big Data & Analytics help with each of the work streams discussed above.

REFERENCES…

[1] Building AML Regulatory Platforms for the Big Data Era – http://www.vamsitalkstech.com/?p=5

[2] Big Data – Banking’s New Weapon Against Financial Crime – http://www.vamsitalkstech.com/?p=806

[3] Reference Architecture for AML – http://www.vamsitalkstech.com/?p=833

[4] WSJ – Know Your Customer’s Customer is the New Norm – http://blogs.wsj.com/riskandcompliance/2014/10/02/the-morning-risk-report-know-your-customers-customer-is-new-norm/

Demystifying Digital – Why Customer 360 is the Foundational Digital Capability – ..(1/3)

The first post in this three part series on Digital Foundations introduces the concept of Customer 360 or the Single View of Customer (SVC). We will discuss the need for & the definition of the SVC as the first step in any Digital Transformation endeavor. We will also discuss specific benefits, from both a business & operational standpoint, that are enabled by the SVC. The second post in the series introduces the concept of a Customer Journey. The third & final post will focus on the technical design & architecture needed to achieve both these capabilities.
 
In an era of exploding organizational touch points, how many companies can truly claim that they know & understand their customers, their needs & evolving preferences deeply and from a realtime perspective?  
How many companies can claim to keep up as a customer’s product & service usage matures, and keep them engaged by cross-selling new offerings? How many can accurately predict future revenue from a customer based on their current understanding of their profile?
The answer is not at all encouraging.
Across industries like Banking, Insurance, Telecom & Manufacturing, the ability to get a unified view of the customer & their journey is at the heart of the enterprise’s ability to promote relevant offerings & detect customer dissatisfaction.
  • Currently most industry players are woeful at putting together this comprehensive Single View of their Customers (SVC). Due to operational silos, each department possesses a siloed & limited view of the customer across multiple channels. These views are typically inconsistent, lack synchronization with other departments & miss a high amount of potential cross-sell and up-sell opportunities.
  • The Customer Journey problem has been an age old issue which has gotten exponentially more complicated over the last five years as the staggering rise of mobile technology and the Internet of Things (IoT) have vastly increased the number of enterprise touch points that customers are exposed to in terms of being able to discover & purchase new products/services. In an OmniChannel world, an increasing number of transactions are being conducted online. In verticals like Retail and Banking, the number of online transactions approaches an average of 40%. Adding to the problem, more and more consumers are posting product reviews and feedback online. Companies thus need to react in realtime to piece together the source of consumer dissatisfaction.
Another large component of customer outreach is Marketing analytics & the ability to run effective campaigns to recruit customers.

The most common questions that a lot of enterprises fail to answer accurately are –

  1. Is the Customer happy with their overall relationship experience?
  2. What mode of contact do they prefer? And at what time? Can Customers be better targeted at these channels at those preferred times?
  3. What is the overall Customer Lifetime Value (CLV) or how much profit we are able to generate from this customer over their total lifetime?
  4. By understanding CLV across populations, can we leverage that to increase spend on marketing & sales for products that are resulting in higher customer value?
  5. How do we increase cross sell and up-sell of products & services?
  6. Does this customer fall into a certain natural segment and if so, how can we acquire most customers like them?
  7. Can different channels (Online, Mobile, IVR & POS) be synchronized? Can Customers begin a transaction in one channel and complete it in any of the others without having to resubmit their data?

The first element in Digital is Customer Centricity, & it must naturally follow that a 360 degree view of the customer is a huge aspect of that.


                                       Illustration – Customer 360 view & its benefits

So what information is specifically contained in a Customer 360 –

The 360 degree view is a snapshot of the below types of data (a minimal illustrative record follows the list) –

  • Customer’s Demographic information – Name, Address, Age etc
  • Length of the Customer-Enterprise relationship
  • Products and Services purchased overall
  • Preferred Channel & time of Contact
  • Marketing Campaigns the customer has responded to
  • Major Milestones in the Customer’s relationship
  • Ongoing activity – Open Orders, Deposits, Shipments, Customer Cases etc
  • Ongoing Customer Lifetime Value (CLV) Metrics and the Category of customer (Gold, Silver, Bronze etc)
  • Any Risk factors – Likelihood of Churn, Customer Mood Alert, Ongoing issues etc
  • Next Best Action for Customer
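A minimal, illustrative sketch of such a Customer 360 record, assembling the data points listed above into one structure, could look like the following; the field names and sample values are assumptions.

```python
# A toy Customer 360 record combining demographic, relationship, activity,
# CLV and risk attributes into one structure. Illustrative only.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Customer360:
    customer_id: str
    name: str
    address: str
    age: int
    relationship_length_years: float
    products_owned: List[str] = field(default_factory=list)
    preferred_channel: Optional[str] = None
    campaigns_responded_to: List[str] = field(default_factory=list)
    open_activity: List[str] = field(default_factory=list)   # open orders, cases etc.
    lifetime_value: float = 0.0
    tier: str = "Bronze"                                      # Gold / Silver / Bronze
    churn_risk: float = 0.0                                   # 0..1 likelihood of churn
    next_best_action: Optional[str] = None

profile = Customer360(
    customer_id="C-1001", name="Jane Doe", address="London, UK", age=34,
    relationship_length_years=6.5, products_owned=["Current Account", "Mortgage"],
    preferred_channel="Mobile", lifetime_value=12_500.0, tier="Gold",
    churn_risk=0.12, next_best_action="Offer travel insurance bundle")
print(profile)
```

In practice this record is assembled continuously from many upstream systems, which is exactly the data architecture challenge discussed above.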

How can Big Data technology help?

Leveraging the ingestion and predictive capabilities of a Big Data based platform, banks can provide a user experience that rivals Facebook, Twitter or Google and offer a full picture of the customer across all touch points.

Big Data enhances the Customer 360 capability in the following ways  –  

  1. Obtaining a realtime Single View of the Customer (typically a customer across multiple channels, product silos & geographies) across years of account history 
  2. Customer Segmentation by helping businesses understand customer segments down to the individual level as well as at a segment level (see the segmentation sketch after this list)
  3. Performing Customer sentiment analysis by combining internal organizational data & clickstream data with structured sales history to provide a clear view into consumer behavior.
  4. Product Recommendation engines which provide compelling personal product recommendations by mining realtime consumer sentiment & product affinity information along with historical data.
  5. Market Basket Analysis, observing consumer purchase history and enriching this data with social media, web activity, and community sentiment regarding past purchase and future buying trends.
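The segmentation sketch referenced in point 2 above could look like the following, assuming a hypothetical customer feature file and an arbitrary choice of four segments; it is illustrative, not a recommended segmentation model.

```python
# A minimal sketch of segment-level customer segmentation using k-means over
# a hypothetical customer_features.csv. Illustrative only.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

customers = pd.read_csv("customer_features.csv")   # hypothetical
features = customers[["avg_monthly_balance", "txn_count_90d", "products_held", "age"]]

# Standardize features so no single attribute dominates the distance metric.
X = StandardScaler().fit_transform(features)
customers["segment"] = KMeans(n_clusters=4, random_state=42, n_init=10).fit_predict(X)

# Profile each segment by its average feature values.
print(customers.groupby("segment")[features.columns].mean())
```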

Customer 360 can help improve the following operational metrics of a Retailer or a Bank or a Telecom immensely.

  1. Cost to Income ratio; Customers Acquired per FTE; Sales and service FTEs (as a percentage of total FTEs); New Accounts Per Sales FTE etc
  2. Sales conversion rates across channels; decreased customer attrition rates etc.
  3. Improved Net Promoter Scores (NPS), referral based sales etc

Customer 360 is thus a basic digital capability every organization needs to offer their customers, partners & internal stakeholders. This implies a re-architecture of both data management and business process automation.

The next post will discuss the second critical component of Digital Transformation – the Customer Journey.

Embedding A Culture of Business Analytics into the Enterprise DNA..

“IT driven business transformation is always bound to fail” – Amber Storey, Sr Manager, Ernst & Young

The value of Big Data driven Analytics is no longer in question, both from a customer as well as an enterprise standpoint. Lack of investment in an analytic strategy has the potential to impact shareholder value negatively. Business Boards and CXOs are now concerned about their overall levels and maturity of investments in terms of business value – i.e. increasing sales, driving down business & IT costs & helping create new business models. It is thus an increasingly accurate argument that smart applications & the ecosystems built around them will dictate enterprise success.

Such examples among forward looking organizations abound across industries. These range from realtime analytics in manufacturing using IoT data streams across the supply chain, to the use of natural language processing to drive patient care decisions in healthcare, to more accurate insurance fraud detection & driving Digital interactions in Retail Banking, to quote a few.

However, most global organizations currently adopt a fairly tactical approach to ensuring the delivery of traditional business intelligence (BI) and predictive analytics to their application platforms. This departmental approach is quite suboptimal, as scalable data driven decisions & culture not only empower decision-makers with up to date and realtime information but also help them develop long term insights into how globally diversified business operations are performing. Scale is the key word here, due to rapidly changing customer trends, partner & supply chain realities & regulatory mandates.

Scale implies speed of learning & business agility across the organization – in terms of having globally diversified operations turn on a dime, thus ensuring that the business feels empowered.

A quick introduction to Business (Descriptive & Predictive) Analytics –

Business intelligence (BI) is a traditional & well established analytical domain that essentially takes a retrospective look at business data in systems of record. The goal for BI is primarily to look for macro or aggregate business trends across different aspects or dimensions such as time, product lines, business units & operating geographies.

BI is primarily concerned with “What happened and what trends exist in the business based on historical data?“. The typical use cases for BI include budgeting, business forecasts, reporting & key performance indicators (KPI).

On the other hand, Predictive Analytics (a subset of Data Science) augments & builds on the BI paradigm by adding a “What could happen” dimension to the data in terms of –

  • being able to probabilistically predict different business scenarios across thousands of variables
  • suggesting specific business actions based on the above outcomes

Predictive Analytics does not intend to nor will it replace the BI domain but only adds significant business capabilities that lead to overall business success. It is not uncommon to find real world business projects leveraging both these analytical approaches.
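A toy contrast of the two approaches, over a hypothetical sales extract, might look like this; the file, columns and the simple linear model are assumptions chosen only to make the distinction concrete.

```python
# Descriptive (BI) vs predictive analytics over a hypothetical sales.csv
# (region, quarter, revenue, pipeline, headcount). Illustrative only.
import pandas as pd
from sklearn.linear_model import LinearRegression

sales = pd.read_csv("sales.csv")

# BI / descriptive: "What happened?" - an aggregate view across dimensions.
print(sales.groupby(["region", "quarter"])["revenue"].sum())

# Predictive: "What could happen?" - a simple model relating pipeline and
# headcount to revenue, used to score a hypothetical scenario.
model = LinearRegression().fit(sales[["pipeline", "headcount"]], sales["revenue"])
forecast = model.predict([[2_500_000, 40]])
print("Projected revenue for this scenario:", round(float(forecast[0]), 2))
```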

Creating an industrial approach to analytics – 

Strategic business projects typically imbibe a BI/Predictive Analytics based approach as an afterthought to the other aspects of system architecture and buildout. This dated approach ensures that analytics remains external to the business system, eventually operating in a reactive mode.

Having said that, one does need to recognize that an industrial approach to analytics is a complex endeavor that depends on how an organization tackles the convergence of the below approaches –

  1. Organizational Structure
  2. New Age Technology 
  3. A Platforms Mindset
  4. Culture


        Illustration – Embedding A Culture of Business Analytics into the Enterprise DNA..

Let’s discuss them briefly – 

Organizational Structure – The historical approach has been to primarily staff analytics teams as a standalone division often reporting to a CIO. This team has responsibility for both the business intelligence as well as some silo of a data strategy. Such a piecemeal approach to predictive analytics ensures that business & application teams adopt a “throw it over the wall” mentality over time.

So what needs to be done? 

In the Digital Age, enterprises should look to centralize both data management as well as the governance of analytics as core business capabilities. I suggest a hybrid organizational structure where a Center of Excellence (COE) is created which reports to the office of the Chief Data Officer (CDO) as well as individual business analytic leaders within the lines of business themselves.

 This should be done to ensure that three specific areas are adequately tackled using a centralized approach- 

  • Investing in creating a data & analytics roadmap by creating a center of excellence (COE)
  • Setting appropriate business milestones with “lines of business” value drivers built into a robust ROI model
  • Managing Risk across the enterprise with detailed scenario planning

New Age Technology –

The onset of Digital Architectures in enterprise businesses implies the ability to drive continuous online interactions with global consumers/customers/clients or patients. The goal is not just to provide engaging visualization but also to personalize services clients care about across multiple modes of interaction. Mobile applications first began forcing enterprises to support multiple channels of interaction with their consumers. We have seen how exploding data generation across the global economy has become a clear & present business & IT phenomenon, with data volumes rapidly expanding across industries. It is not just the production of data that has increased; organizations also increasingly need to derive business value from it. This calls for the collection & curation of data from dynamic and highly distributed sources such as consumer transactions, B2B interactions, machines such as ATM’s & geo location devices, click streams, social media feeds, server & application log files and multimedia content such as videos – using Big Data.

Cloud Computing is the ideal platform to provide the business with self service as well as rapid provisioning of business analytics. Every new application designed needs to be cloud native from the get go.

Mobile applications, as noted above, forced enterprises to support multiple channels of interaction with their consumers. Banking, for example, now requires an ability to engage consumers in a seamless experience across an average of four to five channels – Mobile, eBanking, Call Center, Kiosk etc.

A Platforms Mindset – 

As opposed to building standalone or one-off business applications, a Platform Mindset is a more holistic approach capable of producing higher revenues. Platforms abound in the webscale world at shops like Apple, Facebook & Google. Applications are constructed like lego blocks and reuse customer & interaction data to drive cross sell and up sell among different product lines. The key is to ensure that one starts off with products that have high customer attachment & retention. While increasing brand value, it is also key to ensure that customers & partners can collaborate on improvements to the various applications hosted on top of the platform.

Culture – Business value fueled by analytics is only possible if the entire organization operates on an agile basis in order to collaborate across the value chain. Cross functional teams across new product development, customer acquisition & retention, IT Ops, legal & compliance must collaborate in short work cycles to close the traditional business & IT innovation gap. Methodologies like DevOps – whose chief goal is to close the long-standing gap between the engineers who develop and test IT capability and the organizations that are responsible for deploying and maintaining IT operations – must be adopted. Using traditional app dev methodologies, it can take months to design, test and deploy software. No business today has that much time—especially in the age of IT consumerization and end users accustomed to smart phone apps that are updated daily. The focus now is on rapidly developing business applications to stay ahead of competitors that can better harness Big Data’s amazing business capabilities.

Summary- 

Enterprise wide business analytic approaches designed around the four key prongs (Structure, Culture, Technology & Platforms) will create immense operational efficiency, better business models, increased relevance and ultimately drive revenues. These will separate the visionaries & leaders from the laggards in the years to come.

What Lines Of Business Want From IT..


                    Illustration: Business- IT Relationship (Image src – Pat.it)

Previous posts in this blog have discussed the fact that technological capabilities now make or break business models. It is critical for IT to operate in a manner that maximizes their efficiency while managing costs & ultimately delivering the right outcomes for the organization.

It is clear and apparent to me that the relationship lines of business (LOBs) have with their IT teams – typically central & shared – is completely broken at a majority of large organizations. Each side cannot seem to view either the perspective or the passions of the other. This dangerous dysfunction usually leads to multiple complaints from the business. Examples of which include –

  • IT is perceived to be glacially slow in providing infrastructure needed to launch new business initiatives or to amend existing ones. This leads to the phenomenon of ‘Shadow IT’ where business applications are  run on public clouds bypassing internal IT
  • Something seems to be lost in translation while conveying requirements to different teams within IT
  • IT is too focused on technological capabilities – Virtualization, Middleware, Cloud, Containers, Hadoop et al without much emphasis on business value drivers

So what are the top asks that Business has for their IT groups? I wager that there are five important focus areas –

  1. Transact in the language of the business – Most would agree that there has been too much of a focus on the technology itself – how it works, what the infrastructure requirements are to host applications (cloud or on-prem), the data engines needed to ingest and process data etc. The focus needs to be on customer needs that drive business value for an organization’s customers, partners, regulators & employees. Technology at its core is just an engine and does not exist in a vacuum. The most vibrant enterprises understand this ground reality and always ensure that business needs drive IT and not the other way around. It is thus highly important for IT leadership to understand the nuances of the business to ensure that their roadmaps (long and medium term) are being driven with business & competitive outcomes in mind. Examples of such goals are a common organization wide taxonomy across products, customers, logistics, supply chains & business domains. The shared emphasis of both business & IT should be on goals like increased profitability per customer, enhanced segmentation of both micro and macro customer populations with the ultimate goal of increasing customer lifetime value (CLV).
  2. Bi-Modal or “2 Speed” IT et al need to be business approach centric – Digital business models that are driving agile web-scale companies offer enhanced customer experiences built on product innovation and data driven business models. They are also encroaching into the domain of established industry players in verticals like financial services, retail, entertainment, telecommunications, transportation and insurance  by offering contextual & trendy products tailored to individual client profiles. Their savvy use of segmentation data  and realtime predictive analytics enables the delivery of bundles of tailored products across multiple delivery channels (web, mobile, point of sale, Internet, etc.). The enterprise approach has been to adopt a model known as Bi-Modal IT championed by Gartner. This model envisages two different IT camps – one focused on traditional applications and the other focused on innovation. Whatever be the moniker for this approach – LOBs need to be involved as stakeholders from the get-go & throughout the process of selecting technology choices that have downstream business ramifications. One of the approaches that is working well is increased cross pollination across both teams, collapsing artificial organizational barriers by adopting DevOps & ensuring that business has a slim IT component to rapidly be able to fill in gaps in IT’s business knowledge or capability.
  3. Self Service Across the board of IT Capabilities – Shadow IT (where business goes around the IT team) is not just an issue with infrastructure software but is slowly creeping up to business intelligence and advanced analytics apps. The delays associated with provisioning legacy data silos combined with using tools that are neither intuitive nor able to scale to deal with the increasing data deluge are making timely business analysis almost impossible to perform.  Insights delivered too late are not very valuable. Thus, LOBs are beginning  to move to a predominantly online SaaS (Software As A Service) model across a range of business intelligence applications. Reports, visual views of internal & external datasets are directly served to internal consumers based on data uploaded into a cloud based BI provider. These reports and views are then directly delivered to end users. IT needs to enable this capability and make it part of their range of offerings to the business.
  4. Help the Business think Analytically – Business Process Management (BPM) and Data Driven decision making are proven approaches used at data-driven organizations. When combined with Data and Business Analytics, this tends to be a killer combination. Organizations that are data & metric driven are able to define key business processes that provide native support for the key performance indicators (KPIs) that are critical and basic to their functioning. Applications developed by IT need to be designed in such a way that these KPIs can be communicated and broadcast across the organization constantly. Indeed, a high percentage of organizations now have a senior executive in place as the champion for BPM, Business Rules and Big Data driven analytics. These applications are also mobile native, so that field based employees can access them through a variety of mobile platforms & connect back into the corporate firewall.
  5. No “Us vs Them” mentality – it is all “Us” – All of the above are only possible if the entire organization operates on an agile basis in order to collaborate across the value chain. Cross functional teams across new product development, customer acquisition & retention, IT Ops, legal & compliance must collaborate in short work cycles to close the traditional business & IT innovation gap. One of the chief goals of agile methodologies is to close the long-standing gap between the engineers who develop and test IT capability and the business requirements for such capabilities. Using traditional app dev methodologies, it can take months to design, test and deploy software – which is simply unsustainable. 

Business & IT need to collaborate. Period. –

The most vibrant enterprises that have implemented web-scale practices not only offer “IT/Business As A Service” but also have instituted strong cultures of symbiotic relationships between customers (both current & prospective), employees , partners and developers etc.

No business today has much time to innovate—especially in the age of IT consumerization, where end users are accustomed to smart phone apps that are often updated daily. The focus now is on rapidly developing business applications to stay ahead of competitors that can better harness technology’s amazing business capabilities.