A POV on Bank Stress Testing – CCAR & DFAST..

“The recession of 2007 to 2009 was still the most painful since the Depression. At its depths, $15 trillion in household wealth had disappeared, ravaging the pensions and college funds of Americans who had thought their money was in good hands. Nearly 9 million workers lost jobs; 9 million people slipped below the poverty line; 5 million homeowners lost homes.”
― Timothy F. Geithner, Former Secretary of the US Treasury – “Reflections on Financial Crises” (2014)

A Quick Introduction to Macroeconomic Stress Testing..

The concept of stress testing in banking is not entirely new. It has been practiced for years in global banks across specific business functions that deal with risk. The goal of these internal tests has been to assess firm-wide capital adequacy in periods of economic stress. However, the 2008 financial crisis clearly exposed how unprepared Bank Holding Companies (BHCs) were for the systemic risk brought on by severe macroeconomic distress. Thus, the current raft of regulator-driven stress tests is motivated by the taxpayer-funded bailouts of 2008. Back then, banks were neither adequately capitalized to cope with stressed economic conditions, nor were the market and credit losses across their portfolios sustainable.

In 2009, the SCAP (Supervisory Capital Assessment Program) was enacted as a stress testing framework in the US that only 19 leading financial institutions (banks, insurers etc) had to adhere to. The exercise focused not only on the quantity of capital available but also on its quality – Tier 1 common capital – held by the institution. The emphasis on Tier 1 common capital is important, as it gives an institution a higher loss-absorption capacity while minimizing losses to the higher capital tiers. Tier 1 common capital can also be managed better during economic stress by adjusting dividends, share buybacks and related activities.

Though it was a one-off exercise, the SCAP was a stringent and rigorous test. The Fed audited the results of all 19 BHCs – some of which failed the test.

Following this, in 2010, the Dodd-Frank Act was enacted by the Obama Administration. The Act introduced its own stress test – DFAST (Dodd-Frank Act Stress Testing). DFAST requires BHCs with assets of $10 billion and above to run annual stress tests and to make the results public. The goal of these stress tests is multifold, but they are conducted primarily to assure the public and the regulators that BHCs have adequately capitalized their portfolios. BHCs are also required to present detailed capital plans to the Fed.

The SCAP’s successor, CCAR (Comprehensive Capital Analysis and Review), was also enacted around that time. Depending on the overall risk profile of the institution, CCAR mandates several qualitative & quantitative metrics that BHCs need to report on and make public for a number of stressed macroeconomic scenarios.


Comprehensive Capital Analysis and Review (CCAR) is a regulatory framework introduced by the Federal Reserve in order to assess, regulate, and supervise large banks and financial institutions – collectively referred to in the framework as Bank Holding Companies (BHCs).
– (Wikipedia)

  • Every year, an increasing number of Tier 2 banks come under the CCAR mandate. CCAR requires specific BHCs to develop a set of internal macroeconomic scenarios or to use those developed by the regulators. Regulators then receive the individual results of these scenario runs from firms across a nine-quarter time horizon. Regulators also develop their own systemic stress tests to verify whether a given BHC can withstand negative economic scenarios and continue its lending operations. CCAR coverage primarily includes retail banking operations, auto & home lending, trading, counterparty credit risk, AFS (Available For Sale)/HTM (Hold To Maturity) securities etc. CCAR covers all major kinds of risk – market, credit, liquidity and operational risk.
CCAR kicked off similar moves by regulators to enforce firm-wide stress testing on banks in their respective jurisdictions. In Europe, the EBA mandates its own stress tests; in the UK, the Prudential Regulation Authority does the same. Emerging markets such as India and China are also following this trend.

Similarities & Differences between CCAR and DFAST..

To restate – CCAR is an annual exercise by the Federal Reserve to assess whether the largest bank holding companies operating in the United States have sufficient capital to continue operations throughout times of economic and financial stress, and whether they have robust, forward-looking capital-planning processes that account for their unique risks. As part of this exercise, the Federal Reserve evaluates institutions’ capital adequacy, their internal capital adequacy assessment processes, and their individual plans to make capital distributions, such as dividend payments or stock repurchases. Dodd-Frank Act stress testing (DFAST) – an exercise similar to CCAR – is a forward-looking stress test that also covers smaller financial institutions. It is supervised by the Federal Reserve to help assess whether institutions have sufficient capital to absorb losses and support operations during adverse economic conditions.

As part of the CCAR reporting guidelines, BHCs have to explicitly call out:

  1. their sources of capital given their risk profile & breadth of operations,
  2. the internal policies & controls for measuring capital adequacy &
  3. any upcoming business decisions (share buybacks, dividends etc) that may impact their capital adequacy plans.

While CCAR and DFAST look very similar at a high level – they both mandate that banks conduct stress tests – they do differ in the details. DFAST also applies to smaller banks, those with assets between $10 billion and $50 billion, that are not subject to CCAR. During the planning horizon phase, CCAR allows BHCs to use their own capital action assumptions, while DFAST enforces a standardized set of capital actions. The DFAST scenarios comprise baseline, adverse and severely adverse scenarios. DFAST is supervised by the Fed, the OCC (Office of the Comptroller of the Currency) and the FDIC.

                                                Summary of DFAST and CCAR (Source: E&Y) 

As can be seen from the above table, while DFAST is complementary to CCAR, the two are distinct testing exercises that rely on similar processes, data, supervisory exercises and requirements. The Federal Reserve coordinates these processes to reduce duplicative requirements and to minimize regulatory burden. CCAR results are reported twice a year, and BHCs are required to also incorporate Basel III capital ratios in their reports, with Tier 1 capital ratios calculated under existing rules. DFAST results are reported annually and also include Basel III reporting.

In a Nutshell…

In CCAR (and DFAST), the Fed is essentially asking the BHCs the following questions –

(1) For your defined risk profile, define a process for understanding and measuring your risks, and map the key stakeholders who will carry out this process.

(2) Please ensure that you use clean internal data to compute your exposures in the event of economic stress. The entire process of data sourcing, cleaning, computation, analytics & reporting needs to be auditable.

(3) What macroeconomic stress scenarios did you develop in working with your key lines of business? What are the key historical assumptions behind them? What are the key what-if scenarios that you have developed based on the stressed scenarios? The scenarios need to be auditable as well.

(4) We will then run our own macroeconomic numbers and scenarios, using our own exposure generators, on your raw data.

(5) We want to see how close both sets of numbers are.

Both CCAR and DFAST scenarios are expressed in terms of stressed macroeconomic factors and financial indicators. The regulators typically provide these figures on a quarterly basis, a few reporting periods in advance.

What are some examples of these scenarios?
  • Measures of Index Turbulence – e.g. in a given quarter, regulators might posit that the S&P 500 falls 30%, alongside declines in key indices such as home prices, commercial property and other asset prices.
  • Measures of Economic Activity – e.g. a spike in the US unemployment rate, higher interest rates, increased inflation. What if unemployment rose to 14%? What would that do to my mortgage portfolio – default rates increase, and this is what the losses look like.
  • Measures of Interest Rate Turbulence – e.g. US Treasury yields, interest rates on US mortgages etc.

Based on this information, banks then assess the impact of these economic scenarios, as reflected in market and credit losses to their portfolios. This helps them estimate how their capital base would behave in such a situation (a minimal sketch of this calculation follows below). These internal CCAR metrics are then sent over to the regulators. Every bank has its own models, based on its own understanding, which the Fed also reviews for completeness and quality.
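
To make the mechanics concrete, below is a minimal, illustrative Python sketch of how a stressed scenario might be translated into portfolio losses and a post-stress capital ratio. All scenario factors, sensitivities and balance figures are hypothetical and are not drawn from any regulatory template.

```python
# Illustrative only: hypothetical scenario factors, sensitivities and balances.

# Severely adverse scenario (hypothetical values, single horizon)
scenario = {
    "unemployment_rate": 0.14,     # 14% unemployment
    "equity_index_shock": -0.30,   # S&P 500 down 30%
    "home_price_shock": -0.20,     # home prices down 20%
}

# Simplified portfolio exposures (in $ billions) and assumed loss sensitivities
portfolio = {
    "mortgages":    {"exposure": 4.0, "loss_rate_per_unemployment_pt": 0.004},
    "trading_book": {"exposure": 2.5, "beta_to_equity_index": 0.6},
}

# Stressed credit losses on the mortgage book: loss rate scales with unemployment
unemployment_pts = scenario["unemployment_rate"] * 100
mortgage_loss = (portfolio["mortgages"]["exposure"]
                 * portfolio["mortgages"]["loss_rate_per_unemployment_pt"]
                 * unemployment_pts)

# Stressed market losses on the trading book: proportional to the equity shock
trading_loss = (portfolio["trading_book"]["exposure"]
                * portfolio["trading_book"]["beta_to_equity_index"]
                * abs(scenario["equity_index_shock"]))

total_stressed_loss = mortgage_loss + trading_loss

# Post-stress capital ratio: losses reduce Tier 1 common capital;
# risk-weighted assets (RWA) are held constant for simplicity.
tier1_common_capital = 0.9   # $ billions, hypothetical
rwa = 7.5                    # $ billions, hypothetical
post_stress_ratio = (tier1_common_capital - total_stressed_loss) / rwa

print(f"Total stressed loss: ${total_stressed_loss:.2f}B")
print(f"Post-stress Tier 1 common ratio: {post_stress_ratio:.2%}")
```

In practice the loss models are far richer – default and loss-given-default models for credit, full revaluation for market risk – but the shape of the calculation (scenario in, losses out, capital ratio recomputed) is the same.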

The Fed uses the CCAR and DFAST results to evaluate capital adequacy and the quality of the capital adequacy assessment process, and then reviews each BHC’s plans to make capital distributions – dividends, share repurchases etc – in the context of those results. The BHCs’ boards of directors are required to approve and sign off on these plans.

What do CCAR & DFAST require of Banks?

Well, six important things –

    1. CCAR is fundamentally different from other umbrella risk types in that it has a strong external component – reporting on internal bank data to the regulatory authorities. CCAR reporting is done by sending internal Book of Record Transaction (BORT) data from banks’ lending systems (with hundreds of manual adjustments) to the regulators, who run their own models to assess capital adequacy. Currently, most banks do some model reporting internally, based on canned CCAR algorithms in tools like SAS or Spark, computed for a few macroeconomic stress scenarios.
    2. Both CCAR and DFAST stress the same business processes, data resources and governance mechanisms. They are both a significant ask on the BHCs from the standpoint of planning, execution and governance. Existing BHCs have found them daunting, and the new D-SIBs that enter the mandate each year face implementing programs that need significant organizational and IT spend.
    3. Both CCAR and DFAST challenge the banks on data collection, quality, lineage and reporting. The Fed requires that data be accurate, comprehensive and clean; data quality is the single biggest challenge to stress test compliance. Banks need to work across a range of BORT (Book of Record Transaction) systems – core banking, lending portfolios, position data and any other data needed to accurately reflect the business. There is also a reconciliation process that is typically used to reconcile risk data with the GL (General Ledger). For instance, a BHC’s lending portfolio may be $4 billion based on the raw summary data, but once reconciliation is performed it comes in at around $3 billion after adjustments. If the regulator runs the aforesaid macroeconomic scenarios at $4 billion, the exposures are naturally off (a minimal reconciliation sketch follows this list).
    4. Contrary to popular perception, the heavy lifting is typically not in creating and running the exposure calculations for stress testing. The creation of these is relatively straightforward. Banks historically have had their own analytics groups produce these macroeconomic models, and they already have tens of libraries in place that can be modified to create the supervisory scenarios for CCAR/DFAST – baseline, adverse & severely adverse. The critical difference with stress testing is that siloed models and scenarios need to be unified along with the data.
    5. Model development in banks usually follows a well-defined lifecycle. Most liquidity assessment groups within banks currently have a good base of quants with a clean separation of duties. For instance, while one group produces scenarios, others work on the exposures that feed into the liquidity engines. The teams running these liquidity assessments are good candidates to run the CCAR/DFAST models as well. The calculators themselves will need to be rewritten for Big Data using something like SAS or Spark.
    6. Transparency must be demonstrated down to the source data level. And banks need to be able to document all capital classification and computation rules to a sufficient degree to meet regulatory requirements during the auditing and review process.
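
To illustrate the reconciliation point in item 3 above, here is a minimal pandas sketch that compares portfolio exposures from a hypothetical BORT extract against General Ledger balances and flags breaks above a materiality threshold. The column names, figures and the 1% tolerance are all assumptions for illustration.

```python
import pandas as pd

# Hypothetical extracts: risk/BORT exposures and GL balances by portfolio
bort = pd.DataFrame({
    "portfolio": ["mortgage", "auto", "cards"],
    "bort_exposure": [4_000_000_000, 900_000_000, 650_000_000],
})
gl = pd.DataFrame({
    "portfolio": ["mortgage", "auto", "cards"],
    "gl_balance": [3_000_000_000, 895_000_000, 652_000_000],
})

# Join the two views and compute the reconciliation break per portfolio
recon = bort.merge(gl, on="portfolio", how="outer")
recon["break"] = recon["bort_exposure"] - recon["gl_balance"]
recon["break_pct"] = recon["break"] / recon["gl_balance"]

# Flag breaks above a (hypothetical) 1% materiality threshold for manual adjustment
TOLERANCE = 0.01
recon["needs_adjustment"] = recon["break_pct"].abs() > TOLERANCE

print(recon)
```

A production reconciliation would of course run at account or transaction grain, carry adjustment reason codes and feed an audit trail, but the core comparison is no more than this.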

The Technology Implications of CCAR/DFAST..

It can clearly be seen that regulatory stress testing derives inputs from virtually every banking function. It should come as no surprise, then, that there are several implications from a technology point of view:

    • CCAR and DFAST impact a range of systems, processes and controls. The challenges most banks have in integrating front-office trading desk data (position data, pricing data and reporting) with back-office risk & finance systems make the job of accurately reporting stress numbers all the more difficult. This forces most BHCs to resort to manual data operations, analytics and complicated reconciliation processes across the front, middle and back offices.
    • Beyond standardizing computation & reporting libraries, banks need common data storage for data drawn from a range of BORT systems.
    • Banks also need to standardize on data taxonomies across all of these systems.
    • To that end, banks need to stop creating more data silos across the Risk and Finance functions; as I have often advocated in this blog, a move to a Data Lake enabled architecture is an appropriate way of eliminating silos and the unclean data that is sure to invite regulatory sanction.
    • Banks need to focus on data cleanliness by setting appropriate governance and auditability policies.
    • Move to a paradigm of bringing compute to large datasets instead of the other way around
    • Move towards in-memory analytics to transform, aggregate and analyze data in real time across many dimensions, to obtain an understanding of the bank's risk profile at any given point in time (a minimal sketch follows below).
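
As a small illustration of the last two points, the sketch below uses Spark (PySpark) to aggregate hypothetical position-level stressed losses across scenario and business-line dimensions, caching the dataset in memory for repeated slicing. The path, column names and layout are assumptions, not a prescribed CCAR data model.

```python
from pyspark.sql import SparkSession, functions as F

# A minimal sketch, assuming a hypothetical parquet extract of position-level
# exposures with columns: business_line, product, scenario, exposure, stressed_loss.
spark = SparkSession.builder.appName("stress-aggregation").getOrCreate()

positions = spark.read.parquet("/datalake/risk/stressed_positions")  # hypothetical path

# Keep the dataset in memory so analysts can slice it repeatedly by other dimensions
positions.cache()

# Aggregate stressed losses by scenario and business line in one pass
by_scenario_and_line = (
    positions
    .groupBy("scenario", "business_line")
    .agg(
        F.sum("exposure").alias("total_exposure"),
        F.sum("stressed_loss").alias("total_stressed_loss"),
    )
    .orderBy("scenario", F.desc("total_stressed_loss"))
)

by_scenario_and_line.show(truncate=False)
```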

A Reference Architecture for CCAR and DFAST..

 I recommend readers review the below post on FRTB Architecture as it contains core architectural and IT themes that are broadly applicable to CCAR and DFAST as well.

A Reference Architecture for the FRTB (Fundamental Review of the Trading Book)

Conclusion..

As can be seen from the above, both CCAR & DFAST require a holistic approach to the value chain (model development, data sourcing, reporting) across the Risk, Finance and Treasury functions. Further, regulators are increasingly demanding an automated process for risk & capital calculations under various scenarios, using accurate and consistent data. The need of the hour for BHCs is to move to a common model for data storage, stress modeling and testing. Only by doing this can they ensure that the metrics and outputs of capital adequacy are produced accurately and in a timely manner, thus satisfying the regulatory mandate.

References –

[1] Federal Reserve CCAR Summary Instructions 2016

https://www.federalreserve.gov/newsevents/press/bcreg/bcreg20160128a1.pdf

Why Platform as a Service (PaaS) Adoption will take off in 2017..


Since the time Steve Ballmer went ballistic professing his love for developers, it has been a virtual mantra in the technology industry that developer adoption is key to the success of a given platform. On the face of it, Platform as a Service (PaaS) is a boon to enterprise developers who are tired of the inefficiencies of old-school application development environments & stacks. Further, a couple of years ago, PaaS seemed to be the flavor of the future given the focus on Cloud Computing. This blogpost focuses on the advantages of the generic PaaS approach while discussing its lagging rate of adoption in the cloud computing market as compared with its cloud cousins – IaaS (Infrastructure as a Service) and SaaS (Software as a Service).

Platform as a Service (PaaS) as the foundation for developing Digital, Cloud Native Applications…

Call them Digital or Cloud Native or Modern. The nature of applications in the industry is slowly changing, and so are the cultural underpinnings of the development process itself – from waterfall to agile to DevOps. At the same time, Cloud Computing and Big Data are enabling the creation of smart data applications. Leading business organizations are cognizant of the need to attract and retain the best possible talent – often competing with the FANGs (Facebook, Amazon, Netflix & Google).

Couple all this with the immense industry and venture capital interest around container-oriented & cloud native technologies like Docker, and you have a vendor arms race in the making. The prize is to be chosen as the standard for building industry applications.

Thus, infrastructure is an enabler, but in the end it is the applications that are Queen or King.

That is where PaaS comes in.

Why Digital Disruption is the Cure for the Common Data Center..

Enter Platform as a Service (PaaS)…

Platform as a Service (PaaS) is one of the three main cloud delivery models, the other two being IaaS (infrastructure such as compute, network & storage services) and SaaS (business applications delivered over a cloud). A collection of different cloud technologies, PaaS focuses exclusively on application development & delivery. PaaS advocates a new kind of development based on native support for concepts like agile development, unit testing, continuous integration and automatic scaling, while providing a range of middleware capabilities. Applications developed on a PaaS can be deployed as services & managed across thousands of application instances.

In short, PaaS is the ideal platform for creating & hosting digital applications. What can PaaS provide that older application development toolchains and paradigms cannot?

While the overall design approach and features vary across every PaaS vendor – there are five generic advantages from a high level –

  1. PaaS enables a range of application, data & middleware components to be delivered as API based services to developers on any given Infrastructure as a Service (IaaS). These capabilities include messaging as a service, database as a service, mobile capabilities as a service, integration as a service, workflow as a service, analytics as a service for data driven applications etc (see the sketch after this list). Some PaaS vendors also provide the ability to automate & manage APIs for business applications deployed on them – API Management.
  2. PaaS provides easy & agile access to the entire suite of technologies used while creating complex business applications. These range from programming languages to application server (and lightweight framework) runtimes to CI/CD toolchains to source control repositories.
  3. PaaS provides services that enable a seamless & highly automated way of managing the complete lifecycle of building and delivering web applications and services on the internet. Industry players are infusing software delivery processes with practices such as continuous delivery (CD) and continuous integration (CI). For large-scale applications such as those built in web scale shops, financial services, manufacturing, telecom etc, PaaS abstracts away the complexities of building, deploying & orchestrating infrastructure, thus enabling instantaneous developer productivity. This is a key point – with its focus on automation, PaaS can save application and system administrators precious time and resources in managing the lifecycle of elastic applications.
  4. PaaS enables your application to be (largely) cloud agnostic, allowing it to run on any cloud platform, whether public or private. This means that a PaaS application developed on Amazon AWS can easily be ported to Microsoft Azure, to VMWare vSphere, to Red Hat RHEV etc.
  5. PaaS can help smooth organizational culture and break down barriers – adopting a PaaS forces an agile culture in your organization, one that pushes cross-pollination among different business, dev and ops teams. Organizations that are just now beginning to go bimodal for greenfield applications can benefit immensely from choosing a PaaS as a platform standard.
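
To make point 1 slightly more concrete, here is a minimal Python sketch of the pattern most PaaS platforms encourage: backing services (a database, a message broker) and the listening port are injected into the application as environment variables rather than hard-coded. The variable names and the simple Flask health endpoint are hypothetical and not tied to any particular PaaS product.

```python
import os

from flask import Flask, jsonify

# On most PaaS platforms, backing services (databases, message brokers etc.) are
# exposed to the application as injected environment variables rather than
# hard-coded connection strings. The variable names below are hypothetical.
DATABASE_URL = os.environ.get("DATABASE_URL", "postgresql://localhost:5432/dev")
BROKER_URL = os.environ.get("MESSAGING_URL", "amqp://localhost:5672")
PORT = int(os.environ.get("PORT", "8080"))  # the platform typically injects the port

app = Flask(__name__)

@app.route("/health")
def health():
    # A simple health endpoint the platform can probe to manage instance lifecycle;
    # split("@") drops any credentials before echoing the database host back.
    return jsonify(status="ok", database=DATABASE_URL.split("@")[-1])

if __name__ == "__main__":
    # Bind to 0.0.0.0 so the platform's router can reach the container instance.
    app.run(host="0.0.0.0", port=PORT)
```

The same application can then be scaled to many instances by the platform without any code changes, since all environment-specific wiring lives outside the code.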

The Barriers to PaaS Adoption Will Continue to Fall In 2017..

In general, PaaS market growth rates do not seem to line up well when compared with the other broad segments of the cloud computing space, namely IaaS (Infrastructure as a Service) and SaaS (Software as a Service). 451 Research's Market Monitor forecasts that the total market for cloud computing (including PaaS, IaaS and infrastructure software as a service – ITSM, backup, archiving) will hit $21.9B in 2016, more than doubling to $44.2B by 2020. Of that, some analyst estimates contend that PaaS will be a relatively small $8.1 billion.

PaaS vs. SaaS & IaaS market forecast (Source – 451 Research)

The very richness that gives PaaS its advantages has sadly also contributed to its relatively low rate of adoption as compared to IaaS and SaaS.

The reasons for this anemic rate of adoption include, in my opinion –

  1. Poor Conception of the Business Value of PaaS – This is the biggest factor holding back explosive growth in this category. PaaS is a tremendously complicated technology, & vendors have not helped by stressing the complex technology underpinnings (containers, supported programming languages, developer workflow, orchestration, scheduling etc) as opposed to helping clients understand the tangible business drivers & value that enterprise CIOs can derive from this technology. Common drivers include faster time to market for digital capabilities, man-hours saved in maintaining complex applications, the ability to attract new talent etc. These factors will vary for every customer, but it is up to frontline sales teams to help deliver this message in a manner that is appropriate to the client.
  2. Yes, you can do DevOps without PaaS, but PaaS helps a long way – Many Fortune 500 organizations are drawing up DevOps strategies that do not include a PaaS & are based on a simplified CI/CD pipeline. This is to the detriment of both the customer organization & the industry, as PaaS can vastly simplify a range of complex runtime & lifecycle services that would otherwise need to be cobbled together by the customer as the application moves from development to production. There is simply a lack of knowledge in the customer community about where a PaaS fits in a development & deployment toolchain.
  3. Smorgasbord of Complex Infrastructure Choices – The average leading PaaS includes a range of open source technologies, from containers to runtimes to datacenter orchestration to scheduling to cluster management tools. This makes it very complex from the perspective of corporate IT – not just in terms of running POCs and initial deployments, but also in managing a highly complex stack. It is incumbent on the open source projects to abstract away the complex inner workings to drive adoption – whether by design or by technology alliances.
  4. You don’t need Cloud for PaaS but not enough Technology Leaders get that – This one is pure perception: the presence of an infrastructural cloud computing strategy is not a necessary precondition for PaaS.
  5. The false notion that PaaS is only fit for massively scalable, greenfield applications – Industry-leading PaaS offerings (like Red Hat's OpenShift) support a range of technology approaches that can help cut technical debt. They do not limit deployment to any one application server platform such as JBoss EAP, WebSphere or WebLogic, or to a lightweight framework like Spring.
  6. PaaS will help increase automation, thus cutting costs – For developers of applications in greenfield/new-age spheres such as IoT, PaaS can enable the creation of thousands of instances in a “serverless” fashion. PaaS based applications can be composed of microservices that are essentially self-maintaining – i.e. self-healing and able to scale up or down; these microservices are delivered (typically) by IT as Docker containers using automated toolchains. The biggest cost in large datacenters – human involvement – is drastically reduced when a PaaS is used, while agility, business responsiveness and efficiency increase.

Conclusion…

My goal for this post was to share a few of my thoughts on the benefits of adopting a game-changing technology. Done right, PaaS can provide a tremendous boost to building digital applications, thus boosting the bottom line. Beginning in 2017, we will witness PaaS satisfying critical industry use cases as leading organizations build end-to-end business solutions that cover many architectural layers.

References…

[1] http://www.forbes.com/sites/louiscolumbus/2016/03/13/roundup-of-cloud-computing-forecasts-and-market-estimates-2016/#3d75915274b0

Payment Providers – How Big Data Analytics Provides New Opportunities in 2017

                                                         Image Credit – JDL Group

Payments Industry in 2017..

The last post in this blog (handy link below) discussed my predictions for the payments market in 2017. The payments industry is large and quite diverse from a capabilities standpoint, while being lucrative from a revenue standpoint.

My Last Post for the Year – Predictions for the Global Payments Industry in 2017

Why is that?

First, payments are an essential daily function for consumers and corporates alike, which means constant annual growth in transaction volumes. Volumes are the very lifeblood of the industry.

Second, thanks to the explosion of technology capabilities, especially around smartphones & smart apps, the number of avenues that consumers can use to make payments has surged.

Third, an increasing number of developing economies such as China, India and Brazil are slowly moving massive consumer populations over to digital payments from previously all-cash economies.

Finally, in developed economies, the increased regulatory push in the form of standards like PSD2 (the second Payment Services Directive) has begun blurring the boundaries between traditional players and the new upstarts.

All of these factors have the payments industry growing at a faster clip than most other areas of finance. No wonder payments startups occupy pride of place in the FinTech boom.

The net net of all this is that payments will continue to offer a steady and attractive stream of investments for players in this area.

Big Data Driven Analytics in the Payments Industry..

Much like the other areas of finance, the payments industry can benefit tremendously from adopting the latest techniques in data storage and analysis. Let us consider the ways in which providers can leverage the diverse and extensive data assets they possess to perform important business functions –

  1. Integrating all the complex & disparate functions of Payments Platforms
    Most payment providers offer a variety of services – e.g. credit cards, debit cards and corporate payments. Big Data platforms help integrate these different payment types – credit cards, debit cards, checks, wire transfers etc – into one centralized payment platform. This helps not only with internal efficiencies (e.g. collapsing redundant functions such as fraud, risk scoring, reconciliation and reporting into one platform) but also with external services offered to merchants (e.g. forecasting, analytics etc).
  2. Detect Payments Fraud
    Big Data is dramatically changing the traditional approach to fraud detection with advanced analytic solutions that are powerful and fast enough to detect fraud in real time, while also building models on historical data (and deep learning) to proactively identify risks.

    http://www.vamsitalkstech.com/?p=1098

  3. Risk Scoring of Payments in Realtime & Batch 
    Payment providers assess the risk score of transactions in real time based on various attributes (e.g. the consumer's country of origin, IP address etc). Big Data makes these attributes more granular by supporting advanced statistical techniques that incorporate behavioral (e.g. a transaction falling outside a consumer's normal buying pattern), temporal and spatial signals (see the sketch after this list).
  4. Detect Payments Money Laundering (AML)
    A range of Big Data techniques are being deployed to detect money laundering disguised as legitimate payments.

    http://www.vamsitalkstech.com/?p=2559

  5. Understand Your Customers Better
    Payment providers can create a single view of a cardholder across multiple accounts & channels of usage. Doing this enables cross-sell/upsell and better customer segmentation.

    http://www.vamsitalkstech.com/?p=2517

  6. Merchant Analytics 
    Payment providers have been sitting on petabytes of customer data and have only now begun waking up to the possibilities of monetizing this data. An area of increasing interest is providing sophisticated analytics to merchants as a way of driving merchant rewards programs. Retailers, airlines and other online merchants need to understand which segments their customers fall into, as well as the best avenues to market to each of them – e.g. web app, desktop or tablet. Using all of the payment data available to them, payment providers can help merchants understand their customers better and improve their loyalty programs.
  7. Cross Sell & Up Sell New Payment & Banking Products & Services
    Most payment service providers are also morphing into online banks. Big Data based Data Lakes support the integration of regular banking capabilities such as bill payment, person-to-person payments and account-to-account transfers to streamline the payments experience beyond the point of sale. Consumers can then move and manage money at the time they choose: instantly, same-day, next-day or on a scheduled date in the future.
  8. Delivering the best possible highly personalized Payments Experience
    Mobile wallets offer the consumer tremendous convenience, and Data Lakes support the integration of the capabilities described above – bill payment, person-to-person payments and account-to-account transfers – into the wallet, helping providers deliver a highly personalized payments experience beyond the point of sale.
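
To illustrate the real-time risk scoring use case (item 3 above), here is a minimal Python sketch of a feature-based transaction scorer. The attributes, weights and thresholds are hypothetical and hand-set; a production system would learn them from historical payment data rather than hard-code them.

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float
    country: str            # country of origin of the card/account
    ip_country: str         # country resolved from the IP address
    hour_utc: int           # hour of day the transaction occurred
    avg_amount_30d: float   # consumer's average transaction amount, last 30 days

# Hypothetical, hand-set weights; in practice these would be fitted from history.
HIGH_RISK_COUNTRIES = {"XX", "YY"}   # placeholder country codes
WEIGHTS = {
    "country_mismatch": 0.35,   # card country differs from IP country
    "high_risk_country": 0.25,
    "amount_anomaly": 0.30,     # amount far above the consumer's normal behavior
    "odd_hour": 0.10,           # transaction at an unusual time of day
}

def risk_score(txn: Transaction) -> float:
    """Return a score in [0, 1]; higher means riskier."""
    score = 0.0
    if txn.country != txn.ip_country:
        score += WEIGHTS["country_mismatch"]
    if txn.ip_country in HIGH_RISK_COUNTRIES:
        score += WEIGHTS["high_risk_country"]
    if txn.avg_amount_30d > 0 and txn.amount > 5 * txn.avg_amount_30d:
        score += WEIGHTS["amount_anomaly"]
    if txn.hour_utc < 5:  # behavioral/temporal signal: very-early-morning activity
        score += WEIGHTS["odd_hour"]
    return min(score, 1.0)

txn = Transaction(amount=950.0, country="US", ip_country="XX",
                  hour_utc=3, avg_amount_30d=60.0)
print(f"Risk score: {risk_score(txn):.2f}  (flag for review if > 0.5)")
```

The same scoring function can be applied in batch over historical data or wired into a streaming pipeline for real-time authorization decisions.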

Conclusion..

As we have discussed in previous posts on this blog, the payments industry is at the cusp (if not already in the midst) of a massive disruption. Business strategies will continue to be driven by technology, especially Big Data analytics. Whether playing defense (cutting costs, optimizing IT, defending against financial crime, augmenting existing cyber security) or offense (signing up new customers, better cross-sell, data monetization), Big Data will continue to be a key capability in the industry.