Why the Insurance Industry Needs to Learn from Banking’s Risk Management Nightmares..

risk_management_montage

(Image Credit – ENC Consulting)

Why Systemic Financial Crises Are a Broad Failure of Risk Management…

Various posts in this blog have catalogued the practice of risk management in the financial services industry. To recap briefly, the Great Financial Crisis (GFC) of 2008 was a systemic failure that brought about large scale banking losses across the globe. Considered by many economists to be the worst economic crisis since the Great Depression [1], it not only precipitated the collapse of large financial institutions across the globe but also triggered the onset of sovereign debt crises in Greece, Iceland and other nations.

Years of deregulation & securitization (a form of risk transfer), combined with expansionary monetary policy during the Greenspan years in the United States, led to the unprecedented availability of easy consumer credit in lines such as mortgages, credit cards and auto loans. The loosening of lending standards led to the rise of subprime mortgages, which were often underwritten using fraudulent practices. Investment Banks were only too happy to create mortgage backed securities (MBS) which were repackaged and sold across the globe to willing institutional investors. Misplaced financial incentives in banking were also a key cause of this mindless financial innovation.

The health of the entire global financial system thus rested on the ability of the US consumer to make regular payments on their debt obligations – especially on their mortgages. However, as artificially inflated housing prices began to decline and the rate of refinancing dropped, foreclosures assumed mammoth proportions. Global investors thus began to suffer significant losses. The crisis assumed the form of a severe liquidity crunch leading to a crisis of confidence among counterparties in the financial system.

Global & National Regulatory Authorities had to step in to conduct massive bailouts of banks. Yet stock markets suffered severe losses as housing markets collapsed, causing a large crisis of confidence. Central Banks & Federal Governments responded with massive monetary & fiscal policy stimulus, thus yet again crossing the line of Moral Hazard.

Risk Management practices in 2008 were clearly inadequate at multiple levels – department, firm and regulator. The point is well made that while the risks individual banks ran were seemingly rational on an individual level, taken as a whole the collective position was irrational & unsustainable. This failure to account for the complex global financial system was reflected across the chain of risk data aggregation, modeling & measurement.

 The Experience Shows That Risk Management Is A Complex Business & Technology Undertaking…

What makes Risk Management such a complex job is the nature of Global Banking circa 2016.

Banks today are complex entities engaged in many kinds of activities. The major ones include –

  • Retail Banking – Providing cookie cutter financial services ranging from collecting customer deposits to providing consumer loans and issuing credit cards. A POV on Retail Banking at – http://www.vamsitalkstech.com/?p=2323 
  • Commercial Banking – Banks provide companies with products ranging from business loans and depository services to other financial investments.
  • Capital Markets – Capital Markets groups provide underwriting services & trading services that engineer custom derivative trades for institutional clients (typically Hedge Funds, Mutual Funds, Corporations, Governments, high net worth individuals and Trusts) as well as for their own treasury group. They may also do proprietary trading on the bank's behalf for a profit – although it is this type of trading that the Volcker Rule seeks to eliminate. A POV on Capital Markets at – http://www.vamsitalkstech.com/?p=2175
  • Wealth Management – Wealth Management provides personal investment management, financial advisory and planning services directly for the benefit of high-net-worth (HNWI) clients. A POV on Wealth Management at – http://www.vamsitalkstech.com/?p=1447

Firstly, Banks have huge loan portfolios across all of the above areas (each with varying default rates) such as home mortgages, consumer loans and commercial loans. In the Capital Markets space, a Bank's book of financial assets gets more complex due to the web of counterparties across the globe and the range of complex assets such as derivatives. Complex assets mean complex mathematical models that calculate risk exposures across many kinds of risk. For the most part, these models did not take tail risk and wider systemic risk into account.

Secondly, markets turn in unison during periods of (downward) volatility – which ends up endangering the entire system. Finally, complex and poorly understood financial instruments in the derivatives market had made it easy for Banks to take on highly leveraged positions which placed their own firms & counterparties at downside risk. These models were entirely dependent on predictable historical data and never modeled “black swan” events. In other words, while the math may have been complex, it never took sophisticated scenario analysis into account.

Regulatory Guidelines ranging from Basel III to Dodd Frank to MiFID II to the FRTB (the new kid on the regulatory block) have been put in place by international and national regulators post 2008. The overarching goal is to prevent a repeat of the GFC, where taxpayers funded bailouts for managers of firms who profit immensely on the upside.

These Regulatory mandates & pressures have begun driving up Risk and Compliance expenditures to unprecedented levels. The Basel Committee guidelines on risk data aggregation & reporting (RDA), Dodd Frank, the Volcker Rule as well as regulatory capital adequacy legislation such as CCAR are causing a retooling of existing risk regimes. The Volcker Rule prohibits banks from trading on their own account (proprietary trading) & greatly curtails their investments in hedge funds. The regulatory intent is to avoid banker speculation with retail funds which are insured by the FDIC. Banks thus have to certify, across their large portfolios of positions, which trades have been entered for speculative purposes versus hedging purposes.

The impact of the Volcker Rule has been to shrink margins in the Capital Markets space as business moves to a flow based trading model that relies less on proprietary trading and more on managing trading for clients. At the same time, risk management is becoming more realtime in key areas such as market, credit and liquidity risks.

A POV on the FRTB (Fundamental Review of the Trading Book) follows later in this post.

Interestingly enough, one of the key players in the GFC was AIG – an insurance company whose Financial Products (FP) division really operated like a hedge fund, insuring downside risk it never thought it would need to pay out on.

Which Leads Us to the Insurance Industry…

For most of their long existence, insurance companies were relatively boring – they essentially provided protection against adverse events such as loss of property, life & health risks. The consumer of insurance products is a policyholder who makes regular payments, called premiums, to cover themselves. The major lines of insurance business can be classified into life insurance, non-life insurance and health insurance. Non-life insurance is also termed P&C (Property and Casualty) Insurance. Insurers invest the premiums they collect in relatively safer areas such as corporate bonds.

Risks In the Insurance Industry & Solvency II…

While the business model in insurance is essentially inverted & more predictable as compared to banking, insurers have to grapple with ensuring that enough reserves have been set aside for payouts on policyholder claims. It is very important for them to hold a diversified investment portfolio – both from a sector as well as a geographical exposure standpoint – and to ensure that profitability does not suffer due to defaults on these investments.

Firms thus need to constantly calculate and monitor their liquidity positions & risks. Further, insurers are constantly entering into agreements with banks and reinsurance companies – which also exposes them to counterparty credit risk.

From a global standpoint, it is interesting that US based insurance firms are largely regulated at the state level while non-US firms are regulated at the national level. The point is well made that insurance firms have had a culture of running a range of departmentalized analytics as compared to the larger scale analytics that the Banks described above need to run.

In the European Union, all member countries (including, at the time, the United Kingdom) are expected to adhere to Solvency II [2] from 2016. Solvency II replaces the long standing Solvency I regime.

Whereas Solvency I calculates capital only for underwriting risks, Solvency II is quite similar to Basel II – discussed below – and imposes guidelines for insurers to calculate investment as well as operational risks.


There are three pillars to Solvency II [2].

  • Pillar 1 sets out quantitative rules and is concerned with the calculation of capital requirements and the types of capital that are eligible.
  • Pillar 2 is concerned with the requirements for the overall insurer supervisory review process &  governance.
  • Pillar 3 focuses on disclosure and transparency requirements.

The three pillars are therefore analogous to the three pillars of Basel II.

Why Bad Data Practices will mean Poor Risk Management & higher Capital Requirements under Solvency II..

While a detailed discussion of Solvency II will follow in a later post, it imposes new data aggregation, governance and measurement criteria on insurers –

  1. The need to identify, measure and offset risks across the enterprise and often in realtime
  2. Better governance of risks across not just historical data but also fresh data
  3. Running simulations that take in a wider scope of measures as opposed to a narrow spectrum of risks
  4. Timely and accurate Data Reporting

The same issues that hobble banks in the Data Landscape are sadly to be found in insurance as well.

The key challenges with current architectures –

  1. A high degree of data duplication from system to system leads to multiple inconsistencies at both the summary and transaction levels. Because different groups perform different risk reporting functions (e.g. Credit and Market Risk), the feeds, the ingestion and the calculators end up being duplicated as well.
  2. Traditional risk algorithms cannot scale with this explosion of data, nor with the heterogeneity inherent in reporting across the multiple kinds of risk needed for Solvency II. E.g. certain kinds of Credit Risk calculations need access to years of historical data to estimate the probability of a counterparty defaulting and to obtain a statistical measure of the same. All of these analytics are highly computationally intensive.
  3. Risk model and analytic development needs to be standardized to reflect realities post Solvency II. Solvency II also implies that, from an analytics standpoint, a large number of scenarios need to be run on a large volume of data. Most insurers will need to standardize their analytic libraries across their various LOBs. If insurers do not look to move to an optimized data architecture, they will incur tens of millions of dollars in additional hardware spend. (A minimal simulation sketch follows this list.)
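To make the scenario-analytics point concrete, below is a minimal, illustrative sketch – not a Solvency II internal model – of the kind of vectorized scenario simulation an insurer's investment risk team might run. The portfolio buckets, return assumptions and correlations are entirely hypothetical; the 99.5% one-year loss quantile simply echoes the calibration level Solvency II uses for capital.

```python
# Minimal sketch: Monte Carlo simulation of a hypothetical insurer's investment
# portfolio, diversified across sectors and geographies, to estimate a
# 99.5% one-year loss quantile. All inputs below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical holdings (EUR millions) by sector/geography bucket
holdings = np.array([400.0, 250.0, 150.0, 120.0, 80.0])
labels = ["EU corp bonds", "US corp bonds", "EU sovereigns", "Global equity", "Real estate"]

# Assumed annual return means, volatilities and correlations for the buckets
mu = np.array([0.02, 0.025, 0.015, 0.06, 0.04])
vol = np.array([0.05, 0.06, 0.03, 0.18, 0.10])
corr = np.array([
    [1.0, 0.6, 0.4, 0.3, 0.2],
    [0.6, 1.0, 0.3, 0.3, 0.2],
    [0.4, 0.3, 1.0, 0.1, 0.1],
    [0.3, 0.3, 0.1, 1.0, 0.4],
    [0.2, 0.2, 0.1, 0.4, 1.0],
])
cov = np.outer(vol, vol) * corr

# Run a large number of one-year scenarios in one vectorized step
n_scenarios = 200_000
returns = rng.multivariate_normal(mu, cov, size=n_scenarios)
pnl = returns @ holdings          # portfolio P&L per scenario
loss = -pnl

# Solvency II style figure: the 99.5% quantile of the one-year loss distribution
scr_estimate = np.percentile(loss, 99.5)
print(f"Estimated 99.5% one-year loss quantile: EUR {scr_estimate:.1f}m")
```

Even this toy example hints at why standardized, efficient analytic libraries matter – real portfolios carry thousands of positions and far richer risk factor models, multiplying the compute required.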

Summary

We have briefly covered the origins of regulatory risk management in both banking and insurance. Though the respective business models vary across both verticals, there is a good degree of harmonization in the regulatory progression. The question is whether insurers can learn from the bumpy experiences of their banking counterparts in the areas of risk data aggregation and measurement.

References..

[1] https://en.wikipedia.org/wiki/Financial_crisis_of_2007%E2%80%9308

[2] https://en.wikipedia.org/wiki/Solvency_II_Directive_2009

A POV on the FRTB (Fundamental Review of the Trading Book)…

Regulatory Risk Management evolves…

The Basel Committee on Banking Supervision – a supranational supervisory body – was put in place to ensure the stability of the financial system. The Basel Accords are the frameworks that essentially govern the risk taking actions of a bank. To that end, minimum regulatory capital standards are introduced that banks must adhere to. The Bank for International Settlements (BIS), established in 1930, is the world's oldest international financial consortium, with 60+ member central banks representing countries from around the world that together make up about 95% of world GDP. The BIS stewards and maintains the Basel standards in conjunction with member banks.

The goal of the Basel Committee and the Financial Stability Board (FSB) guidelines is to strengthen the regulation, supervision and risk management of the banking sector by improving risk management and governance. These have taken on an increased focus to ensure that a repeat of the 2008 financial crisis does not come to pass again. Basel III (building upon Basel I and Basel II) also sets new criteria for financial transparency and disclosure by banking institutions.

Basel III – the latest prominent version of the Basel standards (named for the town of Basel in Switzerland where the committee meets) – prescribes enhanced measures for capital & liquidity adequacy and was developed by the Basel Committee on Banking Supervision with voluntary worldwide applicability. Basel III covers credit, market, and operational risks as well as liquidity risks. As is well known, the BCBS 239 guidelines do not just apply to the G-SIBs (the Globally Systemically Important Banks) but also to the D-SIBs (Domestic Systemically Important Banks). Any important financial institution deemed “too big to fail” needs to work with the regulators to develop a “set of supervisory expectations” that would guide risk data aggregation and reporting.

Basel III & other Risk Management topics were covered in these previous posts – http://www.vamsitalkstech.com/?p=191 & http://www.vamsitalkstech.com/?p=667

Enter the FRTB (Fundamental Review of the Trading Book)…

In May 2012, the Basel Committee on Banking Supervision (BCBS) issued a consultative document with the intention of revising the way capital was calculated for the trading book. These guidelines, which can be found in their final form in [1], were repeatedly refined based on comments from various stakeholders & quantitative studies. In Jan 2016, the final version of this paper was released. These guidelines are now termed the Fundamental Review of the Trading Book (FRTB) or, unofficially, as some industry watchers have termed it, Basel IV.

What is new with the FRTB…

The main changes the BCBS has made with the FRTB are – 

  1. Changed Measure of Market Risk – The FRTB proposes a fundamental change to the measure of market risk. Market Risk will now be calculated and reported via Expected Shortfall (ES) as the new standard measure as opposed to the venerated (& long standing) Value at Risk (VaR). As opposed to the older method of VaR with a 99% confidence level, expected shortfall (ES) with a 97.5% confidence level is proposed. It is to be noted that for normal distributions the two metrics are approximately equal, but ES is much superior at measuring the long tail. This is a recognition that in times of extreme economic stress, there is a tendency for multiple asset classes to move in unison. Consequently, under the ES method capital requirements are anticipated to be much higher. (A minimal numerical sketch of ES versus VaR follows this list.)
  2. Model Creation & Approval – The FRTB also changes how models are approved & governed. Banks that want to use the IMA (Internal Model Approach) need to pass a set of rigorous tests so that they are not forced to use the Standard Rules approach (SA) for capital calculations. The fear is that the SA will increase capital requirements. The old IMA approach has now been revised and made more rigorous in a way that enables supervisors to remove internal modeling permission for individual trading desks. This approach now enforces more consistent identification of material risk factors across banks, and constraints on hedging and diversification. All of this is now going to be done at the desk level instead of the entity level. The FRTB moves the responsibility of demonstrating compliant models, significant backtesting & PnL attribution to the desk level.
  3. Boundaries between the Regulatory Books – The FRTB also assigns explicit boundaries between the trading book (the instruments the bank intends to trade) and the banking book (the instruments held to maturity). These rules have been redefined in such a way that banks now have to contend with stringent rules for internal transfers between both. The regulatory motivation is to eliminate a given bank's ability to arbitrarily designate individual positions as belonging to either book. Given the different accounting treatment for both, there is a feeling that banks were resorting to capital arbitrage with the goal of minimizing regulatory capital reserves. The FRTB also introduces more stringent reporting and data governance requirements for both books, which, in conjunction with the well defined boundary between them, should lead to a much better regulatory framework & also a revaluation of the structure of trading desks.
  4. Increased Data Sufficiency and Quality – The FRTB regulation also introduces Non-Modellable Risk Factors (NMRF). Risk factors are non-modellable if the availability and sufficiency of the underlying data are an issue. Thus, with the NMRF, banks now face increased data sufficiency and quality requirements for the data that goes into the model itself. This is a key point, the ramifications of which we will discuss in the next section.
  5. A Revised Standardized Approach – The FRTB also upgrades the standardized approach with a new sensitivities based approach (SBA) which is more sensitive to various risk factors across different asset classes as compared to the Basel II SA. Regulators now prescribe the risk factors and buckets against which sensitivities must be computed, and approvals will be granted at the desk level rather than at the entity level. The revised SA should provide a consistent way to measure risk across geographies and regions, giving regulators a better way to compare and aggregate systemic risk. The sensitivities based approach should also allow banks to share a common infrastructure between the IMA approach and the SA approach. There are a set of buckets and risk factors prescribed by the regulator to which instruments can then be mapped.
  6. Models must be seeded with real and live transaction data – Fresh & current transactions will now need to be entered into the calculation of capital requirements as of the date on which they were conducted. Moreover, though reporting will take place at regular intervals, banks are now expected to manage market risks on a continuous, almost daily basis.
  7. Time Horizons for Calculation – There are also enhanced requirements for data granularity depending on the kind of asset. The FRTB does away with the generic 10 day time horizon for market variables in Basel II in favor of time periods based on the liquidity of these assets. It proposes five different liquidity horizons – 10 day, 20 day, 60 day, 120 day and 250 days.
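As a minimal numerical sketch of the change called out in item 1 (and the horizons in item 7), the snippet below computes historical-simulation VaR at 99% and ES at 97.5% on a simulated, fat-tailed P&L series, then applies a naive square-root-of-time scaling to the FRTB liquidity horizons. The P&L data and the scaling rule are illustrative assumptions only – the actual FRTB methodology cascades stressed ES across liquidity horizon buckets and is considerably more involved.

```python
# Minimal sketch: 99% VaR vs 97.5% Expected Shortfall on a hypothetical,
# fat-tailed daily P&L series, plus a naive scaling to the FRTB liquidity horizons.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical daily P&L: Student-t draws to mimic fat tails in stressed markets
daily_pnl = 1_000_000 * rng.standard_t(df=4, size=250)
losses = -daily_pnl

var_99 = np.percentile(losses, 99.0)                             # 99% Value at Risk
es_975 = losses[losses >= np.percentile(losses, 97.5)].mean()    # 97.5% Expected Shortfall

print(f"1-day 99% VaR:  {var_99:,.0f}")
print(f"1-day 97.5% ES: {es_975:,.0f}  (average loss beyond the 97.5% quantile)")

# Illustrative (not the FRTB cascade) sqrt-of-time scaling to the five horizons
for horizon in [10, 20, 60, 120, 250]:
    scaled_es = es_975 * np.sqrt(horizon)
    print(f"{horizon:3d}-day scaled ES (illustrative): {scaled_es:,.0f}")
```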

FRTB_Horizons

                                 Illustration: FRTB designated horizons for market variables (src – [1])

To Sum Up the FRTB… 

The FRTB rules are now clear and they will have a profound effect on how market risk exposures are calculated. The FRTB clearly calls out the specific instruments that belong in the trading book versus the banking book. The switch from VaR at a 99% confidence level to Expected Shortfall (ES) at 97.5% should cause increased reserve requirements. Furthermore, the ES calculations will be done keeping in mind the liquidity of the underlying instruments, with a historical simulation approach ranging from 10 days to 250 days of stressed market conditions. Banks that use a pure IMA approach will now have to move to IMA plus the SA method.

The FRTB compels Banks to create unified teams from various departments – especially Risk, Finance, the Front Office (where trading desks sit) and Technology to address all of the above significant challenges of the regulation.

From a technology capabilities standpoint, the FRTB presents banks with a data volume, velocity and analysis challenge. Let us now examine the technology ramifications.

Technology Ramifications around the FRTB… 

The FRTB rules herald a clear shift in how IT architectures work across the Risk area and the Back office in general.

  1. The FRTB calls for a single source of data that pulls data across silos of the front office, trade data repositories, a range of BORT (Book of Record Transaction) systems etc. With the FRTB, source data needs to be centralized and available in one location where every feeding application can trust its quality.
  2. With both the IMA and the SBA in the FRTB, many more detailed & granular data inputs (across desks & departments) need to be fed into the ES (Expected Shortfall) calculations from varying asset classes (Equity, Fixed Income, Forex, Commodities etc) across multiple scenarios. The calculator frameworks developed or enhanced for the FRTB will need ready & easy access to realtime data feeds in addition to historical data. At the firm level, the data requirements and the calculation complexity will be even higher as they need to include the entire position book.

  3. The various time horizons called out also increase the need to run a full spectrum of analytics across many buckets. The analytics themselves will be more complex than before, with multiple teams working on all of these areas. This calls for standardization of the calculations themselves across the firm.

  4. Banks will have to also provide complete audit trails both for the data and the processes that worked on the data to provide these risk exposures. Data lineage, audit and tagging will be critical.

  5. The number of runs required for regulatory risk exposure calculations will dramatically go up under the new regime. The FRTB requires that each risk class be calculated separately from the whole set. Couple this with the increased windows of calculation discussed in #3 above and more compute processing power and vectorization will be needed.

  6. The FRTB also implies that, from an analytics standpoint, a large number of scenarios need to be run on a large volume of data. Most Banks will need to standardize their libraries across the house. If Banks do not look to move to a Big Data Architecture, they will incur tens of millions of dollars in hardware spend. (A minimal sketch of a desk-level calculation on such an architecture follows this list.)
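To illustrate the kind of desk-level, scenario-based calculation that a centralized Big Data architecture enables, here is a minimal PySpark sketch. The table location and column names (desk, risk_class, scenario_id, pnl) are hypothetical illustrations, and the flat ES computation is a simplification, not the full FRTB IMA methodology.

```python
# Minimal sketch: per-desk, per-risk-class 97.5% Expected Shortfall across
# scenarios, computed on a hypothetical P&L table in the centralized data lake.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("frtb-desk-es-sketch").getOrCreate()

# Hypothetical position-level P&L per scenario: desk, risk_class, scenario_id, pnl
pnl = spark.read.parquet("/datalake/risk/positions_pnl")

# Roll position-level P&L up to desk/risk-class level for each scenario first
desk_pnl = (pnl.groupBy("desk", "risk_class", "scenario_id")
               .agg(F.sum("pnl").alias("pnl")))
losses = desk_pnl.withColumn("loss", -F.col("pnl"))

# 97.5% loss quantile per desk and risk class
quantiles = (losses.groupBy("desk", "risk_class")
                   .agg(F.expr("percentile_approx(loss, 0.975)").alias("q975")))

# Expected Shortfall: average loss beyond the 97.5% quantile
es = (losses.join(quantiles, ["desk", "risk_class"])
            .where(F.col("loss") >= F.col("q975"))
            .groupBy("desk", "risk_class")
            .agg(F.avg("loss").alias("expected_shortfall_975")))

es.show(truncate=False)
```

The design point is that every desk reuses the same standardized calculation against the same trusted source data, rather than maintaining its own feed, its own copy and its own calculator.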

The FRTB is the most pressing in a long list of Data Challenges facing Banks… 

The FRTB is yet another regulatory mandate that lays bare the data challenges facing every Bank. Current regulatory risk architectures are based on traditional relational database (RDBMS) architectures with tens of feeds from Core Banking Systems, Loan Data, Book of Record Transaction Systems (BORTS) like Trade & Position Data (e.g. Equities, Fixed Income, Forex, Commodities, Options etc), Wire Data, Payment Data, Transaction Data etc.

These data feeds are then tactically placed in memory caches or in enterprise data warehouses (EDW). Once the data has been extracted, it is transformed using a series of batch jobs which prepare the data for the Calculator Frameworks that run the risk models on it.

All of the above applications need access to medium to large amounts of data at the individual transaction level. The Corporate Finance function within the Bank then makes end of day adjustments to reconcile all of this data, and these adjustments need to be cascaded back to the source systems down to the individual transaction or classes of transaction levels.

These applications are typically deployed on clusters of bare metal servers that are not particularly suited to portability, automated provisioning, patching & management. In short, nothing that can automatically be moved over at a moment's notice. These applications also run on legacy proprietary technology platforms that do not lend themselves to a flexible, DevOps style of development.

Finally, there is always a need for statistical frameworks to make adjustments to customer transactions that somehow need to get reflected back in the source systems. All of these frameworks need to have access to, and an ability to work with, terabytes (TBs) of data.

Each of the above mentioned risk work streams has corresponding data sets, schemas & event flows that it needs to work with, with different temporal needs for reporting: some need to be run a few times a day (e.g. Traded Credit Risk), some daily (e.g. Market Risk) and some at the end of the week (e.g. Enterprise Credit Risk).

One of the chief areas of concern is that the FRTB may require a complete rewrite of analytics libraries. Under the FRTB, front office libraries will need to do Enterprise Risk – a large number of analytics on a vast amount of data. Front office models cannot make all the assumptions that enterprise risk can to price a portfolio accurately. Front office systems run a limited number of scenarios, thus trading off accuracy for timeliness – as opposed to enterprise risk.

Most banks have stringent vetting processes in place and all the rewritten analytic assets will need to be passed through them. Every aspect of the math of the analytics needs to go through this rigorous process. All of this will add to compliance costs, as vetting typically costs a multiple of the rewrite itself. The FRTB has put in place stringent model validation standards along with hypothetical portfolios to benchmark them against.
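As one concrete example of what desk-level model validation involves, the sketch below runs the familiar Basel "traffic light" style backtest: counting how often the realized daily loss exceeded the model's 1-day 99% VaR over 250 trading days. The P&L series and the VaR figure here are simulated stand-ins; the zone thresholds (4 and 9 exceptions) follow the standard Basel convention for 250 observations at the 99% level.

```python
# Minimal sketch of a desk-level VaR backtest: count exceptions against a
# hypothetical 1-day 99% VaR over 250 days and map them to traffic-light zones.
import numpy as np

rng = np.random.default_rng(7)

n_days = 250
realized_pnl = 1_000_000 * rng.standard_t(df=5, size=n_days)  # hypothetical realized P&L
model_var_99 = np.full(n_days, 2_300_000.0)                   # hypothetical model 99% VaR

# An exception occurs when the realized loss exceeds the model's VaR
exceptions = int(np.sum(-realized_pnl > model_var_99))

# Basel traffic-light zones for 250 observations at 99% confidence
if exceptions <= 4:
    zone = "green"
elif exceptions <= 9:
    zone = "yellow"
else:
    zone = "red"

print(f"VaR exceptions over {n_days} days: {exceptions} -> {zone} zone")
```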

The FRTB also requires data lineage and audit capabilities for the data. Banks will need to establish a visual representation of the overall process as data flows from the BORT systems to the reporting applications. All data assets have to be catalogued and a thorough metadata management process instituted.

What Must Bank IT Do… 

Given all of the above data complexity and the need to adopt agile analytical methods – what is the first step that enterprises must take?

There is a need for Banks to build a unified data architecture – one which can serve as a cross organizational repository of all desk level, department level and firm level data.

The Data Lake is an overarching data architecture pattern. Let's define the term first. A data lake is two things – a data storage repository (small or massive) and a data processing engine. A data lake provides “massive storage for any kind of data, enormous processing power and the ability to handle virtually limitless concurrent tasks or jobs“. Data Lakes are created to ingest, transform, process, analyze & finally archive large amounts of any kind of data – structured, semistructured and unstructured.

The Data Lake is not just a data storage layer but one that allows different users (traders, risk managers, compliance etc) to plug in calculators that work on data spanning intra day activity as well as data across years. Calculators can then be designed to work on this data across multiple runs to calculate Risk Weighted Assets (RWAs) over multiple calibration windows.

The illustration below depicts the goal – a cross company data lake containing all asset data and the compute applied to that data.

RDA_Vamsi

                              Illustration – Data Lake Architecture for FRTB Calculations

1) Data Ingestion: This encompasses creation of the L1 loaders to take in Trade, Position, Market, Loan, Securities Master, Netting and Wire Transfer data etc across trading desks. Developing the ingestion portion will be the first step to realizing the overall architecture, as timely data ingestion is a large part of the problem at most institutions. Part of this process includes a) ingesting data from the highest priority systems and b) applying the correct governance rules to the data. The goal is to create these loaders for versions of the different source systems (e.g. Calypso 9.x) and to maintain them as part of the platform moving forward. The first step is to understand the range of Book of Record transaction systems (lending, payments and transactions) and the feeds they send out. The goal would then be to map these feeds to loaders on a release of an enterprise grade Open Source Big Data Platform, e.g. HDP (Hortonworks Data Platform), so these can be maintained going forward. (A minimal sketch of the L1 and L2 loader steps follows this list.)

2) Data Governance: These are the L2 loaders that apply the rules to the critical fields for Risk and Compliance. The goal here is to look for gaps in the data and any obvious quality problems involving range or table driven data. The purpose is to facilitate data governance reporting.

3) Entity Identification: This step is the establishment and adoption of a lightweight entity ID service. The service will consist of entity assignment and batch reconciliation.

4) Developing L3 loaders: This phase will involve defining the transformation rules that are required in each risk, finance and compliance area to prep the data for their specific processing.

5) Analytic Definition: Defining and running the analytics that are to be used for FRTB.

6) Report Definition: Defining the reports that are to be issued for each risk and compliance area.
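Below is a minimal PySpark sketch of the L1 ingestion and L2 governance-check steps described above. The feed layout, field names, paths and quality rules are hypothetical illustrations rather than the schema of any particular BORT system or distribution-specific tooling.

```python
# Minimal sketch: an L1 loader that ingests a raw trade feed with an explicit
# schema and lineage tags, followed by simple L2 governance checks on critical fields.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, DateType

spark = SparkSession.builder.appName("l1-l2-loader-sketch").getOrCreate()

# L1 loader: read a hypothetical raw trade feed with an explicit schema
trade_schema = StructType([
    StructField("trade_id", StringType(), False),
    StructField("desk", StringType(), True),
    StructField("counterparty_id", StringType(), True),
    StructField("notional", DoubleType(), True),
    StructField("trade_date", DateType(), True),
])

raw_trades = (spark.read
              .option("header", "true")
              .schema(trade_schema)
              .csv("/landing/trades/2016-01-29/"))     # hypothetical landing zone

l1_trades = (raw_trades
             .withColumn("source_system", F.lit("bort_trading"))   # lineage tag
             .withColumn("ingest_ts", F.current_timestamp()))

l1_trades.write.mode("append").partitionBy("trade_date").parquet("/datalake/l1/trades/")

# L2 loader: apply simple governance rules to critical fields and split out exceptions
quality_rules = (
    F.col("trade_id").isNotNull()
    & F.col("counterparty_id").isNotNull()
    & (F.col("notional") > 0)
)

clean = l1_trades.where(quality_rules)
exceptions = l1_trades.where(~quality_rules)

clean.write.mode("append").partitionBy("trade_date").parquet("/datalake/l2/trades/")
exceptions.write.mode("append").parquet("/datalake/governance/trade_exceptions/")
```

The exceptions table then becomes the raw material for the data governance reporting mentioned in step 2, while the clean L2 data feeds the transformation and analytic steps that follow.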

References..

[1] https://www.bis.org/bcbs/publ/d352.pdf