The previous post made it clear that the series of market events that led to the Great Financial Crisis of 2008 resulted from poor risk management practices in the banking system. The worst financial crisis since the Great Depression of the 1930s, it led to the liquidation or bankruptcy of major investment banks and insurance companies, an exercise of ‘moral hazard’, and severe consequences for the economy in terms of job losses, credit losses and a general loss of public confidence in the workings of the financial system as a whole.
Improper and inadequate management of one major kind of financial risk – liquidity risk – was a central factor in the series of events in 2007 and 2008 that culminated in the failure of major investment banks, including Lehman Brothers and Bear Stearns, and in a full-blown liquidity crisis. These banks had taken highly leveraged positions in the mortgage market, with massive debt-to-asset ratios, and were unable to liquidate assets to wind down those positions and make the debt payments needed to stay afloat as going concerns. This in turn triggered counterparty risk: the hundreds of other firms they did business with – counterparties who would otherwise have been willing to extend credit to their trading partners – began refusing credit, creating the oft-cited “credit crunch”.
Inadequate IT systems – in terms of data management, reporting and agile delivery – are widely blamed for this lack of transparency into risk accounting, the critical function that makes all the difference between well and poorly managed banking architectures.
At its core this is a data management challenge, and the regulators now recognize that.
Thus, the Basel Committee and the Financial Stability Board (FSB) have published an addendum to Basel III widely known as BCBS 239 (BCBS = Basel Committee on Banking Supervision), providing guidance to enhance banks’ ability to identify and manage bank-wide risks. The BCBS 239 guidelines apply not just to the G-SIBs (the globally systemically important banks) but also to the D-SIBs (domestic systemically important banks). Any important financial institution deemed “too big to fail” needs to work with the regulators to develop a “set of supervisory expectations” to guide risk data aggregation and reporting.
The document can be read below in its entirety and covers four broad areas: a) improved risk aggregation, b) governance and management, c) enhanced risk reporting and d) regular supervisory review.
The business ramifications of BCBS 239 (banks are expected to comply by 2016):
1. Banks shall measure risk across the enterprise, i.e. across all lines of business and across what I like to call “internal” domains (finance, compliance, GL & risk) and “external” domains (capital markets, retail, consumer, cards etc.).
2. All key risk measurements need to be consistent and accurate across the above internal and external domains, and across multiple geographies and regulatory jurisdictions. A 360-degree view of every risk type is needed, and it must be consistent, without discrepancies.
3. Delivery of these reports needs to be flexible and timely, on an on-demand basis as needed.
4. Banks need to have strong data governance and ownership functions in place to govern this data across a complex organizational structure.
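To make the first two requirements concrete, here is a minimal sketch (in Python) of what bank-wide risk aggregation means in practice: exposures reported by individual lines of business, tagged by internal/external domain, are rolled up into a single consistent view per risk type. All domain names, lines of business and figures below are invented for illustration; a real implementation would sit on governed, reconciled data sources rather than an in-memory list.

```python
from collections import defaultdict

# Hypothetical exposure records: (domain, line_of_business, risk_type, exposure_in_millions).
# Domains follow the "internal"/"external" split described above; figures are made up.
exposures = [
    ("external", "Capital Markets", "credit",    120.0),
    ("external", "Retail",          "credit",     80.0),
    ("external", "Cards",           "credit",     35.0),
    ("external", "Capital Markets", "market",    210.0),
    ("internal", "Treasury",        "liquidity",  60.0),
]

def aggregate_by_risk_type(records):
    """Roll up exposures across all domains and lines of business into
    one bank-wide total per risk type (credit, market, liquidity, ...)."""
    totals = defaultdict(float)
    for domain, lob, risk_type, amount in records:
        totals[risk_type] += amount
    return dict(totals)

if __name__ == "__main__":
    for risk_type, total in sorted(aggregate_by_risk_type(exposures).items()):
        print(f"{risk_type}: {total:.1f}M")
```

The point of the sketch is the shape of the problem, not the code: every record carries enough metadata (domain, line of business, risk type) that a single aggregation pass can produce the 360-degree, discrepancy-free view the regulators expect.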
In the next post, we will get into the technology ramifications and understand why current process and data management approaches are just not working. We will also look into why a fresh approach – a Big Data enabled architecture – can serve as the foundation for risk management of any kind: credit, market, operational, liquidity, counterparty etc. Innovation in this key area helps the early adopters outpace the competition. The best managed banks manage their risks the best.