Amazon declares results.. and stuns!


I hadn’t originally planned on writing this post but couldn’t resist as it is at the confluence of two passions of mine – information technology and the financial markets.

For those of you who follow the stock market, Amazon declared their quarterly results on July 23, a few days ago, and I believe this event marks a momentous shift in the technology business landscape.

First, let's do the numbers and examine why they signify a blowout quarter in ways beyond just the impressive headline figures.

Amazon reported an explosive quarter: a topline of $23 billion in revenue, a y/y increase of 27%, while even turning a rare profit of $92 million! For those new to their story, Amazon has always bucked conventional thinking by neglecting short-term profits in favor of investing heavily in nascent businesses and constantly finding efficiencies in its operating model. The stock shares a somewhat mixed relationship with Wall St.

Amazon's public cloud business, Amazon Web Services (AWS), reported $1.6 billion in quarterly revenue, a run rate of $6 billion for the year. An astounding number, made all the more significant by the fact that AWS is hugely profitable compared to Amazon's other businesses (Kindle, Retail etc.), representing about 40% of profit on about 8% of overall sales. Jeff Bezos's then-quixotic venture into cloud computing is now looking remarkably prescient (see BusinessWeek cover below). AWS is now key to Amazon's growth. Cloud computing has well and truly arrived. An ecosystem of pioneers – Netflix, Airbnb, Zillow etc. – all use AWS as their primary technology platform.


Of course, Wall Street took note, sending the stock sharply upwards for an intraday gain of 10%. Amazon's cloud has now pulled way ahead of even Microsoft's; it is rumored to have 7-10 times the capacity of all its competition combined.

Second, Amazon (around 20 years old) is now the world's most valuable retailer, handily overtaking the venerable Walmart (established 1962), and its market capitalization is now more than five times that of Target's.

Of course, there are no guarantees in this business, but one can confidently state that Amazon is the business every retailer or tech provider now desperately wants to emulate. Its competitive moat widens quarter by quarter across its slew of businesses, especially in Cloud, which now generates the bulk of its profits.

CNBC published a rather interesting article a few days ago in which Bob Pisani reported that the tech-heavy Nasdaq Composite index was near record highs but was being driven primarily by just four stocks.

Market capitalization (in billions) as of July 20, 2015:
Apple: $747
Google: $468
Facebook: $268
Amazon: $220
Total: $1.7 trillion

These four web scale giants comprised 31% of the market cap of the Nasdaq and were responsible for driving up the index to historic highs. Apple, of course, is now the most valuable company in the world.

If you recollect, these were the very four companies we discussed on the blog a few weeks ago as examples of businesses that have leveraged IT in the most innovative manner possible to create new markets (I reproduce a snippet from that blogpost below).

The provenance of the term "WebScale IT" is the recognition that the Web scale giants led by the Big Four – Google, Amazon, Facebook and Apple – have built robust platforms (as opposed to standalone or loosely federated applications) that have not only contributed to their outstanding business success but have also led to the creation of (open source) software technologies that enable business systems to operate at massive scale – billions of users across millions of systems. They have done all this while constantly churning out innovative offerings and continuously adapting to & learning from customer feedback. No mean feat.

Why is this interesting and why should an IT practitioner care?

In the earlier blogpost, I had outlined five major foundational capabilities that these pioneering organizations live by –
1. An outstanding approach to digitizing their product lines in a way that supports superior customer interactions and constantly creates new value.

2. Using data in a way that improves efficiencies in back end supply chains as well as creating micro opportunities in every customer interaction. As an example, on the earnings call, their CFO touted Amazon's use of robotics in its large warehouses to lower costs. "We're using software and algorithms to make decisions rather than people, which we think is more efficient and scales better," he said. Used right, data is enterprise destiny.

3. A relentless approach to cutting costs by adopting open source as well as not being afraid to incubate & even invent open technologies as applicable.

4. A mindset that constantly learns and collaborates from customer interactions as well as a flat hierarchy that encourages open communication. Amazon is famous for having a lean management chain and a culture of continuous delivery.

5. Building for the future by inculcating disruption into the organizational DNA. This is done by generating new ideas, being unafraid to cannibalize older (and even profitable) product lines and constantly experimenting across new businesses. Reproduced below is a mind-map from Brad Stone of Bloomberg on Amazon's many businesses.


Not every business needs to be as forward looking as Amazon, but if you are looking to leave your competition behind, your IT & data architecture definitely need a re-look, along with a culture of unafraid experimentation & invention. And yes, a lot of those ideas will fail, but you only need a few to succeed to realize massive business value.







Why Google’s embrace of the OpenStack consortium matters..


Google just announced a few days ago [1] that they are joining the OpenStack Foundation as a sponsor member – which got me massively excited. This is both a glowing endorsement of and a shot in the arm for the rapidly maturing OpenStack – the umbrella IaaS project – something I've always thought of as the de facto private cloud platform, and one that is beginning to see massive adoption in the enterprise. This follows Google's move, in 2014, to open source its container management project – Kubernetes – with contributions from Red Hat, Microsoft and others.

So let's parse what this means for CTOs, CIOs and other IT leaders considering OpenStack, or cloud technology in general, from both a strategic and technical direction –

  1. Google is the third largest public cloud provider behind Amazon AWS and Microsoft Azure. This makes OpenStack definitely all the more viable (if it was not already before) as a technology to predicate your private cloud on. Expect to see more hybrid cloud capabilities in OpenStack going forward
  2. Docker and container based technology are on the march to becoming the de facto way of building and operating applications at scale. If you are designing agile applications that intend to serve business needs – you definitely need to consider Docker, Kubernetes, Mesos etc
  3. The killer app for the cloud is applications. Thus, I hold that every new business project now needs to be cloud enabled from the get go and old ones reconfigured as appropriate. As you make this imminent move, consider Platform as a Service (PaaS) providers like OpenShift – which largely abstract away the complexity of using such new age technology (Docker & Kubernetes) and also provide out of the box integration with best of breed tools like Jenkins, Git, Maven etc
  4. We are not too far off from a future where applications developed on a PaaS run seamlessly on top of an OpenStack managed cloud with enterprise integration with everything from compute, network, storage, front-office (service catalogs and self service portals) & back-office systems (billing & chargeback)
  5. The first prong of webscale is Data. Expect to find full stack Hadoop clusters running in container based architectures in a year or two. Hortonworks has already begun the work of being able to provision and manage Hadoop across multiple providers, with its acquisition and integration of SequenceIQ
  6. Google runs the world’s largest container workloads in their datacenters. They launch about 2 billion containers a week[1] and virtually every service at Google runs in a container – Gmail, Search etc. This kind of significant webscale expertise will only enrich OpenStack over time & lead to architectures that are multitenant at massive scale and are higher density than is currently possible with VMs. Are VMs going away anytime soon? No, but it will be a mixed world full of exciting technology choices for developers to make
  7. Application workloads that are certified on OpenStack can be designed to leverage the patterns of microservice based development & deployment that underpin Kubernetes. Customers develop (Docker) container based applications and let Kubernetes handle the placement and workload management
  8. I blogged about a month or so ago about OpenStack use cases and the fact that OpenStack could potentially be used as primarily a container based workload provisioning technology. With this move by Google, we are closer than ever to organizations running container based clouds. Expect to see organizations building out full scale vertical clouds that are only container based and OpenStack controlled
  9. The Kubernetes framework will be certified to work on OpenStack. Kubernetes can manage container deployments as a whole, which makes it a huge win for anyone running real world applications that span multiple containers and complex datacenter deployments. OpenStack's nascent container project – Magnum – uses APIs to provision and manage containers. These containers run on instances provisioned by Nova (VM or baremetal). Magnum already leverages Kubernetes for orchestration & supports Docker as the core container technology. The building blocks are all there to build exciting digital applications that are highly automated from a sysadmin's perspective & engaging from a customer's perspective
  10. With this move, everyone that is someone in tech is a member of the OpenStack consortium, the last two holdouts being Amazon and Microsoft. And that is not a ding against these two behemoths. Amazon is an amazingly innovative business and constantly morphs from one avatar to the next. Microsoft has not only managed to stay relevant but is also leading the move to the public cloud with a panoply of exciting offerings. All said and done, enterprise IT now has some amazing choices to architect the workloads that their business leaders and stakeholders demand.

Given this upsurge in the community and as the OpenStack train leaves the station, lock-in should not be a top concern on any enterprise IT practitioner’s mind.

References –

  1. OpenStack blog

  2. Fortune article

All aboard! Google climbs on the OpenStack train

Why Digital Transformation should force industry CIOs to think Big Data, Webscale & Opensource..

"You better start swimming, or you'll sink like a stone. For the times they are a-changin'." – Bob Dylan (from the song)

I have long been an advocate of enterprise organizations giving themselves a makeover, albeit a calculated one, by building out an elastic, flexible and agile IT operation in areas of the business that demand such innovation. Gartner has a popular moniker for this – "Bimodal IT".

And then I read this article this morning about BofA bringing in 60% of all its sales from digital channels. Behold that! A staggering 60%! I reproduce the below snippet directly from the article..

Bank Of America might want to change its name to Digital Bank of America.

The Charlotte, N.C., megabank is more digital bank than conventional financial institution today. That’s because 60% of the bank’s “sales” are “all digital now,” Brian T. Moynihan, Chairman and CEO of Bank of America, told investors yesterday.

Moynihan also disclosed that about 6% of the bank’s digital “sales” – it is difficult to identify exactly what he means by “sales,” unfortunately – are via mobile device, “and that’s growing at 300%,” he said.

Moynihan’s disclosures yesterday were the most publicly detailed on digital banking at a major bank to date.

Ref –

BofA has been in the news the past couple of years, whether for incubating OpenStack into its next-generation architecture or for its innovative use of data & mobile technology in areas where it needs to attract a critical, fast-growing & coveted segment of the population – millennials. It is unclear if the digital portion is wholly run on this open source based infrastructure, but we can draw some valuable object lessons from a technology strategy perspective.

To paraphrase the above point, how does such an approach boil down into technology principles? And pray, what are the technology ingredients that make up a successful Digital Strategy – or, more importantly, how do all the principles of webscale apply at a large organization? I would wager that there are five or six major factors, chief among them – an intelligent approach to leveraging data (ingesting, mining & linking microfeeds to existing data – thus a deep analytical approach based on predictive analytics and machine learning), an agile infrastructure based on cloud computing principles, a microservice based approach to building out software architectures, mobile platforms that accelerate customers' ability to bank anywhere, an increased focus on automation from business process through software system delivery, and finally a culture that encourages risk taking & a "fail fast" approach.

Let's examine each of these forces in turn, starting with cloud computing.

It may safely be said that the first wave of cloud computing adoption, or Cloud 1.0, is now largely a mainstream endeavor in industries like financial services and no longer an esoteric adventure meant only for brave innovators. A range of institutions are either deploying or testing cloud based solutions that span the full range of cloud delivery models – IaaS (Infrastructure as a Service), PaaS (Platform as a Service), SaaS (Software as a Service) etc. Cloud has moved beyond proof of concept into production.


                                Figure 1 – IT is changing from service provider to service partner

As Brett King notes in his seminal work on digital banking, a.k.a. Bank 3.0, this is the age of the hyper-connected consumer. Customers expect to be able to bank from anywhere, be it from a mobile device or via internet banking from their personal computer.

Thus there is significant pressure on Banking infrastructures in three major ways –

  • to be able to adapt to this new way of doing things and to be able to offer multiple channels and avenues for such consumers to come in
  • offer agile applications that can detect customer preferences and provide value added services on the fly. Services that not only provide a better experience but also help in building a longer term customer relationship
  • to be able to help the business prototype, test, refine and rapidly develop new business capabilities

The core nature of corporate IT is thus changing from an agency dedicated to keeping the trains running on time to one focused on innovative approaches like offering IT as a service (much like a utility), as discussed above. It is a commonly held belief that large Banks are increasingly turning into software development organizations.


Figure 2 – IT operations faces pressures from both business and line of business development teams

The example of web based start-ups forging ahead of established players in areas like mobile payments is a result of the former's ability to quickly develop, host, and scale applications in a cloud environment. The benefits of rapid application and business capability development have largely been missing from Bank IT's private application platforms in the enterprise data center.


Figure 3 – Business Applications need to be brought to market faster at a very high velocity

None of the above has to mean increased cost, from either an IT or a manpower standpoint. The world's leading web properties and Fortune 1000 institutions use a variety of open source technologies ranging from Hadoop to OpenStack to PaaS to operating systems & virtualization. Robust and well supported offerings are now available across the spectrum, and these can help cut IT budgets by billions of dollars.

Not entirely convinced of the value proposition? Reproduced below are some other metrics for Bank of America’s digital banking for last quarter (from the same article):

  • Bank of America has around 17.6 million mobile users, about 14% more than in the second quarter of 2014;
  • 13% of deposit transactions via mobile; and
  • 10,000 appointments scheduled via mobile device a week, up from 2,000 a year ago.

Ref –

The next post in this series will focus on Big Data and its global role in all aspects of banking, helping organizations mine the gold that is customer data (at a minimum).

We will look at business imperatives & use cases across seven key segments – Retail & Consumer banking, Wealth management, Capital Markets, Insurance, Credit Cards & Payment processing, Stock exchanges and Consumer Lending. My goal will be to talk about how the Hadoop ecosystem can satisfy not just existing use cases (yes, the ubiquitous data warehouse augmentation) but business requirements across the spectrum, finally helping adopters build out Blue Oceans (i.e. new markets).

How big does my Data need to be to be considered Big enough?

One of the questions I get a lot from clients is “We do not have massive sizes of data to justify incorporating Hadoop into our legacy business application(s) or application area. However, it is clearly a superior way of doing data processing compared to what we have historically done both from a business & technology perspective. The data volumes that our application needs to process are x GB at the most but there are now a variety of formats that we can’t support with a classical RDBMS/Data warehouse style approach. How do we go about tackling this situation? ”

The one thing to clear out of the way is that there is no single universally accepted definition of Big Data. I have always defined Big Data as "the point at which your application architecture breaks down in being able to process X GB/TB data volumes that arrive at a given ingress velocity, with an expectation from the business that insights will be gleaned at a specified egress velocity. The volumes encompass a variety of new data types – unstructured and semi-structured – in addition to the classical structured feeds." If you are at that point, you have a business problem first, and then a Big Data problem – nay, opportunity – depending on how you look at it.

By this yardstick, the definition of Big Data varies application by application and line of business by line of business across the enterprise. What makes it Big Data for an OLTP application may not even be Small Data for an OLAP application. It all depends on the business context.

I just ran across this poll by KDnuggets, which shows the size of the largest datasets analyzed. As can be seen from the below table, the average was in the 40-50 GB range – which begs the question of what kind of dataset sizes are being worked on in data projects.


While there are technical arguments for and against processing smaller datasets on a Hadoop platform (more on HDFS block sizes, NameNode memory, performance etc. in a follow-up post), let's take a step back and consider some of the strategic goals that every CXO needs to keep in mind while considering Hadoop, whatever the size of their data –

  1. The need to incorporate a Hadoop platform in your lines of business depends on what your business needs are. However, keep in mind that what may not even be a need today can become an urgent business imperative tomorrow. As one CIO on Wall St put it in a recent conversation – "We need to build a Hadoop platform & grow those skills as we need to learn to be disruptive. We will be disrupted if we don't." To me that quote sums it up: Big Data is about business disruption. Harnessing it can help you crack open new markets or new lines of thinking, while the lack of it can atrophy your business
  2. Understanding and building critical skills in this rapidly maturing technology area are key to retaining and hiring the brightest IT employees. Big Data surpasses cloud computing in its impact on the direction of a business and is not just a Yahoo or Google or Amazon thing anymore. Cloud is just a horizontal capability with no intrinsic business value. There surely is value in being able to provision compute, network and storage on the fly. However, there is no innate benefit unless you have successful applications running on them. And the currency of every successful application is good ole' data
  3. Examples abound in every vertical about innovative shops leveraging data to gain competitive advantage. Don’t leave your enterprise beholden to a moribund data architecture
  4. Hadoop’s inherent parallel processing capabilities and ability to run complex analytics in record times (see TeraSort benchmark) provides significant savings in the biggest resource of them all – time. In 2015, Hadoop is neither a dark art nor alchemy. A leading vendor like Hortonworks provides robust quick start capabilities along with security, management and governance frameworks. What’s more, a plethora of existing database, data-warehouse & analytics vendors integrate readily & robustly with data in a Hadoop cluster
  5. Hadoop (Gen 2) is not just a data processing platform. It has multiple personas – a real time, streaming, interactive platform for any kind of data processing (batch, analytical, in memory & graph based), with search, messaging & governance capabilities built in. Whatever your application use case, chances are you can get a lot done even with a small Hadoop cluster. The use cases run the gamut from risk management to transaction analysis to drug discovery to IoT – limited only by one's imagination
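As a mental model for Hadoop's parallelism, here is a minimal sketch – plain Python, no Hadoop required, with made-up sample data – of the map/shuffle/reduce pattern that frameworks like MapReduce distribute across a cluster of machines:

```python
from collections import defaultdict

def map_phase(records):
    # Map: each mapper independently emits (key, 1) pairs, one per word
    for line in records:
        for word in line.split():
            yield word.lower(), 1

def reduce_phase(pairs):
    # Shuffle + reduce: group pairs by key, then sum each group's values
    groups = defaultdict(int)
    for key, value in pairs:
        groups[key] += value
    return dict(groups)

lines = ["risk risk trade", "trade settlement"]
counts = reduce_phase(map_phase(lines))
print(counts)  # {'risk': 2, 'trade': 2, 'settlement': 1}
```

On a real cluster the map and reduce phases run on many nodes in parallel against HDFS blocks, which is where the time savings come from.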

We are still in the early days of understanding how Big Data can impact our business & world. Over-regulating data management & architecture and discouraging experimentation among data & business teams – whether from an overly conservative approach or long budget cycles – is a recipe for suboptimal business results.

IT executives in bimodal organizations recognize that providing an environment where agile and responsive data feeds reach business owners is key to creating and meeting customer needs. And while the use of enterprise data needs to be governed, there nevertheless need to be thresholds for experimentation.

Done right, your Big Data CoE (Center of Excellence) can be your next big profit center.

Big Data architectural approaches to Financial Risk Mgmt..

Risk management is not just a defensive business imperative; the best managed banks use it to deploy their capital for the best possible business outcomes. The last few posts have more than set the stage from a business and regulatory perspective. This one will take a bit of a deep dive into the technology.

Existing data architectures are siloed, with bank IT creating or replicating data marts or warehouses to feed internal lines of business. These data marts are then accessed by custom reporting applications, thus replicating/copying data many times over, which leads to massive data management & governance challenges.

Furthermore, the explosion of new types of data in recent years has put tremendous pressure on the financial services datacenter, both technically and financially, and an architectural shift is underway in which multiple LOBs can consolidate their data into a unified data lake.

Banking data architectures and how Hadoop changes the game

Most large banking infrastructures, on a typical day, process millions of derivative trades. The main implication is that there are a large number of data inserts and updates to handle. Once the data is loaded into the infrastructure, complex mathematical calculations need to be run in near real time to calculate intraday positions. Most banks use techniques like Monte Carlo modeling and other computational simulations to build & calculate these exposures. Hitherto, these techniques were extremely expensive, in terms of both the hardware and the software needed to run them. Nor were tools & products available that supported a wide variety of data processing paradigms – batch, interactive, realtime and streaming.

The Data Lake supports multiple access methods (batch, real-time, streaming, in-memory, etc.) to a common data set – the unified repository of all financial data. It also enables users to transform and view data in multiple ways (across various schemas) and to deploy closed-loop analytics applications that bring time-to-insight closer to real time than ever before.


                                                 Figure 1 – From Data Silos to a Data Lake

Also, with the advent and widespread availability of open source software like Hadoop (I mean a full Hadoop platform ecosystem such as the Hortonworks Data Platform (HDP), with its support for multiple computing frameworks like Storm, Spark, Kafka, MapReduce and HBase), which can turn a cluster of commodity x86 based servers into a virtual mainframe, cost is no longer a limiting factor. The application ecosystem of a financial institution can now be a deciding factor in how data is created, ingested, transformed and exposed to consuming applications.

Thus clusters of inexpensive x86 servers running Linux and Hortonworks Data Platform (HDP) provide an extremely cost-effective environment for deploying and running simulations and stress tests.


                                                 Figure 2 – Hadoop now supports multiple processing engines

Finally, an HDP cluster with tools like Hadoop, Storm, and Spark is not limited to one purpose, like older dedicated-computing platforms. The same cluster you use for running stress tests can also be used for text mining, predictive analytics, compliance, fraud detection, customer sentiment analysis, and many other purposes. This is a key point: once you bring siloed data into a data lake, it becomes available for multiple business scenarios – limited only by the overall business scope.

Now, typical risk management calculations require that for each time point, and for each product line, separate simulations are run to derive higher order results. Once this is done, the resulting intermediate data needs to be aligned to collateral valuations, derivative settlement agreements and any other relevant regulatory data to arrive at a final portfolio position. Further, there needs to be a mechanism to pull in reference data for a given set of clients and/or portfolios.

The following are the broad architectural goals for any such implementation –

* Provide a centralized location for housewide aggregation and subsequent analysis of market data, counterparties, liabilities and exposures

* Support the execution of liquidity analysis on an intraday or multi-day basis while providing long term data retention capabilities

* Provide strong but optional capabilities for layering in business workflow and rule based decisioning as an outcome of analysis


At the same time, long term positions need to be calculated for stress tests, typically using at least 12 months of data pertaining to a given product set. Finally, the two streams of data may be compared to produce a CVA (Credit Valuation Adjustment) value.

The average investment bank deals with potentially 50 to 80 future dates and up to 3,000 different market paths, so computational resource demands are huge. Reports are produced daily, and under special conditions multiple times per day. What-if scenarios with strawman portfolios can also be run to assess regulatory impacts and to evaluate business options.


                                                Figure 3 – Overall Risk Mgmt Workflow


As can be seen from the above, computing arbitrary functions on a large and growing master dataset in real time is a daunting problem (to quote Nathan Marz). There is no single product or technology approach that satisfies all business requirements. Instead, one has to use a variety of tools and techniques to build a complete Big Data system. I present two approaches, both of which have been tried and tested in enterprise architecture.


Solution Patterns 

Pattern 1 – Integrate a Big Data Platform with an In memory datagrid

Based on the business requirements above, two distinct data tiers can be identified –

  • It is very clear from the above that data needs to be pulled in near realtime and accessed in a low latency pattern, with calculations performed on this data. The design principle here needs to be "Write Many and Read Many", with an ability to scale out tiers of servers. In memory datagrids (IMDGs) are very suitable for this use case as they support a very high write rate. IMDGs like GemFire & JBoss Data Grid (JDG) are highly scalable and proven implementations of distributed datagrids that give users the ability to store, access, modify and transfer extremely large amounts of distributed data. Further, these products offer a universal namespace for applications to pull in data from different sources for all the above functionality. A key advantage here is that datagrids can pool memory and scale out across a cluster of servers in a horizontal manner. Further, computation can be pushed into the tiers of servers running the datagrid, as opposed to pulling data into the computation tier.
    To meet the needs for scalability, fast access and user collaboration, data grids support replication of datasets to points within the distributed data architecture. The use of replicas allows multiple users faster access to datasets and preserves bandwidth, since replicas can often be placed strategically close to or within the sites where users need them. IMDGs support WAN replication, clustering, out of the box replication, as well as clients in multiple languages.
  • The second data access pattern that needs to be supported is storage for data ranging from next day to months to years. This is typically large scale historical data. The primary data access principle here is "Write Once, Read Many". This layer contains the immutable, constantly growing master dataset stored on a distributed file system like HDFS. The HDFS implementation in HDP 2.x offers all the benefits of a distributed filesystem while eliminating the NameNode SPOF (single point of failure) issue in an HDFS cluster. With batch processing (MapReduce), arbitrary views – so called batch views – are computed from this raw dataset, so Hadoop (MapReduce on YARN) is a perfect fit for the concept of the batch layer. Beyond being a storage mechanism, the data stored in HDFS is formatted in a manner suitable for consumption by any tool within the Apache Hadoop ecosystem, like Hive, Pig or Mahout.
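To make the batch-view idea concrete, here is a small sketch in plain Python – the record format and counterparty names are made up for illustration – of the batch-layer idiom: recompute a derived view from scratch over the whole immutable master dataset, rather than updating it in place.

```python
from collections import defaultdict

# Hypothetical immutable master records: (counterparty, trade_value) events
master_dataset = [
    ("cpty_a", 100.0), ("cpty_b", 250.0),
    ("cpty_a", -40.0), ("cpty_b", 75.0),
]

def batch_view(records):
    # Derive net exposure per counterparty by folding over every event;
    # the view is recomputed wholesale, never mutated incrementally
    exposure = defaultdict(float)
    for cpty, value in records:
        exposure[cpty] += value
    return dict(exposure)

print(batch_view(master_dataset))  # {'cpty_a': 60.0, 'cpty_b': 325.0}
```

In a real deployment the fold above would be a MapReduce or Spark job over files in HDFS, but the principle is identical.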


                                      Figure 3 – System Architecture  

The overall system workflow is as below –

  1. Data is injected into the architecture in either an event based or a batch based manner. HDP supports multiple ways of achieving this. One could use a high performance ingest layer like Kafka or an ESB like Mule for the batch updates, or directly insert data into the IMDG via a Storm layer. For financial data stored in RDBMSs, one can write a simple CacheLoader to prime the grid. Each of these approaches offers advantages to the business. For instance, using CEP one can derive realtime insights via predefined business rules and optionally spin up new workflows based on those rules. Once the data is inserted into the grid, the grid can automatically distribute it via consistent hashing. Once the data is all there, fast incremental algorithms are run in memory and the resulting data can be stored in an RDBMS for querying by analytics/visualization applications.

Such intermediate data, or data suitable for modeling and simulation, can also be streamed into the long-term storage layer.

Data is loaded into different partitions in the HDFS layer in two ways – a) directly from the data sources themselves; b) from the JDG layer via a connector.

Pattern 2 – Utilize the complete featureset present in a Big Data Platform like Hortonworks HDP 2.3
(This integration was demonstrated at Red Hat Summit by Hortonworks, Mammoth Data and Red Hat.)
The headline is self-explanatory, but let’s briefly examine how you might perform a simple Monte Carlo calculation using Apache Spark. Spark is an ideal choice here given the iterative nature of these calculations and the performance gains from running them in memory: Spark applications in Hadoop clusters tend to run up to 100 times faster in memory, and up to 10 times faster even on disk.

Apache Spark provides a comprehensive, unified framework to manage big data processing requirements with a variety of data sets that are diverse in nature (text data, graph data etc) as well as the source of data (batch v. real-time streaming data).

A major advantage of using Spark is that it allows programmers to develop complex, multi-step data pipelines using the directed acyclic graph (DAG) pattern, with in-memory data sharing across DAGs so that intermediate results can be reused across jobs.
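As a concrete illustration, here is a hypothetical PySpark fragment of that pattern. It assumes a live SparkContext `sc` and an HDFS path, so it is a sketch rather than a standalone program; the file path and field layout are invented for the example:

```python
# Assumes a running SparkContext `sc`; path and field layout are illustrative.
positions = (sc.textFile("hdfs:///data/positions.csv")
               .map(lambda line: line.split(","))
               .cache())  # materialize once in memory, reuse across jobs

# Job 1: count positions per desk (first field)
per_desk = positions.map(lambda p: (p[0], 1)).reduceByKey(lambda a, b: a + b)

# Job 2: total notional (second field) - reuses the cached RDD, no re-read from HDFS
total_notional = positions.map(lambda p: float(p[1])).sum()
```

Without `cache()`, each action would re-run the whole lineage from HDFS; with it, the shared stage of the DAG is computed once.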

One important metric used in financial modeling is LVaR – Liquidity-Adjusted Value at Risk. As we discussed in earlier posts, liquidity risk is an important form of risk, and LVaR is one key metric for representing it. “Value at Risk”, or VaR, is the loss threshold that a given portfolio is expected to exceed with no more than a given probability over a given period of time.

For the mathematical details of the calculation, please see Extreme Value Methods with Applications to Finance by S.Y. Novak.

Now, liquidity risk is divided into two types: funding liquidity risk (can we make the payments on this position or liability?) and market liquidity risk (can we exit this position if the market suddenly turns illiquid?).

Incorporating external liquidity risk into a VaR calculation yields LVaR. In essence, this means adjusting the time period used in the VaR calculation based on the expected length of time required to unwind the position.
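To make the two definitions concrete, here is a small self-contained Python sketch. The toy P&L series and the square-root-of-time scaling rule are illustrative simplifications, not the full treatment in Novak:

```python
import math

def historical_var(pnl, confidence=0.99):
    """Historical-simulation VaR: the loss exceeded on only (1 - confidence)
    of the observed P&L outcomes."""
    losses = sorted(-p for p in pnl)  # express losses as positive numbers
    idx = min(int(confidence * len(losses)), len(losses) - 1)
    return losses[idx]

def liquidity_adjusted_var(var_1d, unwind_days):
    """LVaR via the square-root-of-time rule: stretch one-day VaR over the
    expected number of days needed to unwind the position."""
    return var_1d * math.sqrt(unwind_days)

daily_pnl = [-120, -80, -30, -10, 5, 15, 25, 40, 60, 110]  # toy daily P&L
var_1d = historical_var(daily_pnl, confidence=0.9)    # -> 120
lvar = liquidity_adjusted_var(var_1d, unwind_days=4)  # a 4-day unwind doubles it
```

The longer the expected unwind, the larger the adjustment – which is exactly the intuition behind stretching the VaR horizon for illiquid positions.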

Given that we have a need to calculate LVaR for a portfolio, we can accomplish this in a distributed fashion using Spark by doing the following:

  1. Implement the low-level LVaR calculation in Java, Scala or Python. Spark provides mature support for all three languages and ships with a built-in set of over 80 high-level operators.
  2. Data ingestion – financial data of all kinds (position data, market data, existing risk data, General Ledger etc.) is batched in, i.e. read from flat files stored in HDFS, or the initial values can be read from a relational database or other persistent store via Sqoop.
  3. Spark code written in Scala, Java or Python can leverage the database support provided by those languages. Once the data is read in, it resides in what Spark calls an RDD – a Resilient Distributed Dataset. A convenient representation of the input data, leveraging Spark’s fundamental processing model, would include in each input record the portfolio item details along with the input range and the probability distribution information needed for the Monte Carlo simulation.
  4. If you have streaming data requirements, you can optionally leverage Kafka integration with Apache Storm to read one value at a time and persist the data into an HBase cluster. In a modern data architecture built on Apache Hadoop, Kafka (a fast, scalable and durable message broker) works in combination with Storm, HBase and Spark for real-time analysis and rendering of streaming data. Kafka has been used to message everything from geospatial data from fleets of long-haul trucks, to financial data, to sensor data from HVAC systems in office buildings.
  5. The next step is to perform a transformation on each input record (representing one portfolio item) that runs the Monte Carlo simulation for that item. Spark’s distributed nature means each simulation runs in a worker process on some node in the cluster.
  6. After each individual simulation has run, run another transform over the RDD to perform any aggregate calculations, such as summing the portfolio threshold risk across all instruments in the portfolio at each given probability threshold.
  7. Output data elements can be written out to HDFS, or stored to a database like Oracle, HBase, or Postgres. From here, reports and visualizations can easily be constructed.
  8. Optionally, workflow engines can be layered in to present the right data to the right business user at the right time.
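Steps 5 and 6 above can be sketched in plain Python to show the shape of the computation. The normal-returns model, seed and portfolio parameters are toy assumptions; in Spark, the list comprehension and `sum` below would become `map` and `reduce` calls over an RDD so the per-item simulations run on separate workers:

```python
import random

def simulate_item(item, n_paths=10_000, seed=42):
    """Step 5: Monte Carlo P&L simulation for one portfolio item.
    Assumes normally distributed returns - a deliberate simplification."""
    rng = random.Random(seed)
    value, mu, sigma = item["value"], item["mu"], item["sigma"]
    losses = sorted(-value * rng.gauss(mu, sigma) for _ in range(n_paths))
    return losses[int(0.99 * n_paths)]  # loss exceeded on ~1% of paths (99% VaR)

portfolio = [
    {"id": "bond-1", "value": 1_000_000, "mu": 0.0001, "sigma": 0.01},
    {"id": "eq-1", "value": 500_000, "mu": 0.0003, "sigma": 0.02},
]

# In Spark: sc.parallelize(portfolio).map(simulate_item) ... instead of the list below
item_vars = [simulate_item(it) for it in portfolio]  # step 5, one result per item
total_var = sum(item_vars)                           # step 6, aggregate (conservative sum)
```

Note that simply summing per-item VaRs ignores diversification between instruments, so it is a conservative upper bound rather than a true portfolio VaR.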

Whether you choose one solution pattern over the other, or mix both, depends on your business requirements and other characteristics, including –

  • The existing data architecture and the formats of the data (structured, semi structured or unstructured) stored in those systems
  • The governance process around the data
  • The speed at which the data flows into the application and the velocity at which insights need to be gleaned
  • The data consumers who need to access the final risk data whether they use a BI tool or a web portal etc
  • The frequency of processing of this data to produce risk reports, i.e. hourly, near real time (dare I say?), ad hoc or intraday


BCBS 239 and the need for smart data management

The previous post made it clear that the series of market events leading to the Great Financial Crisis of 2008 was the result of poor risk management practices in the banking system. The worst financial crisis since the Great Depression of the 1930s, it resulted in the liquidation or bankruptcy of major investment banks and insurance companies, an exercise of ‘moral hazard’, and severe consequences for the economy in terms of job losses, credit losses and a general loss of public confidence in the workings of the financial system as a whole.

Improper and inadequate management of a major kind of financial risk – liquidity risk – was a major factor in the series of events in 2007 and 2008 that culminated in the failure of major investment banks such as Lehman Brothers and Bear Stearns, resulting in a full-blown liquidity crisis. These banks had taken highly leveraged positions in the mortgage market, with massive debt-to-asset ratios, and were unable to liquidate assets to unwind these positions and make the debt payments needed to stay afloat as going concerns. This in turn triggered counterparty risk: the hundreds of other firms they did business with – counterparties who would otherwise have been willing to extend credit to their trading partners – began refusing credit, creating the oft-cited “credit crunch”.

Inadequate IT systems for data management, reporting and agile delivery are widely blamed for this lack of transparency into risk accounting – the critical function that makes all the difference between well and poorly managed banking architectures.

At its core this is a data management challenge, and the regulators now recognize that.

Thus, the Basel Committee and the Financial Stability Board (FSB) have published an addendum to Basel III, widely known as BCBS 239 (BCBS = Basel Committee on Banking Supervision), to provide guidance that enhances banks’ ability to identify and manage bank-wide risks. The BCBS 239 guidelines apply not just to the G-SIBs (globally systemically important banks) but also to the D-SIBs (domestic systemically important banks). Any major financial institution deemed ‘too big to fail’ needs to work with the regulators to develop a “set of supervisory expectations” to guide risk data aggregation and reporting.

The document can be read below in its entirety and covers four broad areas – a) Improved risk aggregation b) Governance and management c) Enhanced risk reporting d) Regular supervisory review

The business ramifications of BCBS 239 (banks are expected to comply by 2016) are –

1. Banks shall measure risk across the enterprise, i.e. across all lines of business and across what I like to call “internal” domains (finance, compliance, GL & risk) and “external” domains (capital markets, retail, consumer, cards etc).

2. All key risk measurements need to be consistent and accurate across these internal and external domains, and across multiple geographies and regulatory jurisdictions. A 360-degree view of every risk type is needed, consistent and free of discrepancies.

3. Delivery of these reports needs to be flexible and timely, on an on-demand basis as needed.

4. Banks need strong data governance and ownership functions in place to measure this data across a complex organizational structure.


In the next post, we will get into the technology ramifications and understand why current process and data management approaches are just not working. We will also look at why a fresh approach, in the form of a Big Data enabled architecture, can serve as the foundation for risk management of any kind – credit, market, operational, liquidity, counterparty etc. Innovation in this key area helps early adopters disarm the competition. The best-managed banks manage their risks best.

Towards better Risk Management..Basel III

“Perhaps more than anything else, failure to recognize the precariousness and fickleness of confidence – especially in cases in which large short-term debts need to be rolled over continuously – is the key factor that gives rise to the this-time-is-different syndrome. Highly indebted governments, banks, or corporations can seem to be merrily rolling along for an extended period, when bang! – confidence collapses, lenders disappear, and a crisis hits.”   – This Time Is Different (Carmen M. Reinhart and Kenneth Rogoff)

Not just in 2008 – every major financial cataclysm in recent times has been the result of the derivatives market running amok: Orange County’s bankruptcy in 1994, Long-Term Capital Management in 1998, the Russian rouble crisis, the Argentine default, the Mexican peso crisis – the list goes on and on. And the response from the government and the Fed, in the vast majority of cases, has been a bailout.

The pictorial below (courtesy A.T. Kearney) captures a timeline of the events that led to the great financial meltdown of 2008.


Source –

The set of events that cascaded in 2008 was incredibly complex, and so were the causes, in terms of both institutions and individuals. Many books have been written, and more than five hard-hitting documentary films made, about what is now part of history.

Whatever the mechanics of the financial instruments that caused the meltdown, one thing everyone broadly agrees on is that easy standards for granting credit – specifically consumer mortgages in the US, issued with the goal of securitizing and reselling them in tranches as the infamous CDOs – were a main cause of the crisis.

Banks essentially granted easy mortgages with the goal of securitizing them and selling them into the financial markets as low-risk, high-return investments. AIG’s Financial Products (FP) division created and marketed another complex instrument – the credit default swap – which effectively insured the buyer against losses should any of the original derivative instruments lose value.

Thus, the entire house of cards was predicated on two broad premises –

1. that home prices in the bellwether US market would always go up
2. that refinancing debt would be simple and easy

However, prices in large housing markets in Arizona and California were the first to crash, leading to huge job losses in the construction industry. Huge numbers of borrowers began to default on their loans, leaving banks holding collateral of dubious value. Keep in mind that it was very hard for banks even to evaluate these complex portfolios for their true worth.

None of their models predicted where things would end up once the original loans defaulted – which all followed pretty quickly.

The overarching goal of any risk management regulation is to ensure that Banks understand and hedge adequately against these risks.

There are a few fundamental pieces of legislation that were put in place precisely to avoid this kind of meltdown. We will review a few of these – namely Basel III, Dodd Frank and CCAR.

Let’s start with Basel III.

Basel III (named for the town of Basel in Switzerland where the committee meets) essentially prescribes international standards for capital and liquidity adequacy; it was developed by the Basel Committee on Banking Supervision with voluntary worldwide applicability. The Bank for International Settlements (BIS), established in 1930, is the world’s oldest international financial consortium, with 60+ member central banks representing countries that together make up about 95% of world GDP. The BIS stewards and maintains the Basel III standards in conjunction with member banks.


The goal is to strengthen the regulation, supervision and risk management of the banking sector – improving risk management and governance so that a repeat of 2008, where a few bad actors and a culture of wild-west risk-taking threatened Main Street, never happens. Basel III (building upon Basel I and Basel II) also sets criteria for financial transparency and disclosure by banking institutions.

Basel III sets out an overlay framework called the “Pillars”, covering three major areas.

Pillar 1 covers:

– the levels and quality of capital that banks need to set aside, with a new minimum standard for high-quality Tier 1 capital
– risk coverage, in terms of credit analysis of complex securitized instruments
– higher capital requirements for trading and derivative activities
– standards for exposures to central counterparties (CCPs)
– a leverage ratio covering off-balance-sheet exposures

Pillar 2 mandates firm-wide governance and management of risk exposure, extending to off-balance-sheet activities.

Pillar 3 revises the minimum capital requirements and supervisory review process by developing a set of disclosure requirements that allow market participants to gauge the capital adequacy of an institution. The overarching goal of Pillar 3 is to give market participants and bank boards standard metrics to sufficiently understand a bank’s activities.

As a result of the Basel III standards and the phase-in arrangements that can be found here –, banks will have to raise, allocate and manage increased and higher-quality reserves (equity and retained earnings) over the next few years, which will have an impact on profitability. Basel III also looks to stabilize the leverage ratio, as an incremental multiple of a bank’s capital, to prevent the runaway risky betting that can destabilize the whole system.

Banks will need to conform to specific quantitative measures – the important ones being the Tier 1 capital adequacy ratio, the capital conservation buffer, minimum Tier 1 capital and the net stable funding ratio (NSFR). The committee lays out specific (risk-weighted) targets all the way to 2019.

We will focus on Dodd-Frank, CCAR and BCBS 239 in the next post. The final part of this series will discuss the capabilities that are lacking in legacy IT risk platforms, and how these gaps can be filled by Big Data technologies.

The Intelligent Banker needs better Risk Management


It is widely recognized that the series of market events leading to the Great Financial Crisis of 2008 was the result of poor risk management practices in the banking system. The worst financial crisis since the Great Depression of the 1930s, it resulted in the liquidation or bankruptcy of major investment banks and insurance companies, an exercise of ‘moral hazard’, and severe consequences for the economy in terms of job losses, credit losses and a general loss of public confidence in the workings of the financial system as a whole.

Wikipedia defines financial risk management as “the practice of protecting economic value in a firm by using financial instruments to manage exposure to risk, particularly credit risk and market risk. Other types include foreign exchange, shape, volatility, sector, liquidity and inflation risks. Similar to general risk management, financial risk management requires identifying its sources, measuring it, and plans to address them.”

Improper and inadequate management of a major kind of financial risk – liquidity risk – was a major factor in the series of events in 2007 and 2008 that culminated in the failure of major investment banks including Lehman Brothers and Bear Stearns, resulting in a full-blown liquidity crisis. These banks had taken highly leveraged positions in the mortgage market, with massive debt-to-asset ratios, and were unable to liquidate assets to unwind these positions and make the debt payments needed to stay afloat as going concerns.

This in turn triggered counterparty risk: the hundreds of other firms they did business with – counterparties who would otherwise have been willing to extend credit to their trading partners – began refusing credit, which resulted in the “credit crunch”.

Inadequate IT systems for data management, reporting and agile delivery are widely blamed for this lack of transparency into risk accounting – the critical function that makes all the difference between well and poorly managed banking conglomerates.

Indeed, risk management is not just a defensive business imperative: the best-managed banks understand their holistic risks well enough to deploy their capital for the best possible business outcomes.

Just in case you got the impression that risk management is a somewhat dated business imperative, it is very topical that the potential Greek exit from the European monetary union was termed a massive liquidity risk crisis in Bloomberg View on June 29th, 2015 by the influential analyst Matt Levine. To reproduce his quote: “Three Words Count in Bonds: Liquidity, Liquidity, Liquidity. Here is a Bloomberg News article finding that ‘there are three things that matter in the bond market these days’ and all of them are liquidity. I was prepared for, like, two of them to be liquidity – liquidity, liquidity and default risk, say – but, no, it’s ‘liquidity, liquidity and liquidity.’ One hundred percent liquidity! What were the odds?”

Need one say more?

Risk practices are very closely intertwined with IT and data architectures. Indeed, data is the currency of the banking business. Current industry-wide risk practices span the spectrum from the archaic to the prosaic.

Areas like risk and compliance, however, provide unique and compelling opportunities for competitive advantage to those banks that can build agile data architectures, helping them navigate regulatory changes faster and better than others.

Adopting fresh, new-age approaches that leverage the best of Big Data, analytics and cloud computing can result in –

1. Improved insight and a higher degree of transparency in business operations and capital allocation
2. Better governance procedures and policies that can help track risks down to the transaction level
3. Streamlined processes across the enterprise and across different banking domains – investment banking, retail and consumer, private banking etc.

Indeed, lines of business can drive more profitable products and services once they understand their risk exposures better. Capital can then be allocated more efficiently, and better roadmaps created in support of business operations, instead of constant fire-fights and the concomitant bad press.

In the next post in this three-part series on risk management, I would like to examine the major regulatory regimes that banks have needed to adhere to since 2013. We will examine the Basel accords (Basel III and the addendum BCBS 239), Dodd-Frank and CCAR.

The downstream implications of all of the above will also be reviewed in that post.