Why Enterprises should build Platforms and not just Standalone Applications…

Image Credit – Shutterstock

Introduction..

The natural tendency in the world of Corporate IT is to create applications in response to business challenges. For instance, take any large Bank, Insurer or Manufacturer – you will find thousands of packaged applications that aim to solve a range of challenges, from departmental-level issues to enterprise-wide business problems. Over the years, these have given rise to application and infrastructure sprawl.

The application mindset creates little business value over the long run while creating massive technology headaches. For instance, the rationalization of these applications over time becomes a massive challenge in and of itself. At times, IT does not even understand how relevant some of these applications are to business users, who is actually using them and what benefits are derived. Over the last 15 years, Silicon Valley players such as Apple, Google, and Facebook have illustrated the power of building platforms that connect a range of users to the businesses that serve them. As the network effects of these platforms have grown exponentially, so have their user bases.

What Corporate IT and the business need to learn is how to move to a Platform mindset.

The Platform Strategy…

Amazon is the perfect example of how to conceive and execute a platform strategy over a couple of decades. It began life as a retailer in 1994 and over time morphed into complementary offerings such as Marketplace, AWS, Prime Video, Payments etc. These platforms have led to an ever-increasing panoply of services, higher revenues, more directed consumer interactions and stronger network effects. Each platform generates its own revenue stream and is a large standalone business in its own right. However, the whole is greater than the sum of the individual products, and this has helped make Amazon one of the most valuable companies in the world (as of late 2017).

So what are the key business benefits and drivers of a platform oriented model?

Driver #1 Platforms enable you to build business ecosystems

Platforms enable enterprise businesses to orient their core capabilities better and to deliver on them. Once that is done to a high degree of success, partners and other ecosystem players can plug in their own capabilities; the platform provides the ability to integrate these into a wider ecosystem. The challenge, most times, is that large companies always seem to play catch-up with the business models of nimbler players. When they do, they often choose an application-based approach, which does not enable them to take a holistic view of their enterprise and the business ecosystems around it. In the Platform approach, IT departments move to more of a service model, delivering agile platforms and technology architectures for business lines to develop products around.

For example, post the PSD2 regulation, innovators in the European banking system will become a prime example of platform-led business ecosystems.

Why the PSD2 will Spark Digital Innovation in European Banking and Payments….

Driver #2 Platforms enable you to rethink and better the customer experience thus driving new revenue streams

The primary appeal of a platform-based architecture is the ability to drive cross-sell and upsell opportunities. This increases not only the number of products adopted by a given customer but also (and ultimately) the total revenue per customer.

The below blog post discusses how Payment Providers are increasingly using advanced analytics on their business platforms not only to generate increased topline/sales growth but also to defend against fraud and money laundering (AML).

Payment Providers – How Big Data Analytics Provides New Opportunities in 2017

Driver #3 Platforms enable you to experiment with business models (e.g. Data Monetization)

The next driver is to leverage both internal and external data to open up new revenue streams in existing lines of business – an approach commonly termed Data Monetization. Data Monetization is the organizational ability to turn data into cost savings and revenues in existing lines of business and to create entirely new revenue streams. It requires fusing internal and external data to create new analytics and visualizations, as sketched below.
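To make the idea of fusing internal and external data concrete, here is a minimal Python sketch that joins a hypothetical internal policy dataset with purchased third-party demographic attributes and derives a naive cross-sell score. All column names, values and the scoring rule are invented for illustration only.

```python
import pandas as pd

# Hypothetical internal data: policies per customer (assumed schema)
internal = pd.DataFrame({
    "customer_id": [101, 102, 103],
    "active_policies": [1, 3, 2],
    "annual_premium": [1200.0, 4500.0, 2300.0],
})

# Hypothetical external data: purchased demographic attributes
external = pd.DataFrame({
    "customer_id": [101, 102, 104],
    "household_income_band": ["50-75k", "150k+", "75-100k"],
    "home_owner": [True, True, False],
})

# Fuse the two sources on the shared customer identifier
fused = internal.merge(external, on="customer_id", how="left")

# A naive, illustrative cross-sell score: under-penetrated homeowners rank higher
fused["cross_sell_score"] = (
    fused["home_owner"].fillna(False).astype(int) * 2
    + (fused["active_policies"] < 2).astype(int)
)

print(fused.sort_values("cross_sell_score", ascending=False))
```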

The Tao of Data Monetization in Banking and Insurance & Strategies to Achieve the Same…

Driver #4 Platforms destroy business process silos

One of the chief factors that holds back an enterprise's ability to innovate is the presence of both business and data silos, which is a direct result of an application-based approach. When the underlying business processes and data sources are fragmented, communication between business teams moves over to other internal and informal mechanisms such as email, chat and phone calls. This is a recipe for delayed business decisions which are ultimately ineffective, as they depend more on intuition than on data. The Platform approach drives the organization towards unification and rationalization of both the data and the business processes that create it, leading to a unified and consistent view of both across the business.

Why Data Silos Are Your Biggest Source of Technical Debt..

Driver #5 Platforms move you to become a Real-time Enterprise

Enterprises that are platform oriented do more strategic things right than wrong. They constantly experiment with new and existing business capabilities with a view to making them appealing to a rapidly changing clientele. They refine these using constant feedback loops and create platforms composed of cutting-edge technology stacks that dominate the competitive landscape. The Real-Time Enterprise demands that workers at many levels, ranging from line-of-business managers to executives, have fresh, high-quality and actionable information on which they can base complex yet high-quality business decisions.

The Three Habits of Highly Effective Real Time Enterprises…

Conclusion..

A business and IT strategy built on platform approaches enables an organization to take on a much wider and richer variety of business challenges and to achieve outcomes that were simply not possible with the Application model.

My take on Gartner’s Top 10 Strategic Trends for 2018 & beyond..

“My vision for the future state of the digital economy – I see a movie. I see a story of everybody connected with very low latency, very high speed, ultra-dense connectivity available. Today you’re at the start of something amazing… I see the freeing up, not just of productivity and money, but also positive energy which can bring a more equal world.” – Vittorio Colao, CEO, Vodafone, speaking at the World Economic Forum, Davos, Jan 2015

As is customary for this time of the year, Gartner Research rolled out their “Top 10 Strategic Technology Trends for 2018” report a few weeks ago – https://www.gartner.com/newsroom/id/3812063. Rather than exclusively covering the IT technology landscape as in past years, Gartner has also incorporated some of the themes from the 2016 US Presidential election, namely fake news and content. My goal for this blogpost is to provide my frank take on these trends and, as always, to examine the potential impact of these recommendations from an enterprise standpoint.

Previous Gartner Reviews…

2016..

My take on Gartner’s Top 10 Strategic Technology Trends for 2016

2017..

My take on Gartner’s Top 10 Strategic Technology Trends for 2017

The predictions themselves can be organized into five specific clusters – Web-scale giants, Cryptocurrencies, Fake News & AI, IT job markets, and IoT/Security.

Let us consider  –

Prediction Cluster #1 – Of Web Scale Giants, Bots & E-Commerce…

This year, Gartner makes two key predictions from the standpoint of the webscale giants, namely the FANG (Facebook, Amazon, Netflix and Google/Alphabet) companies plus Apple. These companies now dominate whatever business areas they choose to operate in largely due to the general lack of traditional enterprise competition to their technology-infused business models. They have not only gained market leadership status in their core markets but are also branching into creating blue ocean business models. Gartner’s prediction is that by 2020, these giants – which will largely remain unchallenged –  will need to innovate via self-disruption to stay nimble and competitive.

This prediction is hard to disagree with and is fairly obvious to anyone who has followed their growth over the years. Virtually every major advance in consumer technology, mobile business models, datacenter architectures and product development methodologies over the last ten years has originated at these companies. The question is how much of this forecasted organic disruption will come from cannibalizing existing product lines versus creating entirely new markets, e.g. self-driving tech, VR/AR etc.

The critical reason these companies have such a wide business moat is that they’ve incubated the Digital Native customer category. Their users are highly comfortable with technology and use the services offered (such as Google’s range of products, Facebook services such as the classic social media platform and Instagram, Uber, Netflix, Amazon Prime etc) almost hourly in their daily lives. As I have noted before, these customers expect a similarly seamless and contextual experience while engaging with the more mundane and traditional enterprises such as Banks, Telcos, Retailers and Insurance companies; they primarily expect a digital channel experience. These traditional companies then face a two-fold challenge – not only to provide the best user experience but also to store all this data and harness it for real-time insights in a way that is connected with internal marketing and sales.

As many studies have shown, companies that constantly harness data about their customers and internal operations, and perform speedy analytics on this data, often outshine their competition. Does that seem a bombastic statement? Not when you consider that almost half of all online dollars spent in the United States in 2016 were spent on Amazon, and almost all digital advertising revenue growth in 2016 was accounted for by two giants – Google and Facebook.

Which leads us to the second prediction, that – by 2021 early adopter brands that redesign their websites to support visual and voice search will increase digital commerce revenue by 30%.

This prediction is also bolstered by the likes of comScore, which notes that voice and visual search have rapidly become the second and third legs of online search. Every serious mobile app now supports both these modes. Further, Amazon, with its Alexa assistant, is bringing this capability to bear in diverse areas such as home automation.

Virtual reality (VR) and augmented reality (AR) are technologies that will completely change the way humans interact with one another and with intelligent systems that make up the Digital Mesh.  Uses of these technologies will include gamification (to improve customer engagement with products and services), other customer & employee-facing applications etc.

Prediction Cluster #2 – By 2022, Cryptocurrencies create $1B of value in the Banking market…

We have discussed the subject of Bitcoin and Blockchain in some depth over the last year, and this prediction will seem safe and obvious to many. The explosion of market value in Bitcoin and other alt-currencies also supports the coming of age of cryptocurrencies. However, Gartner pegging cryptocurrency-led business value at just $1B by 2022 seems way on the lower end. Cryptocurrencies are not only being accepted in various areas of banking – e.g. payments, consumer banking loans, mortgages etc – but are also on the verge of gaining central bank support. I expect an explosion in their usage and institutionalization over the next two- to three-year horizon. Every enterprise needs an Altcurrency and Blockchain strategy.

Blockchain For the Enterprise: Key Considerations..

Prediction Cluster #3 – Fake News and Counterfeit Reality run amok…

Keeping in line with the dominant theme of the US Presidential election of 2016, fake news has become a huge challenge across multiple social media platforms. This news is being manufactured by skilled writers working for foreign and often hostile governments, as well as by AI-driven bots. Gartner forecasts that by 2022, the majority of news consumed in developed economies will be fake. This is a staggering indictment of the degree of criminality involved in creating a counterfeit reality. Germany has led the way in passing legislation that goes after criminals who sow racial discord by planting fake news on internet platforms with more than 2 million users. [1] The law applies to online service providers who operate platforms that enable the sharing and dissemination of data. If offending material is not removed from social network platforms within 24 hours, fines of up to €50 million can be levied by the regulator.

Enterprises need to guard similarly against fake news being shared with a view to harming their corporate or product image. Putting in place strong cyber defenses and operational risk systems will be key.

Prediction Cluster #4 – IT jobs in the Digital Age…

We have spoken about the need for IT staff to retool themselves as Digital transformation and bimodal IT projects take an increasingly prominent place on the corporate agenda. Accordingly, IT needs to increasingly understand and communicate in the language of the business. Gartner forecasts that IT staff will increasingly become versatilists across the key disciplines of Infrastructure, Operations, and Architecture.

What Lines Of Business Want From IT..

Gartner also forecasts that AI-related jobs will experience healthy growth starting in 2020. Until then, AI will mostly augment existing workers, resulting in widespread time and productivity savings.

Prediction Cluster #5 – IoT and Security…

There are two key predictions included this year from an IoT standpoint. The first is that by 2022, half of IoT security budgets will be spent on remediation and device safety recalls rather than on protection. Clearly, as IoT adoption increases the threat vectors into an enterprise, it is key to put appropriate governance mechanisms in place to ensure perimeter defense and to ensure that appropriate patching and security policies are followed. You are only as secure as the weakest device inside your organizational perimeter.

Secondly, in three years or less, Gartner predicts that IoT capabilities will be included in 95% of new electronic designs. This is not a surprise given the proliferation of embedded devices and the improvements in operating systems such as embedded Linux. However, the key gains will be made in platforms that harness this data and make it actionable.

A Digital Reference Architecture for the Industrial Internet Of Things (IIoT)..

The Numbers…

This year, Gartner’s predictions have largely underwhelmed in three broad areas.

Firstly, the broad coverage of all leading tech trends that was evident in earlier years is clearly missing. For instance, sensor technology enabling autonomous vehicles, such as LIDAR (Light Detection and Ranging) being pioneered by the likes of Alphabet and Tesla, is conspicuous by its absence from the list. Elon Musk has been on record saying that self-driven transportation is just two or three years away from being introduced by the car makers. Next, there is no mention of 5G wireless capabilities, which enable a range of IoT workloads and are expected to become a reality in 2020. This is another obvious miss by Gartner.

Secondly, some of the most evident areas of enterprise innovation, such as FinTechs and InsurTechs, are conspicuous by their absence.

Thirdly, Gartner has included quantitative data, such as percentages and dates, with each trend that can leave one scratching one’s head. It is unclear what methodology and logic were employed in arriving at such exact numbers.

References..

[1] “Germany’s Bold Gambit to Prevent Online Hate Crimes and Fake News Takes Effect” – Evelyn Douek, Lawfare
https://www.lawfareblog.com/germanys-bold-gambit-prevent-online-hate-crimes-and-fake-news-takes-effect

Want to go Cloud or Digital Native? You’ll Need to Make These Six Key Investments…

The ability of an enterprise to become a Cloud Native (CN) or Digitally Native (DN) business implies the need to develop a host of technology capabilities and cultural practices in support of two goals. First, IT becomes aligned with and responsive to the business. Second, IT leads the charge on inculcating a culture of constant business innovation. Given these realities, large and complex enterprises that have invested in DN capabilities often struggle to identify the highest-priority areas to target across lines of business or in shared services. In this post, I want to argue that there are six fundamental capabilities large enterprises need to adopt enterprise-wide in order to revamp legacy systems.

Introduction..

The blog has discussed a range of digital applications and platforms in depth. We have covered a range of line-of-business use cases and architectures – ranging from Customer Journeys, Customer 360, Fraud Detection, Compliance, Risk Management, CRM systems etc. While the specific details will vary from industry to industry, the common themes across all these implementations include a seamless ability to work across multiple channels, to predictively anticipate client needs and to support business models in real-time. In short, these are all Digital requirements which have been proven in the webscale world by Google, Facebook, Amazon and Netflix et al. Most traditional companies are realizing that adopting the practices of these pioneering enterprises is a must for them to survive and thrive.

However, the vast majority of Fortune 500 enterprises need to overcome significant challenges in migrating their legacy architecture stacks to a Cloud Native mode. While it is very easy to slap mobile UIs onto existing legacy systems via static HTML, without a re-engineering of their core these systems can never realize the true value of digital projects. The end goal of such initiatives is to ensure that underlying systems are agile and responsive to business requirements. The key question then becomes how to develop and scale these capabilities across massive organizations.

Legacy Monolithic IT as a Digital Disabler…

From a top-down direction, business leadership is requesting more agile IT delivery and faster development mechanisms to deal with competitive pressures such as social media streams, a growing number of channels, disruptive competitors and demanding millennial consumers. When one compares the Cloud Native (CN) model (@ http://www.vamsitalkstech.com/?p=5632) to the earlier monolithic deployment stack (@ http://www.vamsitalkstech.com/?p=5617), the sheer number of technical elements and trends that enterprise IT is being forced to devise strategies for is immediately noticeable.

This pressure is being applied on Enterprise IT from both directions.

Let me explain…

In most organizations, the process of identifying the correct set of IT capabilities needed for line of business projects looks like the below –

  1. Lines of business leadership works with product management teams to request IT for new projects to satisfy business needs either in support of new business initiatives or to revamp existing offerings
  2. IT teams follow a structured process to identify the appropriate (siloed) technology elements to create the solution
  3. Development teams follow a mix of agile and waterfall models to stand up the solution which then gets deployed and managed by an operations team
  4. Customer needs and update requests are reflected only slowly, causing customer dissatisfaction

Given this reality, how can legacy systems and architectures reinvent themselves to become Cloud Native?

Complexity is inevitable & Enterprises that master complexity will win…

The correct way to create a CN/DN architecture is for complex organizations to make certain technology investments that speed up each step of the above process. The key challenge in the CN journey is to help incumbent enterprises kickstart their digital products to disarm the competition.

The sheer scope of the digital IT challenge is due in large part to the number of technology trends and developments that have begun to have a demonstrable impact on IT architectures today. There are no fewer than nine – including social media and mobile technology, the Internet of Things (IoT), open ecosystems, big data and advanced analytics, and cloud computing et al.

Thus, the CN movement is a complex mishmash of technologies that straddle infrastructure, storage, compute and management. This is an obstacle that must be surmounted by enterprise architects and IT leadership to be able to best position their enterprise for the transformation that must occur.

Six Foundational Technology Investments to go Cloud Native…

There are six foundational technology investments that underpin the creation of a Cloud Native application architecture – IaaS; PaaS and Containers; Container Orchestration; Data Analytics and BPM; API Management; and DevOps.

These are the six layers that large enterprises will need to focus on to improve their systems, processes, and applications in order to achieve a Digital Native architecture. These investments can proceed in parallel.

#1 First and foremost, you will need an IaaS platform

An agile IaaS is an organization-wide foundational layer which provides elastic capacity across a range of infrastructure services – compute, network, storage, and management. IaaS provides an agile yet scalable foundation on which to deploy everything else without incurring undue complexity in development, deployment and management. Key tenets of the private cloud approach include better resource utilization, self-service provisioning and a high degree of automation. Core IT processes such as the lifecycle of resource provisioning, deployment management, change management and monitoring will need to be redone for an enterprise-grade IaaS platform such as OpenStack.
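As an illustration of the self-service provisioning tenet, the sketch below uses the openstacksdk Python client to programmatically stand up a compute instance. It assumes a cloud profile named "mycloud" is already configured in clouds.yaml, and the image, flavor and network identifiers are placeholders.

```python
import openstack

# Connect using a cloud profile assumed to be defined in clouds.yaml
conn = openstack.connect(cloud="mycloud")

# Placeholder identifiers - look these up in your own OpenStack environment
IMAGE_ID = "REPLACE-WITH-IMAGE-ID"
FLAVOR_ID = "REPLACE-WITH-FLAVOR-ID"
NETWORK_ID = "REPLACE-WITH-NETWORK-ID"

# Self-service provisioning: create a compute instance programmatically
server = conn.compute.create_server(
    name="digital-app-node-01",
    image_id=IMAGE_ID,
    flavor_id=FLAVOR_ID,
    networks=[{"uuid": NETWORK_ID}],
)

# Block until the instance is ACTIVE, then print its details
server = conn.compute.wait_for_server(server)
print(server.name, server.status)
```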

#2 You will need to adopt a PaaS layer with Containers at its heart  –

Containers are possibly the first infrastructure software category created with developers in mind. The rise to prominence of Linux containers via Docker coincided with the onset of agile development practices under the DevOps umbrella – CI/CD etc. Containers are an excellent choice for creating agile delivery pipelines and continuous deployment. It is a very safe bet that in a few years the majority of digital applications (or mundane applications, for that matter) will transition to hundreds of services deployed on and running in containers.
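The container workflow itself can be driven programmatically. The sketch below uses the Docker SDK for Python to build an image and run it as a container, assuming a local Docker daemon and a Dockerfile in the current directory; the image and container names are placeholders.

```python
import docker

# Talk to the local Docker daemon (assumes Docker is installed and running)
client = docker.from_env()

# Build an image from a Dockerfile assumed to exist in the current directory
image, build_logs = client.images.build(path=".", tag="digital-app:latest")

# Run the freshly built image as a detached container, mapping port 8080
container = client.containers.run(
    "digital-app:latest",
    detach=True,
    ports={"8080/tcp": 8080},
    name="digital-app-instance",
)

print(container.short_id, container.status)
```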

Adopting a market-leading Platform as a Service (PaaS) such as Red Hat’s OpenShift or Cloud Foundry can provide a range of benefits – from helping with container adoption, to tooling for the CI/CD process, to reliable rollouts with A/B testing and blue-green deployments. A PaaS such as OpenShift adds auto-scaling, failover and other kinds of infrastructure management.

Why Linux Containers and Docker are the Runtime for the Software Defined Data Center (SDDC)..(4/7)

#3 You will need an Orchestration layer for Containers –

At their core, containers enable the creation of multiple self-contained execution environments over the same operating system. However, containers are not enough in and of themselves to drive large-scale DN applications. An orchestration layer, at a minimum, organizes groups of containers into applications, schedules them on servers that match their resource requirements, and places the containers across a complex network topology. It also helps with complex tasks such as release management, canary releases and administration. The actual tipping point for large-scale container adoption will vary from enterprise to enterprise. However, the common precursor to supporting containerized applications at scale has to be an enterprise-grade management and orchestration platform. Again, a PaaS technology such as OpenShift provides two benefits in one – a native container model and orchestration using Kubernetes.
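To show what "organizing groups of containers into applications and scheduling them on matching servers" looks like in practice, here is a minimal sketch using the official Kubernetes Python client to declare a three-replica Deployment. It assumes a working kubeconfig and an already built container image; all names and the image URL are placeholders.

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig (assumes cluster access is configured)
config.load_kube_config()
apps_v1 = client.AppsV1Api()

# Describe a Deployment: 3 replicas of a hypothetical containerized service
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="digital-app"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "digital-app"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "digital-app"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="digital-app",
                        image="registry.example.com/digital-app:latest",
                        ports=[client.V1ContainerPort(container_port=8080)],
                    )
                ]
            ),
        ),
    ),
)

# The orchestrator now schedules the pods onto suitable nodes and keeps 3 running
apps_v1.create_namespaced_deployment(namespace="default", body=deployment)
```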

Kubernetes – Container Orchestration for the Software Defined Data Center (SDDC)..(5/7)

#4 Accelerate investments in and combine Big Data Analytics and BPM engines –

In the end, the ability to drive business processes is what makes an agile enterprise. Automation of both business processes (BPM) and data-driven decision-making are proven approaches at webscale, data-driven organizations, and this makes all the difference in what is perceived to be a digital enterprise. Accordingly, the ability to tie a range of front-, mid- and back-office processes such as Customer Onboarding, Claims Management and Fraud Detection to a BPM-based system, and to allow applications to access these via a loosely coupled architecture based on microservices, is key. Additionally, leveraging Big Data architectures to process data streams in near real-time is another key capability to possess.
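As a minimal illustration of processing a data stream in near real-time and handing results to a downstream process, the sketch below consumes a hypothetical Kafka topic of payment events with the kafka-python client and flags large transactions. The topic name, event fields and threshold are assumptions, not a recommended fraud rule.

```python
import json
from kafka import KafkaConsumer

# Subscribe to a hypothetical 'transactions' topic carrying JSON payment events
consumer = KafkaConsumer(
    "transactions",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

# Near real-time rule: flag unusually large transactions for a downstream BPM workflow
FRAUD_THRESHOLD = 10_000  # illustrative threshold, not a recommendation

for event in consumer:
    txn = event.value
    if txn.get("amount", 0) > FRAUD_THRESHOLD:
        # In a real deployment this would call the BPM/case-management API
        print(f"Flagging transaction {txn.get('id')} for review: {txn['amount']}")
```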

Why Big Data Analytics is the Future of CRM..

#5 Invest in APIs –

APIs enable companies to constantly churn out innovative offerings while continuously adapting to and learning from customer feedback. Internet-scale companies such as Facebook provide edge APIs that enable thousands of companies to write applications that drive greater customer volumes to the Facebook platform. The term API Economy is increasingly in vogue, and it connotes a loosely federated ecosystem of companies, consumers, business models and channels.

APIs are used to abstract out the internals of complex underlying platform services. Application developers and other infrastructure services can leverage well-defined APIs to interact with digital platforms. These APIs enable the provisioning, deployment, and management of platform services.

Applications developed for a digital infrastructure will be built as small, nimble processes that communicate via APIs and over traditional infrastructure such as service mediation components (e.g. Apache Camel). These microservices-based applications will offer huge operational and development advantages over legacy applications. While one does not expect legacy but critical applications that still run on mainframes (e.g. Core Banking, Customer Order Processing etc) to move over to a microservices model anytime soon, customer-facing applications that need responsive digital UIs will definitely move.
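A short sketch of what consuming such a platform API can look like from a small, nimble process, using the Python requests library. The endpoint, token and payload shown are hypothetical; the point is that callers interact only with the published API contract, never with the platform's internals.

```python
import requests

# Hypothetical platform API endpoint and credentials - replace with your own
BASE_URL = "https://api.example.com/v1"
TOKEN = "REPLACE-WITH-OAUTH-TOKEN"

headers = {"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"}

# Provision a resource through the platform's well-defined API rather than
# touching its internals directly
payload = {"customerId": "101", "product": "premium-checking"}
response = requests.post(f"{BASE_URL}/onboarding/requests", json=payload,
                         headers=headers, timeout=10)
response.raise_for_status()

# The API abstracts the underlying services; callers only see the contract
request_id = response.json().get("requestId")
print("Onboarding request accepted:", request_id)
```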

Why APIs Are a Day One Capability In Digital Platforms..

#6 Be prepared, your development methodologies will gradually evolve to DevOps – 

The key non-technology component that is involved in delivering error-free and adaptive software is DevOps.  Currently, most traditional application development and IT operations happen in silos. DevOps with its focus on CI/CD practices requires engineers to communicate more closely, release more frequently, deploy & automate daily, reduce deployment failures and mean time to recover from failures.

Typical software development life cycles that require lengthy validations and quality control testing prior to deployment can stifle innovation. The agile software process, which is adaptive and rooted in evolutionary development and continuous improvement, can be combined with DevOps. DevOps focuses on tight integration between developers and the teams who deploy and run IT operations, and is arguably the development methodology best suited to driving large-scale Digital application development.

Conclusion..

By following a transformation roughly outlined as above, the vast majority of enterprises can derive a tremendous amount of value from their Digital initiatives. However, the current industry approach in vogue – treating Digital projects as one-off, tactical project investments – simply does not work or scale anymore. There are various organizational models one could employ from the standpoint of developing analytical maturity, ranging from a shared service to a line-of-business-led approach. An approach that I have seen work very well is to build a Digital Center of Excellence (COE) to create contextual capabilities, best practices and rollout strategies across the larger organization. The COE should be at the forefront of pushing the above technology boundaries within the larger framework of the organization.

Blockchain and Bitcoin – Industry Insights & Reference Architectures…

Distributed Ledger Technology (DLT) and applications built for DLTs – such as cryptocurrencies – are arguably the hottest topics in tech. This post summarizes seven key blogs on the topic of Blockchain and Bitcoin published at VamsiTalksTech.com. It aims to serve as a handy guide for business and technology audiences tasked with understanding and implementing this groundbreaking technology.

Image Credit – DCEBrief

Introduction…

We have been discussing the capabilities of Blockchain and Bitcoin for quite some time on this blog. The impact of Blockchain on many industries is now clearly apparent. But can the DLT movement enable business efficiency and profitability?

# 1 – Introduction to Bitcoin –

Bitcoin (BTC) is truly the first decentralized, peer-to-peer, highly secure and purely digital currency. Bitcoin and its cousins such as Ether and other altcoins now regularly get widespread (and mostly positive) notice from a range of industry actors – consumers, banking institutions, retailers and regulators. Riding on the real pathbreaker – the Blockchain – the cryptocurrency movement will help drive democratization in the financial industry and society at large in the years to come. This blog post discusses BTC from a business standpoint.

Bitcoin (BTC) Ushers in the Future of Finance..(1/5)

# 2 – The Architecture of Bitcoin –

This post discusses the technical architecture of Bitcoin.

The Architecture of Bitcoin..(2/5)

# 3 – Introduction to Blockchain –

The term Blockchain is derived from a design pattern that describes a chain of data blocks that map to individual transactions. Each transaction conducted in the real world (e.g. a Bitcoin transfer) results in the creation of new blocks in the chain. Each new block includes a cryptographic hash of the previous block, thus constructing a chain of blocks – hence the name. This post introduces the business potential of Blockchain to the reader.
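The chaining idea can be illustrated in a few lines of Python. The toy sketch below builds blocks whose hashes cover their contents and the previous block's hash; it deliberately omits proof-of-work, networking and consensus, and the transactions are made up.

```python
import hashlib
import json
import time

def make_block(transactions, previous_hash):
    """Create a block whose hash covers its contents and the previous block's hash."""
    block = {
        "timestamp": time.time(),
        "transactions": transactions,
        "previous_hash": previous_hash,
    }
    payload = json.dumps(block, sort_keys=True).encode("utf-8")
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

# A toy chain: the genesis block, then one block per batch of transactions
genesis = make_block(["genesis"], previous_hash="0" * 64)
block_1 = make_block(["alice pays bob 5 BTC"], previous_hash=genesis["hash"])
block_2 = make_block(["bob pays carol 2 BTC"], previous_hash=block_1["hash"])

# Tampering with an earlier block changes its hash and breaks every later link
assert block_2["previous_hash"] == block_1["hash"]
print(block_2["hash"])
```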

The immense potential of the Blockchain..(3/5)

# 4 – The Reference Architecture of the Blockchain –

This post discusses the technical architecture of Blockchain.

The Architecture of Blockchain..(4/5)

# 5 – How Blockchain will lead to Industry disruption –

Blockchain lies at the heart of the Bitcoin implementation and is easily the most influential part of the BTC platform ecosystem. Blockchain is thus both a technology platform and a design pattern for building global-scale industry applications that make all of the above possible. Its design not only makes it usable as a platform for digital currency but also enables it to indelibly record any kind of transaction – be it a currency movement, a medical record, supply chain data or a document.

How the Blockchain will lead disruption across industry..(5/5)

# 6 – What Blockchain can do for the Internet of Things (IoT) –

Blockchain can enable & augment a variety of application scenarios and use-cases for the IoT. No longer are such possibilities too futuristic – as we discuss in this post.

What Blockchain can do for The Internet Of Things..

# 7 – Key Considerations in Adapting the Blockchain for the Enterprise –

With advances in various Blockchain-based DLT (distributed ledger technology) platforms such as Hyperledger and Ethereum et al, enterprises have begun to take baby steps to adapt the Blockchain (BC) to industrial-scale applications. This post discusses some of the stumbling blocks the author has witnessed enterprises running into as they look to adopt Blockchain-based Distributed Ledger Technology in real-world applications.

Blockchain For the Enterprise: Key Considerations..

Conclusion..

The true disruption of Blockchain-based distributed ledgers will lie in moving companies to an operating model where they leave behind siloed and stovepiped business processes for the next generation of distributed business processes predicated on a seamless global platform. The DLT-based platform will enable the easy transaction, exchange, and contracting of digital assets. However, before enterprises rush in, they need to perform an adequate degree of due diligence to avoid some of the pitfalls we have highlighted above.

The Seven Characteristics of Cloud Native Application Architectures..

We are in the middle of a series of blogs on Software Defined Datacenters (SDDC) @ http://www.vamsitalkstech.com/?p=1833. The key business imperative driving SDDC architectures is their ability to natively support digital applications. Digital applications are “Cloud Native” (CN) in the sense that these platforms are originally written for cloud frameworks – instead of being ported over to the Cloud as an afterthought. Thus, Cloud Native application development is emerging as the most important trend in digital platforms. This blog post will define the seven key architectural characteristics of these CN applications.

Image Credit – Shutterstock

What is driving the need for Cloud Native Architectures… 

The previous post in the blog covered the monolithic architecture pattern. Monolithic architectures, which currently dominate the enterprise landscape, are coming under tremendous pressure in various ways and are increasingly being perceived to be brittle. Chief among these forces are massive user volumes, DevOps-style development processes, the need to open up business functionality locked within applications to partners, and the heavy manual effort required to deploy and manage monolithic architectures. Monolithic architectures also introduce technical debt into the datacenter – which makes it very difficult for business lines to introduce changes as customer demands change – a key antipattern for digital deployments.

Why Legacy Monolithic Architectures Won’t Work For Digital Platforms..

Applications that require a high release velocity, present many complex moving parts, and are worked on by one or many development teams are an ideal fit for the CN pattern.

Introducing Cloud Native Applications…

There is no single and universally accepted definition of a Cloud Native application. I would like to define a CN Application as “an application built using a combination of technology paradigms that are native to cloud computing – including distributed software development, a need to adopt DevOps practices, microservices architectures based on containers, API based integration between the layers of the application, software automation from infrastructure to code, and finally orchestration & management of the overall application infrastructure.”

Further, Cloud Native applications need to be architected, designed, developed, packaged, delivered and managed based on a deep understanding of the frameworks of cloud computing (IaaS and PaaS).

Characteristic #1 CN Applications dynamically adapt to & support massive scale…

The first & foremost characteristic of a CN Architecture is the ability to dynamically support massive numbers of users, large development organizations & highly distributed operations teams. This requirement is even more critical when one considers that cloud computing is inherently multi-tenant in nature.

Within this area, the following typical concerns need to be accommodated –

  1. the ability to grow the deployment footprint dynamically (Scale-up)  as well as to decrease the footprint (Scale-down)
  2. the ability to gracefully handle failures across tiers that can disrupt application availability
  3. the ability to accommodate large development teams by ensuring that components themselves provide loose coupling
  4. the ability to work with virtually any kind of infrastructure (compute, storage and network) implementation

Characteristic #2 CN applications need to support a range of devices and user interfaces…

The User Experience (UX) is the most important part of a human facing application. This is particularly true of Digital applications which are omnichannel in nature. End users could not care less about the backend engineering of these applications as they are focused on an engaging user experience.

Demystifying Digital – the importance of Customer Journey Mapping…(2/3)

Accordingly, CN applications need to natively support mobile applications. This includes the ability to support a range of mobile backend capabilities – ranging from authentication & authorization services for mobile devices, location services, customer identification, push notifications, cloud messaging, toolkits for iOS and Android development etc.

Characteristic #3 They are automated to the fullest extent they can be…

The CN application needs to be abstracted completely from the underlying infrastructure stack. This is key, as development teams can then focus solely on writing their software and do not need to worry about the maintenance of the underlying OS/storage/network. One of the key challenges with monolithic platforms (http://www.vamsitalkstech.com/?p=5617) is their inability to efficiently leverage the underlying infrastructure, as they have a high degree of dependency on it. Further, the lifecycle of infrastructure provisioning, configuration, deployment, and scaling is mostly manual, with lots of scripts and pockets of configuration management.

The CN application, on the other hand, has to be very light on manual tasks given its scale. The provision-deploy-scale cycle is highly automated, with the application automatically scaling to meet demand and resource constraints and seamlessly recovering from failures. We discussed Kubernetes in one of the previous blogs.

Kubernetes – Container Orchestration for the Software Defined Data Center (SDDC)..(5/7)

Frameworks like these support CN Applications in providing resiliency, fault tolerance and in generally supporting very low downtime.

Characteristic #4 They support Continuous Integration and Continuous Delivery…

For CN applications, the reduction of the vast amount of manual effort witnessed in monolithic applications is not confined to deployment alone. From a CN development standpoint, the ability to quickly test and perform quality control on daily software updates is an important aspect. CN applications automate the application development and deployment processes using the paradigms of CI/CD (Continuous Integration and Continuous Delivery).

The goal of CI is that every time source code is added or modified, the build process kicks off and the tests are conducted instantly. This helps catch errors faster and improves the quality of the application. Once the CI process is done, the CD process builds the application into an artifact suitable for deployment after combining it with suitable configuration. It then deploys it onto the execution environment with the appropriate identifiers for versioning, in a manner that supports rollback. CD ensures that the tested artifacts are instantly deployed, with acceptance testing.
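A highly simplified sketch of such a pipeline is below, written as a plain Python driver that runs tests, builds a versioned artifact and rolls it out. In practice these steps would live in a CI/CD server (Jenkins, GitLab CI etc.); the image name, version tag and deployment name are placeholders.

```python
import subprocess
import sys

def run(cmd):
    """Run one pipeline step and fail fast on a non-zero exit code."""
    print("-->", " ".join(cmd))
    if subprocess.run(cmd).returncode != 0:
        sys.exit(1)

IMAGE = "registry.example.com/digital-app:1.4.2"  # placeholder artifact name/version

# Continuous Integration: every change triggers an immediate build and test run
run(["python", "-m", "pytest", "tests/"])
run(["docker", "build", "-t", IMAGE, "."])

# Continuous Delivery: push the tested, versioned artifact and roll it out;
# keeping the previous tag available is what makes rollback possible
run(["docker", "push", IMAGE])
run(["kubectl", "set", "image", "deployment/digital-app", f"digital-app={IMAGE}"])
```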

Characteristic #5 They support multiple datastore paradigms…

The RDBMS has been a fixture of the monolithic application architecture. CN applications, however, need to work with loosely structured data formats as well as regularly structured data. This implies the need to support data streams that are not just high speed but are also better suited to NoSQL/Hadoop storage. These systems provide Schema on Read, a data handling technique in which a format or schema is applied to data as it is accessed from a storage location, as opposed to applying it while the data is ingested. As we will see later in the blog, individual microservices can have their own local data storage.
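A tiny sketch of Schema on Read: raw events are stored exactly as they arrive, and a schema (with type coercion and defaults) is applied only at read time. The field names and records are invented for illustration.

```python
import json

# Raw, loosely structured events captured as-is at ingest time (no schema applied)
raw_events = [
    '{"customer_id": "101", "amount": "42.50", "channel": "mobile"}',
    '{"customer_id": "102", "amount": "17", "ts": "2017-11-02T10:15:00Z"}',
]

# The schema is applied only when the data is read, not when it is written
def read_with_schema(line):
    record = json.loads(line)
    return {
        "customer_id": record.get("customer_id"),
        "amount": float(record.get("amount", 0)),     # coerce types at read time
        "channel": record.get("channel", "unknown"),  # tolerate missing fields
    }

for line in raw_events:
    print(read_with_schema(line))
```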

A Holistic New Age Technology Approach To Countering Payment Card Fraud (3/3)…

Characteristic #6 They support APIs as a key feature…

APIs have become the de facto model that provides developers and administrators with the ability to assemble digital applications, such as microservices, out of complicated componentry. Thus, there is a strong case to be made for adopting an API-centric strategy when developing CN applications. CN applications use APIs in multiple ways – firstly, as the way to interface loosely coupled microservices (which abstract out the internals of the underlying application components). Secondly, developers use well-defined APIs to interact with the overall cloud infrastructure services. Finally, APIs enable the provisioning, deployment, and management of platform services.

Why APIs Are a Day One Capability In Digital Platforms..

Characteristic #7 Software Architecture based on microservices…

As James Lewis and Martin Fowler define it – “..the microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and independently deployable by fully automated deployment machinery. There is a bare minimum of centralized management of these services, which may be written in different programming languages and use different data storage technologies.” [1]

Microservices are a natural evolution of the Service Oriented Architecture (SOA) approach. The application is decomposed into loosely coupled business functions and mapped to microservices. Each microservice is built for a specific, granular business function and can be worked on by an independent developer or team. As such, it is a separate code artifact and is loosely coupled not just from a communication standpoint (typically communicating via a RESTful API, with data passed around as a JSON/XML representation) but also from a build, deployment, upgrade and maintenance perspective. Each microservice can optionally have its own localized datastore. An important advantage of adopting this approach is that each microservice can be created using a separate technology stack from the other parts of the application. Docker containers are a natural choice to run these microservices on. Microservices confer a range of advantages, ranging from easier builds to independent deployment and scaling.
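As a minimal sketch of a single microservice, the Flask service below owns one granular business capability (accounts), keeps its own local datastore (an in-memory dict standing in for a real database), and exchanges state over a RESTful JSON API. Endpoint names and fields are hypothetical.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Each microservice owns its data; here an in-memory dict stands in for a
# local datastore dedicated to this one business capability
_accounts = {"101": {"owner": "Alice", "balance": 250.0}}

@app.route("/accounts/<account_id>", methods=["GET"])
def get_account(account_id):
    account = _accounts.get(account_id)
    if account is None:
        return jsonify({"error": "not found"}), 404
    # State is exchanged over the wire as JSON, never as shared objects
    return jsonify({"id": account_id, **account})

@app.route("/accounts", methods=["POST"])
def create_account():
    payload = request.get_json()
    account_id = str(len(_accounts) + 101)
    _accounts[account_id] = {"owner": payload["owner"], "balance": 0.0}
    return jsonify({"id": account_id}), 201

if __name__ == "__main__":
    # This service builds, deploys and scales independently of its peers
    app.run(port=5000)
```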

A Note on Security…

It goes without saying that security is a critical part of CN applications and needs to be considered and designed for as a cross-cutting concern from inception. Security concerns impact the design and lifecycle of CN applications, ranging from deployment to updates to image portability across environments. A range of technology choices is available to cover areas such as application-level security using Role-Based Access Control, Multifactor Authentication (MFA), and Authentication & Authorization (A&A) using protocols such as OAuth, OpenID, SSO etc. Container security is fundamental here, and many vendors are working to ensure that once the application is built as part of a CI/CD process as described above, it is packaged into labeled (and signed) containers which can be made part of a verified and trusted registry. This ensures that container image provenance is well understood, as well as protecting any users who download the containers for use across their environments.

Conclusion…

In this post, we have tried to look at some architecture drivers for Cloud-Native applications. It is a given that organizations moving from monolithic applications will need to take nimble, small steps to realize the ultimate vision of business agility and technology autonomy. The next post, however, will look at some of the critical foundational investments enterprises will have to make before choosing the Cloud Native route as a viable choice for their applications.

References..

[1] Martin Fowler – https://martinfowler.com/intro.html

A Framework for Model Risk Management

“There are known knowns. These are things we know that we know. There are known unknowns. That is to say, there are things that we know we don’t know. But there are also unknown unknowns. There are things we don’t know we don’t know.” – Donald Rumsfeld, 2002,  Fmr US Secy of Defense

This is the fourth in a series of blogs on Data Science that I am jointly authoring with Maleeha Qazi (https://www.linkedin.com/in/maleehaqazi/). We have previously covered data quality issues @ http://www.vamsitalkstech.com/?p=5396 and the inefficiencies that result from a siloed data science process @ http://www.vamsitalkstech.com/?p=5046. We have also discussed the ideal way Data Scientists would like their models deployed for maximal benefit and use – as a Service @ http://www.vamsitalkstech.com/?p=5321. This fourth blogpost discusses an organizational framework for managing the business risk which comes with a vast portfolio of models.

Introduction

With machine learning increasing in popularity and adoption across industries, models are increasing in number and scope. McKinsey estimates that large enterprises have seen an increase of about 10–25% in their complex models, which are being employed across areas as diverse as customer acquisition, risk management, insurance policy management, insurance claims processing, fraud detection and other advanced analytics. However, this increase is accompanied by a rise in model risk, where incorrect model results or design contribute to erroneous business decisions. In this blog post, we discuss the need for model risk management (MRM) and a generic framework to achieve it from an industry standpoint.

Model Risk Management in the Industry

The Insurance industry has extensively used predictive modeling across a range of business functions including policy pricing, risk management, customer acquisition, sales, and internal financial functions. However, as predictive analytics has become increasingly important, there is always a danger – a business risk – incurred due to the judgment of the models themselves. While the definition of a model can vary from one company to another, we would like to define a model as a representation of some real-world phenomenon based on the real-world inputs (both quantitative and qualitative) shown to it, which operates on those inputs using an algorithm to produce a business insight or decision. The model can also provide some level of explanation for the reasons it arrived at the corresponding business insight. There are many ways to create and deliver models to applications, varying from spreadsheets to specialized packages and platforms. We have covered some of these themes from a model development perspective in a previous blog @ http://www.vamsitalkstech.com/?p=5321.

Models confer a multitude of benefits, namely:

  1. The ability to reason across complex business scenarios spanning customer engagement, back-office operations, and risk management
  2. The ability to automate decision-making based on historical patterns across large volumes of data
  3. The audit-ability of the model which can explain to the business user how the model arrived at a certain business insight

The performance and the composition of a model depend on the intention of the designer. The reliability of the model depends primarily on access to adequate and representative data and secondly on the ability of the designer to model complex real-world scenarios and not always assume best-case scenarios.

As the financial crisis of 2008 illustrated, the failure of models nearly brought down the insurer AIG, which caused severe disruption to the global financial system and helped set off the wider crisis in the global economy. Over the last few years, the growing adoption of machine learning has resulted in models being embedded into key business processes. When models go wrong, they can cause severe operational losses – which should illustrate the importance of putting in place a strategic framework for managing model risk.

A Framework for Model Risk Management

The goal of Model Risk Management (MRM) is to ensure that the entire portfolio of models is governed like any other business asset. To that effect, a Model Risk Management program needs to include the following elements:

  1. Model Planning – The first step in the MRM process is to form a structure by which models created across the business are built in a strategic and planned manner. This phase covers the ability to ensure that model objectives are well defined across the business, that duplication is avoided, that best practices around model development are followed, and that modelers are provided with the right volumes of high-quality data to create the most effective models possible. We have covered some of these themes around data quality in a previous blogpost @ http://www.vamsitalkstech.com/?p=5396
  2. Model Validation & Calibration – As models are created for specific business functions, they must be validated for precision [1] and calibrated to reflect the correct sensitivity [4] and specificity [4] that the business would like to allow for (a minimal metrics sketch follows this list). Every objective could have its own “sweet spot” (i.e. threshold) that the business wants to attain by using the model. For example: a company that wants to go green, but realizes that not all of its customers have access to (or desire to use) electronic modes of communication, might want to send out the minimum number of flyers that still gets the message out – keeping its carbon footprint to a minimum without losing revenue by failing to reach the correct set of customers. All business validation is driven by the business objectives that must be reached and how much wiggle room there is for negotiation.
  3. Model Management – Models that have made it to this stage must now be managed. Management here means answering questions such as: who should use what model for what purpose, how long should the models be used without re-evaluation, what are the criteria for re-evaluation, who will monitor usage to prevent misuse, etc. Management also deals with logistics such as where the models reside, how they are accessed and executed, who gets to modify them versus just use them, how they will be swapped out when needed without disrupting dependent business processes, how they should be versioned, whether multiple versions of a model can be deployed simultaneously, and how to detect data fluctuations that will disrupt model behavior before they happen.
  4. Model Governance – Model Governance covers some of the most strategic aspects of Model Risk Management. The key goal of this process is to ensure that the models are being managed in conformance with industry governance and are being managed with a multistage process across their lifecycle – from Initiation to Business Value to Retirement.
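To make the validation and calibration step concrete, the sketch below computes precision [1], sensitivity [4] and specificity [4] from a hypothetical confusion matrix; the counts are invented. Calibration then becomes the exercise of choosing the decision threshold that moves these numbers into the business's "sweet spot".

```python
def validation_metrics(tp, fp, tn, fn):
    """Precision, sensitivity (recall) and specificity from a confusion matrix."""
    precision = tp / (tp + fp)      # of flagged cases, how many were correct
    sensitivity = tp / (tp + fn)    # of actual positives, how many were caught
    specificity = tn / (tn + fp)    # of actual negatives, how many were cleared
    return precision, sensitivity, specificity

# Hypothetical validation run, e.g. customers correctly/incorrectly targeted by a model
p, sens, spec = validation_metrics(tp=80, fp=20, tn=880, fn=20)
print(f"precision={p:.2f} sensitivity={sens:.2f} specificity={spec:.2f}")
```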

Regulatory Guidance on Model Risk Management

The most authoritative guide on MRM comes from the Federal Reserve System – FRB SR 11-7 / OCC Bulletin 2011-12. [3] Though it is not directly applicable to the insurance industry (it is meant mainly for the banking industry), its framework is considered by many to contain thought leadership on this topic. The SR 11-7 framework includes documentation as part of model governance. An article in the Society of Actuaries’ April 2016 issue of The Modeling Platform [2] details a thorough method for documenting a model, the process surrounding it, and why such information is necessary. In a highly regulated industry like insurance, every decision made in the process of creating a model (e.g. assumptions made, judgment calls given circumstances at the time, etc.) could be brought under scrutiny and affects the risk of the model itself. With adequate documentation, you can attempt to mitigate any risks you can foresee and have a good starting point for those that might blindside you down the road.

And Now a Warning…

Realize that even after putting MRM into place, models are still limited – they cannot cope with what Donald Rumsfeld dubbed the “unknown unknowns”. As stated in an Economist article [5]: “Almost a century ago Frank Knight highlighted the distinction between risk, which can be calibrated in probability distributions, and uncertainty, which is more elusive and cannot be so neatly captured…The models may have failed but it was their users who vested too much faith in them”. Models, by their definition, are built using probability distributions based on previous experience to predict future outcomes. If the underlying probability distribution changes radically, they can no longer attempt to predict the future – because the assumption upon which they were built no longer holds. Hence the human element must remain vigilant and not put all their eggs into the one basket of automated predictions. A human should always question if the results of a model make sense and intervene when they don’t.

Conclusion

As the saying goes – “Models do not kill markets, people do.” A model is only as good as the assumptions and algorithm choices made by its designer, as well as the quality and scope of the data fed to it. Enterprises need to put in place an internal model risk management program that ensures that their portfolio of models is constantly updated, enriched with data, and managed like any other strategic corporate asset. And never forget that a knowledgeable human must remain in the loop.

References

[1] Wikipedia – “Precision and Recall”
https://en.wikipedia.org/wiki/Precision_and_recall

[2] The Society of Actuaries – “The Modeling Platform” https://www.soa.org/Library/Newsletters/The-Modeling-Platform/2016/april/mp-2016-iss3-crompton.aspx

[3] The Federal Reserve – SR 11-7: Guidance on Model Risk Management
https://www.federalreserve.gov/supervisionreg/srletters/sr1107.htm

[4] Wikipedia – “Sensitivity and Specificity”
https://en.wikipedia.org/wiki/Sensitivity_and_specificity

[5] The Economist – “Economic models and the financial crisis: Why they crashed too”, Jun 19th 2014, by P.W., London.
https://www.economist.com/blogs/freeexchange/2014/06/economic-models-and-financial-crisis

Why Legacy Monolithic Architectures Won’t Work For Digital Platforms..

As times change, so do architectural paradigms in software development. For the more than fifteen years the industry has been developing large-scale JEE/.NET applications, the three-tier architecture has been the dominant design pattern. However, as enterprises embark or continue on their Digital journey, they are facing a new set of business challenges which demand fresh technology approaches. We have looked into transformative data architectures at great depth in this blog; let us now consider a rethink of the applications themselves. Applications that were earlier deemed to be sufficiently well architected are now termed monolithic. This post focuses solely on the underpinnings of why legacy architectures will not work in the new software-defined world. My intention is not to criticize a model (the three-tier monolith) that has worked well in the past, but merely to reason about why it may be time for a newer, generally well-accepted paradigm.

Traditional Software Platform Architectures… 

Digital applications support a wider variety of frontends and channels, need to accommodate larger volumes of users, and need wider support for a range of business actors – partners, suppliers et al – via APIs. Finally, these new-age applications need to work with unstructured data formats (as opposed to the strictly structured relational format). From an operations standpoint, there is a strong need for a higher degree of automation in the datacenter. All of these requirements call for agility as the most important construct in the enterprise architecture.

As we will discuss, legacy applications (typically defined as those created more than five years ago) are beginning to emerge as one of the key obstacles to doing Digital. The issue is not just in the underlying architectures themselves but also in the development culture involved in building and maintaining such applications.

Consider the vast majority of applications deployed in enterprise data centers. These applications deliver collections of very specific business functions – e.g. onboarding new customers, provisioning services, processing payments etc. Whatever the choice of vendor application platform, the vast majority of existing enterprise applications and platforms essentially follow a traditional three-tier software architecture with a specific separation of concerns at each tier (as the vastly simplified illustration below depicts).

Traditional three-tier Monolithic Application Architecture

The first tier is the Presentation tier, depicted at the top of the diagram. Its job is to present the user experience: the user interface components that drive the overall web application flow and render the UI for the various clients. A variety of UI frameworks that provide both flow and rendering are typically used here, including Spring MVC, Apache Struts, HTML5, AngularJS et al.
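To make this tier concrete, here is a minimal, purely illustrative sketch of presentation-tier code in the Spring MVC style; the AccountController, the AccountService facade and the view name are hypothetical stand-ins for whatever a given application actually exposes.

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RequestParam;

// Hypothetical business-tier facade; in a real monolith this lives in the
// business logic layer (sketched further below) and is injected by the container.
interface AccountService {
    String summaryFor(String customerId);
}

// A presentation-tier controller in the Spring MVC style: it owns page flow and
// hands rendering off to a view template (JSP, Thymeleaf etc.), nothing more.
@Controller
public class AccountController {

    private final AccountService accountService;

    @Autowired
    public AccountController(AccountService accountService) {
        this.accountService = accountService;
    }

    @RequestMapping(value = "/accounts/summary", method = RequestMethod.GET)
    public String accountSummary(@RequestParam("customerId") String customerId, Model model) {
        // Delegate to the business tier and expose the result under a view attribute.
        model.addAttribute("summary", accountService.summaryFor(customerId));
        return "accountSummary"; // logical view name resolved by the ViewResolver
    }
}
```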

The middle tier is the Business logic tier, where all of the application's business logic is centralized and separated from the user interface layer. The business logic is usually a mix of objects and business rules written in Java using frameworks such as EJB3, Spring etc. It is housed in an application server such as JBoss AS, Oracle WebLogic or IBM WebSphere, which provides enterprise services (caching, resource pooling, naming and identity services et al) to the business components running on it. This layer also contains data access logic and initiates transactions against a range of supporting systems – message queues, transaction monitors, rules and workflow engines, ESB (Enterprise Service Bus) based integration, partner systems accessed via web services, identity and access management systems et al.
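A hedged sketch of what a business-tier component might look like in the EJB3 style, with the application server supplying pooling and transactions declaratively; PaymentServiceBean, PaymentRepository and the business rule shown are invented purely for illustration.

```java
import java.math.BigDecimal;
import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;
import javax.inject.Inject;

// Hypothetical data-access dependency; a JPA-based implementation of the same
// idea is sketched in the data-tier example further below.
interface PaymentRepository {
    BigDecimal outstandingBalance(String customerId);
    void recordPayment(String customerId, BigDecimal amount);
}

// A business-tier component in the EJB3 style: the application server supplies
// instance pooling, JTA transactions and other enterprise services declaratively.
@Stateless
public class PaymentServiceBean {

    @Inject
    private PaymentRepository paymentRepository;

    // The container starts (or joins) a transaction around this method.
    @TransactionAttribute(TransactionAttributeType.REQUIRED)
    public void settle(String customerId, BigDecimal amount) {
        BigDecimal balance = paymentRepository.outstandingBalance(customerId);
        if (amount.compareTo(balance) > 0) {
            throw new IllegalArgumentException("Payment exceeds outstanding balance");
        }
        paymentRepository.recordPayment(customerId, amount);
    }
}
```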

The Data tier is where traditional databases and enterprise integration systems logically reside. The RDBMS rules this area in three-tier architectures, and the data access code is typically written using an ORM (Object Relational Mapping) framework such as Hibernate or iBatis, or as plain JDBC code.
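And a sketch of typical data-tier code under the same assumptions: a JPA entity plus a DAO whose EntityManager (commonly backed by Hibernate) translates these calls into SQL against the RDBMS. The CustomerAccount entity and CustomerAccountDao are hypothetical examples, not any particular vendor's schema.

```java
import java.math.BigDecimal;
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.Id;
import javax.persistence.PersistenceContext;

// Hypothetical relational entity mapped by the ORM (Hibernate, EclipseLink etc.).
@Entity
public class CustomerAccount {

    @Id
    private String customerId;

    private BigDecimal outstandingBalance;

    protected CustomerAccount() { } // no-arg constructor required by JPA

    public BigDecimal getOutstandingBalance() {
        return outstandingBalance;
    }

    public void setOutstandingBalance(BigDecimal outstandingBalance) {
        this.outstandingBalance = outstandingBalance;
    }
}

// Typical ORM-backed data access code: the container-managed EntityManager
// issues the SQL and flushes changes to managed entities at commit time.
class CustomerAccountDao {

    @PersistenceContext
    private EntityManager entityManager;

    public BigDecimal outstandingBalance(String customerId) {
        CustomerAccount account = entityManager.find(CustomerAccount.class, customerId);
        return account != null ? account.getOutstandingBalance() : BigDecimal.ZERO;
    }

    public void reduceBalance(String customerId, BigDecimal amount) {
        CustomerAccount account = entityManager.find(CustomerAccount.class, customerId);
        account.setOutstandingBalance(account.getOutstandingBalance().subtract(amount));
        // No explicit save call needed: the managed entity is flushed on commit.
    }
}
```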

Across all of these layers, common utilities & agents are provided to address cross-cutting concerns such as logging, monitoring, security, single sign-on etc.
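As an example of how such cross-cutting concerns are typically wired into a monolith, here is a simple servlet filter sketch; RequestLoggingFilter is a hypothetical illustration of applying request logging uniformly to every request, not a prescribed implementation.

```java
import java.io.IOException;
import java.util.logging.Logger;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.annotation.WebFilter;
import javax.servlet.http.HttpServletRequest;

// A cross-cutting concern implemented the classic monolith way: one filter
// registered against "/*" so that every request in the application passes through it.
@WebFilter("/*")
public class RequestLoggingFilter implements Filter {

    private static final Logger LOG = Logger.getLogger(RequestLoggingFilter.class.getName());

    @Override
    public void init(FilterConfig filterConfig) throws ServletException {
        // No initialization needed for this simple example.
    }

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest httpRequest = (HttpServletRequest) request;
        long start = System.currentTimeMillis();
        try {
            chain.doFilter(request, response); // hand off to the rest of the chain
        } finally {
            LOG.info(httpRequest.getRequestURI() + " handled in "
                    + (System.currentTimeMillis() - start) + " ms");
        }
    }

    @Override
    public void destroy() {
        // Nothing to clean up.
    }
}
```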

The application is packaged as an enterprise archive (EAR), which can be composed of one or more WAR/JAR files. While most enterprise-grade applications are neatly modularized internally, the whole is typically compiled as a single collection of modules and shipped as one artifact. It bears mentioning that dependency & version management can be a painstaking exercise for complex applications.

Let us consider the typical deployment process and setup for a three-tier application.

From a deployment standpoint, static content is typically served from an Apache webserver fronting a Java-based webserver (most often Tomcat), which in turn sits before a cluster of backend Java-based application servers running multiple instances of the application for High Availability. In most implementations the application is stateful (stateless in some cases). The rest of the setup, with firewalls and other supporting systems, is fairly standard.

While the above architectural template is fairly standard across industry applications built on Java EE, there are some very valid reasons why it has begun to emerge as an anti-pattern when applied to digital applications.

Challenges involved in developing and maintaining Monolithic Applications …

Let us consider what Digital business usecases demand of application architecture, and where the monolith falls short.

  1. The entire application is typically packaged as a single enterprise archive (EAR file), a combination of various WAR and JAR files. While this makes deployment easier – there is only one artifact to copy over – it makes the development lifecycle a nightmare: even a simple change in the user interface forces a rebuild of the entire executable. This results not only in long cycles but also in real strain on teams that span disciplines from business to QA.
  2. What follows from such long "code-test-deploy" cycles is that the architecture becomes change resistant, the code grows ever more complex, and the system as a whole loses the agility to respond to rapidly changing business requirements.
  3. Developers are constrained in multiple ways. Firstly, the architecture becomes very complex over time, which inhibits quick onboarding of new developers. Secondly, the architecture force-fits developers from different teams into working in lockstep, forgoing their autonomy over planning and release cycles. Services across tiers are not independently deployable, which leads to big-bang releases in short windows of time. It is thus no surprise that failures and rollbacks happen at an alarming rate.
  4. From an infrastructure standpoint, the application is tightly coupled to the underlying hardware. From a software clustering standpoint, the application scales better vertically and supports only limited horizontal scale-out. As customer traffic increases, performance across clusters can degrade.
  5. The applications are neither designed nor tested to operate gracefully under failure conditions. This key point receives little attention at design time but causes performance headaches later on.
  6. An important point is that Digital applications & their parts are increasingly built in different languages such as Java, Scala, and Groovy. The monolith essentially limits this choice of languages, frameworks, platforms and even databases.
  7. The architecture does not natively support API externalization or Continuous Integration and Delivery (CI/CD).
  8. As highlighted above, the architecture primarily supports the relational model. If you need to accommodate alternative data approaches such as NoSQL or Hadoop, you are largely out of luck.

Operational challenges involved in running a Monolithic Application…

The difficulties in running a range of monolithic applications across an operational infrastructure have already been summed up in the other posts on this blog.

The primary issues include –

  1. The monolithic architecture typically dictates a vertical scaling model, which limits scalability as user volumes grow. The traditional remedy has been to invest in multiple sets of hardware (servers, storage arrays) to physically separate applications, which drives up running costs and headcount and multiplies the manual processes around system patching and maintenance.
  2. Capacity management tends to be a challenge because many fine-grained components compete for compute, network and storage resources (vCPU, vRAM, virtual network etc.) while essentially running inside a single JVM. Considerable JVM tuning is needed from a test and pre-production standpoint.
  3. The range of functions that need to be performed around monolithic applications lacks any kind of policy-driven workload and scheduling capability, because the application does very little to drive the infrastructure.
  4. The vast majority of the work needed to provision, schedule and patch these applications is done by system administrators; consequently, automation is minimal at best.
  5. The same is true of Operations Management. Functions like log administration, housekeeping, monitoring, auditing, application deployment and rollback are largely manual, with some scripting.

Conclusion…

It deserves mention that the above monolithic design pattern will continue to work well for departmental (low user volume) applications with limited business impact, and for applications serving a well-defined user base with well delineated workstreams. The next blog post will consider the microservices way of building new age architectures. We will introduce and discuss Cloud Native Application development, popularized across web-scale enterprises, especially Netflix, and examine how this new paradigm overcomes many of the above limitations from both a development and an operations standpoint.