My take on Gartner’s Top 10 Strategic Technology Trends for 2015

As open source vendors like Red Hat and Hortonworks, as well as foundations like OpenStack, help customers achieve their key business transformation initiatives through open architectures and technologies, customers should keep a close eye on emerging technologies and trends as they happen.

But what comes next and what is to be expected?

Gartner’s top 10 trends offer a compelling look at these very important potential shifts in the IT landscape and their seismic impact on customer organizations.


http://www.gartner.com/newsroom/id/2867917

Here is an independent (and open source oriented), hopefully succinct take on each of these –

1. Computing Everywhere

It is not just about serving transient mobile visitors in a business context; we feel that business architectures built in support of mobile devices should also support building relationships with those users. We increasingly see a number of customers supporting a BYOD model where mobile apps now serve as a replacement for web applications. Security, user interface design & business workflow support will emerge as key drivers on the business side. IT will focus on the ability of such architectures to support multiple cloud deployment backends.

2. Internet of Things

From an enterprise perspective, IoT has the potential to turn any organization into an Enterprise Internet of Things. We recommend that customers think about IoT not solely in the context of smart home appliances, wearable fitness devices and the like, but also about the ramifications of the changes to existing and potentially new complex application architectures run by most enterprises.

If IoT isn’t already viewed as a “must-have” by business stakeholders, chances are it won’t be long before customer IT organizations are tasked with identifying and harnessing information and actionable insight from Internet-connected devices.

The true value of Internet-connected devices lies in harnessing all the data they produce to provide insights into how the business is working – so that existing business models can be fine-tuned or even new ones created. This is typically done by developing applications that can glean insights from the data and provide them to business stakeholders and customers through dashboards. As a first step to designing an IoT strategy for your enterprise, start by identifying the areas of your business that would be a natural fit from a revenue generation or customer engagement perspective.

3. 3D Printing

We forecast that this will be an interesting space to watch as more financing and funding goes to players in the 3D printing market. We expect the technology to mature and evolve to support many different kinds of manufactured products as the cost of materials falls and more diverse products are produced. Eventually this will lead to sea changes in the manufacturing industry, with impacts for industrial automation.

4. Advanced, Pervasive and Invisible Analytics

As mobile clients, IoT applications, social media feeds and the like are brought onboard into existing applications from an analytics perspective, traditional IT operations face pressure from both business and development teams to provide new and innovative services in response to rapidly changing business requirements and the need for real-time responsiveness. Data streams need to be filtered and acted on in the appropriate context from an analytical perspective. Analytics is the first killer app for Big Data – be it the low-hanging fruit of reporting & dashboards, or forecasting, predictive modeling and even data science. One of the biggest trends for 2014 is the enhancement of analytic capabilities to incorporate real streams of data at humongous scale. Existing applications can now incorporate such functionality to derive real-time meaning from this data.

5. Context-Rich Systems

We feel that context will be the critical piece as enterprise architectures ingest, transform and analyze new-age data streams, whether they are IoT, mobile device or social media related. Cross-cutting concerns like security, workflow and business policies will all need to be baked in and supplied as part of the overall context of the data flow.

6. Smart Machines

Smart machines like robots, personal helpers and automated home equipment will rapidly evolve as algorithms become more capable of understanding their own environments. In addition, Big Data & cloud computing will continue to mature and offer day-to-day capabilities around systems that employ machine learning to make predictions & decisions. We will see increased application of smart machines in diverse fields like financial services, healthcare, telecom and media.

7. Cloud/Client Computing

Cloud computing will play an increasing role in the lifecycle of development, deployment and optimization of computing applications. As mobile clients proliferate, the trend will be in favor of applications that use robust MBaaS technologies to maximize application performance and provide the ability to synchronize data efficiently between devices and cloud computing backends.

8. Software-Defined Applications and Infrastructure

Innovation in the industry is often shackled by the absence of a responsive, automated, efficient and agile infrastructure. It can take days to procure servers to host bursts of workload, a turnaround that existing IT departments often cannot deliver. We will witness the further rise of application-controlled compute, network and storage. Further, Cloud Management Platforms (CMPs), which are beginning to provide orchestration capabilities by means of workload portability across public and private clouds, will find increased adoption.

9. Web-Scale IT

Web-scale IT has already proven its mettle at large cloud services providers such as Amazon, Google, Netflix, Facebook and others, and is now making its way into enterprises. Web-scale IT in the enterprise will find adoption via technologies like OpenStack, Platform as a Service (PaaS) and DevOps, a software development philosophy & methodology that emphasizes communication, collaboration and integration between development and operations. This trend towards adopting web-scale practices is definitely taking hold in IT organizations that want to be nimble and effective, and it will be driven by open source.

10. Risk-Based Security and Self-Protection

As cybercrime attacks increase in scale, notoriety and sophistication, security will clearly emerge as a cross-cutting concern in any technology implementation. Broadly identifying every potential attack vector, and applying real-time intelligence & deep learning around these while keeping the overall business context in mind, will be one of the key approaches to keeping data & systems secure.

All said and done, these are disruptive (and exciting times) for enterprise IT and open source in particular. In follow-on posts, we will examine how these trends are rippling across the financial services industry both from a business solution & technology platform perspective.

Enter Open Source Cloud Management!

I have spent a lot of time in the past few years working on cloud management technology with various enterprise customers spanning verticals, especially financial services, insurance, healthcare & media.

This is an emerging field that had hitherto been dominated by proprietary vendors but with the acquisition and open sourcing of ManageIQ by Red Hat, the innovation dial is markedly moving in the direction of open source.

http://manageiq.org is the community website. Red Hat productizes ManageIQ as CloudForms.

CMPs become critically important as front office (provisioning, integration, self service, workload optimization etc.) and back office (reporting, billing, metering and chargeback) management capabilities become key for adoption and proliferation of technologies like OpenStack, AWS and even traditional virtualization.

What is a cloud management platform & why should anyone care?

Below is Gartner’s definition of a Cloud Management Platform.

A cloud management platform enables common management tasks on top of your virtual infrastructure, including:

  • self-service portal and capabilities with granular permissions for user access
  • metering and billing for chargeback and showback
  • ability to provision new instances and applications for an application catalog or from image templates
  • integration points with existing systems management, service catalogs and configuration management software
  • the ability to control and automate the placement and provisioning of new instances based on business and security policies

Ref – http://www.gartner.com/it-glossary/cloud-management-platforms

Two broad areas of applicability

First is managing virtual infrastructure – literally a management platform across your virtualized infrastructure. Think monitoring, capacity management & capacity planning (which are two significantly different things), compliance and governance, workflow automation, reporting, dashboards and chargeback. Everything around infrastructure management as far as virtualization goes.

The second solution area is around cloud enablement, or private & hybrid cloud computing: self service (and, under this umbrella, provisioning new services & managing the lifecycle of those services), followed by chargeback, metering and all those things that people think of as private cloud computing.

Let’s break it down further…


Broad areas of capabilities – governance, compliance, dashboards, process automation. The platform addresses a number of infrastructure platforms – VMware, RHEV, Hyper-V, Amazon AWS – with the depth of support depending on the maturity or robustness of the APIs offered by the VM/cloud provider. When ManageIQ was first started, cloud was not yet in the lexicon of IT (this was around 7 years ago), hence the heavy and robust focus on enterprise virtualization & concomitant workloads.

Key areas of functionality

Four key components:

  1. Insight – Visibility into the infrastructure: discovery, monitoring, consumption, utilization, reporting, chargeback, trend analysis etc.
  2. Control – Proactively enforce policies around compliance and governance. Things like patch policies, too many NICs in a DMZ, or a workload that does not have the correct & certified version of an application. For instance, if you did not want servers running below a certain patch level in your production infrastructure, ManageIQ could discover that in real time when the workload starts, intercept and stop it, and then notify the appropriate roles that “this thing didn’t meet compliance”.
  3. Automate – As the name would have you believe, this does IT process automation: workflows. The key area is provisioning using a state machine – right from cloning a VM, IP address assignment and CPU/disk sizing, through monitoring the workload throughout its lifecycle, all driven by the Automate engine.
  4. The fourth component, and one that you may not think of as being that important but is a real differentiator, is Integrate. This module deals with integrating the CMP engine into other management disciplines and ITIL systems. It is fairly straightforward to integrate ManageIQ with enterprise service catalog products like Remedy, IP address management systems, CMDBs and event management systems; there are a bunch of those that ManageIQ has integrated with. Integration is bidirectional as well: a system can ask the CMP to do something (e.g. provision a service catalog item that reflects a business app running on an n-tier architecture), or the CMP can go back to get an approval from the system via the ManageIQ API (a sketch follows below).
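
To make that last point concrete, here is a minimal sketch (Python, using the requests library) of an external system talking to a ManageIQ appliance over its REST API – first reading the VM inventory, then ordering a catalog item. The appliance URL, credentials, catalog/template IDs and dialog fields are illustrative assumptions, not a definitive recipe.

```python
# Minimal sketch: an external system integrating with a ManageIQ appliance
# over its REST API. Hostname, credentials, IDs and payload fields below
# are illustrative assumptions.
import requests

APPLIANCE = "https://cfme.example.com"   # assumed appliance URL
AUTH = ("admin", "smartvm")              # assumed credentials

# 1. Pull the VM inventory (Insight) so an external CMDB can reconcile itself.
vms = requests.get(f"{APPLIANCE}/api/vms?expand=resources",
                   auth=AUTH, verify=False).json()
for vm in vms.get("resources", []):
    print(vm.get("name"), vm.get("power_state"))

# 2. Ask the CMP to order a service catalog item (Automate/Integrate).
#    The catalog/template hrefs and dialog field are assumptions for illustration.
order = {
    "action": "order",
    "resource": {
        "href": f"{APPLIANCE}/api/service_templates/1",
        "dialog_vm_name": "aml-reporting-01",
    },
}
resp = requests.post(f"{APPLIANCE}/api/service_catalogs/1/service_templates",
                     json=order, auth=AUTH, verify=False)
print(resp.status_code, resp.json())
```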

Architecture


Let’s get down into the guts of the product. It supports multiple, different infrastructure providers; ManageIQ collects and abstracts information (into what is called a virtual management database) that is then consumed by other areas of the product suite like Automation, Reporting and Access Control.

The platform, and the capabilities & APIs it exposes, determine how deep the integration with that platform is. For instance, VMware and RHEV have rich APIs that let ManageIQ do a great deal across those platforms. Amazon AWS, on the other hand, exposes more rudimentary or limited capabilities. So there are different levels of integration, as mentioned above.

On the storage side, ManageIQ supports native integration with NetApp. For other storage vendors, ManageIQ uses an industry standard protocol as opposed to a native protocol.

ManageIQ classifies all of the data elements in the database, much like tags on a picture, and uses this tagging for management of those elements. So DB admins can view & act only on database workloads. What is cool is that you can use this as a business construct as well – for instance, users in the Retail Banking line of business can only view Retail Banking workloads. You can slice and dice these things with configurable or discoverable tags or with business policy tags. This feature gives one the ability to create a tagging taxonomy that is based on business nomenclature and context.
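
A toy illustration of that idea (plain Python, not ManageIQ code): treat each workload as carrying a set of taxonomy tags and filter what a given role may see. The tag categories and workload names here are purely hypothetical.

```python
# Toy illustration of tag-based visibility; tag names and workloads are hypothetical.
workloads = {
    "vm-db-01":  {"function/database", "lob/retail_banking", "env/prod"},
    "vm-web-07": {"function/web", "lob/capital_markets", "env/prod"},
    "vm-etl-03": {"function/etl", "lob/retail_banking", "env/dev"},
}

def visible_to(required_tags, inventory):
    """Return the workloads whose tags include every required tag."""
    return [name for name, tags in inventory.items() if required_tags <= tags]

# A Retail Banking operator sees only their line of business:
print(visible_to({"lob/retail_banking"}, workloads))
# A DB admin sees only production database workloads:
print(visible_to({"function/database", "env/prod"}, workloads))
```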

The project is delivered as a virtual appliance that uses RHEL. The app is built on top of RHEL using Ruby & Ruby on Rails. The default DB is Postgres.

One simply imports the OVF. Virtual appliances can be scaled horizontally and support failover, rollback and roll-forward. I have seen customers handle VM deployments ranging from fewer than 100 hosts and 1,000 VMs up to 1,000 hosts and tens of thousands of VMs.

You can hook the server up to whatever you use for authentication & authorization, such as an LDAP-based directory like Active Directory, so you are replicating roles and permissions in ManageIQ. It is all fairly seamless and instantaneous.

Multiple locations are supported as far as distributed datacenters go. One can federate across platforms like RHEV (Red Hat Enterprise Virtualization) & VMware and see a single pane of glass, or federate across geographical locations.

     

Should my enterprise use ManageIQ?

Clearly, if you are an organization that leverages traditional virtualization infrastructure like VMware vSphere or Red Hat Enterprise Virtualization, or if you are deploying or considering private cloud infrastructure like OpenStack or public cloud infrastructure like Amazon Web Services, an open source based, enterprise grade & highly robust offering like ManageIQ can help you manage all those environments via a single pane of glass, provide efficient cloud brokering capabilities and comply with business policies & processes, putting you on the ramp to offering Anything-as-a-Service. That’s nirvana as far as most computing infrastructures go today. As with anything Red Hat, the system is fully open source, encouraging a wide community that is only growing.

Fraud detection architectures built around Open Source & Big Data #1/4

Fraud detection and prevention are a huge area of concern for not just credit card networks but the financial industry as a whole. This is an area I’ve spent a bit of time on. Please find the first in a series of posts where we examine design techniques to build webscale systems that can provide such advanced capabilities.

Click the below link to download a reference architecture white paper.

Followup posts will delve into the data & workflow aspects in greater detail.

Building AML Regulatory Platforms For The Big Data Era

Banking is increasingly a global business and leading US Banks are beginning to generate serious amounts of revenue in non-US markets. A natural consequence of this revenue surge is not just the proliferation of newer financial products tailored to local markets but also increased integration between the commercial and investment banking operations.

Combine these trends with regulatory mandates, including those in the USA PATRIOT Act, that require banks to put in place effective compliance programs, and regulatory pressures on Wall Street continue to increase. The PATRIOT Act requires all FINRA member firms to develop and implement anti-money laundering (AML) compliance programs that comply with the Bank Secrecy Act (BSA).

AML requirements were significantly expanded in 2001 as part of the US PATRIOT Act. The legislation targets money laundering and mandates that financial institutions help the authorities investigate any suspicious transactions occurring in their customers’ accounts.

Implementation and re-engineering AML processes has been a focus for banks, especially as they adopt technologies around enterprise middleware, cloud, analytics and Big Data.

The global challenges for IT organizations when it comes to AML are fivefold:
1. The need to potentially monitor every transaction for fraudulent activity, such as money laundering
2. Ability to glean insight from existing data sources as well as integrating new volumes of data from unstructured or semi structured feeds
3. Presenting information that matters to the right users as part of a business workflow
4. Provide a way to create and change such policies and procedures on the fly as business requirements evolve
5. Provide an integrated approach to enforce compliance and policy control around business processes and underlying data as more regulation gets added with the passage of time
In order to address these challenges, financial services organizations need to meet the following business requirements:

1. Integrate & cleanse data to get complete view of any transaction that could signal potential fraud
2. Assess client risk during specific points in the banking lifecycle, such as account opening or transactions above a certain monetary value. These data points could signal potentially illegitimate activity based on any number of features associated with such transactions. Any transaction could also lead to the filing of a suspicious activity report (SAR)
3. Help aggregate such customer transactions across multiple geographies for pattern detection and reporting purposes
4. Create business rules that capture a natural-language description of the policies, conditions and identifying features of activities such as those that resemble terrorist financing, money laundering, identity theft etc. These rules trigger downstream workflows to allow human investigation of such transactions (see the rule sketch after this list)
5. Alert bank personnel to complete customer identification procedures for cross-border accounts
6. Track these events end to end from a tactical and strategic perspective
7. Combine transaction data with long term information from a mining perspective to uncover any previously undetected patterns in the underlying data
8. Help build and refine profiles of customers and related entities
9. Provide appropriate and easy-to-use dashboards for compliance officers, auditors, government agencies and other personnel
10. A key requirement is to implement automated business operations that not only meet the regulatory mandate but are also transparent to business process owners, auditors and the authorities.
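
As a purely illustrative sketch of requirement 4 above, the toy rule below flags transactions for human review using a few hypothetical conditions (a reporting threshold, a high-risk-geography list and a simple structuring check). A production platform would express such policies in a dedicated rules engine rather than raw Python; every threshold and field name here is an assumption.

```python
# Illustrative only: a toy AML rule that flags transactions for review.
# Thresholds, field names and the country list are hypothetical.
from datetime import timedelta

CTR_THRESHOLD = 10_000                 # hypothetical reporting threshold (USD)
HIGH_RISK_COUNTRIES = {"XX", "YY"}     # placeholder ISO country codes

def flag_transaction(txn, recent_txns):
    """Return the reasons this transaction should open a review workflow."""
    reasons = []
    if txn["amount"] >= CTR_THRESHOLD:
        reasons.append("amount at or above reporting threshold")
    if txn["counterparty_country"] in HIGH_RISK_COUNTRIES:
        reasons.append("counterparty in high-risk geography")
    # Structuring pattern: several just-under-threshold deposits within 24 hours.
    window = [t for t in recent_txns
              if t["account"] == txn["account"]
              and txn["timestamp"] - t["timestamp"] <= timedelta(hours=24)
              and 0.8 * CTR_THRESHOLD <= t["amount"] < CTR_THRESHOLD]
    if len(window) >= 3:
        reasons.append("possible structuring (repeated sub-threshold deposits)")
    return reasons
```
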
AML Platform Technologies

Building a regulatory platform is never purely a technology project. There are many other issues, including business structure, complex local market requirements, multiple audiences — both internal and external — with differing reporting needs, already complex business processes, program governance and SLAs. But having said all of that, our focus is to examine some of the key technical aspects in building such a solution.

The key technology components that provide the scaffolding of a large enterprise-grade implementation are listed below. One needs to keep in mind that this is only a best practice recommendation, and most mature architectures would already have a good portion of such a solution set in house. It is also not recommended to throw away what you have and rebuild from scratch, or to pursue other rip-and-replace strategies. These days, such a product stack can very handily be assembled leveraging open source or proprietary technology.

The platform, at a minimum, is composed of the following four tiers (starting with the bottom tier):

The Data and Business Systems tier is the repository of data in the overall information architecture. This data is produced as a result of business interactions (stored in OLTP systems), legacy data systems, mainframes, packaged applications, data warehouses, NoSQL databases and other Big Data oriented sources — Hadoop, columnar/MPP databases, etc. The data tier is also where core data processing and transformation happens. This is also the tier where a variety of different analysis and algorithms can be run to assess the different types of risk associated with AML programs, namely:

  • Client Risk
  • Business Risk
  • Geographic Risk

These data silos constitute data flowing into the enterprise architecture from Big Data or unstructured sources as a byproduct of business operations, data already present in-house in data warehouses, columnar data stores and other unstructured data.

The Data Virtualization tier sits atop the data and business systems tier and transforms data into actionable information so that it can be fed into the business processes and integration tier above it. Most financial institutions struggle to provide timely operational and analytical insights due to the inability to effectively utilize data trapped in disparate applications and technology silos. In essence, the Data Virtualization tier makes data spread across physically distinct systems appear as a set of tables in a local database (a virtual data view). It connects to any type of data source, including RDBMS (SQL), analytical cubes (MDX), XML, web services, and flat files. When users submit a query (SQL, XML, XQuery or procedural), this tier calculates the optimal way to fetch and join the data on remote, heterogeneous systems. It then performs the necessary joins and transformations to compose the virtual data view, and delivers the results to users via JDBC, ODBC or web services as a Virtual Data Service – all on the fly, without developers/users knowing anything about the true location of the data or the mechanisms required to access or merge it.

This tier is also comprised of tools, components and services for creating and executing bi-directional data services. Through abstraction and federation, data is accessed and integrated in real-time across distributed data sources without copying or otherwise moving data from its system of record. Data can also be persisted back using a variety of commonly supported interfaces – ODBC/JDBC or Web services (SOAP or REST) or any custom interface that can conform to an API. The intention is to be polyglot at that level as well.
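
As a rough, hypothetical illustration of consuming such a virtual data view from Python over ODBC (the DSN, credentials and view name are assumptions; JDBC or a REST data service would work just as well):

```python
# Illustrative sketch: consuming a federated "virtual view" over ODBC.
# The DSN, credentials and view name are assumptions; the virtualization layer
# decides at runtime how to fetch and join the underlying OLTP, warehouse and
# Hadoop data behind this single logical table.
import pyodbc

conn = pyodbc.connect("DSN=virtual_data_services;UID=analyst;PWD=secret")
cursor = conn.cursor()

# To the client this is one table, even though customer, account and
# transaction records may live in physically separate systems.
cursor.execute("""
    SELECT customer_id, account_id, txn_amount, txn_country, txn_ts
    FROM   vdb.suspicious_activity_view
    WHERE  txn_amount >= ?
""", 10000)

for row in cursor.fetchall():
    print(row.customer_id, row.txn_amount, row.txn_country)
```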

Data provisioning, management and federation capabilities enable actionable and unified information to be exposed to a SOA/BPM/ESB layer in the following steps:

1. Connect: Access data from multiple, heterogeneous data sources.

2. Compose: Easily create reusable, business-friendly logical data models and views by combining and transforming data.

3. Consume: Make unified data easily consumable through open standard interfaces.

4. Compliance: This tier also improves data quality via centralized access control, a robust security infrastructure and reduction in physical copies of data thus reducing risk. A metadata repository catalogs enterprise data locations and the relationships between the data elements located in various data stores, thus enabling transparency and visibility.

Data Virtualization layer of a Big Data Platform
The Integration tier serves as the primary means of integrating applications, data, services, and devices with the regulatory platform. The integration platform uses popular technologies to provide transformation, routing, and protocol-matching services. Examples include JMS, AMQP, and STOMP.

The core technologies of the Integration tier are a messaging subsystem, a mediation framework that supports the most common enterprise integration patterns, and an Enterprise Service Bus (ESB) to interconnect applications. Based on proven integration design patterns, this layer handles the plumbing issues that deal with application interconnects, financial format exchange and transformation, and reliable messaging, so that software architects can direct more of their attention towards solving business problems.
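
As one concrete (and purely illustrative) example of that plumbing, the sketch below publishes a normalized transaction event onto an AMQP queue using Python and the pika client against a broker such as RabbitMQ; the broker host, queue name and message fields are assumptions, and an ESB or mediation route would typically sit on top of this.

```python
# Minimal sketch: publishing normalized transaction events onto an AMQP queue.
# Broker host, queue name and message fields are assumptions.
import json
import pika

connection = pika.BlockingConnection(
    pika.ConnectionParameters(host="broker.example.com"))
channel = connection.channel()
channel.queue_declare(queue="aml.transactions", durable=True)

event = {"account": "ACC-1001", "amount": 9500.0,
         "currency": "USD", "country": "US"}

channel.basic_publish(
    exchange="",
    routing_key="aml.transactions",
    body=json.dumps(event),
    properties=pika.BasicProperties(delivery_mode=2),  # persist the message
)
connection.close()
```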

Popular Message Exchange Patterns at the Integration Tier
The BPM/Rules tier is where AML business processes, policies, and rules are defined, as well as measured for their effectiveness as a result of business activities. The BPM/Rules tier optionally hosts a Complex Event Processing (CEP) layer as an embeddable and independent software module, one still completely integrated with the rest of the platform.

CEP allows the architecture to process multiple business events with the goal of identifying the meaningful ones. This process involves:

  • Detection of specific business events
  • Correlation of multiple discrete events based on causality, event attributes, and timing as defined by the business via a friendly user interface
  • Abstraction into higher-level (i.e. complex or composite) events
It is this ability to detect, correlate and determine business relevance that powers a truly active decision-making capability and makes this tier the heart of a successful implementation.
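
A highly simplified sketch of that detect-correlate-abstract loop is shown below; a real deployment would use a CEP engine rather than hand-rolled Python, and the event shape, window size and threshold are assumptions.

```python
# Toy complex-event-processing loop: correlate low-level deposit events
# into a higher-level composite event. Window size and thresholds are assumptions.
from collections import defaultdict, deque
from datetime import datetime, timedelta

WINDOW = timedelta(hours=24)
MIN_EVENTS = 3

recent = defaultdict(deque)   # account -> recent deposit events

def on_event(event):
    """Called for every discrete business event; may emit a composite event."""
    if event["type"] != "cash_deposit":
        return None
    q = recent[event["account"]]
    q.append(event)
    # Expire events that fall outside the sliding window (correlation by timing).
    while q and event["ts"] - q[0]["ts"] > WINDOW:
        q.popleft()
    if len(q) >= MIN_EVENTS:
        # Abstraction into a higher-level (composite) event.
        return {"type": "possible_structuring",
                "account": event["account"],
                "evidence": list(q)}
    return None

# Example usage
composite = on_event({"type": "cash_deposit", "account": "ACC-7",
                      "amount": 9000, "ts": datetime.utcnow()})
```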

Overall flow

The above architectural tiers can then be brought together as outlined below:
1. Information sources send data into the enterprise architecture via standard interfaces. These could be batch oriented or a result of real-time human interactions. This data is simultaneously fed into the data tier as well. The data tier is the golden image of all data in the architecture and may choose to present predefined or dynamic views via the virtualization tier.
2. A highly scalable messaging system, as part of the integration layer, to help bring these feeds into the architecture as well as normalize them and send them in for further processing via the BPM tier.
3. The BPM/Rules/CEP tier that can process these feeds at scale to understand relationships among them.
4. As a result of specific patterns being met that indicate potential flags, business rule process workflows are instantiated dynamically. These workflows follow a well-defined process that is predefined and modeled by the business. Different dashboards can be provided based on the nature of the user accessing this system; for instance, executives can track the total number of flagged transactions.
5. Data that has business relevance and needs to be kept for offline or batch processing can be handled using a data grid, a columnar database or a storage platform. The idea is to deploy Hadoop-oriented workloads (MapReduce, Hive, Pig, or machine learning) to understand and learn from compliance patterns as they occur over a period of time (see the batch-analysis sketch after this list).
6. Scale-out via a cloud-based deployment model is the preferred deployment approach, as this helps the system grow as the loads placed on it increase over time.
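
As a hypothetical sketch of the batch analysis mentioned in step 5, the PySpark job below summarizes historical compliance flags by month, geography and reason; the HDFS paths and column names are assumptions about the underlying store.

```python
# Illustrative sketch: batch mining of historical compliance flags with PySpark.
# File paths and column names are assumptions about the underlying Hadoop store.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("aml-pattern-mining").getOrCreate()

flags = spark.read.parquet("hdfs:///compliance/flagged_transactions")  # assumed path

# How often does each flag reason fire per counterparty country, per month?
summary = (flags
           .withColumn("month", F.date_trunc("month", "txn_ts"))
           .groupBy("month", "counterparty_country", "flag_reason")
           .agg(F.count("*").alias("flag_count"),
                F.sum("amount").alias("flagged_amount")))

summary.write.mode("overwrite").parquet("hdfs:///compliance/flag_summary")
```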

Big Data and Risk Management in Financial Services

From a business perspective as well as a technology impact perspective, one of the biggest technical trends facing executives in financial services is big data, with the business value derived from this data helping to drive the pace of adoption.

There are very few industries that are as data-centric as the banking and financial services industries. Every interaction that a client or partner system has with a banking institution produces actionable data that has potential business value associated with it. Retail banking, wealth management, consumer banking and capital markets have historically had multiple sources and silos of data across the front-, back- and mid-offices.

Today, however, they are beginning to ask questions about how to on-board the data and draw actionable insights from the data all within a reasonable SLA. Many IT organizations are feeling pressure to deliver on this vision as it moves from industry hype to the datacenter.

Challenges That Financial Institutions Face
Financial service firms have operated under significantly increased regulatory requirements, such as Basel III, since the 2008 financial crisis. As capital and liquidity reserve requirements have increased, the requirement to know exactly how much capital needs to be reserved, based on current exposures, is critical. Unnecessarily tying up excess capital can keep the firm from taking advantage of business and market opportunities. Today’s risk management systems must respond to new reporting requirements and also handle ever-growing amounts of data to perform more comprehensive analysis of credit, counter-party, and geopolitical risk. However, existing systems that are not designed to meet today’s requirements cannot finish reporting in time for start of business or trading, which can lead to uninformed decisions. The problem is compounded by the increasing need for intra-day reporting as well as a short window for overnight batch processing as required by global trading and electronic exchanges. And, many of these systems are inflexible and expensive to operate.

There are other problems that aging in-house solutions can present, such as:

  • Data is often stored across many silos throughout the firm, using multiple technologies which all require different methods for obtaining access. Instead of focusing on analysis and reporting, valuable time can be wasted as teams try to figure out how to reliably obtain the necessary data.
  • Many proprietary solutions have been built using high-performance computers or grid computing clusters that are inflexible and can consume large portions of the available technology budget without meeting evolving challenges. Since these systems often don’t use any standard interfaces, off-the-shelf tools can’t be used or require custom development.
  • Existing systems typically lack the security and controls necessary to keep up with compliance and data security requirements.

Want to learn about how these challenges are being addressed? Join Vamsi Chemitiganti (Red Hat) and Ajay Singh (Hortonworks) for our upcoming webinar, April 22 @ 9am PT/12pm ET, and learn more about:

  • Big Data use cases and best practices
  • Requirements for a successful big data deployment
  • The collaborative Red Hat and Hortonworks solution for risk management