
The Limitations of Serverless Computing In the Public Cloud…

by Vamsi Chemitiganti

Serverless computing frameworks, led by AWS Lambda, are being touted as the future for Digital applications. Since Lambda was introduced in late 2014, an increasing number of Digital applications have begun adopting hybrid microservices/serverless architectures based on it. This short blog post delves into why all is not well with running serverless Digital applications at enterprise scale.

Serverless computing is a form of event-driven programming where short-lived code snippets called “functions” are delivered as a service to an invoking application. These functions are hosted in ephemeral containers which are only instantiated at the time of an event-based invocation. The event can be anything that fits an application use case: an HTTP call, a message trigger from a message queue, a database insert, etc. Serverless frameworks hosted by the public providers also promote a “pay as you go” model, where usage is billed in increments of 100 ms. The developer is completely shielded from the complexities of server management, administration, and capacity monitoring.
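To make this concrete, here is a minimal sketch of such a function in Python, modeled loosely on a Lambda-style handler signature. The event shape shown is a hypothetical queue trigger, not any provider's exact contract:

```python
import json

def handler(event, context):
    # 'event' carries the trigger payload: an HTTP request, a message
    # from a queue, a database insert record, and so on.
    records = event.get("Records", [])
    bodies = [record.get("body", "").upper() for record in records]
    # Short-lived by design: do the work, return, and let the
    # framework reclaim the container whenever it sees fit.
    return {"statusCode": 200, "body": json.dumps(bodies)}
```

The framework, not the developer, decides when a container is spun up to host this code and when it is torn down.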

If one thinks of a monolith as being composed of hundreds of microservices, each microservice can itself be decomposed into hundreds of functions.

Unlike with a PaaS, DevOps teams using serverless frameworks are freed from worrying about updates, application scale-up/down events, idle costs, complex build/deploy operations, etc.

However, there are five key issues with using commercially available serverless technologies such as Lambda or Azure Functions at enterprise scale. The next post will discuss what is being done in the open source community to address them.

Issue #1 Lock-in to an underlying cloud provider

This is an obvious one. All the leading cloud providers lock customers into the unique implementation of their serverless framework. For instance, AWS Lambda relies on a panoply of AWS offerings across DNS (Route 53), API Gateway, S3, databases, networking (VPCs), etc. that are needed to compose complex serverless applications. The Lambda functions thus written are not portable to other cloud providers. Migrating means not just an application rewrite but also a rewiring of all of these essential services.

Added to this are cloud-provider-specific limits such as those imposed by AWS Lambda:

  • limits on deployment artifact sizes
  • limits on the number of concurrent executions
  • limits on the amount of memory allocated per invocation
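Hitting the concurrency cap typically surfaces as a throttling error that the caller must handle. A generic retry-with-backoff sketch follows; real provider SDKs raise their own exception types, so the plain RuntimeError here is an assumption for illustration:

```python
import random
import time

def invoke_with_backoff(invoke, payload, max_attempts=5):
    # Retry a throttled function invocation with jittered exponential
    # backoff. 'invoke' is any callable that raises RuntimeError when
    # the provider's concurrency limit is hit (an assumption; real
    # SDKs raise provider-specific throttling exceptions).
    for attempt in range(max_attempts):
        try:
            return invoke(payload)
        except RuntimeError:
            if attempt == max_attempts - 1:
                raise
            # Back off 0.1 s, 0.2 s, 0.4 s, ... capped at 2 s,
            # scaled by random jitter to avoid thundering herds.
            time.sleep(min(2 ** attempt * 0.1, 2.0) * random.random())
```

The point is that the concurrency limit leaks into every caller's error-handling logic, which is part of the lock-in cost.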

Issue #2 Cost

First off, the cost of using a given FaaS framework such as Lambda should not be viewed in isolation from a per-function cost standpoint. The financial implications of using Lambda in a large enterprise also depend on usage costs for the surrounding ecosystem services, as highlighted above. Thus it is not just about vanilla CPU/RAM/network cost but also about the associated charges for API Gateway, S3, DynamoDB, sending data across VPCs, etc. Most customers find that these charges quickly add up with the public cloud providers.

If your transaction volumes are high and will scale higher, platforms such as Lambda can consume more of your budget than anticipated. Fixes include designing the application so that a larger batch of data is ingested per function invocation, keeping execution time low by writing more efficient code, and minimizing data transfer across VPCs and AZs (Availability Zones). Cross-VPC transfers require Lambda functions to open ENIs (Elastic Network Interfaces), causing longer execution times and a higher charge for the transfers themselves.
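A quick back-of-the-envelope model shows why duration and memory settings dominate the bill. The per-request and per-GB-second rates below are illustrative assumptions, not current published prices, and the result still excludes the API Gateway, storage, and cross-VPC transfer charges discussed above:

```python
# Illustrative FaaS rates -- assumptions, not published pricing.
PRICE_PER_MILLION_REQUESTS = 0.20   # USD
PRICE_PER_GB_SECOND = 0.00001667    # USD

def monthly_function_cost(invocations, duration_ms, memory_mb):
    # Duration is billed in 100 ms increments, rounded up.
    billed_ms = -(-duration_ms // 100) * 100
    gb_seconds = invocations * (memory_mb / 1024) * (billed_ms / 1000)
    request_cost = invocations / 1_000_000 * PRICE_PER_MILLION_REQUESTS
    return request_cost + gb_seconds * PRICE_PER_GB_SECOND

# 50M invocations/month at 120 ms average with 512 MB allocated:
# roughly 93 USD/month under these assumed rates -- and note that
# 120 ms is billed as 200 ms, which is why efficient code pays off.
print(round(monthly_function_cost(50_000_000, 120, 512), 2))
```

Notice that trimming the function from 120 ms to under 100 ms halves the billed duration, which is exactly the "write more efficient code" fix above.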

Whatever the fix, it stands to reason that Functions as a Service as a technology category has a compelling economic use case on the private cloud.

Issue #3 (Startup) Latency

One issue repeatedly pointed out by adopters of the public cloud providers has been the cold-start challenge associated with FaaS frameworks. Once a (Lambda) function has not been used for a threshold of time, the framework reclaims the resources held by it, which means that restarting it requires instantiating another container, loading its dependencies, and only then making it available. For certain real-time or near-real-time applications in IoT, or Cognitive applications serving live end users, 100 ms is too much latency.
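A common (if inelegant) mitigation is to ping the function on a schedule so its container is never reclaimed. A sketch follows; the `warmup` event key is a hypothetical convention of the caller, not a framework feature:

```python
import time

# Module-level code runs once per container, i.e. only on a cold start.
_loaded_at = time.time()
_cold = True

def handler(event, context):
    global _cold
    if event.get("warmup"):
        # Scheduled keep-warm ping: touch the container and exit early
        # so real requests never pay the cold-start penalty.
        _cold = False
        return {"warmed": True}
    was_cold, _cold = _cold, False
    return {"cold_start": was_cold,
            "container_age_s": round(time.time() - _loaded_at, 3)}
```

The catch, tying back to Issue #2, is that the warm-up pings are billed invocations themselves, and they only keep one container warm per schedule.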

Issue #4 Private Cloud based Serverless Applications

In the private cloud, most serverless implementations are typically layered on an existing PaaS platform. If one thinks about it, the limiting model of a PaaS essentially calls into question the use of a serverless framework on top of it. From an evolutionary standpoint, serverless frameworks have been added to commercial PaaS's as an afterthought, which makes adopting them a difficult challenge for applications not developed on a PaaS: legacy webapps, greenfield containerized apps, Big Data & Cognitive workloads, and so on. The lock-in around the PaaS integration makes this a very difficult proposition, as it adds another layer of complexity to an already complex architecture. The net result is that technical debt can get compounded in the case of inefficiently designed serverless applications.

Issue #5 Complex CI/CD toolchains

FaaS frameworks are still evolving and their place in a complex CI/CD toolchain is still undefined. It will take a lot of upfront investment & diligence by development teams to integrate serverless frameworks into their continuous delivery pipelines. Serverless means more moving parts, which means more testing & quality due diligence.

For instance,

  1. A newly developed or modified function needs to pass through a chain of checks, from unit testing to UAT, before being promoted to production. This can make the process more cumbersome.
  2. Additional load and performance testing needs to be in place for each individual function. This is critical before deploying to production.
  3. Rollback and roll-forward capabilities need to be put in place for each function.
  4. The Ops team needs to get involved much earlier than in microservices-based development.
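Point 1 at least is cheap to automate, since a function is just a handler with an event in and a result out. A minimal sketch using Python's unittest, against a hypothetical handler written for illustration:

```python
import unittest

def handler(event, context):
    # Hypothetical function under test: echoes an uppercased message.
    return {"message": event["message"].upper()}

class HandlerTest(unittest.TestCase):
    def test_uppercases_message(self):
        out = handler({"message": "promote me"}, None)
        self.assertEqual(out["message"], "PROMOTE ME")

    def test_missing_key_raises(self):
        # Malformed events should fail loudly, not silently succeed.
        with self.assertRaises(KeyError):
            handler({}, None)
```

Running such a suite (e.g. with `python -m unittest`) as a gate in the pipeline is the easy part; the load testing, rollback, and Ops coordination in points 2 through 4 remain per-function work.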

Conclusion

To be sure, serverless architectures demand a higher level of technology & cultural maturity from enterprises adopting them. The next blog will discuss what can be done about this critical enterprise architecture challenge by leveraging Kubernetes, which seems to be the answer to all things cloud.
