
Why You Should Move DevOps Over to Containers and Kubernetes

by Vamsi Chemitiganti

The Developer Container Adoption Conundrum

I still spend a lot of my time with customers discussing best practices for adopting containers into the DevOps cycle. This is true of development teams even at large enterprise customers. While the key challenge is always to deliver applications at a higher velocity, adopting DevOps principles, especially CI/CD, can greatly decrease project and business-initiative risk. Couple that with technology such as a service mesh and the benefits are manifold.

The Container Adoption Lifecycle

I wrote a blog last year about the container adoption model at a typical Fortune 1000 enterprise, in which I drew the wagon-wheel illustration below. What's germane to this discussion is that while cloud implementations (both public and private) ostensibly start as a way of consolidating redundant or inefficient datacenter capacity and realizing cost savings, developers inevitably become the juggernaut that drives adoption forward, as the illustration captures in steps 2 and 3: migrating legacy applications to the cloud and ensuring that all greenfield applications are 'cloud-first'.

Reasons for Developers to move over to Containers

Simply put, Continuous Integration and Continuous Delivery (CI/CD) techniques treat the overall development flow as a single automated, workflow-efficient pipeline. One of the key benefits of such automation is that manual touchpoints are largely eliminated, enabling software to ship faster and at lower error rates. Engineering can thus release incremental updates without decreasing the quality of the overall shipped product.
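To make the pipeline idea concrete, here is a minimal sketch of such an automated workflow using GitHub Actions; the choice of CI tool, the repository layout, and the test command are my own assumptions rather than anything prescribed in this post:

```yaml
# .github/workflows/ci.yaml -- hypothetical pipeline: every push runs the tests,
# then builds a container image, removing manual touchpoints from the flow.
name: ci
on:
  push:
    branches: [main]
jobs:
  test-and-build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4        # pull the code that triggered the run
      - name: Run unit tests
        run: make test                   # assumes the repo exposes a 'make test' target
      - name: Build container image
        run: docker build -t registry.example.com/myapp:${{ github.sha }} .
```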

Now, where do containers and Kubernetes come into all this?

Container & K8s adoption helps in five broad strategic areas –

#1 They eliminate configuration challenges

The container model eliminates configuration challenges by enforcing immutability, and because it leverages K8s, it provides cloud and platform independence. When coupled with frameworks such as Ansible and Terraform, environments across the development, testing, and preproduction phases all look similar, eliminating a constant irritant in DevOps: the lack of uniform environment configuration. Adopting Docker (or CRI-O) containers makes the de facto contract between the various developer teams precisely that: a self-contained container.
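As a minimal sketch of what that contract looks like in practice (the service and image names below are hypothetical), the same immutable image is referenced unchanged across dev, test, and preproduction, while environment-specific values live outside it:

```yaml
# deployment.yaml -- the container image, pinned to an immutable tag, is the
# contract; only environment-specific values (replicas, config) vary per stage.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: orders-service
  template:
    metadata:
      labels:
        app: orders-service
    spec:
      containers:
        - name: orders-service
          image: registry.example.com/orders-service:1.4.2   # same tag in every environment
          envFrom:
            - configMapRef:
                name: orders-config      # per-environment config lives outside the image
```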

#2 Faster and more resilient deploys

They make for faster and more resilient deploys. K8s handles the automatic provisioning of containers into pods and abstracts both developer and operations teams from having to maintain the runtime state of the application. Further, an organization's security scans and other security requirements can be added as part of the test suite. Kubernetes also makes it extremely easy to build environments that mimic production, using concepts such as Ingress for load balancing.
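For example, an Ingress resource such as the following (host and service names are illustrative) gives a staging cluster the same load-balanced entry point that production uses:

```yaml
# ingress.yaml -- routes external traffic to the service, so staging can
# mimic the production traffic path without a separate load-balancer setup.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: orders-ingress
spec:
  rules:
    - host: orders.staging.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: orders-service
                port:
                  number: 80
```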

#3 Let K8s handle all the complexity of the runtime state

CD platforms such as Spinnaker and Jenkins X speak native K8s APIs, which support core features for deployment flexibility such as StatefulSets and ReplicaSets. The pipeline only declares the desired state; Kubernetes itself reconciles the runtime state of the application.
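For instance, a StatefulSet declares only the desired end state (a hypothetical three-replica database below), and Kubernetes takes on the runtime complexity of stable pod identities, ordered rollouts, and per-pod storage:

```yaml
# statefulset.yaml -- the CD tool applies this declaration; Kubernetes owns the
# runtime details of pod naming, ordering, and volume provisioning.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: orders-db
spec:
  serviceName: orders-db
  replicas: 3
  selector:
    matchLabels:
      app: orders-db
  template:
    metadata:
      labels:
        app: orders-db
    spec:
      containers:
        - name: postgres
          image: postgres:16
          env:
            - name: POSTGRES_PASSWORD
              value: changeme            # illustration only; use a Secret in practice
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```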

#4 Leverage the ecosystem

Kubernetes has built up a thriving ecosystem of tools such as Helm charts, application catalogs, and the overall GitOps flow. All of these help optimize the development lifecycle for containers.
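As a small sketch of how Helm fits this lifecycle (the chart and value names are illustrative), a single chart can be installed with per-environment value overrides:

```yaml
# values-staging.yaml -- overrides applied on top of the chart defaults, e.g.:
#   helm upgrade --install orders ./charts/orders -f values-staging.yaml
image:
  repository: registry.example.com/orders-service
  tag: "1.4.2"
replicaCount: 2
ingress:
  enabled: true
  host: orders.staging.example.com
```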

GitOps especially is a powerful idea. While Git at its core is a version control system that stores application code from development to deployment, it can maintain not only the code but everything else related to the application: tests, images, infrastructure configuration, release-pipeline metadata, and so on. Git then provides the workflow that triggers the CI/CD cycle on every check-in, arriving at the true DevOps model, where developers partner from the outset with the operations teams that are usually involved much further downstream.
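Tools such as Argo CD or Flux (my own examples; the post does not name a specific GitOps tool) close this loop by continuously syncing the cluster to whatever the Git repository declares. A minimal Argo CD Application sketch, with a hypothetical repository:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: orders-service
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/orders-deploy.git   # Git is the source of truth
    targetRevision: main
    path: environments/staging
  destination:
    server: https://kubernetes.default.svc
    namespace: orders
  syncPolicy:
    automated:
      prune: true        # remove resources that were deleted from Git
      selfHeal: true     # revert manual drift back to the declared state
```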

#5 Use the service mesh

Finally, when microservices form the foundation of the application, service-mesh technology such as Istio makes the interaction between microservices much easier, providing traffic management, secure communication, and timeout handling between services. Service meshes deserve their own series of posts, which I will be writing in the days and weeks to come.
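As a brief illustration (service and subset names are hypothetical, and a matching DestinationRule defining the v1/v2 subsets is assumed), an Istio VirtualService can shift a slice of traffic to a new version and enforce a timeout and retries without any change to application code:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: orders
spec:
  hosts:
    - orders             # the in-mesh service name
  http:
    - route:
        - destination:
            host: orders
            subset: v1
          weight: 90     # most traffic stays on the stable version
        - destination:
            host: orders
            subset: v2
          weight: 10     # canary slice for the new version
      timeout: 2s        # fail fast instead of hanging on a slow dependency
      retries:
        attempts: 3
        perTryTimeout: 500ms
```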

Bringing it all together

For the sake of brevity, I will simply point to the blog below, which I wrote last year, as a way of illustrating the above five advantages in an actual real-world CI/CD process. It covers a telco moving its DevOps workflow into the Edge application space, but the lessons are broadly valid no matter where your application runs.

A DevOps Pattern For Edge Computing Applications

Summary

My goal for this blog was to drive home the five key reasons for enterprise developers to adopt containers and Kubernetes into their software engineering disciplines. The upfront rigor involved in doing so will pay dividends in terms of the agility, efficiency and cost involved in updating and managing the application.

