
Kubernetes Multi-tenancy Best Practices & Architecture Model..(2/2)

by Vamsi Chemitiganti

A multi-tenant architecture is key to reliably hosting applications and business services that produce value for the customer. Failing to design a Kubernetes infrastructure for multi-tenancy puts business initiatives and projects at risk. In this second of a two-part blog post, we discuss the architectural and technical details of building a multi-tenant Kubernetes platform. The first part can be found at http://www.vamsitalkstech.com/?p=8134


Kubernetes Multitenancy Architecture Model

In the architecture below, a central management plane is used to create Kubernetes clusters running on an IaaS layer that supports both virtualized and bare-metal servers. The management plane also provides a UI and API to create tenant users, who are assigned specific namespaces with resource-isolation and network-isolation policies. Each tenant workload has the proper level of isolation and resource availability. In a typical scenario, each tenant (defined as a group of multiple users) is assigned a separate namespace. Thus, each isolatable entity has its own namespace, which permits not just tenant isolation but also reasonable resource efficiency. This model also permits multiple namespaces to be assigned to a given tenant. Its drawback is that it is still relatively inefficient from a resource-sharing perspective; however, each tenant is well protected from intrusion by others. Cluster admins are responsible for enforcing Resource Quotas, CNI network policies, and Pod security policies, and for tying them to tenants via RBAC.

The following are some best practices to follow while adopting the above architecture model.

Kubernetes primitives and the multi-tenancy challenge

#1 Assign Each Tenant A Separate Namespace

The namespace construct in Kubernetes provides a scope to isolate resources across teams and projects. Using namespaces, multiple teams can divide cluster resources among themselves via resource quotas. Namespaces cannot be nested inside one another, and a given Kubernetes resource object can belong to only one namespace. Within a namespace, labels can be used to distinguish resources. Work is ongoing in the Kubernetes community to ensure that all objects within a namespace share the same access control policies. Namespaces also work well when teams in an organization share Kubernetes clusters for dev, test, and prod use cases.
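As a sketch of this practice, a per-tenant namespace can be declared declaratively with labels identifying the tenant and environment. The name `tenant-a-dev` and the `tenant`/`environment` label keys are illustrative conventions, not Kubernetes-mandated names:

```yaml
# Illustrative per-tenant namespace. The label keys below are a naming
# convention assumed for this example; pick keys that fit your organization.
apiVersion: v1
kind: Namespace
metadata:
  name: tenant-a-dev
  labels:
    tenant: tenant-a
    environment: dev
```

Labels like these can later be matched by `namespaceSelector` clauses in network policies, or used by tooling that applies quotas per tenant.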

#2 Use Network Policy to Isolate Traffic between namespaces

The default Kubernetes behavior allows network communication between namespaces. Obviously, this will not work in a multi-tenant environment, where namespaces belonging to different tenants need to be isolated from one another. Network Policies are provided to enforce segmentation and a basic degree of network isolation. In production, most clusters will use projects such as Calico or Flannel for the networking layer underneath the clusters. These projects provide a richer set of policy capabilities than vanilla Kubernetes. For instance, Calico network policy supports policy ordering/priority, deny rules, and more flexible match rules. Calico network policy can be applied not just to pods but also to VM/host interfaces, and when combined with service mesh technology such as Istio it supports layer 5 through 7 match criteria and cryptographic identity.

Here’s an example Network Policy file that will block traffic from external namespaces:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: block-external-namespace-traffic
spec:
  podSelector: {}        # applies to all pods in the namespace
  ingress:
  - from:
    - podSelector: {}    # allows traffic only from pods in the same namespace

After creating the file, apply it with a command like:

kubectl apply -f networkpolicy.yaml -n your-namespace
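To illustrate the richer Calico capabilities mentioned above, here is a sketch of a policy in Calico's `projectcalico.org/v3` API that uses an explicit `Deny` action and an `order` field, neither of which vanilla Kubernetes NetworkPolicy offers. The namespaces `tenant-a`/`tenant-b` and the `tenant` label are assumptions carried over from the namespace-per-tenant convention:

```yaml
# Sketch of a Calico v3 NetworkPolicy; namespace and label names are placeholders.
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: deny-from-tenant-b
  namespace: tenant-a
spec:
  order: 100             # lower order values are evaluated first
  selector: all()        # applies to all endpoints in tenant-a
  types:
  - Ingress
  ingress:
  - action: Deny         # explicit deny rule
    source:
      namespaceSelector: tenant == "tenant-b"
```

Such a policy would be applied with `calicoctl apply -f`, or via kubectl if the Calico API server is installed.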

#3 Enforce Resource Quotas

Kubernetes Resource Quotas ensure that tenants have fair access to resources. Resource Quota lets you set quotas on how much CPU, storage, memory, and other resources can be consumed by all pods within a namespace.

Consider this Resource Quota file:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: mem-cpu-demo
spec:
  hard:
    requests.cpu: "1"
    limits.cpu: "2"

It can be applied with a command like:

kubectl apply -f resourcequota.yaml -n your-namespace

The above quota caps the sum of CPU requests across all pods in the namespace at 1 CPU and the sum of CPU limits at 2 CPUs. One can set the same or different quotas for each namespace based on business or capacity requirements; for instance, production workloads typically need more resources than development/test workloads.
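One practical consequence of a CPU quota is that every container created in the namespace must then declare CPU requests and limits, or pod creation is rejected. A LimitRange can supply defaults so that tenant workloads which omit them still run; the values below are illustrative:

```yaml
# Sketch of a LimitRange providing default CPU values in a quota-governed
# namespace; the 500m/250m figures are example numbers, not recommendations.
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-defaults
spec:
  limits:
  - type: Container
    default:             # applied as the CPU limit when a container sets none
      cpu: 500m
    defaultRequest:      # applied as the CPU request when a container sets none
      cpu: 250m
```

Like the quota, this is applied per namespace with `kubectl apply -f limitrange.yaml -n your-namespace`.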

#4 Leverage RBAC to enforce control

Kubernetes uses Role-Based Access Control (RBAC) to control user access to resources. Roles such as cluster admin, tenant admin, and user (for example, a developer) can be created, each with the scope of operations discussed below. Based on their permissions, users can create, modify, and delete resources, or are denied access to them.
  • Cluster admins have super-user access to the entire cluster (or clusters), enabling them to perform the full range of CRUD operations on any resource as well as user administration for any namespace.
  • Tenant admins can administer a single tenant and its namespaces. They can also perform user administration within the namespaces of their tenant.
  • Users such as developers can perform CRUD operations on the objects within their namespace, such as Pods, Deployments, and Services.
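As a sketch of how the developer role above might be expressed, the following Role and RoleBinding grant namespace-scoped CRUD access. The namespace `tenant-a`, the role name `tenant-developer`, and the user `jane` are placeholders for this example:

```yaml
# Illustrative namespace-scoped RBAC for a tenant developer.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: tenant-a
  name: tenant-developer
rules:
- apiGroups: ["", "apps"]     # "" denotes the core API group
  resources: ["pods", "services", "deployments"]
  verbs: ["get", "list", "watch", "create", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tenant-developer-binding
  namespace: tenant-a
subjects:
- kind: User
  name: jane
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: tenant-developer
  apiGroup: rbac.authorization.k8s.io
```

A tenant admin would typically receive a broader Role (or the built-in `admin` ClusterRole bound per namespace), while cluster admins are bound to `cluster-admin` via a ClusterRoleBinding.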

Conclusion

Pending standardization work in the Kubernetes Multi-tenancy SIG, enterprises are left to their own devices to ensure security and resource efficiency in a multi-tenant Kubernetes deployment. My goal in these two posts has been to provide best practices and an architecture model to achieve this using the current state of the art.
