
An Introduction to Cilium (Part 3)

by Vamsi Chemitiganti

This post follows the previous one in the eBPF series – https://www.vamsitalkstech.com/cloud/an-introduction-to-ebpf-architecture-part-2/. Here we introduce Cilium, an open-source, community-driven project that provides software-defined networking, network observability, and monitoring for cloud-native, microservice-based deployments on Kubernetes and other container orchestration platforms.

So What Cloud-Native Problems Can Cilium Solve Better?

As we discussed over the last two posts in this series, eBPF enables the insertion of dynamic functionality into the Linux kernel, which in turn enables new applications to be built in areas such as security, visibility, and networking. The most prominent project based on eBPF is Cilium, which has solid use cases in networking, observability, load balancing, and security. Cilium is particularly applicable to containerized workloads and microservices-driven environments, where containers run in Kubernetes pods and scale-in/scale-out events occur very frequently in response to both CI/CD activity and system load.
In October 2021, Cilium became a CNCF incubating project.

Cilium (and eBPF) can solve many of the challenges that existed in prior Linux networking approaches to K8s. The most important of these include:

  1. Connectivity of microservices using traditional iptables/TCP/UDP-based approaches – iptables is a Linux utility that relies on rule-chain-based filtering of packets by IP address and port number. When a connection is made to a node, iptables walks its rule chains to determine which action to take. This reliance on sequentially evaluated rule lists hurts scalability in high-volume environments (see the toy sketch after this list).
  2. Enabling real-time visibility into system performance using approaches that are not based purely on IP addresses. eBPF provides the ability to insert this visibility at the kernel level itself.
  3. Highly scalable security enforcement and monitoring – because eBPF decouples security from the IP addressing scheme, Cilium can apply security policies to microservices (deployed in containers running within pods) at the HTTP layer, and can also perform conventional L3/L4 segmentation.
  4. The key use cases for Cilium include networking, observability, and security; we will cover these in more detail in the next post. Note that L7 policy-level enforcement is an experimental feature still in the works.
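To make the scalability point in item 1 concrete, here is a toy Python sketch (not Cilium or kernel code) contrasting the linear rule-chain traversal implied by an iptables-style approach with the O(1), identity-keyed map lookup used by eBPF-based datapaths such as Cilium's. All rule shapes and numbers are invented for illustration.

```python
import timeit

NUM_RULES = 50_000

# "iptables-style": a chain is an ordered list of rules, so matching a
# packet means scanning until a rule hits -- O(number of rules).
chain = [(f"10.0.{i // 256}.{i % 256}", 8080, "ACCEPT") for i in range(NUM_RULES)]

def iptables_lookup(ip, port):
    for rule_ip, rule_port, action in chain:
        if rule_ip == ip and rule_port == port:
            return action
    return "DROP"  # default policy when no rule matches

# "identity-style": a workload's labels resolve to a numeric security
# identity once, and the policy decision is a single hash-map lookup --
# O(1), independent of how many endpoints or rules exist.
policy_map = {identity: "ACCEPT" for identity in range(NUM_RULES)}

def identity_lookup(identity):
    return policy_map.get(identity, "DROP")

# Worst case for the chain: the matching rule is last.
last_ip = f"10.0.{(NUM_RULES - 1) // 256}.{(NUM_RULES - 1) % 256}"
print("chain scan:     ", timeit.timeit(lambda: iptables_lookup(last_ip, 8080), number=100))
print("identity lookup:", timeit.timeit(lambda: identity_lookup(NUM_RULES - 1), number=100))
```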

Cilium High-Level Architecture

Cilium Architecture (source: http://cilium.io)

The architecture of Cilium is shown above. Cilium consists of the following components running on the cluster nodes in your environment: the Cilium agent, the CLI client, and plugins. You can install Cilium on a self-managed Kubernetes cluster or on a managed service such as EKS; Helm charts are included to make installation easy (see the sketch below).
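For reference, the documented Helm-based install boils down to the steps below, wrapped in a small Python sketch only so that all examples in this post stay in one language. The chart repository URL and release name follow the Cilium docs; any environment-specific --set flags (e.g., for EKS) are up to you.

```python
import subprocess

# The Cilium project publishes its Helm charts at https://helm.cilium.io/.
commands = [
    ["helm", "repo", "add", "cilium", "https://helm.cilium.io/"],
    ["helm", "repo", "update"],
    # Release name, chart, and namespace per the Cilium documentation;
    # append --set options as needed for your cluster.
    ["helm", "install", "cilium", "cilium/cilium", "--namespace", "kube-system"],
]

for cmd in commands:
    subprocess.run(cmd, check=True)  # stop on the first failing step
```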

The agent provides networking, security, and observability to the workloads running on its node; workloads can be either containerized or running natively on the system. Cilium uses an agent-based architecture in which the agent (cilium-agent) runs on each node in the Kubernetes cluster.
The agent accepts configuration from K8s (or other orchestration systems) for networking, load balancing, network policy, and monitoring, and translates it into eBPF constructs as shown. It also maintains a topology of which microservices are running in which containers on which nodes.
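To give a feel for the kind of configuration the agent consumes, here is a hedged sketch that creates a CiliumNetworkPolicy through the official Kubernetes Python client. The CRD schema (group cilium.io, version v2) follows Cilium's documentation, but the policy name, labels, and port are invented for the example. Note that the selectors match on labels rather than IP addresses – this is the identity-based model the agent translates into eBPF constructs.

```python
from kubernetes import client, config

# Load kubeconfig (inside a pod you would use config.load_incluster_config()).
config.load_kube_config()

# A CiliumNetworkPolicy is a CRD, so we use the generic CustomObjectsApi
# rather than a typed client. All names and labels below are hypothetical.
policy = {
    "apiVersion": "cilium.io/v2",
    "kind": "CiliumNetworkPolicy",
    "metadata": {"name": "allow-frontend-to-backend"},
    "spec": {
        "endpointSelector": {"matchLabels": {"app": "backend"}},
        "ingress": [
            {
                "fromEndpoints": [{"matchLabels": {"app": "frontend"}}],
                "toPorts": [
                    {
                        "ports": [{"port": "8080", "protocol": "TCP"}],
                        # L7 rule: only allow GETs under /api, enforced by
                        # Cilium's HTTP-aware datapath.
                        "rules": {"http": [{"method": "GET", "path": "/api/.*"}]},
                    }
                ],
            }
        ],
    },
}

api = client.CustomObjectsApi()
api.create_namespaced_custom_object(
    group="cilium.io",
    version="v2",
    namespace="default",
    plural="ciliumnetworkpolicies",
    body=policy,
)
```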

The CNI plugin (cilium-cni) is called by K8s whenever a pod lifecycle event happens, e.g., a pod is scheduled or terminated; the plugin then uses the Cilium API to trigger the necessary networking, load-balancing, and policy setup for the pod. We have covered the eBPF portion in previous posts in this series. In the case of Docker, each Linux node runs a process (cilium-docker) that handles Docker libnetwork calls and interfaces with the Cilium agent.
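For readers unfamiliar with CNI mechanics, the sketch below mimics what the container runtime does when it invokes a CNI plugin such as cilium-cni for a pod ADD event, per the CNI specification: the verb and pod details travel in environment variables, and the network configuration arrives on stdin. All concrete paths and IDs here are illustrative.

```python
import json
import os
import subprocess

# Per the CNI spec, the runtime sets the verb and pod details in
# environment variables. The container ID and netns path below are
# made-up placeholders.
env = {
    **os.environ,
    "CNI_COMMAND": "ADD",                  # pod sandbox was just created
    "CNI_CONTAINERID": "abc123",           # hypothetical container ID
    "CNI_NETNS": "/var/run/netns/abc123",  # the pod's network namespace
    "CNI_IFNAME": "eth0",                  # interface to create in the pod
    "CNI_PATH": "/opt/cni/bin",            # where plugin binaries live
}

# The network configuration is passed to the plugin on stdin as JSON.
net_conf = {
    "cniVersion": "0.3.1",
    "name": "cilium",
    "type": "cilium-cni",  # resolves to /opt/cni/bin/cilium-cni
}

result = subprocess.run(
    ["/opt/cni/bin/cilium-cni"],
    input=json.dumps(net_conf),
    env=env,
    capture_output=True,
    text=True,
)

# On success, the plugin prints a JSON result (interfaces, IPs, routes).
print(result.stdout)
```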

Cilium also provides a K8s operator that handles operations globally across the cluster, as opposed to once per node. The operator is not in the network path, however, and the cluster can continue to function even if the operator is down; that said, delays in IPAM, and thus in the scheduling of new workloads, can occur because new IP addresses cannot be allocated. Cilium also provides a container that can verify connectivity and traffic flow between the various pods.

Finally, Cilium uses a key-value store to share data between agents running on different nodes; currently, etcd and Consul are supported.
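As a simple illustration of that pattern (this is not Cilium's actual internal code), the sketch below uses the python-etcd3 client to publish and read per-node endpoint state; the key layout and payload are invented for the example.

```python
import json
import etcd3

# Connect to the shared etcd cluster (the endpoint is illustrative).
etcd = etcd3.client(host="127.0.0.1", port=2379)

# An agent publishes state about an endpoint it manages under a
# well-known prefix; this key layout is made up for the example.
endpoint_state = {"identity": 4213, "labels": {"app": "frontend"}, "node": "node-1"}
etcd.put("/cilium/state/endpoints/node-1/pod-abc", json.dumps(endpoint_state))

# An agent on another node reads (or watches) the same prefix to learn
# about remote endpoints and their security identities.
value, _meta = etcd.get("/cilium/state/endpoints/node-1/pod-abc")
print(json.loads(value))
```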

Another project that Cilium evaluators will run into is Hubble, a fully distributed observability platform for both networking and security. We will cover it in a follow-up post.

Conclusion

The next blog post will focus on the use cases that Cilium can enable.

Featured Image by Bessi from Pixabay
