Why do we need communication between containers and virtual machines? The CNIs of "OpenShift", "ACI", and "NSX": Basic knowledge of container networks [Part 11]

Related Keywords

SDN (Software Defined Networking) | Open Source | Docker

Communication within a cluster of the container orchestrator "Kubernetes" (a Kubernetes cluster) commonly uses CNI plug-ins. A CNI plug-in is an implementation of the CNI (Container Network Interface) specification, which defines communication between containers. Following Part 10, "What are VPC and VNet? Communication useful for understanding EKS, AKS, and GKE containers", which introduced container communication in cloud services, this article looks at CNI plug-ins provided by SDN (Software Defined Networking) products.


CNI Plug-in for On-Premises Infrastructure

Cisco ACI CNI Plug-in

Cisco Systems' "Cisco ACI" is an SDN product that provides centralized management and operational automation of physical and virtual networks. Cisco ACI manages the network with policies, whose components are the "EPG" (Endpoint Group) and the "Contract".


EPGs are logical entities that group together various objects, including endpoints such as virtual machines and physical servers. A contract defines the communication rules between two EPGs. For example, create a group of clients as "EPG-Client" and a group of web servers as "EPG-Webserver", allow TCP ports 80 and 443 in "Contract-Web", and combine them into the policy "EPG-Client — Contract-Web — EPG-Webserver". This policy allows only HTTP and HTTPS communication between the clients and the web servers in the two EPGs.
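For readers more familiar with Kubernetes, the same intent can be expressed with a standard Network Policy. The minimal sketch below is illustrative only (the namespace and label names are hypothetical, not part of the ACI example): it allows client Pods to reach web server Pods on TCP ports 80 and 443 and blocks all other ingress to those Pods.

# Illustrative sketch: a Kubernetes NetworkPolicy with the same intent as
# the EPG/Contract example above (all names here are hypothetical).
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: contract-web
  namespace: demo               # hypothetical namespace
spec:
  podSelector:
    matchLabels:
      app: webserver            # plays the role of EPG-Webserver
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: client       # plays the role of EPG-Client
      ports:
        - protocol: TCP
          port: 80              # HTTP
        - protocol: TCP
          port: 443             # HTTPS

Unlike the ACI policy, however, this is enforced inside the cluster, not on the physical network.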

If you use "Cisco ACI CNI Plug-in", which is a CNI plug-in of Cisco ACI, with Kubernetes, you can use "Cluster" (Kubernetes cluster), "Namespace" (namespace: unit that separates Kubernetes cluster), "Deployment " (objects that manage Pod deployment) are recognized as EPGs by Cisco ACI, and policies can be used to control the communication of these resources. The Cisco ACI CNI Plug-in also provides an L4 (Layer 4) load balancer function that handles communication control. Although it is possible to work with an external load balancer, containers can be exposed outside the Kubernetes cluster without a separate load balancer. All of these features and mechanisms of the Cisco ACI CNI Plug-in are handled by the physical switch, allowing the server's network processing to be offloaded to the physical switch.

As mentioned earlier, virtual machines and physical servers can also be registered in EPGs. By creating a policy that connects an EPG containing virtual machines or physical servers to the EPG for a Kubernetes Namespace with a contract, access control between virtual machines or physical servers and containers becomes possible as well.

NSX Container Plug-in (NCP)

"VMware NSX-T Data Center" (NSX-T) is an SDN product that works in conjunction with VMware's hypervisor. When configuring a Kubernetes cluster with a VMware hypervisor, by using NSX-T and the product's CNI plug-in "NSX Container Plug-in" (NCP), the "node" (container execution host) It is possible to use Open vSwitch (OVS), which is a virtual switch of NSX-T, and functions such as logical switches, logical routers, firewalls, and load balancers realized by NSX-T as Kubernetes resources.

The network virtualization function realized by the hypervisor and NSX-T configures a logical switch for each Kubernetes Namespace, and Pods (groups of containers) connect to this logical switch through the OVS bridge function (OVS Bridge) in the node. Traffic on the logical switch is encapsulated with the "Geneve" protocol (which adds its own header information), and Geneve together with the logical routers enables inter-Pod communication across nodes. NSX-T's load balancer function provides resources such as Service and Ingress, and NSX-T's distributed firewall function provides Network Policy (a Kubernetes function that controls communication between Pods).
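For example, with NCP as the CNI plug-in, an ordinary Kubernetes Ingress such as the minimal sketch below would be realized by the NSX-T load balancer rather than by an in-cluster proxy. The host name and backend Service name are hypothetical; the manifest itself is just the standard Ingress API.

# Minimal sketch: a standard Ingress that, under NCP, is realized by the
# NSX-T load balancer (host and Service names are hypothetical).
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
    - host: web.example.com     # hypothetical host name
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: webserver # hypothetical Service in front of the Pods
                port:
                  number: 80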

OpenShift SDN

"OpenShift SDN" is a CNI plug-in provided by Red Hat's container management products "Red Hat OpenShift". By tunneling (establishing a communication path) between the OVS of each node using the virtual network protocol "VXLAN" (Virtual eXtensible Local Area Network) and configuring an overlay network (logically controllable network), enable communication.

OpenShift SDN has three modes: "network policy mode", "multi-tenant mode", and "subnet mode". The features of each mode are as follows.

  • Network policy mode: the default mode. Communication between Pods is controlled with Kubernetes Network Policy objects.
  • Multi-tenant mode: provides project (Namespace) level isolation. By default, Pods and Services in different projects cannot communicate with each other.
  • Subnet mode: provides a flat Pod network in which every Pod can communicate with every other Pod and Service.

From "Red Hat OpenShift 4.6", the CNI plug-in "OVN-Kubernetes", which uses a controller called "OVN" (Open Virtual Network) to manage OVS settings, is also available. OVN-Kubernetes uses Geneve as its encapsulation protocol, and because it reduces the dependence on "iptables" (Linux's packet filtering function), improved data transfer performance can be expected.

Mixed Network of Virtual Machines and Containers

    "Flannel" introduced in Part 9 "Communication between 'Flannel' and 'Calico' that you should know first when using 'Kubernetes'" 's CNI plugin uses an overlay network for communication between nodes to achieve inter-Pod communication without interfering with the existing network. Therefore, the network that the container connects to is hidden inside the host (Kubernetes cluster) and separated from the management of the existing network.

Figure 1: Fragmented container network

Workloads (applications) are increasingly being containerized, but it is difficult to containerize every workload. Virtual machines will continue to be needed, and the number of systems in which containers and virtual machines work together is expected to grow. Combining containers not only with virtual machines but also with cloud services such as database services is expected to make application operation even more efficient.

As container adoption progresses, cooperation with existing workloads becomes more important, and the network needs to treat containers as ordinary entities rather than special ones. Against this background, the CNI plug-ins introduced in this article integrate the container network used by Pods with the network used by virtual machines. With a mechanism that seamlessly connects Pods to the existing networks used by virtual machines, Pods can communicate directly with existing virtual machines and services, and functions such as load balancers and firewalls provided by the existing network can be used (Figure 2).

Figure 2: Mixed network of containers and virtual machines

This series has mainly explained networking as basic knowledge of container networks, but it has also touched on some related trends. For example, new technologies such as the service mesh (a mechanism for controlling communication between microservices) are being actively adopted, and Kubernetes is moving beyond container orchestration to extend its management to virtual machines and public cloud resources. Amid these developments, containers and Kubernetes will remain a technology area to keep an eye on.

About the authors

Masanori Nara, Applied Technology Department 1, Business Development Division, Net One Systems

After gaining experience installing and operating networks and servers at telecommunications carriers' data centers, he joined Net One Systems. After being in charge of bandwidth control, WAN acceleration products, and virtualization-related products, he now mainly focuses on cloud and virtual infrastructure management, automation, and network virtualization.

Norihiro Hosoya, Applied Technology Department 3, Business Development Division, Net One Systems

He is responsible for research, verification, and technical support of cutting-edge hardware and software for data center networks and multi-cloud environments. His area of focus is Kubernetes. He is also in charge of technical research and verification for the IP migration of broadcasting systems.

Go Chiba, Applied Technology Department 1, Business Development Division, Net One Systems

He is in charge of IaaS (Infrastructure as a Service) and other cloud infrastructure technologies and management products. He focuses on building and operating development and analysis platforms centered on container technology, as well as on technical verification of container-related automation technologies and monitoring products.
