Kubernetes and VMware NSX

Organizations are evolving away from static infrastructure toward full automation of every aspect of it. This major shift is not happening overnight. It is an evolutionary process, and organizations advance their IT at different speeds based on their requirements.

What does all of this have to do with Kubernetes?

Kubernetes is an open source platform for automating deployment, scaling, and operations of application containers across clusters of hosts, providing container-centric infrastructure.

Container-centric infrastructure requires a network, and this network must be dynamic. The old model of predefining everything and having the containers merely “consume” networking no longer works. The container network must follow the life cycle of the applications deployed on Kubernetes: created dynamically, scaled on demand when the application is scaled, and multi-tenant. Anything less results in reduced automation and limited agility.

Let’s take an example:

A business unit decides to move to the new Kubernetes deployment that was recently set up by the internal IT cloud infrastructure team. Unfortunately, the business unit user has to wait a couple of days for the environment to become available, because the underlying network topology has to be preconfigured to map to the business unit's newly created Kubernetes namespace.

This doesn’t work! The business unit user rightly expects a ‘public cloud experience’. After submitting the required details through a portal, the Kubernetes namespace should be created along with all required network and storage constructs. Even better, the business unit should be able to order its own complete Kubernetes deployment, with all network and storage constructs, delivered to them in less than 10 minutes after pressing the order button.
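As a minimal sketch of what that self-service flow could look like from the Kubernetes side, the snippet below creates a namespace through the API using the official Python client; in an NSX-backed cluster, the per-namespace network constructs would then be created automatically by the integration described later. The namespace name and label are illustrative, not part of any product.

```python
# Minimal self-service sketch: the business unit (or a portal acting on its
# behalf) only creates a Kubernetes namespace; the network topology behind it
# is expected to follow automatically from the platform integration.
# Assumes the official 'kubernetes' Python client and a valid kubeconfig.
from kubernetes import client, config

config.load_kube_config()                  # use the operator's kubeconfig
v1 = client.CoreV1Api()

namespace = client.V1Namespace(
    metadata=client.V1ObjectMeta(
        name="bu-finance",                 # hypothetical business-unit namespace
        labels={"business-unit": "finance"},
    )
)
v1.create_namespace(namespace)
print("Namespace created; per-namespace network constructs should follow automatically.")
```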

Current container overlay network technologies come with a number of challenges worth walking through:

  • Missing fine-grained traffic control and monitoring:

In Kubernetes, operators do not deploy individual containers; they deploy Pods. A Pod is a collection of containers that share the same network interface and run on the same Kubernetes node. Every Pod has a dedicated network interface that is patched into a Linux network namespace for isolation. It is very difficult to troubleshoot and secure Pod-to-Pod connectivity with existing technologies, as they do not offer central visibility of all Pod network interfaces. Central management of Pod network interfaces, with the ability to read counters, monitor traffic, and enforce SpoofGuard policies, is at the core of the NSX value proposition. NSX also provides a rich set of troubleshooting tools to evaluate and resolve connectivity issues between Pods (a small inventory sketch follows after this list).

  • Missing fine-grained security policy:

In some existing technologies, Pod-to-Pod traffic is not secured at all. This gives attackers the opportunity to move laterally from Pod to Pod without being blocked by firewall rules and, even worse, without leaving any trace of this lateral movement in logs. Kubernetes addresses this with the network policy project, driven by the networking special interest group (SIG) in Kubernetes. NSX implements Kubernetes network policy, along with pre-created ‘admin rules’, to secure Pod-to-Pod traffic in Kubernetes (an example policy follows after this list).

  • Automating the creation of network topology:

Many of the common implementations take a simple approach to network topology mapping, which is to not have any topology mapping at all. IP subnet allocation for Pods is mostly done per Kubernetes node. NSX creates a distinct network topology per Kubernetes namespace. NSX maps logical network elements, such as logical switches and attached logical routers, to Kubernetes namespaces in a fully automated manner. Each of those network topologies can then be customized per namespace.

  • Integration into enterprise networking:

A characteristic of many existing technologies is that the operator has to decide at install time whether the container networks should be privately addressed and hidden behind a NAT boundary, or directly routed within the enterprise network. Existing overlay technologies make it hard to expose Pods to networks outside of the Kubernetes cluster. Services are usually advertised using NAT/PAT on the Kubernetes nodes themselves, putting the burden on the operator to figure out how to map external physical load balancers or DNS records to TCP/UDP ports on the Kubernetes nodes. Alternatively, one can use Kubernetes Ingress load balancers to get traffic into the container networks. In either case, NAT is involved. With the NSX integration, the goal is to let operators decide on a per-namespace basis whether they want direct routing, and even whether the routes should be injected dynamically into the underlying network using BGP. On the other hand, if operators need to save IP address space, they can hide the namespace networks behind NAT using private IPs and use Kubernetes Ingress controllers to get external traffic to the Pods (sketched after this list).
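To make the first point above more concrete, the sketch below uses the official Kubernetes Python client to build the kind of cluster-wide Pod inventory (Pod, IP address, node) that any central management of Pod network interfaces has to start from. It only reads from the Kubernetes API and assumes a valid kubeconfig; it does not touch NSX itself.

```python
# Cluster-wide Pod inventory: the raw data a central view of Pod network
# interfaces is built from. Read-only, Kubernetes API only.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces().items:
    print(f"{pod.metadata.namespace}/{pod.metadata.name} "
          f"ip={pod.status.pod_ip} node={pod.spec.node_name}")
```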
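For the security point, this is roughly the Kubernetes NetworkPolicy construct referred to above: a hedged example that allows ingress only from Pods in the same namespace, so lateral movement from other namespaces is blocked. Whether it is actually enforced depends on the network plugin (NSX in this integration); the namespace name is illustrative.

```python
# Create a NetworkPolicy that allows ingress only from Pods in the same
# namespace; traffic from Pods in other namespaces is dropped by the
# enforcing network plugin.
from kubernetes import client, config

config.load_kube_config()
networking = client.NetworkingV1Api()

policy = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="allow-same-namespace-only"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(),            # applies to all Pods in the namespace
        policy_types=["Ingress"],
        ingress=[client.V1NetworkPolicyIngressRule(
            _from=[client.V1NetworkPolicyPeer(
                pod_selector=client.V1LabelSelector()     # any Pod in this same namespace
            )]
        )],
    ),
)
networking.create_namespaced_network_policy("bu-finance", policy)  # namespace is illustrative
```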
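And for the enterprise-integration point, the sketch below shows the Ingress option mentioned above: exposing a Pod-backed Service to external clients through an Ingress controller, with NAT involved on the path in. The Service name, host, and namespace are made up for illustration.

```python
# Expose a Service named "web" (assumed to exist) to external clients via an
# Ingress rule; an Ingress controller realises the actual load balancing.
from kubernetes import client, config

config.load_kube_config()
networking = client.NetworkingV1Api()

ingress = client.V1Ingress(
    metadata=client.V1ObjectMeta(name="web-ingress"),
    spec=client.V1IngressSpec(rules=[
        client.V1IngressRule(
            host="web.example.com",                       # illustrative hostname
            http=client.V1HTTPIngressRuleValue(paths=[
                client.V1HTTPIngressPath(
                    path="/",
                    path_type="Prefix",
                    backend=client.V1IngressBackend(
                        service=client.V1IngressServiceBackend(
                            name="web",
                            port=client.V1ServiceBackendPort(number=80),
                        )
                    ),
                )
            ]),
        )
    ]),
)
networking.create_namespaced_ingress("bu-finance", ingress)        # namespace is illustrative
```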

What does the NSX integration look like? How is it designed?

To start, the integration uses NSX-T, since this makes the solution applicable to any compute environment and not just vSphere. For example, NSX-T allows Kubernetes to be supported on a variety of compute platforms, such as Photon Platform, bare metal Linux servers, public clouds, and KVM-based virtualization environments.

To integrate Kubernetes with NSX-T, three major components are being developed:

  1. The NSX Container Plugin (NCP):

The NCP is a software component intended to be delivered as a container image and run as an infrastructure Pod in the Kubernetes cluster. It sits between the Kubernetes objects and the NSX-T API, creating networking constructs based on the object additions and changes reported by the Kubernetes API (a minimal sketch of this watch-and-translate pattern follows after this list).

  2. The NSX CNI Plugin:

This is a small executable intended to be installed on all Kubernetes nodes. CNI stands for Container Network Interface and is a standard designed to allow the integration of network solutions like NSX into container orchestration platforms. The Kubernetes node component called the kubelet calls the CNI plugin to handle Pod network attachment (a toy sketch of the CNI contract follows after this list).

  3. The NSX Kube-Proxy:

This is a daemon running on the Kubernetes nodes. Again, the intent is to deliver the NSX Kube-Proxy as a container image, so that it can run as a Kubernetes DaemonSet on the nodes. The NSX Kube-Proxy would replace the native distributed east-west load balancer in Kubernetes, called kube-proxy, which uses IPTables, with a solution that uses OpenVSwitch (OVS) load-balancing features (the Service-to-endpoint mapping such a component has to realize is sketched after this list).

Each of these components justifies a closer look and far more detail than can be covered in this article; the minimal sketches below only hint at the pattern each of them follows.
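For the NCP, the essence is a watch-and-translate loop: observe object events from the Kubernetes API and turn them into calls against the NSX-T API. The sketch below shows only the Kubernetes-facing half, using the official Python client; create_nsx_topology_for() is a hypothetical placeholder, not a real NSX-T client call.

```python
# Watch-and-translate sketch: react to namespace additions reported by the
# Kubernetes API. Runs in-cluster, like an infrastructure Pod would.
from kubernetes import client, config, watch

config.load_incluster_config()
v1 = client.CoreV1Api()

def create_nsx_topology_for(namespace_name):
    # Hypothetical placeholder: the real plugin would call the NSX-T REST API
    # here to create a logical switch and attach it to a logical router.
    print(f"would create logical switch/router for namespace {namespace_name}")

w = watch.Watch()
for event in w.stream(v1.list_namespace):
    if event["type"] == "ADDED":
        create_nsx_topology_for(event["object"].metadata.name)
```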
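For the CNI plugin, the contract is simple: the container runtime executes the plugin binary with CNI_* environment variables set and the network configuration as JSON on stdin, and expects a JSON result on stdout. The toy sketch below follows that contract with a hard-coded, illustrative IP address; a real plugin such as the NSX CNI plugin would wire the Pod interface into the vSwitch and return the address actually assigned.

```python
#!/usr/bin/env python3
# Toy CNI plugin: reads the CNI_COMMAND environment variable and the network
# config on stdin, and writes a canned result to stdout. Illustration only.
import json
import os
import sys

command = os.environ.get("CNI_COMMAND")            # ADD, DEL or VERSION

if command == "ADD":
    net_config = json.load(sys.stdin)              # network config from the runtime
    json.dump({
        "cniVersion": net_config.get("cniVersion", "0.3.1"),
        "interfaces": [{"name": os.environ.get("CNI_IFNAME", "eth0")}],
        "ips": [{"version": "4", "address": "10.24.0.5/24"}],   # illustrative address
    }, sys.stdout)
elif command == "DEL":
    pass                                           # nothing to clean up in this toy
elif command == "VERSION":
    json.dump({"cniVersion": "0.3.1",
               "supportedVersions": ["0.3.0", "0.3.1"]}, sys.stdout)
```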
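For the NSX Kube-Proxy, the underlying problem is mapping each Service's cluster IP to the Pod endpoints behind it. The sketch below only prints that mapping; a kube-proxy replacement has to realize it in the data plane, in this case with OpenVSwitch rules rather than IPTables.

```python
# Print the cluster-IP -> backend-Pod mapping that an east-west load balancer
# has to realise. Read-only; assumes a valid kubeconfig.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# Endpoints objects share their Service's name, so index them by namespace/name.
endpoints = {(e.metadata.namespace, e.metadata.name): e
             for e in v1.list_endpoints_for_all_namespaces().items}

for svc in v1.list_service_for_all_namespaces().items:
    if svc.spec.cluster_ip in (None, "None"):
        continue                                   # headless Services have no VIP
    ep = endpoints.get((svc.metadata.namespace, svc.metadata.name))
    backends = [addr.ip
                for subset in (ep.subsets or [])
                for addr in (subset.addresses or [])] if ep else []
    print(f"{svc.metadata.namespace}/{svc.metadata.name}: "
          f"{svc.spec.cluster_ip} -> {backends}")
```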

One more question worth asking: how is the “two layers of overlay” problem solved? When running overlay networks between Kubernetes nodes that themselves run as VMs on an IaaS that uses an overlay network solution, you can end up with double encapsulation, for example VXLAN in VXLAN.

When running the Kubernetes nodes as VMs, the tunnel encapsulation is handled only at the hypervisor vSwitch layer. In fact, the OpenVSwitch in the Kubernetes node VM does not have a control plane connection to the NSX-T controllers and managers, thereby creating an additional layer of isolation and security between the containers and the NSX-T control plane. The NSX CNI plugin programs the OVS in the Kubernetes node to tag traffic from Pods with a locally significant VLAN id. This makes it possible to multiplex all the traffic coming from the Pods of a Kubernetes node VM onto one of the VM's vNICs towards the hypervisor vSwitch. The VLAN id then allows individual Pods to be identified on the hypervisor vSwitch using logical sub-interfaces of the VM's vNIC. All management and enforcement actions on the per-Pod logical port are done on the hypervisor vSwitch. The VLAN id added by the OVS in the node VM is stripped by the hypervisor vSwitch before the traffic is encapsulated with the overlay header.
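As a hedged illustration of that tagging step, the snippet below applies a locally significant VLAN tag to a Pod's OVS port using ovs-vsctl. The port name and tag value are made up, and in the real design this is done by the NSX CNI plugin rather than by hand.

```python
# Tag a Pod's OVS port inside the node VM so its traffic carries a locally
# significant VLAN id towards the hypervisor vSwitch, which strips it again
# before adding the overlay encapsulation. Assumes Open vSwitch is installed.
import subprocess

pod_ovs_port = "veth-pod-web-1"   # hypothetical OVS port created for a Pod
local_vlan_id = 102               # locally significant, unique per Pod on this node

subprocess.run(
    ["ovs-vsctl", "set", "port", pod_ovs_port, f"tag={local_vlan_id}"],
    check=True,
)
```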

