VMware NSX Cookbook

How it works...

In this recipe, we configured and deployed a DLR. As mentioned in the introduction, the DLR is the first tier of routing for virtual machines connected to logical switches in an NSX for vSphere network virtualization solution, and it provides optimized east-west traffic flows between virtual machines in different subnets. Using the DLR ensures that traffic between virtual machines is not hairpinned through a single network device; instead, routing is performed in the kernel of every ESXi hypervisor on which the DLR is deployed.

Each DLR instance runs in kernel space on the ESXi host, and each logical switch connected to the DLR is represented as a logical interface (LIF). There are two primary LIF types:

  • Uplink: The uplink LIF type is reserved for connecting the DLR to an upstream router, such as the ESG. At the time of writing, a DLR can have a maximum of eight uplink interfaces. The uplink interface is also where the BGP or OSPF routing protocol runs and exchanges routes with its peers. The uplink LIF is additionally created as a vNIC on the DLR control VM (CVM); this vNIC is used for route peering between the DLR CVM and the ESG.
Only a single dynamic routing protocol (BGP or OSPF) can run on the DLR uplink interface. In addition, an uplink LIF is usually connected to a logical switch.
  • Internal: The internal LIF type is typically where all non-transit logical switches connect; each LIF connected to a non-transit logical switch serves as the layer 3 default gateway for all virtual machines connected to that logical switch. At the time of writing, a DLR can have up to 991 internal interfaces. These interfaces exist on the DLR instance running in the ESXi kernel and are not created on the DLR CVM. A minimal API sketch for adding an internal LIF follows this list.
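
As a minimal sketch, assuming an NSX for vSphere environment, the Python snippet below adds an internal LIF to an existing DLR through the NSX Manager REST API. The manager address, edge ID, virtual wire ID, credentials, and addressing are all placeholders, and the endpoint and XML element names should be verified against the API guide for your NSX version:

    # Hypothetical sketch: add an internal LIF to an existing DLR via the
    # NSX Manager REST API. All identifiers below are placeholders.
    import requests

    NSX_MGR = "https://nsxmgr.corp.local"  # placeholder NSX Manager address
    EDGE_ID = "edge-5"                     # placeholder DLR edge ID

    lif_xml = """
    <interfaces>
      <interface>
        <name>App-Tier-LIF</name>
        <type>internal</type>
        <connectedToId>virtualwire-3</connectedToId>
        <isConnected>true</isConnected>
        <addressGroups>
          <addressGroup>
            <primaryAddress>172.16.20.1</primaryAddress>
            <subnetMask>255.255.255.0</subnetMask>
          </addressGroup>
        </addressGroups>
      </interface>
    </interfaces>"""

    # action=patch appends the listed interfaces to the DLR's existing LIFs
    resp = requests.post(
        f"{NSX_MGR}/api/4.0/edges/{EDGE_ID}/interfaces/?action=patch",
        data=lif_xml,
        headers={"Content-Type": "application/xml"},
        auth=("admin", "password"),  # placeholder credentials
        verify=False,                # lab only; use a trusted certificate in production
    )
    resp.raise_for_status()

Creating an uplink LIF would be the same call with <type>uplink</type> and the transit logical switch's virtual wire ID.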

The most important concept to grasp regarding internal LIFs is that they represent the default gateway for virtual machines, using a virtual MAC (vMAC, 02:50:56:56:44:52 by default) that is the same across all ESXi hosts. The vMAC is known only to the hosts and the virtual machines using it as a layer 3 gateway, and it is never seen by the physical network, so no duplicate entries appear in the physical switch's content-addressable memory (CAM) table. Keeping the same vMAC across all hosts for the same DLR LIF matters during vMotion of virtual machines, because a virtual machine's ARP entry for its default gateway never has to change.

In this recipe, we created a total of four internal LIFs and one uplink LIF. The four internal LIFs are where all our virtual machines will connect, and the uplink LIF is connected to the transit logical switch, which is used to route traffic from the internal DLR LIFs to the ESG and on to the physical network; this is known as north-south routing.
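
To sanity-check the resulting topology, a sketch along the same hypothetical lines can list the DLR's interfaces and their types; again, the endpoint is an assumption to confirm against your version's API reference:

    # Hypothetical sketch: list the DLR's LIFs and their types.
    import requests
    import xml.etree.ElementTree as ET

    NSX_MGR = "https://nsxmgr.corp.local"  # placeholder NSX Manager address
    EDGE_ID = "edge-5"                     # placeholder DLR edge ID

    resp = requests.get(
        f"{NSX_MGR}/api/4.0/edges/{EDGE_ID}/interfaces",
        auth=("admin", "password"),  # placeholder credentials
        verify=False,                # lab only
    )
    resp.raise_for_status()

    # For this recipe we would expect four 'internal' LIFs and one 'uplink' LIF
    for iface in ET.fromstring(resp.text).iter("interface"):
        print(iface.findtext("name"),
              iface.findtext("type"),
              iface.findtext("connectedToId"))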

When deploying the DLR, you have the option to deploy an edge appliance, which is presented in the first dialog box. A DLR can be deployed without an edge appliance (also known as the DLR CVM), but it is then restricted to static routing only, since there is no control VM to run a dynamic routing protocol. You can confirm whether a DLR has a control VM deployed by looking at the Status field in the NSX edge configuration pane; a DLR with a control VM is shown as Deployed, while a DLR without one is shown as Undeployed.
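
Because a CVM-less DLR is limited to static routes, the sketch below shows how such a route might be pushed through the same hypothetical API session. The routing endpoint and XML schema are assumptions to verify against your version's API guide; note that this kind of PUT typically replaces the existing static routing configuration rather than appending to it:

    # Hypothetical sketch: configure a static route on a DLR.
    import requests

    NSX_MGR = "https://nsxmgr.corp.local"  # placeholder NSX Manager address
    EDGE_ID = "edge-5"                     # placeholder DLR edge ID

    # Replaces the DLR's static routing configuration with a single route
    static_xml = """
    <staticRouting>
      <staticRoutes>
        <route>
          <network>10.20.30.0/24</network>
          <nextHop>192.168.10.1</nextHop>
        </route>
      </staticRoutes>
    </staticRouting>"""

    resp = requests.put(
        f"{NSX_MGR}/api/4.0/edges/{EDGE_ID}/routing/config/static",
        data=static_xml,
        headers={"Content-Type": "application/xml"},
        auth=("admin", "password"),  # placeholder credentials
        verify=False,                # lab only
    )
    resp.raise_for_status()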

In addition to deploying the DLR CVM, we selected High Availability (HA). HA is provided in an active/standby arrangement, with the default HA (declare dead) timer set to 15 seconds; this timer can be tuned down to six seconds if a faster failover time is required. Steps 5 and 6 show the creation of the DLR CVM; it is considered best practice to place the active and standby DLR CVMs on separate storage arrays to provide greater redundancy in the event that the underlying storage is adversely impacted.
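
If the declare-dead timer needs to be tuned through the API rather than the UI, a sketch along the same assumed lines would look like the following; the highavailability endpoint and the declareDeadTime element are recalled from the NSX for vSphere API guide, so confirm them for your version:

    # Hypothetical sketch: lower the HA declare-dead timer from the
    # 15-second default to 6 seconds for faster failover.
    import requests

    NSX_MGR = "https://nsxmgr.corp.local"  # placeholder NSX Manager address
    EDGE_ID = "edge-5"                     # placeholder DLR edge ID

    ha_xml = """
    <highAvailability>
      <enabled>true</enabled>
      <declareDeadTime>6</declareDeadTime>
    </highAvailability>"""

    resp = requests.put(
        f"{NSX_MGR}/api/4.0/edges/{EDGE_ID}/highavailability/config",
        data=ha_xml,
        headers={"Content-Type": "application/xml"},
        auth=("admin", "password"),  # placeholder credentials
        verify=False,                # lab only
    )
    resp.raise_for_status()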