NSX-T Part6: Routing – Tier1 Router

NSX-T designs and configures routing differently from NSX-V. There are two tiers of routing, known as the Tier-1 (T1) router and the Tier-0 (T0) router.

T1 routing is very similar to the DLR data-plane component in NSX-V, i.e. it is completely distributed in nature, and no appliance is deployed when we create a T1 router. As a general recommendation, T1 routers should be deployed to enable east-west (E-W) routing between multiple logical switches, which instantiates a DR (distributed routing) component on each of the transport nodes. This DR component performs in-kernel routing between logical switches across these transport nodes.

Let’s set it up as below.

First, click Routing and then Add. You will see options for a T1 and a T0 router; select T1 router.
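For anyone who prefers the REST API over the UI, the same T1 router can be created by POSTing to the NSX-T Manager. Below is a minimal sketch of the payload, assuming the NSX-T 2.x `/api/v1/logical-routers` endpoint; the display name is a placeholder from my lab.

```python
import json

# Sketch: create a Tier-1 logical router via POST /api/v1/logical-routers
# (NSX-T 2.x Manager API). "T1-LR" is a placeholder display name.
t1_payload = {
    "resource_type": "LogicalRouter",
    "display_name": "T1-LR",
    "router_type": "TIER1",  # TIER1 = distributed router for E-W traffic
}

print(json.dumps(t1_payload, indent=2))
```

Since no edge cluster is attached, only the DR component is instantiated, matching the UI workflow above.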




Now the T1 router is added. The next step is to attach the logical switches we created in the last part and configure their gateways on the T1 router.


As seen in the screenshot above, I have configured all three networks for the Web, App, and DB logical switches.
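Via the API, each of these attachments corresponds to a downlink router port. A sketch of one such payload, assuming the NSX-T 2.x `/api/v1/logical-router-ports` endpoint; the UUIDs and the gateway subnet are placeholders for my lab values:

```python
import json

# Sketch: attach logical switch LS-Web to the T1 router as a downlink port
# via POST /api/v1/logical-router-ports (NSX-T 2.x). The UUIDs are
# placeholders; repeat with different subnets for the App and DB switches.
downlink_port = {
    "resource_type": "LogicalRouterDownLinkPort",
    "display_name": "T1-to-LS-Web",
    "logical_router_id": "<t1-router-uuid>",
    "linked_logical_switch_port_id": {
        "target_type": "LogicalPort",
        "target_id": "<ls-web-port-uuid>",
    },
    # Gateway IP for the Web segment (placeholder addressing).
    "subnets": [{"ip_addresses": ["172.16.10.1"], "prefix_length": 24}],
}

print(json.dumps(downlink_port, indent=2))
```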


Once the router ports are configured on the logical switches, these become directly connected networks, so workloads attached to the logical switches are routable. Additionally, routing parameters can be configured by clicking the T1 router and then the Routing tab.
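One routing parameter worth setting now is route advertisement, so the T1's connected subnets can later be advertised to a T0. A sketch of the body for a PUT to `/api/v1/logical-routers/<router-id>/routing/advertisement` (NSX-T 2.x; the router ID is a placeholder):

```python
import json

# Sketch: enable advertisement of NSX-connected routes on the T1 router
# via PUT /api/v1/logical-routers/<router-id>/routing/advertisement
# (NSX-T 2.x Manager API).
advertisement = {
    "resource_type": "AdvertisementConfig",
    "enabled": True,
    "advertise_nsx_connected_routes": True,  # advertise directly connected segments
}

print(json.dumps(advertisement, indent=2))
```

This has no visible effect until a T0 exists, but it saves a step when the NSX Edge is deployed in the next part.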


All three VMs, attached to the WEB, APP, and DB logical switches respectively, can now ping each other.


A quick tip: Traceflow is brilliant in NSX-T and a great tool for troubleshooting and visualizing the traffic path.


T1 routing is now set up with E-W communication. In the next part, an NSX Edge will be deployed to set up N-S routing.



NSX-T Part5: Add vCenter as Compute Manager

Continuing with the next part, I went ahead and set up a new vCenter so that I can add it as a compute manager in NSX-T.

A compute manager, in this case vCenter, contains the ESXi hosts that act as the transport nodes. It provides an inventory mapping that the NSX-T Manager polls for events such as adding hosts, removing hosts, VM migrations, etc.

To add the compute manager, we click Fabric > Compute Managers and fill in the details for vCenter.


Accept the certificate presented by vCenter.
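The same registration can be done through the API. A sketch of the payload for POST `/api/v1/fabric/compute-managers` (NSX-T 2.x); the hostname, credentials, and thumbprint are placeholders, and the thumbprint corresponds to the certificate accepted above:

```python
import json

# Sketch: register vCenter as a compute manager via
# POST /api/v1/fabric/compute-managers (NSX-T 2.x). Server, credentials,
# and thumbprint below are lab placeholders.
compute_manager = {
    "server": "vcenter.lab.local",
    "origin_type": "vCenter",
    "display_name": "lab-vcenter",
    "credential": {
        "credential_type": "UsernamePasswordLoginCredential",
        "username": "administrator@vsphere.local",
        "password": "<password>",
        "thumbprint": "<vcenter-certificate-thumbprint>",
    },
}

print(json.dumps(compute_manager, indent=2))
```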


Next, validate the registration and connection status of the compute manager.


Check under Hosts whether inventory details such as ESXi hosts are being populated under the compute manager (vCenter).


In the next part, I will set up the T1 router and connect the logical switches to it.


NSX-T Part4: Configure Logical Switching

Continuing with the NSX-T install, the next part is the configuration of logical switches and attaching VMs.

Click Switching > Switches > Add


In total, I created three switches: LS-App, LS-DB, and LS-Web.
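Via the API, each switch is one POST to `/api/v1/logical-switches` (NSX-T 2.x). A sketch of the three payloads; the transport zone UUID is a placeholder for the overlay zone created in Part 3:

```python
import json

# Sketch: create the three logical switches via POST /api/v1/logical-switches
# (one call per switch, NSX-T 2.x). The transport zone id is a placeholder.
switches = [
    {
        "display_name": name,
        "transport_zone_id": "<overlay-tz-uuid>",
        "admin_state": "UP",
        "replication_mode": "MTEP",  # hierarchical two-tier BUM replication
    }
    for name in ("LS-App", "LS-DB", "LS-Web")
]

print(json.dumps(switches, indent=2))
```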


Next, I moved the three VMs I created into their respective networks.


Below is the switch-port utilization (the VIFs) once the VMs are connected to their respective logical switches.


Below are the virtual machines' port IDs.


The overall dashboard after the logical switches are created, showing switch ports and VM ports.


NSX-T Part3 : Configuring Transport Zone and Transport Nodes

Continuing this series, in this part I am configuring the transport zones and transport nodes for the later configuration of logical switching and Tier-1 routing.

Prerequisites: a minimum MTU of 1600 on the physical network.

GENEVE is the encapsulation protocol used in NSX-T. It is a UDP-based encapsulation, very similar to VXLAN, but the GENEVE specification defines only the data-plane encapsulation and leaves the control plane to the systems that use it.

More information about GENEVE is available on https://datatracker.ietf.org/doc/draft-ietf-nvo3-geneve/

First, I am creating an uplink profile. An uplink profile is very similar to the "VTEP teaming policy" in NSX-V, though slightly different: an uplink profile lets you define the policy once and reuse it for network adapters across multiple transport nodes.

The settings defined by an uplink profile can include teaming policies, active/standby links, the transport VLAN ID, and the MTU.
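These settings map directly onto the API object. A sketch of an uplink profile payload for POST `/api/v1/host-switch-profiles` (NSX-T 2.x); the uplink names, VLAN, and teaming policy are lab-specific placeholders:

```python
import json

# Sketch: an uplink profile via POST /api/v1/host-switch-profiles
# (NSX-T 2.x). Uplink names, transport VLAN, and policy are placeholders.
uplink_profile = {
    "resource_type": "UplinkHostSwitchProfile",
    "display_name": "esxi-uplink-profile",
    "teaming": {
        "policy": "FAILOVER_ORDER",
        "active_list": [{"uplink_name": "uplink-1", "uplink_type": "PNIC"}],
        "standby_list": [{"uplink_name": "uplink-2", "uplink_type": "PNIC"}],
    },
    "transport_vlan": 0,  # VLAN carrying overlay (TEP) traffic
    "mtu": 1600,          # minimum MTU required for GENEVE
}

print(json.dumps(uplink_profile, indent=2))
```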

A quick screenshot where I have configured the VNI pool to be used.

Click > Fabric > Configuration


You will notice that there are default profiles; however, it's best to create a new uplink profile instead of using a default one.

Click > Fabric > Uplink Profiles > Add


The screenshots below show the configuration of the teaming policies and VLAN.


Next, we need to create a new IP pool for the IPs to be assigned to the TEPs.

Click on Inventory > Groups > IP Pools
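The equivalent API call is a POST to `/api/v1/pools/ip-pools` (NSX-T 2.x). A sketch of the payload; the subnet, gateway, and range are placeholders for the lab's TEP network:

```python
import json

# Sketch: a TEP IP pool via POST /api/v1/pools/ip-pools (NSX-T 2.x).
# The CIDR, gateway, and allocation range below are lab placeholders.
tep_pool = {
    "display_name": "TEP-Pool",
    "subnets": [
        {
            "cidr": "192.168.100.0/24",
            "gateway_ip": "192.168.100.1",
            "allocation_ranges": [
                {"start": "192.168.100.10", "end": "192.168.100.50"}
            ],
        }
    ],
}

print(json.dumps(tep_pool, indent=2))
```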




The next step is to create a new Transport Zone.

Click on Fabric > Transport Zone > Add an Overlay TZ


The overlay transport zone is used by both host transport nodes and NSX Edges. When a host or NSX Edge transport node is added to an overlay transport zone, an N-VDS is installed on the host or NSX Edge.
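A sketch of the transport zone payload for POST `/api/v1/transport-zones` (NSX-T 2.x); the zone and N-VDS names are placeholders, and the `host_switch_name` is the N-VDS that gets installed on each member node:

```python
import json

# Sketch: an overlay transport zone via POST /api/v1/transport-zones
# (NSX-T 2.x). Names are placeholders; host_switch_name is the N-VDS
# that members of this zone will use.
overlay_tz = {
    "display_name": "Overlay-TZ",
    "host_switch_name": "nvds-overlay",
    "transport_type": "OVERLAY",
}

print(json.dumps(overlay_tz, indent=2))
```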

After the new overlay transport zone is created, we need to add the hosts that will participate in this transport zone as transport nodes.

As part of the Transport Node creation, we will also have to create a new N-VDS on these nodes.

Quick information on N-VDS:

The N-VDS is independent on each transport node, and only free physical NICs can be attached to it; thus the uplink profile should be configured with independent, free NICs.

Click Nodes > Transport Nodes > Add
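For reference, this step ties together everything created so far. A sketch of a transport node payload for POST `/api/v1/transport-nodes`, assuming the early NSX-T 2.x schema (`host_switches` list); all UUIDs and the pNIC/uplink mapping are lab placeholders:

```python
import json

# Sketch: turn a fabric host into a transport node via
# POST /api/v1/transport-nodes (early NSX-T 2.x schema). All UUIDs and
# the pnic/uplink mapping are placeholders for this lab.
transport_node = {
    "display_name": "esxi01-tn",
    "node_id": "<fabric-node-uuid>",  # the ESXi host's fabric node id
    "transport_zone_endpoints": [
        {"transport_zone_id": "<overlay-tz-uuid>"}
    ],
    "host_switches": [
        {
            "host_switch_name": "nvds-overlay",  # must match the TZ's N-VDS name
            "host_switch_profile_ids": [
                {"key": "UplinkHostSwitchProfile", "value": "<uplink-profile-uuid>"}
            ],
            "pnics": [{"device_name": "vmnic1", "uplink_name": "uplink-1"}],
            "static_ip_pool_id": "<tep-pool-uuid>",  # TEP addresses from the pool
        }
    ],
}

print(json.dumps(transport_node, indent=2))
```

Note how the N-VDS name, uplink profile, and TEP pool from the earlier steps all come together here; this payload would be repeated per host.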



Thus, I continued and added all five ESXi hosts as transport nodes in the newly created Overlay-TZ.


I also reviewed their operational and performance stats.


In the next part, we will configure logical switches and explain the different routing tiers in NSX-T.

Till then Happy Learning!!