NSX-T Part 4: Configure Logical Switching

Continuing with the NSX-T install, this part covers the configuration of logical switches and the attachment of VMs.

Click Switching > Switches > Add


In total I created three different switches LS-App, LS-DB and LS-Web.
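The same three switches can also be created through the NSX-T Manager REST API (a POST to /api/v1/logical-switches per switch). Below is a minimal sketch of the request bodies only; the transport-zone UUID is a placeholder for this lab, and field names follow the NSX-T 2.x Manager API:

```python
import json

# Hypothetical value for this lab; substitute your overlay transport-zone UUID.
OVERLAY_TZ_ID = "11111111-2222-3333-4444-555555555555"

def logical_switch_payload(name: str, tz_id: str) -> dict:
    """Request body for POST https://<nsx-manager>/api/v1/logical-switches."""
    return {
        "display_name": name,
        "transport_zone_id": tz_id,
        "admin_state": "UP",
        "replication_mode": "MTEP",  # hierarchical two-tier BUM replication
    }

# One payload per logical switch created in this part.
for name in ("LS-App", "LS-DB", "LS-Web"):
    print(json.dumps(logical_switch_payload(name, OVERLAY_TZ_ID), indent=2))
```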


Next, I moved the three VMs I created into their respective networks.


Below is the switch-port utilization (the VIFs) once the VMs are connected to their respective logical switches.
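Behind the scenes, attaching a VM creates a logical port with a VIF attachment on the switch. A sketch of the equivalent Manager API request body (POST /api/v1/logical-ports), with placeholder IDs:

```python
import json

def logical_port_payload(ls_id: str, vif_id: str, name: str) -> dict:
    """Request body for POST /api/v1/logical-ports (NSX-T Manager API)."""
    return {
        "display_name": name,
        "logical_switch_id": ls_id,
        "admin_state": "UP",
        "attachment": {
            "attachment_type": "VIF",
            "id": vif_id,  # VIF UUID of the VM's vNIC; placeholder here
        },
    }

print(json.dumps(
    logical_port_payload("<LS-App-uuid>", "<vm-vif-uuid>", "app-vm-port"),
    indent=2))
```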


Below are the virtual machines' port IDs.


The overall dashboard after the logical switches, switch ports, and VM ports are created.



NSX-T Part 3: Configuring Transport Zone and Transport Nodes

Continuing this series, in this part I configure the transport zones and transport nodes, in preparation for the later configuration of logical switching and Tier-1 routing.

Prerequisites: a minimum MTU of 1600 on the physical network.

GENEVE is the encapsulation protocol used in NSX-T. It is a UDP-based encapsulation, very similar to VXLAN, but GENEVE defines only the encapsulation format and leaves the control plane to the implementation using it.
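The 1600-byte MTU prerequisite comes from the encapsulation overhead. A back-of-the-envelope check (assuming an IPv4 outer header and no VLAN tag on the outer frame; header sizes are from the Geneve draft and the IP/UDP RFCs):

```python
# Geneve overhead arithmetic: how big the outer packet gets for a full
# 1500-byte inner payload.
INNER_MTU = 1500       # guest VM MTU
INNER_ETHERNET = 14    # inner Ethernet header carried inside the tunnel
GENEVE_BASE = 8        # fixed Geneve header; options add 4-byte multiples
OUTER_UDP = 8
OUTER_IPV4 = 20

def required_physical_mtu(options_len: int = 0) -> int:
    """Smallest physical-network MTU that carries a full inner frame."""
    return (INNER_MTU + INNER_ETHERNET + GENEVE_BASE
            + options_len + OUTER_UDP + OUTER_IPV4)

print(required_physical_mtu())    # 1550 with no Geneve options
print(required_physical_mtu(48))  # still under 1600 with 48 bytes of options
```

So 1550 bytes is the bare minimum, and the 1600-byte requirement leaves headroom for Geneve option TLVs.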

More information about GENEVE is available on https://datatracker.ietf.org/doc/draft-ietf-nvo3-geneve/

First, I create an uplink profile. An uplink profile is very similar to the VTEP teaming policy in NSX-V, though slightly different: it lets you configure a policy once and reuse it for network adapters across multiple transport nodes.

The settings defined by uplink profiles may include teaming policies, active/standby links, the transport VLAN ID, and the MTU setting.
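Those settings map onto an UplinkHostSwitchProfile object in the Manager API (POST /api/v1/host-switch-profiles). A sketch of the body, where the uplink names, VLAN, and MTU are lab placeholders and field names follow the NSX-T 2.x API:

```python
import json

def uplink_profile_payload(name: str, transport_vlan: int, mtu: int = 1600) -> dict:
    """Request body for POST /api/v1/host-switch-profiles (sketch)."""
    return {
        "resource_type": "UplinkHostSwitchProfile",
        "display_name": name,
        "teaming": {
            "policy": "FAILOVER_ORDER",  # one active, one standby uplink
            "active_list": [{"uplink_name": "uplink-1", "uplink_type": "PNIC"}],
            "standby_list": [{"uplink_name": "uplink-2", "uplink_type": "PNIC"}],
        },
        "transport_vlan": transport_vlan,
        "mtu": mtu,
    }

print(json.dumps(uplink_profile_payload("nested-esxi-uplink-profile", 0), indent=2))
```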

A quick screenshot where I have configured the VNI pool to be used.

Click > Fabric > Configuration


You will notice that default profiles exist; however, it's best to create a new uplink profile instead of using a default one.

Click > Fabric > Uplink Profiles > Add


The screenshots below show the configuration of the teaming policy and VLAN.


Next, we need to create a new IP pool from which IPs will be assigned to the TEPs.

Click on Inventory > Groups > IP Pools
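The TEP pool can equally be defined through the API (POST /api/v1/pools/ip-pools). A sketch of the body; the subnet, gateway, and range values below are placeholders for this lab network:

```python
import json

def tep_pool_payload(name: str, cidr: str, gateway: str,
                     start: str, end: str) -> dict:
    """Request body for POST /api/v1/pools/ip-pools (sketch)."""
    return {
        "display_name": name,
        "subnets": [
            {
                "cidr": cidr,
                "gateway_ip": gateway,
                "allocation_ranges": [{"start": start, "end": end}],
            }
        ],
    }

print(json.dumps(
    tep_pool_payload("TEP-Pool", "192.168.130.0/24", "192.168.130.1",
                     "192.168.130.51", "192.168.130.60"),
    indent=2))
```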




The next step is to create a new Transport Zone.

Click on Fabric > Transport Zone > Add an Overlay TZ


The overlay transport zone is used by both host transport nodes and NSX Edges. When a host or NSX Edge transport node is added to an overlay transport zone, an N-VDS is installed on the host or NSX Edge.
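The transport zone itself is a small object in the Manager API (POST /api/v1/transport-zones). A sketch of the body, where the N-VDS name is a placeholder — whatever name is set here becomes the name of the N-VDS installed on member nodes:

```python
import json

def overlay_tz_payload(name: str, nvds_name: str) -> dict:
    """Request body for POST /api/v1/transport-zones (sketch)."""
    return {
        "display_name": name,
        "host_switch_name": nvds_name,  # name given to the N-VDS on members
        "transport_type": "OVERLAY",
    }

print(json.dumps(overlay_tz_payload("Overlay-TZ", "NVDS-Overlay"), indent=2))
```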

After the new overlay transport zone is created, we need to add as transport nodes the hosts that will participate in this transport zone.

As part of the Transport Node creation, we will also have to create a new N-VDS on these nodes.

Quick information on N-VDS:

The N-VDS is independent on each transport node, and only free physical NICs can be attached to it; the uplink profile should therefore be configured with free (unclaimed) NICs.

Click Nodes > Transport Nodes > Add
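The transport-node object ties everything configured so far together: the fabric node, the transport zone, the uplink profile, and the TEP IP pool. A rough sketch of the body (POST /api/v1/transport-nodes) as it looked in the NSX-T 2.x Manager API; all UUIDs and NIC names are placeholders, and field names vary between releases:

```python
import json

def transport_node_payload(fabric_node_id: str, tz_id: str,
                           uplink_profile_id: str, ip_pool_id: str) -> dict:
    """Request body for POST /api/v1/transport-nodes (sketch, NSX-T 2.x shape)."""
    return {
        "display_name": "esxi-transport-node",
        "node_id": fabric_node_id,  # fabric node registered in Part 2
        "transport_zone_endpoints": [{"transport_zone_id": tz_id}],
        "host_switches": [
            {
                "host_switch_name": "NVDS-Overlay",  # must match the TZ's N-VDS name
                "host_switch_profile_ids": [
                    {"key": "UplinkHostSwitchProfile", "value": uplink_profile_id}
                ],
                "pnics": [{"device_name": "vmnic1", "uplink_name": "uplink-1"}],
                "static_ip_pool_id": ip_pool_id,  # TEP addresses come from this pool
            }
        ],
    }

print(json.dumps(
    transport_node_payload("<node-uuid>", "<tz-uuid>",
                           "<uplink-profile-uuid>", "<ip-pool-uuid>"),
    indent=2))
```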



Thus, I continue adding all five ESXi hosts as transport nodes in the newly created Overlay-TZ.


Also, reviewing their operational status and performance stats.


In the next part, we will configure logical switching and explain the different routing tiers in NSX-T.

Till then Happy Learning!!








NSX-T Part 2: Host Preparation and adding to NSX-T Fabric

Continuing from NSX-T Part 1, in this part we prepare the ESXi nodes as fabric nodes, i.e., essentially installing the NSX-T modules on the nodes and registering them with the NSX-T management plane.

So, the first thing we do is connect to the NSX-T Manager over SSH and check the status of the install-upgrade service. This service must be running for the VIBs to be installed automatically on the ESXi nodes when we add them to the fabric.


Then we need to SSH into the ESXi nodes and retrieve their thumbprints. You can also skip this step and leave the host thumbprint empty, in which case the NSX-T UI will prompt you to accept the retrieved thumbprint. However, for demonstration I retrieved the thumbprint from the ESXi node over SSH.

The command to retrieve the thumbprint from the ESXi shell is: openssl x509 -in /etc/vmware/ssl/rui.crt -fingerprint -sha256 -noout
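For cross-checking, the same SHA-256 fingerprint that openssl prints can be computed with a few lines of stdlib Python — a sketch that assumes you have copied the PEM certificate (e.g. rui.crt) off the host:

```python
import hashlib
import ssl

def sha256_thumbprint_der(der: bytes) -> str:
    """Colon-separated upper-case SHA-256 digest, matching openssl's format."""
    digest = hashlib.sha256(der).hexdigest().upper()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

def sha256_thumbprint_pem(pem: str) -> str:
    """Fingerprint a PEM certificate such as ESXi's /etc/vmware/ssl/rui.crt."""
    return sha256_thumbprint_der(ssl.PEM_cert_to_DER_cert(pem))

# Illustration of the output shape only (digest of empty input, not a real cert):
print(sha256_thumbprint_der(b"")[:11])  # E3:B0:C4:42
```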


Next, log in to the NSX-T Manager UI in a browser, select Fabric > Nodes > Hosts, and click Add. Enter the information as below.
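The same host registration can be scripted against the Manager API (POST /api/v1/fabric/nodes). A sketch of the body; the host name, IP, credentials, and thumbprint are placeholders for this lab, and field names follow the NSX-T 2.x API:

```python
import json

def host_node_payload(name: str, ip: str, thumbprint: str) -> dict:
    """Request body for POST /api/v1/fabric/nodes (sketch)."""
    return {
        "resource_type": "HostNode",
        "display_name": name,
        "ip_addresses": [ip],
        "os_type": "ESXI",
        "host_credential": {
            "username": "root",
            "password": "<esxi-root-password>",      # placeholder
            "thumbprint": thumbprint,                # SHA-256, from the host
        },
    }

print(json.dumps(
    host_node_payload("esxi-01", "10.0.0.11", "<sha256-thumbprint>"),
    indent=2))
```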



Repeat the same process for all the ESXi nodes that are part of the fabric.

Once the nodes are successfully added to the fabric, their manager connectivity status will show as “UP”.


Also, I checked the list of VIBs that get installed on ESXi for NSX-T, and there are quite a few of them.


In the next part, I will write about preparing the fabric nodes as transport nodes and preparing the overlay network.


NSX-T Part 1: NSX Manager and Controllers Installation

I am starting this blog series in which I will detail the steps of an NSX-T installation in my lab environment, which will later be used for PKS installation and integration.

In Part 1, I deploy the NSX Manager followed by three NSX Controllers and connect them to the NSX Manager. This deployment is based on the latest NSX-T release, 2.2, and currently runs on ESXi. I will add KVM transport nodes to the setup at a later stage.

I have downloaded the NSX-T Manager 2.2 OVA for ESXi and the NSX-T Controller OVA for ESXi from the VMware website: https://my.vmware.com/web/vmware/details?productId=673&downloadGroup=NSX-T-220

In my lab, I have four ESXi nodes running vSAN on which the deployment will proceed. I have also created five nested ESXi hosts that will be used as transport nodes for the NSX-T deployment.

Deployment of the NSX-T Manager is a pretty straightforward OVA deployment, requiring IP address, DNS, and NTP entries. After the NSX Manager boots up, we can log in to it and check the status of the interface.



After the NSX-T Manager is deployed, it's time to deploy the NSX-T Controllers. The controllers for ESXi are also delivered as an OVA, and three of them are deployed on ESXi as below.


After the NSX-T Controllers are deployed, we need to confirm their reachability to the NSX-T Manager before we can go ahead with connecting the controllers to it.

Connect to NSX-T manager SSH, and run the below command to get thumbprint:

#nsxt-mgr > get certificate api thumbprint

After that, connect over SSH to each of the NSX Controllers and run the below command:

#nsxt-controller1 > join management-plane NSX-Manager-IP-address username admin thumbprint <NSX-Manager-thumbprint>


After joining the NSX-T Controllers to the NSX-T Manager, verify the result by running the “get managers” command.


On the NSX-T Manager, run the below command to check that the NSX Controllers are listed.


Once all the NSX Controllers are connected to the NSX Manager, one controller needs to be set as the master and the control cluster needs to be initialized.

  1. Open an SSH session for your NSX Controller.
  2. Run the set control-cluster security-model shared-secret secret <secret> command and type a shared secret when prompted. Ensure you remember the secret key.
  3. Run the initialize control-cluster command.

    This command makes this controller the control cluster master

After this, get the control-cluster status and verify that “is master” and “in majority” are both true, that the status is active, and that the Zookeeper server IP is reachable.


Next, we need to join the other controllers to the master.

NSX-Controller2> set control-cluster security-model shared-secret secret <NSX-Controller1’s-shared-secret-password>
Security secret successfully set on the node.

NSX-Controller3> set control-cluster security-model shared-secret secret <NSX-Controller1’s-shared-secret-password>
Security secret successfully set on the node.

Then, on the master NSX Controller, run the join control-cluster command with the IP addresses of the second and third NSX Controllers:

join control-cluster <NSX-Controller2-IP> thumbprint <nsx-controller2’s-thumbprint>

After all the NSX Controllers have joined the master, we need to activate the control cluster. Make sure you run activate control-cluster sequentially on each NSX Controller, not in parallel.

This completes the installation of the NSX-T Manager and Controllers, their connection to the Manager, and the setup of the controller cluster.

In the next part, I will do the host preparation and set the hosts up as transport nodes.
Happy learning!!