NSX-T Part 2: Host Preparation and Adding to the NSX-T Fabric

In continuation of NSX-T Part 1, in this part we prepare the ESXi nodes as fabric nodes, i.e. essentially installing the NSX-T modules on the nodes and registering them with the NSX-T management plane.

So, the first thing we do is connect to the NSX-T manager over SSH and check the status of the install-upgrade service (get service install-upgrade). This service must be running for the VIBs to be installed automatically on the ESXi nodes when we add them to the fabric.


Next, we need to SSH into the ESXi nodes and get the thumbprint from each node. You can also skip this step and leave the host thumbprint blank, in which case the NSX-T UI will prompt you to accept the thumbprint it retrieves. However, for demonstration I have retrieved the thumbprint from the ESXi node over SSH.

The command to retrieve the thumbprint on the ESXi shell is: openssl x509 -in /etc/vmware/ssl/rui.crt -fingerprint -sha256 -noout
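For context, the SHA-256 thumbprint that openssl prints is simply a colon-separated, uppercase hex digest of the certificate's DER bytes. A minimal Python sketch (not part of the NSX-T workflow, shown only to illustrate the format) that reproduces it:

```python
import hashlib

def sha256_thumbprint(der_bytes: bytes) -> str:
    """Return an openssl-style SHA256 fingerprint: colon-separated uppercase hex."""
    digest = hashlib.sha256(der_bytes).hexdigest().upper()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

# Placeholder input; in practice you would pass the DER-encoded contents
# of /etc/vmware/ssl/rui.crt (converted from PEM first).
print(sha256_thumbprint(b"example-certificate-bytes"))
```

This is what the UI compares against when you paste a thumbprint during host registration.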


Next, log in to the NSX-T manager UI in a browser, select Fabric > Nodes > Hosts, and click Add. Enter the information as below.
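The same registration can also be driven through the NSX-T REST API instead of the UI. A hedged sketch that builds the request body for POST /api/v1/fabric/nodes (field names are my reading of the NSX-T 2.x API; all values below are placeholders, so verify against the API guide for your version):

```python
import json

def fabric_node_payload(name: str, ip: str, username: str,
                        password: str, thumbprint: str) -> dict:
    """Build the request body for POST /api/v1/fabric/nodes (NSX-T 2.x)."""
    return {
        "resource_type": "HostNode",
        "display_name": name,
        "ip_addresses": [ip],
        "os_type": "ESXI",
        "host_credential": {
            "username": username,
            "password": password,      # placeholder; use the host's root password
            "thumbprint": thumbprint,  # SHA-256 thumbprint retrieved earlier
        },
    }

# Placeholder values for illustration only.
body = fabric_node_payload("esxi-01", "192.168.10.11", "root",
                           "VMware1!", "AA:BB:CC")
print(json.dumps(body, indent=2))
```

Posting this body to the manager (with admin credentials) kicks off the same VIB installation the UI triggers.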



Repeat the same process for all the ESXi nodes that are part of the fabric.

Once the nodes are successfully added to the fabric, their manager connectivity status will show as “UP”.


Also, I have checked the list of VIBs that get installed on the ESXi host for NSX-T, and there are quite a few of them.


In the next part, I will write about preparing the fabric nodes as transport nodes and setting up the overlay network.



NSX-T Part 1: NSX Manager and Controllers Installation

I am starting this blog series, in which I will be detailing the steps of an NSX-T installation in my lab environment that will later be used for PKS installation and integration.

In Part 1, I am deploying the NSX manager followed by 3 NSX controllers and connecting them to the NSX manager. This deployment is based on the latest NSX-T release, 2.2, and is currently on ESXi only. I will be adding KVM transport nodes to the setup at a later stage.

I have downloaded the NSX-T manager 2.2 OVA for ESXi and the NSX-T controller OVA for ESXi from the VMware website: https://my.vmware.com/web/vmware/details?productId=673&downloadGroup=NSX-T-220

In my lab, I have 4 ESXi nodes running vSAN on which the deployment will proceed. I have also created 5 nested ESXi hosts that will be used as transport nodes for the NSX-T deployment.

Deployment of the NSX-T manager is a pretty straightforward OVA deployment that requires IP address, DNS, and NTP entries. After the deployed NSX manager boots up, we can log in to it and check the status of the interface.



After the NSX-T manager is deployed, it's time to deploy the NSX-T controllers. The NSX-T controllers for ESXi are delivered as OVAs as well, and three of them are deployed on ESXi as below.


After the NSX-T controllers are deployed, we need to confirm their reachability to the NSX-T manager before we go ahead with the steps to connect the controllers to it.
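One quick way to confirm reachability is a plain TCP check against the manager's API port from any host that can reach it (a generic sketch, not an NSX-specific tool; the address below is a placeholder):

```python
import socket

def reachable(host: str, port: int = 443, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder manager address for illustration.
print(reachable("192.168.10.10", 443))
```

This only proves network reachability on the port; certificate and credential checks happen during the join itself.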

Connect to the NSX-T manager over SSH and run the command below to get its thumbprint:

nsxt-mgr> get certificate api thumbprint

After that, connect over SSH to each of the NSX controllers and run the command below:

nsxt-controller1> join management-plane <NSX-Manager-IP-address> username admin thumbprint <NSX-Manager-thumbprint>


After joining the NSX-T controllers to the NSX-T manager, verify the result by running the “get managers” command.


On the NSX-T manager, run the command below to check that the NSX controllers are listed.


Once all the NSX controllers are connected to the NSX manager, one NSX-T controller needs to be set as the master and the control cluster needs to be initialized.

  1. Open an SSH session for your NSX Controller.
  2. Run the set control-cluster security-model shared-secret secret <secret> command and type a shared secret when prompted. Ensure you remember the secret key.
  3. Run the initialize control-cluster command.

    This command makes this controller the control-cluster master.

After this, run get control-cluster status and verify that “is master” and “in majority” are true, the status is active, and the Zookeeper server IP is reachable.


Next, we need to join the other controllers to the master.

NSX-Controller2> set control-cluster security-model shared-secret secret <NSX-Controller1's-shared-secret-password>
Security secret successfully set on the node.

NSX-Controller3> set control-cluster security-model shared-secret secret <NSX-Controller1's-shared-secret-password>
Security secret successfully set on the node.

Then, on the master NSX controller, run the join control-cluster command with the IP address of the second and third NSX controllers:

join control-cluster <NSX-Controller2-IP> thumbprint <NSX-Controller2's-thumbprint>
join control-cluster <NSX-Controller3-IP> thumbprint <NSX-Controller3's-thumbprint>

After all the NSX controllers have joined the master, we need to activate the control cluster. Make sure you run activate control-cluster sequentially on each NSX controller, not in parallel.
This completes the NSX-T manager and NSX-T controller installation, the controllers' connection to the NSX-T manager, and the setup of the controller cluster.

In the next part, I will be doing host preparation and setting the hosts up as transport nodes.
Happy learning!!