NSX-T Part 2: Host Preparation and Adding to the NSX-T Fabric

In continuation of NSX-T Part 1, in this part we will prepare the ESXi nodes as fabric nodes, i.e. essentially install the NSX-T modules on the nodes and register them with the NSX-T management plane.

So, the first thing we do is connect to the NSX-T manager using SSH and get the status of the install-upgrade service. This service should be running for the VIBs to be installed automatically on the ESXi nodes when we add them to the fabric.
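On the NSX-T manager CLI, the check looks like this (the service should report as running):

get service install-upgrade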

pic10

Then, we need to SSH into the ESXi nodes and get the thumbprint from each node. You can also skip this step and not enter any host thumbprint, in which case the NSX-T UI will prompt you to accept the retrieved thumbprint. However, for demonstration I have retrieved the thumbprint from the ESXi node over SSH.

The command to retrieve the SHA-256 thumbprint from the ESXi shell is: openssl x509 -in /etc/vmware/ssl/rui.crt -fingerprint -sha256 -noout

pic11.jpg

Next, in the browser, log in to the NSX-T manager UI, select Fabric > Nodes > Hosts and click Add. Enter the information as below.
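As a side note, the same host registration can also be done through the NSX-T REST API instead of the UI. A rough sketch, assuming the NSX-T 2.x fabric nodes endpoint, with every value below a placeholder for your own environment:

curl -k -u admin -X POST https://<nsx-manager>/api/v1/fabric/nodes -H 'Content-Type: application/json' -d '{"resource_type": "HostNode", "display_name": "<esxi-name>", "ip_addresses": ["<esxi-mgmt-ip>"], "os_type": "ESXI", "host_credential": {"username": "root", "password": "<password>", "thumbprint": "<sha256-thumbprint>"}}'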

pic12

pic13

Continue the same process for all the ESXi nodes that are part of the fabric.

Once the nodes are successfully added to the fabric, their manager connectivity status will show as “UP”.

pic15

I have also checked the list of VIBs which get installed on the ESXi hosts for NSX-T, and there are a bunch of them.
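If you want to check this yourself, the NSX VIBs can be listed from the ESXi shell with:

esxcli software vib list | grep nsx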

pic16

In the next part, I will write about preparing the nodes added to the fabric as transport nodes, and about the overlay network preparation.

 


NSX-T Part 1: NSX Manager and Controllers Installation

I am starting this blog series in which I will be detailing the steps of an NSX-T installation in my lab environment, which will later be used for PKS installation and integration.

In Part 1, I am deploying the NSX manager followed by 3 NSX controllers and connecting them to the NSX manager. This deployment is based on the latest NSX-T release 2.2 and is currently on ESXi. I will be adding KVM transport nodes to the setup at a later stage.

I have downloaded the NSX-T manager 2.2 OVA for ESXi and the NSX-T controllers OVA for ESXi from the VMware website: https://my.vmware.com/web/vmware/details?productId=673&downloadGroup=NSX-T-220

In my lab, I have 4 ESXi nodes running vSAN on which the deployment will proceed. I have also created 5 nested ESXi hosts that will be used as transport nodes for the NSX-T deployment.

Deployment of the NSX-T manager is a pretty straightforward OVA deployment which requires IP address, DNS and NTP entries. After the deployment, once the NSX manager has booted up, we can log in to the NSX-T manager and check the status of the interface.
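For example, the management interface status can be checked from the NSX-T manager CLI with (assuming eth0 is the management interface):

get interface eth0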

pic1

pic2

After the NSX-T manager is deployed, it's time to deploy the NSX-T controllers. The NSX-T controllers for ESXi ship as an OVA as well, and three of them are deployed on ESXi as below.

pic3

After the NSX-T controllers are deployed, we need to confirm their reachability to the NSX-T manager before we can go ahead with the steps of connecting the controllers to the NSX-T manager.

Connect to the NSX-T manager over SSH, and run the below command to get its thumbprint:

#nsxt-mgr > get certificate api thumbprint

After that, connect over SSH to each of the NSX controllers and run the below command:

#nsxt-controller1 > join management-plane NSX-Manager-IP-address username admin thumbprint <NSX-Manager-thumbprint>

pic4

After joining the NSX-T controllers to the NSX-T manager, verify the result by running the “get managers” command.

pic5

On the NSX-T manager, run the below command to check that the NSX controllers are listed.
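The exact command is in the screenshot below; from memory, the manager CLI command that lists the control cluster members is along the lines of:

get management-cluster status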

pic6

Once all the NSX controllers are connected to the NSX manager, one NSX-T controller needs to be set as the master and the control cluster needs to be initialized.

  1. Open an SSH session for your NSX Controller.
  2. Run the set control-cluster security-model shared-secret secret <secret> command and type a shared secret when prompted. Ensure you remember the secret key.
  3. Run the initialize control-cluster command.

    This command makes this controller the control cluster master.

After this, get the control-cluster status and verify that “is master” and “in majority” are true, the status is active, and the Zookeeper server IP is reachable.
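On the controller, the command for this should be something like the following (the verbose flag shows the additional Zookeeper detail):

get control-cluster status verbose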

pic7

Post this, we need to join the other controllers to the master. On each of the remaining controllers, set the same shared secret:

NSX-Controller2> set control-cluster security-model shared-secret secret <NSX-Controller1's-shared-secret-password>
Security secret successfully set on the node.

NSX-Controller3> set control-cluster security-model shared-secret secret <NSX-Controller1's-shared-secret-password>
Security secret successfully set on the node.

Then, on the master NSX Controller, run the join control-cluster command with the IP address and thumbprint of the second and third NSX controllers:

join control-cluster <NSX-Controller2-IP> thumbprint <NSX-Controller2's-thumbprint>
join control-cluster <NSX-Controller3-IP> thumbprint <NSX-Controller3's-thumbprint>
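The thumbprint of each controller used in the join command above can be retrieved on that controller itself; in the NSX-T 2.x controller CLI this should be:

get control-cluster certificate thumbprint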
After all the NSX controllers have joined the master, we need to activate the control cluster. Make sure you run activate control-cluster sequentially on each NSX controller and not in parallel.

pic8.jpg

This completes the installation of the NSX-T manager and controllers, their connection to the NSX-T manager, and the setup of the controller cluster.

pic9

In the next part, I will be doing host preparation and setting the hosts up as transport nodes.

Happy learning!!

NSX Micro-segmentation: “Ingress and Egress Traffic”

Recently, a customer asked me whether the distributed firewall works on both ingress and egress traffic or just egress traffic. Although this is very well documented, he wanted me to demonstrate the capability.

So, what do I do? I spin up a quick NSX lab from Hands-on Labs and demonstrate it, using the power of the “Applied To” field to showcase the behaviour.

I log in to the NSX Manager CLI, and, as usual, we have the three clusters that are part of the vCenter, seen below.
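The exact commands are in the screenshots; for reference, the NSX central CLI command for this step would be along the lines of:

show cluster all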

1

I pick the cluster “RegionA01-COMP01” and pull out the ESXi hosts that are part of the cluster.
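Here the central CLI command would be something like the following, where <cluster-id> is the domain ID returned by the previous command:

show cluster <cluster-id>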

2

Then, I pick the ESXi host “esx-01a.corp.local” and pull out the list of all VMs on that ESXi host.
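Again, the central CLI command for this should be of the form below, where <host-id> comes from the previous output:

show host <host-id>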

3.jpg

Then, using the summarize-dvfilter command, I pull out the DFW vNIC filter names of the virtual machines.
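Since I am on the NSX Manager central CLI, the summarize-dvfilter output can be pulled per host with something like:

show dfw host <host-id> summarize-dvfilter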

4.jpg

Here, I am showing this for the “web-02a.corp.local” virtual machine, followed by all the rules being applied on that machine.
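For the rule listing, the central CLI offers a command of roughly this shape, where <filter-name> is the vNIC filter name taken from the summarize-dvfilter output:

show dfw host <host-id> filter <filter-name> rules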

5

6.jpg

And here is the same for the “web-01a.corp.local” virtual machine, followed by all the rules being applied on that machine.

7

8.jpg

As you can see in the rules above for both VMs, there is no specific ICMP allow rule present, and the default rule 1001 is set to ANY:ANY DROP.

Below are the screenshots of the IP addresses of both web machines, and as expected these two machines cannot communicate with each other.

9.jpg

10

11

12

Now, I create a new ICMP allow rule from the “web-01a.corp.local” to the “web-02a.corp.local” virtual machine, and apply it only to the “web-01a.corp.local” virtual machine.

13.jpg

14.jpg

However, as you can see below, it does not work. This is because the rule is not applied to “web-02a.corp.local”, and hence the ICMP packets are dropped at the destination machine. This can also be seen in the rules using the commands shown earlier.

15

16.jpg

Now, I add both machines under the “Applied To” field so that both get the allow rule applied.

17.jpg

18.jpg

And now, once we try to ping, it works. This demonstrates true micro-segmentation for both ingress and egress traffic, and also the power of the “Applied To” field.

19.jpg

Hope this helps!!

 

Update Manager plugin missing in vCenter Enhanced Linked Mode Configuration

I have seen this issue multiple times now, especially in vCenter 6.0 Enhanced Linked Mode setups, so I am sharing the solution implemented in this scenario.

Issue: Two sites, let's say DC and DR. Each site has its own vCenter appliance with an external Platform Services Controller. The PSCs at the two sites are part of a single SSO domain, hence forming an Enhanced Linked Mode configuration.

Now, everything works fine, but as we know, up to vSphere 6.0 Update Manager is still installed on a separate Windows machine and linked to the vCenter. The same was done at each site, where an independent Update Manager was deployed and connected to its respective vCenter at DC and DR. However, after this, once we log in to the Web Client and click Update Manager, we are not able to see the Update Manager plugin from the DR vCenter and are also unable to manage the DR Update Manager from the DC site.

After exploring the Update Manager/Web Client configurations on both vCenters at DC and DR, it was found that there was a permission issue on the vsphere-client folder at the DR site, along with missing plugin files.

Working vCenter at DC:

Under the directory : /etc/vmware/vsphere-client/vc-packages/vsphere-client-serenity# ls -lh

drwx------ 3 vsphere-client users 4.0K Oct 11 17:12 com.vmware.vcIntegrity-6.0.0.29963

 

Non Working vCenter at DR:

There was no vc-packages directory under /etc/vmware/vsphere-client. Hence, I created the directory and copied the same plugin directory from the DC vCenter.
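One way to create the directory and pull the plugin across from the DC appliance is something along these lines (the DC host name is a placeholder):

mkdir -p /etc/vmware/vsphere-client/vc-packages
scp -r root@<dc-vcenter>:/etc/vmware/vsphere-client/vc-packages/vsphere-client-serenity /etc/vmware/vsphere-client/vc-packages/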

Under the directory : /etc/vmware/vsphere-client/vc-packages/vsphere-client-serenity# ls -lh

drwx------ 3 root root 4.0K Oct 11 11:40 com.vmware.vcIntegrity-6.0.0.29963

 

But if you look closely, the directory is owned by root and not by vsphere-client:users. Thus, I used the below commands to modify the ownership.

chown -R vsphere-client /etc/vmware/vsphere-client

 

With the -R flag, the above command also changes the owner of the underlying directories and files from root to vsphere-client.

chgrp -R users /etc/vmware/vsphere-client

Now, when we check the same location /etc/vmware/vsphere-client/vc-packages/vsphere-client-serenity, it shows the same plugin and permissions as the working vCenter at DC.

The next step, for the changes to take effect, was to restart the vsphere-client service on the appliance, then log back in to the Web Client and validate.
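On a vCenter Server Appliance 6.0, this can typically be done with:

service-control --stop vsphere-client
service-control --start vsphere-client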

Once we did, we were able to manage the Update Manager configuration from the Web Client at both sites.

Thanks, hope this helps!!