NSX-T Part 3 : Configuring Transport Zones and Transport Nodes

Continuing this series, in this part I am configuring the transport zones and transport nodes that will later be used for logical switching and Tier-1 routing.

Prerequisite: a minimum MTU of 1600 on the physical network.

GENEVE is the overlay encapsulation protocol used in NSX-T. It is UDP-based and very similar to VXLAN, but it defines only the encapsulation format (with extensible options) and does not prescribe a control plane or the protocols that use it.

More information about GENEVE is available at https://datatracker.ietf.org/doc/draft-ietf-nvo3-geneve/

First, I am creating an uplink profile. An uplink profile is very similar to the “VTEP teaming policy” in NSX-V, though slightly different: you define the policy once and reuse it for network adapters across multiple transport nodes.

The settings defined by uplink profiles may include teaming policies, active/standby links, the transport VLAN ID, and the MTU setting.

A quick screenshot where I have configured the VNI pool to be used.

Click > Fabric > Configuration

pic17

You will notice that default profiles already exist; however, it is best to create a new uplink profile rather than using the defaults.

Click > Fabric > Uplink Profiles > Add

pic18

The screenshots below show the configuration of the teaming policy and transport VLAN.

pic19
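
For reference, the same uplink profile can also be created through the NSX-T REST API. The sketch below is only illustrative: the manager address, credentials, profile name, uplink name, transport VLAN and MTU are placeholders for my lab values, and payload field names can differ slightly between NSX-T versions.

# Rough sketch only - substitute your manager FQDN/IP, credentials and values.
curl -k -u admin -X POST https://nsxt-manager.lab.local/api/v1/host-switch-profiles \
  -H 'Content-Type: application/json' \
  -d '{
        "resource_type": "UplinkHostSwitchProfile",
        "display_name": "nested-esxi-uplink-profile",
        "mtu": 1600,
        "transport_vlan": 0,
        "teaming": {
          "policy": "FAILOVER_ORDER",
          "active_list": [ { "uplink_name": "uplink-1", "uplink_type": "PNIC" } ]
        }
      }'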

Next, we need to create a new IP pool from which addresses will be assigned to the TEPs.

Click on Inventory > Groups > IP Pools

pic20.jpg

 

pic21.jpg
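
The same IP pool can also be created through the REST API. Again, this is only a rough sketch: the subnet, gateway and allocation range below are placeholders, not the values from my lab screenshots.

# Placeholder subnet, gateway and range - use the values shown in the screenshots above.
curl -k -u admin -X POST https://nsxt-manager.lab.local/api/v1/pools/ip-pools \
  -H 'Content-Type: application/json' \
  -d '{
        "display_name": "TEP-IP-Pool",
        "subnets": [ {
          "cidr": "192.168.130.0/24",
          "gateway_ip": "192.168.130.1",
          "allocation_ranges": [ { "start": "192.168.130.51", "end": "192.168.130.100" } ]
        } ]
      }'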

The next step is to create a new Transport Zone.

Click on Fabric > Transport Zone > Add an Overlay TZ

pic22.jpg

The overlay transport zone is used by both host transport nodes and NSX Edges. When a host or NSX Edge transport node is added to an overlay transport zone, an N-VDS is installed on the host or NSX Edge.
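
If you prefer the API, creating an overlay transport zone looks roughly like the call below. The display name and host switch name are placeholders; whatever host_switch_name you choose becomes the name of the N-VDS installed on the transport nodes.

# Rough sketch - the host_switch_name here becomes the N-VDS name on the transport nodes.
curl -k -u admin -X POST https://nsxt-manager.lab.local/api/v1/transport-zones \
  -H 'Content-Type: application/json' \
  -d '{
        "display_name": "Overlay-TZ",
        "host_switch_name": "nvds-overlay",
        "transport_type": "OVERLAY"
      }'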

After the new overlay transport zone is created, we need to add the hosts that will participate in it as transport nodes.

As part of the Transport Node creation, we will also have to create a new N-VDS on these nodes.

Quick information on N-VDS:

Each transport node gets its own independent N-VDS, and only free physical NICs (NICs not already claimed by another vSwitch) can be attached to it. The uplink profile should therefore reference free, unused physical NICs.
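
A quick way to confirm which physical NICs are still free on a host before assigning them to the N-VDS is to check from the ESXi shell, for example:

# List all physical NICs on the host.
esxcli network nic list
# Show which vmnics are already claimed as uplinks by existing vSwitches.
esxcfg-vswitch -l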

Click Nodes > Transport Nodes > Add

pic23.jpg

pic24.jpg

Continuing in the same way, all five ESXi hosts are added as transport nodes in the newly created Overlay-TZ.

pic25.jpg
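
Once a host becomes a transport node, a TEP vmkernel interface is created on the N-VDS and gets an address from the IP pool created earlier. A quick sanity check from the ESXi shell looks roughly like this; the TEP vmk is typically vmk10 and sits in the vxlan netstack, but verify the interface and netstack names in your own environment:

# Confirm the new TEP vmkernel interface (often vmk10) exists with MTU 1600.
esxcli network ip interface list
esxcli network ip interface ipv4 get
# Validate that the physical path between two TEPs carries 1600-byte frames
# (1572-byte payload plus headers) without fragmentation.
vmkping ++netstack=vxlan -d -s 1572 <remote-TEP-IP>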

Also, reviewing their operational status and performance stats.

pic26.jpg

In the next part, we will configure logical switches and explain the different routing tiers in NSX-T.

Till then Happy Learning!!


NSX-T Part 2 : Host Preparation and Adding to the NSX-T Fabric

Continuing from NSX-T Part 1, in this part we will prepare the ESXi hosts as fabric nodes, i.e. essentially install the NSX-T modules on the hosts and register them with the NSX-T management plane.

So, the first thing we do is connect to the NSX-T Manager over SSH and check the status of the install-upgrade service. This service must be running for the VIBs to be installed automatically on the ESXi hosts when we add them to the fabric.
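
From the NSX Manager CLI, the check looks roughly like this, and the service can be started if it is stopped:

nsxt-mgr> get service install-upgrade
# If the service shows as stopped, start it:
nsxt-mgr> start service install-upgrade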

pic10

Then we need to SSH into the ESXi hosts and get the thumbprint from each node. You can also skip this step and leave the host thumbprint blank, in which case the NSX-T UI will prompt you to accept the retrieved thumbprint. However, for demonstration I have retrieved the thumbprint from the ESXi host over SSH.

The command to retrieve the thumbprint from the ESXi shell is: openssl x509 -in /etc/vmware/ssl/rui.crt -fingerprint -sha256 -noout

pic11.jpg

Next, log in to the NSX-T Manager UI in a browser, select Fabric > Nodes > Hosts and click Add. Enter the information as shown below.

pic12

pic13

Repeat the same process for all the ESXi hosts that are part of the fabric.

Once the nodes are successfully added to the fabric, their manager connectivity status will show as “UP”.

pic15

Also, I have checked the list of VIBs that get installed on the ESXi host for NSX-T, and there are quite a few of them.
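
The check itself is a one-liner from the ESXi shell; the grep is just a convenient filter:

esxcli software vib list | grep -i nsx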

pic16

In the next part, I will write about preparing the fabric nodes as transport nodes and setting up the overlay network.

 

NSX-T Part 1 : NSX Manager and Controllers Installation

I am starting this blog series in which I will detail the steps of an NSX-T installation in my lab environment, which will later be used for PKS installation and integration.

In Part 1, I am deploying the NSX Manager followed by three NSX Controllers and connecting them to the NSX Manager. This deployment is based on the latest NSX-T release, 2.2, and is currently on ESXi only. I will add KVM transport nodes to the setup at a later stage.

I have downloaded the NSX-T Manager 2.2 OVA and the NSX-T Controller OVA for ESXi from the VMware website: https://my.vmware.com/web/vmware/details?productId=673&downloadGroup=NSX-T-220

In my lab, I have four ESXi hosts running vSAN on which the deployment will proceed. I have also created five nested ESXi hosts that will be used as transport nodes for the NSX-T deployment.

Deployment of the NSX-T Manager is a fairly straightforward OVA deployment that requires IP address, DNS and NTP entries. After the NSX-T Manager is deployed and has booted up, we can log in to it and check the status of its interface.
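
From the NSX-T Manager CLI, the management interface status can be checked with something like the following (eth0 is the appliance's management interface):

nsxt-mgr> get interface eth0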

pic1

pic2

After the NSX-T Manager is deployed, it is time to deploy the NSX-T Controllers. The controllers for ESXi are delivered as an OVA as well, and three of them are deployed on ESXi as shown below.

pic3

After the NSX-T Controllers are deployed, we need to confirm their reachability to the NSX-T Manager before we go ahead with the steps of joining the controllers to the manager.

Connect to the NSX-T Manager over SSH and run the below command to get the thumbprint:

#nsxt-mgr > get certificate api thumbprint

After that, connect via SSH to each of the NSX Controllers and run the below command:

#nsxt-controller1 > join management-plane NSX-Manager-IP-address username admin thumbprint <NSX-Manager-thumbprint>

pic4

After joining the NSX-T Controllers to the NSX-T Manager, verify the result by running the “get managers” command.

pic5

On the NSX-T Manager, run the below command to check that the NSX Controllers are listed.

pic6

Once all the NSX Controllers are connected to the NSX Manager, one controller needs to be set as the master and the control cluster needs to be initialized.

  1. Open an SSH session to your NSX Controller.
  2. Run the set control-cluster security-model shared-secret secret <secret> command and type a shared secret when prompted. Make sure you remember the secret.
  3. Run the initialize control-cluster command.

    This command makes this controller the control cluster master.

After this, get the control-cluster status and verify that “is master” and “in majority” are true, the status is active, and the Zookeeper server IP is reachable.
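
On the controller CLI, the check is roughly:

nsx-controller1> get control-cluster status
# For more detail, including the Zookeeper server IP:
nsx-controller1> get control-cluster status verbose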

pic7

After that, we need to join the other controllers to the master.

NSX-Controller2> set control-cluster security-model shared-secret secret <NSX-Controller1's-shared-secret-password>
Security secret successfully set on the node.

NSX-Controller3> set control-cluster security-model shared-secret secret <NSX-Controller1's-shared-secret-password>
Security secret successfully set on the node.

Then, on the master NSX Controller, run the join control-cluster command with the IP address and thumbprint of the second and third NSX Controllers:

join control-cluster <NSX-Controller2-IP> thumbprint <NSX-Controller2's-thumbprint>
join control-cluster <NSX-Controller3-IP> thumbprint <NSX-Controller3's-thumbprint>

After all the NSX Controllers have joined the master, we need to activate the control cluster. Make sure you run activate control-cluster sequentially on each NSX Controller, not in parallel.

pic8.jpg

This completes the installation of the NSX-T Manager and NSX-T Controllers, the connection of the controllers to the NSX-T Manager, and the setup of the controller cluster.

pic9

In the next part, I will be preparing the hosts and setting them up as transport nodes.

Happy learning!!

NSX Micro-segment “Ingress and Egress Traffic”

Recently, a customer asked me whether the distributed firewall works on both ingress and egress traffic or just on egress traffic. Although this is very well documented, he wanted me to demonstrate the capability.

So, what do I do? I spin up a quick NSX lab from Hands-on Labs to demonstrate exactly that, using the power of the “Applied To” field.

I log in to the NSX Manager CLI, and as usual we have three clusters in this vCenter, as seen below.

1

I pick the cluster “RegionA01-COMP01” and pull out the ESXi hosts that are part of the cluster.

2

Then, I pick an ESXi host, “esx-01a.corp.local”, and pull out the list of all VMs on that host.

3.jpg

Then, using the summarize-dvfilter command, I pull out the DFW dvfilter (vNIC) names of the virtual machines.

4.jpg

Here, I am showing this for the “web-02a.corp.local” virtual machine, followed by all the rules applied on that machine.

5

6.jpg

Here, I am showing the same for the “web-01a.corp.local” virtual machine, followed by all the rules applied on that machine.

7

8.jpg
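
For reference, the NSX central CLI sequence used in the steps above looks roughly like the following; the object IDs and the filter name are placeholders taken from the lab, and the exact syntax can vary between NSX-V versions.

show cluster all                                    # list all clusters in the vCenter
show cluster domain-c<id>                           # list ESXi hosts in the chosen cluster
show host host-<id>                                 # list VMs running on that host
show dfw host host-<id> summarize-dvfilter          # map each VM vNIC to its dvfilter name
show dfw host host-<id> filter <filter-name> rules  # DFW rules applied on that vNIC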

As you can see in the rules above for both VMs, there is no specific ICMP allow rule present, and the default rule 1001 is set to ANY : ANY : DROP.

Below are the screenshots of the IP addresses of both web machines, and as expected these two machines cannot communicate with each other.

9.jpg

10

11

12

Now, I create a new ICMP allow rule from the “web-01a.corp.local” virtual machine to the “web-02a.corp.local” virtual machine, and in the “Applied To” field apply it only to “web-01a.corp.local”.

13.jpg

14.jpg

However, as you can see below, it does not work. This is because the rule is not applied to “web-02a.corp.local”, and hence the ICMP packets are dropped at the destination machine (on ingress). This can also be verified by checking the rules with the commands shown earlier.

15

16.jpg

Now, I add both machines under the “Applied To” field so that the allow rule is applied to both of them.

17.jpg

18.jpg

And now, when we try to ping, it works. This demonstrates true micro-segmentation for both ingress and egress traffic, as well as the power of the “Applied To” field.

19.jpg

Hope this helps!!!