NSX Micro-segmentation of Ingress and Egress Traffic

Recently, a customer asked me whether the distributed firewall works on both ingress and egress traffic, or only on egress traffic. Although this is well documented, he wanted me to demonstrate the capability.

So what do I do? I spin up a quick NSX lab from Hands-on Labs and demonstrate it, using the power of the “Applied To” field.

I log into the NSX Manager through the CLI; as usual, there are three clusters in the vCenter inventory, seen below.

[Screenshot: cluster list from the NSX Manager CLI]

I pick the cluster “RegionA01-COMP01” and pull out the ESXi hosts that are part of the cluster.

[Screenshot: ESXi hosts in cluster RegionA01-COMP01]

Then I pick the ESXi host “esx-01a.corp.local” and pull out the list of all VMs on that ESXi host.
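On NSX-v 6.2 and later, this inventory walk can be done from the NSX Manager central CLI; a rough sketch (the domain and host IDs below are placeholders from my lab and will differ in yours):

```shell
# On the NSX Manager CLI: list all clusters known to vCenter
show cluster all

# List the ESXi hosts in a given cluster (domain-c26 is a placeholder ID)
show cluster domain-c26

# List the VMs on a given host (host-32 is a placeholder ID)
show host host-32
```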

[Screenshot: VMs running on esx-01a.corp.local]

Then, using the summarize-dvfilter command, I pull out the DFW vNIC filter names of the virtual machines.

[Screenshot: summarize-dvfilter output]

Here, I am showing this for the “web-02a.corp.local” virtual machine, followed by all the rules applied on that machine.

[Screenshots: DFW filter and rules for web-02a.corp.local]

Here, I am showing the same for the “web-01a.corp.local” virtual machine, followed by all the rules applied on that machine.

[Screenshots: DFW filter and rules for web-01a.corp.local]
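For reference, the host-side commands used for these checks can be sketched as follows (run in the ESXi shell; the filter name is a placeholder and must be taken from your own summarize-dvfilter output):

```shell
# Find the DFW filter attached to the VM's vNIC
summarize-dvfilter | grep -A 9 web-01a

# Dump the firewall rules enforced on that vNIC filter
# (filter name below is a placeholder from the summarize-dvfilter output)
vsipioctl getrules -f nic-38549-eth0-vmware-sfw.2
```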

As you can see in the rules for both VMs, there is no specific ICMP allow rule present, and the default rule 1001 is set to “ANY/ANY: DROP”.

Below are the screenshots of the IP addresses of both web machines; as expected, the two machines cannot communicate with each other.

[Screenshots: IP addresses of both web VMs and the failed ping tests]

Now I create a new ICMP allow rule from “web-01a.corp.local” to “web-02a.corp.local”, and apply it only to the “web-01a.corp.local” virtual machine.

[Screenshots: new ICMP rule with “Applied To” set to web-01a.corp.local only]

However, as you can see below, the ping still fails. This is because the rule is not applied to “web-02a.corp.local”: the ICMP packets are allowed out of the source (egress), but are dropped by the default rule at the destination (ingress). This can also be verified by checking the rules with the commands shown earlier.

[Screenshots: ping still failing; rules on web-02a.corp.local unchanged]

Now I add both machines under the “Applied To” field so that the allow rule is applied to both of them.

[Screenshots: rule now applied to both VMs]

And now, once we try to ping, it works. This demonstrates true micro-segmentation of both ingress and egress traffic, and also the power of the “Applied To” field.

[Screenshot: successful ping between the web VMs]

Hope this helps!

 


Update Manager plugin missing in vCenter Enhanced Linked Mode Configuration

I have seen this issue multiple times now, especially in vCenter 6.0 Enhanced Linked Mode setups, so I am sharing the solution implemented in this scenario.

Issue: Two sites, let’s say DC and DR. Each site has its own vCenter appliance with an external Platform Services Controller. The PSCs are part of a single SSO domain across the two sites, forming an Enhanced Linked Mode configuration.

Now, everything works fine, but as we know, up to vSphere 6.0 Update Manager is still installed on a separate Windows machine and linked to vCenter. This was done at each site: an independent Update Manager was deployed and connected to its respective vCenter at DC and DR. However, after logging in to the Web Client and clicking Update Manager, we could not see the Update Manager plugin from the DR vCenter, and we were also unable to manage the DR Update Manager from the DC site.

After exploring the Update Manager and Web Client configurations on both vCenters at DC and DR, it was found that there was a permission issue on the vsphere-client folder at the DR site, along with missing plugin files.

Working vCenter at DC:

Under the directory /etc/vmware/vsphere-client/vc-packages/vsphere-client-serenity# ls -lh

drwx------ 3 vsphere-client users 4.0K Oct 11 17:12 com.vmware.vcIntegrity-6.0.0.29963

 

Non-working vCenter at DR:

There was no vc-packages directory under /etc/vmware/vsphere-client, so I created the directory and copied the plugin directory over from the DC vCenter.

Under the directory : /etc/vmware/vsphere-client/vc-packages/vsphere-client-serenity# ls -lh

drwx------ 3 root           root  4.0K Oct 11 11:40 com.vmware.vcIntegrity-6.0.0.29963

 

But if you look closely, the files are owned by root, not vsphere-client:users. I used the commands below to fix the ownership.

chown -R vsphere-client /etc/vmware/vsphere-client

The -R flag also changes the owner of the underlying directories and files from root to vsphere-client.

chgrp -R users /etc/vmware/vsphere-client

Now, when we check the same location /etc/vmware/vsphere-client/vc-packages/vsphere-client-serenity, it shows the same plugin and permissions as the working vCenter at DC.

The next step, for the changes to take effect, was to restart the vsphere-client service on the appliance, then log back in to the Web Client and validate.
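Putting the whole fix together, it looked roughly like this on the DR appliance (the DC hostname is an example, and the plugin directory name matches the Update Manager build in this environment):

```shell
# Create the missing plugin directory tree on the DR vCenter appliance
mkdir -p /etc/vmware/vsphere-client/vc-packages/vsphere-client-serenity

# Copy the plugin package over from the working DC vCenter
scp -r root@dc-vcenter.example.local:/etc/vmware/vsphere-client/vc-packages/vsphere-client-serenity/com.vmware.vcIntegrity-6.0.0.29963 \
    /etc/vmware/vsphere-client/vc-packages/vsphere-client-serenity/

# Fix the ownership recursively so the vsphere-client service can read it
chown -R vsphere-client /etc/vmware/vsphere-client
chgrp -R users /etc/vmware/vsphere-client

# Restart the Web Client service for the change to take effect
service-control --stop vsphere-client
service-control --start vsphere-client
```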

Once we did, we were able to manage the Update Manager configuration from the Web Client at both sites.

Thanks, hope this helps!

 

 

 

 

 

 

vCloud Director Load Balancing with NSX Edge

After a lot of searching around the internet, I was still unable to find anything that specifically describes configuring vCloud Director load balancing on an NSX Edge. We do have whitepapers for load balancing the vRealize Automation components, but at least I did not find the equivalent for vCloud Director. So I thought I would write something that may help in the future.

First, this is based on vCloud Director 8.20 and NSX 6.3.2. Below is the topology of the configuration, where the NSX load balancer is deployed in one-arm mode.

[Diagram: one-arm NSX load balancer topology in front of the vCloud Director cells]

HTTP Certificates (With SSL Offload for HTTP):

Ideally, each individual cell should be issued a certificate that MATCHES its hostname. This is what the load balancer uses to connect via SSL to the hosts in the pool, and it also allows connecting directly to a cell without a certificate error. Then obtain a certificate for the load balancer VIP address and install it directly on the NSX Edge load balancer; this is the secure connection clients use when connecting through the load balancer. This setup ensures that both client-to-load-balancer and load-balancer-to-cell traffic is encrypted.

In this example, SSL pass-through was configured for portal access, and, as you would know, the console is a pure TCP connection and has to be passed through as well.

Below is Load balancer configuration on the NSX Edge.

1. Enabled the load balancer on a newly deployed NSX Edge of X-Large size.

[Screenshot: load balancer enabled on the NSX Edge]

2. Added three application profiles: VCD Portal (HTTPS), VCD Portal (HTTP) and VCD Console (TCP).

[Screenshot: the three application profiles]

3. Created service monitors for the console and HTTPS portal access; used the default HTTP monitor for HTTP portal access.

[Screenshots: service monitor configuration]

4. Added the cells into pools for the HTTPS, HTTP and console connections, each with its respective service monitor.

[Screenshot: pool configuration]

5. Created the virtual servers (VIPs) for the respective pools.

[Screenshot: virtual server configuration]

Then I validated portal access from the internet and tried opening a console, which worked fine. I have not gone into the changes that may need to be made on your physical network for this.
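Validation from a client machine can be as simple as the following (the VIP hostname is an example):

```shell
# Check the certificate presented by the load balancer VIP
openssl s_client -connect vcd.example.local:443 -servername vcd.example.local </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -dates

# Check that the portal responds through the load balancer
curl -sk -o /dev/null -w '%{http_code}\n' https://vcd.example.local/cloud/
```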

Hope this gives a fair idea of setting up VCD load balancing on an NSX Edge.

Happy Diwali!!

 

 

NSX Troubleshooting tips

ESXi Host Level Troubleshooting

1. How to verify that VIBs are successfully installed on the ESXi host:

Verify that the NSX VIBs are installed and the correct version is present on the ESXi host:

esxcli software vib list

(This displays all VIBs installed on the host; grep for the vxlan and vsip VIBs.)

esxcli software vib get --vibname esx-vxlan

esxcli software vib get --vibname esx-vsip

Verify the VXLAN kernel module vdl2 is loaded on the ESXi host:

vmkload_mod -l | grep vdl2

Find the VDS name associated with this host’s VTEP:

esxcli network vswitch dvs vmware vxlan list

If any of these commands does not return the expected output, this is an indication of a problem and the logs should be checked.

Relevant logs to be checked are:

/var/log/esxupdate.log

/var/log/vmkernel.log

Syslog collectors like LogInsight can be configured to send alerts/errors for certain messages detected in the logs.
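The host-level checks above can be combined into a quick sweep (run in the ESXi shell):

```shell
# NSX VIBs present and at the expected version
esxcli software vib list | grep -E 'esx-(vxlan|vsip)'

# VXLAN kernel module loaded
vmkload_mod -l | grep vdl2

# VDS/VTEP configuration for this host
esxcli network vswitch dvs vmware vxlan list
```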


2. How to verify the control plane is up between the host and the controller, per logical switch:

Verify logical network information and the control-plane connection per logical switch:

esxcli network vswitch dvs vmware vxlan network list --vds-name <VDS_Name>

Verify the message bus TCP connection (vsfwd):

esxcli network ip connection list | grep 5671

Verify the controller TCP connection (netcpad):

esxcli network ip connection list | grep 1234

Verify the controller connection agent on the host:

/etc/init.d/netcpad <status/start/stop/restart>

Verify the firewall process running on the host:

/etc/init.d/vShield-stateful-firewall <status/start/stop/restart>

If there are VMs attached to a logical switch on this host, the host should have controller connections in the output of the vxlan network list command (one connection for each logical switch that has an attached VM running on this host).

Check whether each controller connection shows “up” or “down”. If any is down, it warrants more debugging: check the logs on the host and/or log in to the controllers.
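A quick host-side sweep of the control-plane checks above might look like this (the VDS name is an example from my lab):

```shell
# Per-logical-switch controller connection state
esxcli network vswitch dvs vmware vxlan network list --vds-name RegionA01-vDS-COMP

# Message bus (vsfwd, TCP 5671) and controller (netcpad, TCP 1234) sessions
esxcli network ip connection list | grep -E ':(5671|1234)'

# Agent status on the host
/etc/init.d/netcpad status
/etc/init.d/vShield-stateful-firewall status
```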

Relevant logs to be checked are the netcpa and vsfwd communication channel logs:

/var/log/netcpad.log

/var/log/vsfwd.log