Network, It’s Time to Modernize!

The network is a critical component of any IT environment. When it works, it’s “normal” and few notice it. But the smallest glitch can have devastating business impacts. For over a decade, networking has been adapting to become more programmable, closer to applications, and easier to use.


VMware Social Media Advocacy

NSX-T Service Interface or Centralized Service Port for vRealize Automation Load Balancing

A Service Interface, previously known as a CSP (Centralized Service Port), connecting to VLAN or overlay segments can be used to provide load balancer functions. It is connected to a standalone Tier-1 gateway that has only a Service Router (SR) function and no Distributed Router (DR) function.

The Service router can be deployed on a single NSX Edge node or two NSX Edge nodes in Active-Standby mode.

A standalone tier-1 logical router:

  • Must not have a connection to a tier-0 logical router.
  • Must not have a downlink.
  • Can have only one service router or centralized service port (CSP) if it is used to attach a load balancer (LB) service.
  • Can connect to an overlay logical switch or a VLAN logical switch.

The standalone Tier-1 service router is connected to an overlay or VLAN logical switch and can communicate with other devices through the regular Tier-1 gateway or the existing VLAN network, using static route configuration and route advertisement.

In this scenario, we are deploying a standalone Tier-1 gateway and configuring a service router for load balancing to be used by the vRA components. Since the vRA components sit on VLAN networks, the service router will be connected to a VLAN logical switch in a one-arm load balancer configuration.

I am doing all the configuration through Advanced Networking and Security; the same configuration can be done through the Simplified UI.

First, deploy a new standalone Tier-1 router.

Create a VLAN logical switch and provide the VLAN ID. In this example, VLAN 10.

Go back to the Tier-1 router, open Configuration, and add a router port of type Centralized Service Port, connecting it to the LB-LS logical switch created earlier.

Then, under Subnets, add an IP address, which will be the interface IP address.

Then, add a static route with the next hop set to the gateway address of the VLAN subnet on which the vRA appliances and the interface IP reside, attaching it to the same service router port created earlier.
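
For illustration only (the values below are placeholders, not taken from this environment; a default route is a common choice in a one-arm setup), the static route might look like:

Network:   0.0.0.0/0
Next hop:  192.168.10.1     (gateway of the vRA VLAN subnet)
Port:      the CSP/service router port created above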

After the routes and interface IP are set, the next steps will depend on your NSX Edge VLAN Transport Zone design.

In a basic design, the NSX Edge VM's FP-ETH1 and FP-ETH2 interfaces are connected to separate N-VDS switches (or a single one) hosting the VLAN logical switch. These Edge VM interfaces connect either to a Distributed Switch port group, or, in a 2-pNIC design where the underlying ESXi hosts run an N-VDS, to a VLAN logical switch that needs to be trunked.

The DVS port group is trunked because the VLAN tag is applied on the logical switch created earlier; this places the service router in the same network as the vRA appliances so that it can load balance traffic between them.

In short, connect the NSX Edge interface to a Distributed Switch port group configured as a trunk (VLAN 0-4094).
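
On the vSphere side, the corresponding Distributed Switch port group settings would look something like this (the port group name is a placeholder):

Port group:        pg-edge-vlan-trunk
VLAN type:         VLAN trunking
VLAN trunk range:  0-4094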

Add Profiles

An application profile must be created to define the behavior of a particular type of network traffic. For NSX-T, two application profiles need to be created to:

  1. Redirect HTTP to HTTPS
  2. Handle HTTPS traffic

After an application profile is configured, it should be associated with a virtual server.

The virtual server then processes traffic according to the options specified in the application profile.

Configure the Application Profile for HTTP requests

  • Go to Load Balancing -> Profiles -> Application Profiles
  • Click the Add icon and choose HTTP Profile.
  • Choose a Name for the profile and enter parameters (please refer to the example below)

Configure the Application Profile for HTTPS requests

  • Go to Load Balancing → Profiles → Application Profiles
  • Click the Add icon and choose Fast TCP Profile.
  • Choose a Name for the profile and enter parameters (please refer to the example below)

Configure Persistence Profile

  • Go to Load Balancing → Profiles → Persistent Profiles
  • Click the Add icon and select Source IP Persistence
  • Choose a Name for the profile and enter parameters (please refer to the example below)

Add Active Health Monitor

Configuring active health monitoring is like creating health checks on other load-balancers. When you associate an active health monitor with a pool, the pool members are monitored according to the active health monitor parameters.

  • Go to Load Balancing → Monitors → Active Health Monitors
  • Click the Add icon
  • Choose a Name for the active health monitor and enter Monitor Properties (please refer to the example below)

Note: LbHttpsMonitor is a pre-configured monitor for the HTTPS protocol and can be used for this active health monitor.

  • Configure Health check parameters with the following values:
    • Health Check Protocol: HTTPS
    • Request Method: GET
    • Request URL: (see table below)
    • Request Version: HTTP_VERSION_1_1
    • Response Status Codes (see table below)
    • Response Body (see table below)
    • Ciphers: High Security
    • TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
    • Protocols: TLS_V1_2
    • Server Auth: IGNORE
    • Certificate Chain Depth: 3
NAME               | TYPE  | INTERVAL | RETRIES | TIMEOUT | URL                       | RESPONSE CODE | RESPONSE BODY
vra_https_va_web   | HTTPS | 3        | 3       | 10      | /vcac/services/api/health | 200,204       |
vra_https_iaas_web | HTTPS | 3        | 3       | 10      | /wapi/api/status/web      |               | REGISTERED
vra_https_iaas_mgr | HTTPS | 3        | 3       | 10      | /VMPSProvision            |               | ProvisionService
vro_https_8283     | HTTPS | 3        | 3       | 10      | /vco-controlcenter/docs/  | 200           |

Here’s an example of vra_https_va_web Health Monitor configuration:
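
To sanity-check what the health monitor will see, you can query a vRA appliance's health URL directly with curl from any machine that can reach it (the appliance FQDN below is a placeholder):

curl -k -s -o /dev/null -w "%{http_code}\n" https://vra-va1.example.local/vcac/services/api/health

A healthy appliance should return 200 or 204, matching the response status codes configured on the monitor.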

Add Server Pools

NSX-T server pools contain the nodes that receive traffic. You will need to create the pools listed in the table further below for the vRA appliance and IaaS components, adding the respective nodes as pool members.

  • Go to Load Balancing → Server Pools
  • Click the Add icon
  • Choose a Name for the pool.
  • Set Load Balancing Algorithm as LEAST_CONNECTION
  • Configure SNAT Translation as Auto Map
  • Add the Pool Members (vRA node IP addresses and ports), specifying for each member:
    • Name
    • IP Address
    • Weight: 1
    • Port: 443
    • State: ENABLED
  • Attach an Active Health Monitor to the pool (please refer to the example below)
POOL NAME               | ALGORITHM         | MEMBER NAME | IP ADDRESS | PORT | MONITOR
pool_vra-va-web_80      | Least connections | vra_va1     | IP         | 80   | nsx-default-http-monitor
                        |                   | vra_va2     | IP         | 80   |
pool_vra-va-web_443     | Least connections | vra_va1     | IP         | 443  | vra_https_va_web
                        |                   | vra_va2     | IP         | 443  |
*pool_vra-rconsole_8444 | Least connections | vra_va1     | IP         | 8444 | vra_https_va_web
                        |                   | vra_va2     | IP         | 8444 |
pool_vro-cc_8283        | Least connections | vra_va1     | IP         | 8283 | vro_https_8283
                        |                   | vra_va2     | IP         | 8283 |
pool_iaas-web_443       | Least connections | vra_web1    | IP         | 443  | vra_https_iaas_web
                        |                   | vra_web2    | IP         | 443  |
pool_iaas-manager_443** | Least connections | vra_ms1     | IP         | 443  | vra_https_iaas_mgr
                        |                   | vra_ms2     | IP         | 443  |

* Port 8444 is optional – it is required only if you want to use remote console from vRealize Automation.

** The Manager Service uses an active-passive configuration; hence the load balancer will always send traffic to the currently active node regardless of the load balancing method.

Here’s an example of pool_vra-va-web_443 Server Pool configuration:

Add Virtual Servers

  • Go to Load Balancing → Virtual Servers
  • Click the Add icon
  • Choose a Name for Virtual Server
  • Configure Application Type as Layer 7
  • Assign appropriate Application Profile (please refer to the example below)
  • Assign IP Address (Virtual IP) and Port
  • Add Default Pool Member Port
  • Choose the Server Pool configured
  • Assign appropriate Persistent Profile (please refer to the example below)

Note: There is no need to configure any Server Pool for this Virtual Server

NAME                     | TYPE    | PROFILE           | IP ADDR | PORT | SERVER POOL            | PERSISTENCE PROFILE
vs_vra-va-web_80         | Layer 7 | vRA_HTTP_to_HTTPS | IP      | 80   | pool_vra-va-web_80     | None
vs_vra-va-web_443        | Layer 4 | vRA_HTTPS         | IP      | 443  | pool_vra-va-web_443    | source_addr_vra
vs_iaas-web_443          | Layer 4 | vRA_HTTPS         | IP      | 443  | pool_iaas-web_443      | source_addr_vra
vs_iaas-manager_443      | Layer 4 | vRA_HTTPS         | IP      | 443  | pool_iaas-manager_443  | None
*vs_vra-va-rconsole_8444 | Layer 4 | vRA_HTTPS         | IP      | 8444 | pool_vra-rconsole_8444 | source_addr_vra
vs_vro-cc_8283           | Layer 4 | vRA_HTTPS         | IP      | 8283 | pool_vro-cc_8283       | source_addr_vra

* Port 8444 is optional – it is required only if you want to use remote console from vRealize Automation.

Configure Load Balancer

You need to specify the load balancer configuration parameters and configure NSX-T for load balancing by creating the load balancer service.

  • Go to Load Balancing → Load Balancers
  • Click the Add icon
  • Choose a Name, select the appropriate Load Balancer Size (depending on the vRA cluster size) and Error Log Level, and press OK
  • Attach the previously created Tier 1 Logical Router to the newly created Load Balancer (Overview → Attachment → EDIT)
  • Attach the previously created Virtual Servers to the Load Balancer (Virtual Servers → ATTACH)

I hope this helps with configuring a load balancer for vRA components using NSX-T.

Setup an Ubuntu VM as SFTP Server for NSX-T backup

In this blog, I will be setting up an Ubuntu virtual machine as an SFTP server for NSX-T configuration backups.

  • Set up an Ubuntu Server; I used the release below:

animeshd@sftp:~$ lsb_release -a

Distributor ID: Ubuntu
Description: Ubuntu 18.04 LTS
Release: 18.04
Codename: bionic

  • Install VMware Tools on the Ubuntu machine.
  • Install the latest updates on the Ubuntu machine using: sudo apt-get update (assuming internet access is available).
  • Next, install the OpenSSH server using: sudo apt install openssh-server

Check the status of the ssh service; it should be running.
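
For example, you can check it with systemctl:

sudo systemctl status ssh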

sftp1

Next, SSH to the server using PuTTY and take a backup of the /etc/ssh/sshd_config file.

In the current example, I took a backup of the file under the tmp directory as /tmp/sshd_backup.
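
For example, a simple copy does the job (the backup path matches the one used here):

sudo cp /etc/ssh/sshd_config /tmp/sshd_backup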

sftp2

As the original file is read-only for normal users, use chmod 777 on /etc/ssh/sshd_config to make it editable (alternatively, simply edit it with sudo). Use an editor of your choice; I used the nano editor to open the file for editing.

  • Edit the ListenAddress directive and add the IP address of the local machine.

sftp3

  • Then change X11Forwarding to no (from ‘yes’), and add the overriding settings as per the screenshot below.

sftp4

Here’s what each of those directives does (a combined example follows the list):

  • Match User tells the SSH server to apply the following commands only to the user specified.
  • ForceCommand internal-sftp forces the SSH server to run the SFTP server upon login.
  • PasswordAuthentication yes allows password authentication for this user.
  • ChrootDirectory /var/nsxtsftp/ ensures that the user will not be allowed access to anything beyond the /var/nsxtsftp directory.
  • AllowAgentForwarding no, AllowTcpForwarding no, and X11Forwarding no disable agent forwarding, TCP tunneling, and X11 forwarding for this user.
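
Put together, the override block at the end of /etc/ssh/sshd_config looks roughly like the sketch below; it simply combines the directives above, and nsxtbackupuser is the backup user created later in this post:

Match User nsxtbackupuser
    ForceCommand internal-sftp
    PasswordAuthentication yes
    ChrootDirectory /var/nsxtsftp/
    AllowAgentForwarding no
    AllowTcpForwarding no
    X11Forwarding no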

Restart the ssh service on the machine.
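
For example, you can validate the configuration first and then restart the service:

sudo sshd -t
sudo systemctl restart ssh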

  • Now, create the following directory and user on the SFTP Ubuntu machine.

Create a new user

  • sudo adduser --shell /bin/false nsxtbackupuser

Create a new directory

  • sudo mkdir -p /var/nsxtsftp/backups

Change the owner and permissions on the new directories

  • sudo chown root:root /var/nsxtsftp
  • sudo chown nsxtbackupuser:nsxtbackupuser /var/nsxtsftp/backups
  • sudo chmod 755 /var/nsxtsftp
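
Before pointing NSX-T at the server, it is worth testing an SFTP login as the backup user to confirm the chroot and permissions work (the server IP and test file below are placeholders):

sftp nsxtbackupuser@<sftp-server-ip>
sftp> cd backups
sftp> put test.txt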

Once this is done, use the NSX-T UI: under System, edit and configure the backup to point to the backup server.

sftp5

Then, perform a backup and view the result.

sftp6

Backup files are getting created.

sftp7

NSX-T Part 10: Configure N-S Routing

In the previous part, we set up the T1 router and connected all the logical switches, with their gateways configured on it. In this part, now that the Edges are deployed, we will configure north-south (N-S) routing so that VMs can reach the external network.

nsxt10-1

Currently we have just the T1 router available; now we will start by configuring the T0 router.

nsxt10-2

nsxt10-3

I have deployed it in Active-Standby mode, as I will be using this setup for a future deployment of PKS or Kubernetes.

nsxt10-4

Next, I connected the T1 router to the T0 router.

nsxt10-5

As seen below, the T1 router is now connected to the T0 router.

nsxt10-6

Next is to connect the Edges upstream to the VLAN network. In the previous part, we created the VLAN transport zone (VLAN-TZ); now we first add a VLAN-backed logical switch for the upstream connectivity. As the lab is in a nested environment, VLAN 0 works fine 🙂

nsxt10-7

Quick summary of the T0 router below.

nsxt10-8

Next is to connect the edges upstream using the VLAN logical switch, so we need to configure the router ports on the T0 router in the screen below.

nsxt10-9

Below is the configuration output from the VyOS router, which is used for both my NSX-V and NSX-T environments.

nsxt10-10
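
For reference, the VyOS side of such a BGP peering typically looks like the sketch below (the AS numbers and neighbor IPs are placeholders, not the values from my lab):

configure
set protocols bgp 65100 parameters router-id 10.10.10.1
set protocols bgp 65100 neighbor 10.10.10.2 remote-as 65000
set protocols bgp 65100 neighbor 10.10.10.3 remote-as 65000
commit
save
exit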

I created a new router port in the screen below, with an IP address in the same L2 network.

nsxt10-11

Similarly, we configured two router ports, as we will be using BGP routing between the VyOS router and the edges. We already know that on the standby edge, NSX automatically prepends the AS path to make it a less preferred route, so no changes are required on the upstream router.

nsxt10-12

Below we do the BGP configuration .

nsxt10-13

nsxt10-14

nsxt10-15

Similarly, we configure the routing for each edge router port.

nsxt10-16

Next is to advertise the T1 routes upstream, i.e., all connected routes.

nsxt10-17

nsxt10-18

Quick recap on the logical networks connected to T1.

nsxt10-19

The next step is to validate the routes on the active Edge. First, we list the available logical routers.

nsxt10-20

Log in to the specific T0 SR component (as the SR is responsible for N-S routing).

nsxt10-21

Checking the routes, we see that the upstream and NSX-V environment routes are learned through the VyOS router.

nsxt10-22
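
The command sequence on the Edge node CLI is roughly the following (the VRF number will differ per deployment):

get logical-routers
vrf 1
get route
get bgp neighbor summary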

Below is the BGP neighbor summary from the VyOS router.

nsxt10-23

This completes the NSX-T setup configuration. In the future, I am planning to upgrade this setup to the NSX-T 2.4.x release, as it has additional features available.

I hope this 10-part series was helpful.

NSX-T Part 9: Configure Edge Cluster

In this part, continuing with the edge configuration, we will configure the edge cluster. Before we create a new edge cluster, an edge cluster profile needs to be selected.

There is already a default profile which is available.

nsxt9-1

However, I created a new Edge Cluster Profile as I do not want to use the default one.

nsxt9-2

nsxt9-3

Then I created a new Edge cluster and added both the previously created Edges into the newly created edge cluster.

nsxt9-4

nsxt9-5

nsxt9-6

nsxt9-7

After this, we bind the edge cluster profile to the edge cluster.

nsxt9-8

In the next part, I will configure logical routing.

NSX-T Part 8: Configure Edge Nodes

In the previous part, I deployed two Edge VM nodes. In this part, we will configure them to function as edge nodes. The first step is to configure an Edge uplink profile.

Initially, I configured the Edge using the previously created overlay uplink profile, which had an active and a standby uplink, and was getting the error below.

nsxt8-1

I had to quickly change that and configure only one active uplink; I am posting this for your information in case you run into the same issue.

Create a new Edge-Overlay uplink profile

nsxt8-2.png

Create a new Edge-VLAN uplink profile.

nsxt8-3.png

All the uplink profiles that have been created.

nsxt8-8

Create a new transport zone. As we had already created the overlay transport zone when configuring the logical switches, we just need to create a new VLAN transport zone.

nsxt8-4.png

Configure both edge node VMs as transport nodes, adding both the overlay and VLAN transport zones as part of the edge transport zones.

nsxt8-5.png

tswitch1 configured for overlay

nsxt8-6.png

tswitch2, a new switch created for VLAN outbound connectivity to the physical world.

nsxt8-7.png

Similarly, I configured both edges as Edge transport nodes.

nsxt8-9.png

In the next part, I will continue with the Edge-cluster configuration.