Install Kubernetes on VMware Workstation

It’s been quite some time since I published a blog post; the pandemic and WFH have kept things busy. I am now trying to get my learnings published through this blog again. This post was long pending: I captured the screenshots a few months back but never found the time to publish them.

In this blog, I am starting with Kubernetes. Over the past year I have worked on NSX-T (NCP) integrations with VMware Tanzu solutions, which require Kubernetes knowledge for deployment, troubleshooting, and so on.

To get started, I am documenting the steps to set up Kubernetes with Calico on my laptop, on top of Workstation VMs running Ubuntu 18.04.

INTRODUCTION: WHAT IS KUBERNETES

Kubernetes is a free, open-source container orchestration system. It provides a platform for automating deployment, scaling, and operations of application containers across clusters of hosts, and gives you the freedom to use on-premises, hybrid, or public cloud infrastructure, freeing organizations from tedious deployment tasks.

Kubernetes was originally designed by Google and is maintained by the Cloud Native Computing Foundation (CNCF). It has quickly become the standard for deploying and managing software in the cloud. Kubernetes follows a master-worker architecture, where a master provides centralized control over all the agent nodes. A Kubernetes cluster is built from several components, including etcd, kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy, a container runtime such as Docker, a CNI plugin such as Calico or Flannel, and much more.

PREREQUISITES

  • Two new Workstation machines with Ubuntu 18.04 installed
  • A static IP address: 192.168.0.103 configured on the first instance (master) and 192.168.0.104 on the second instance (worker).
  • Minimum 2 GB RAM per instance.
  • A root password set up on each instance.

UPDATE LINUX

It’s always good to start with the latest updates, so update your Ubuntu packages first.

sudo apt-get update -y

CONFIGURING YOUR NODES

Before starting, you will need to configure the hosts file and hostname on each server so that the servers can reach each other by hostname.

First, open /etc/hosts file on the first server:

nano /etc/hosts

Add the following lines:

192.168.0.103 master-node

192.168.0.104 worker-node

Save and close the file when you are finished, then set the hostname by running the following command:

hostnamectl set-hostname master-node

Next, open /etc/hosts file on second server:

nano /etc/hosts

Add the following lines:

192.168.0.103 master-node

192.168.0.104 worker-node

Save and close the file when you are finished, then set the hostname by running the following command:

hostnamectl set-hostname worker-node
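Before moving on, it’s worth verifying that the hostname change took effect and that the nodes can reach each other by name. A quick check (using the hostnames configured above) could look like this:

# Confirm the new hostname is active
hostnamectl status

# From the master, confirm the worker resolves and responds (and the reverse from the worker)
ping -c 2 worker-node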

DISABLE SWAP

Next, you will need to disable swap on each server, because the kubelet does not support swap memory and will not run while swap is active or even present in your /etc/fstab file.

You can disable swap memory usage with the following command:

swapoff -a

You can disable this permanently by commenting out the swap line in /etc/fstab:

nano /etc/fstab

Comment out the swap line as shown below:

# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
# / was on /dev/sda4 during installation
UUID=6f612675-026a-4d52-9d02-547030ff8a7e /               ext4    errors=remount-ro 0       1
# swap was on /dev/sda6 during installation
#UUID=46ee415b-4afa-4134-9821-c4e4c275e264 none            swap    sw              0       0
/dev/sda5 /Data               ext4   defaults  0 0

Save and close the file, when you are finished.

Alternatively, run the following command to comment out the swap line, then reboot:

sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
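Either way, you can confirm afterwards that swap is really off; swapon should print nothing and free should report 0B of swap:

# Both of these should show no active swap
swapon --show
free -h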

INSTALL DOCKER

First, install required packages to add Docker repository with the following command:

apt-get install apt-transport-https ca-certificates curl software-properties-common -y

Next, download and add Docker’s GPG key with the following command:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

Next, add Docker repository with the following command:

sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

Next, update the repository and install Docker with the following command:

apt-get update -y

apt-get install docker-ce -y

# Set up the Docker daemon
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

# Create /etc/systemd/system/docker.service.d

sudo mkdir -p /etc/systemd/system/docker.service.d

# Restart Docker

sudo systemctl daemon-reload

sudo systemctl restart docker

If you want the docker service to start on boot, run the following command:

sudo systemctl enable docker
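Since the kubelet and Docker must agree on the cgroup driver, it is worth confirming that Docker picked up the systemd setting from the daemon.json above:

sudo docker info | grep -i cgroup

# Expected output: Cgroup Driver: systemd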

INSTALL KUBERNETES

Next, you will need to install kubeadm, kubectl, and kubelet on both servers.

First, download and add the Google Cloud GPG key with the following command:

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

Next, add Kubernetes repository with the following command:

echo 'deb https://apt.kubernetes.io/ kubernetes-xenial main' | sudo tee /etc/apt/sources.list.d/kubernetes.list

Finally, update the repository with the following command:

sudo apt-get update -y

CHECK PACKAGE LIST

apt-cache policy kubelet | head -n 20

apt-cache policy docker.io | head -n 20

sudo apt-get install -y kubelet kubeadm kubectl

sudo apt-mark hold kubelet kubeadm kubectl  ## hold these packages so apt does not upgrade them

If you hit a Docker socket permission error when running docker commands as a non-root user, it can be worked around with the command below:

sudo chmod 666 /var/run/docker.sock
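Note that this chmod is reset whenever the Docker socket is recreated. A more durable alternative, if you prefer, is to add your user to the docker group and start a new login session:

sudo usermod -aG docker $USER

# Log out and back in (or run 'newgrp docker') for the group change to take effect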

CONFIGURING MASTER NODE

All the required packages are now installed on both servers, so it’s time to configure the Kubernetes Master Node.

First, initialize your cluster using the master’s private IP address with the following command. (192.168.0.0/16 is Calico’s default pod CIDR; since my lab nodes also use 192.168.0.x addresses, a non-overlapping range would be cleaner, but I kept the default here.)

kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address=192.168.0.103

You should see the following output:

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join --token 62b281.f819128770e900a3 192.168.0.103:6443 --discovery-token-ca-cert-hash sha256:68ce767b188860676e6952fdeddd4e9fd45ab141a3d6d50c02505fa0d4d44686

Note: Note down the kubeadm join command (including the token) from the above output. It will be used to join the Worker Node to the Master Node in a later step.
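If you lose the token later (or it expires; the default lifetime is 24 hours), there is no need to re-initialize the cluster. A fresh join command can be printed on the master at any time:

kubeadm token create --print-join-command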

Next, run the following commands as a regular user to configure the kubectl tool:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Next, check the status of the Master Node by running the following command:

kubectl get nodes

You should see the following output:

NAME          STATUS     ROLES     AGE       VERSION
master-node   NotReady   master    14m       v1.9.4

In the above output, the Master Node is listed as NotReady. This is because the cluster does not yet have a Container Network Interface (CNI) plugin installed.

Let’s deploy the Calico CNI with the following commands:

curl https://docs.projectcalico.org/manifests/calico.yaml -O

kubectl apply -f calico.yaml

Make sure Calico was deployed correctly by running the following command:

kubectl get pods --all-namespaces

All the pods, including the Calico ones, should show a Running status. Now run the kubectl get nodes command again, and you should see the Master Node listed as Ready.

kubectl get nodes

ADD WORKER NODE to the CLUSTER

Next, you will need to log in to the Worker Node and add it to the cluster. Take the join command from the Master Node initialization output and issue it on the Worker Node as shown below:

kubeadm join --token 62b281.f819128770e900a3 192.168.0.103:6443 --discovery-token-ca-cert-hash sha256:68ce767b188860676e6952fdeddd4e9fd45ab141a3d6d50c02505fa0d4d44686

Once the Node is joined successfully, you should see the following output:

[discovery] Trying to connect to API Server "192.168.0.103:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.0.103:6443"
[discovery] Requesting info from "https://192.168.0.103:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.0.103:6443"
[discovery] Successfully established connection with API Server "192.168.0.103:6443"

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

Now, check the node status from the master using kubectl get nodes; both nodes should be listed as Ready.

With the above steps, we have set up a small test Kubernetes cluster with one Master and one Worker node, and Calico as the CNI, for some testing.
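As a quick smoke test of the new cluster, you can deploy a throwaway nginx workload (the deployment and service names below are arbitrary) and check that the pod comes up with a Calico-assigned IP and is reachable through its NodePort:

kubectl create deployment nginx --image=nginx

kubectl expose deployment nginx --port=80 --type=NodePort

# The pod should be Running with an IP from the pod CIDR
kubectl get pods -o wide

# Note the NodePort, then browse to http://<any-node-ip>:<nodeport>
kubectl get svc nginx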

Hope this helps you in getting started with Kubernetes as well.

This is just the start of the Kubernetes journey.

#Stay Safe everyone.

Network, It’s Time to Modernize!


The network is a critical component of any IT environment. When it works, it’s “normal” and few notice it. But the smallest glitch can have devastating business impacts. For over a decade, networking has been adapting to become more programmable, closer to applications, and easier to use.


VMware Social Media Advocacy

NSX-T Service Interface or Centralized Service Port for vRealize Automation Load Balancing

A Service Interface, previously known as a CSP (Centralized Service Port), connecting to VLAN or overlay segments can be used to provide load balancer functions. It is connected to a standalone Tier-1 Gateway that has only a Service Router (SR) function and no Distributed Router (DR) function.

The Service Router can be deployed on a single NSX Edge node or on two NSX Edge nodes in Active-Standby mode.

A standalone tier-1 logical router:

  • Must not have a connection to a tier-0 logical router.
  • Must not have a downlink.
  • Can have only one service router or centralized service port (CSP) if it is used to attach a load balancer (LB) service.
  • Can connect to an overlay logical switch or a VLAN logical switch.

The standalone Tier-1 service router is connected to an overlay or VLAN logical switch and can communicate with other devices through the regular Tier-1 gateway or the existing VLAN network, using static route configuration and advertisement.

In this scenario, we will deploy a standalone Tier-1 gateway and configure a service router for load balancing to be used by the vRA components. As the vRA components sit primarily on VLAN networks, the service router will be connected to a VLAN logical switch in a one-arm load balancer configuration.

I am doing all the configuration through Advanced Networking and Security; the same configuration can be done through the Simplified UI.

First, deploy a new standalone Tier-1 router.

Create a VLAN logical switch and provide the VLAN ID. In this example, VLAN 10.

Go back to the Tier-1 router, go to Configuration, and add a Router Port as a Centralized Service Port, connecting it to the LB-LS logical switch created earlier.

Then, under Subnets, add an IP, which will be the interface IP address.

Then, add a static route with the next hop set to the gateway address of the VLAN subnet on which the vRA appliances reside and the interface IP was created, attaching it to the same service router port created earlier.

After the routes and interface IP are set, the next steps will depend on your NSX Edge VLAN Transport Zone design.

At a basic level, the NSX Edge VM’s FP-ETH1 and FP-ETH2 interfaces are connected to one or two N-VDS switches hosting the VLAN logical switch. These Edge VM interfaces connect to a Distributed Switch port group; in a 2-pNIC design where the underlying ESXi hosts themselves run an N-VDS, a VLAN logical switch needs to be trunked instead.

The trunking is done on the DVS port group because the VLAN tag is applied on the logical switch created earlier; this places the service router in the same network as the vRA appliances so it can load balance traffic between them.

Thus, connect the NSX Edge interface to a Distributed Switch port group configured as a trunk (VLAN 0-4094).

Add Profiles

An application profile must be created to define the behavior of a particular type of network traffic. For NSX-T, two application profiles need to be created to:

  1. Redirect HTTP to HTTPS
  2. Handle HTTPS traffic

After configuring an application profile, associate it with a virtual server. The virtual server then processes traffic according to the options specified in the application profile.

Configure the Application Profile for HTTP requests

  • Go to Load Balancing -> Profiles -> Application Profiles
  • Click the Add icon and choose HTTP Profile.
  • Choose a Name for the profile and enter parameters (please refer to the example below)

Configure the Application Profile for HTTPS requests

  • Go to Load Balancing → Profiles → Application Profiles
  • Click the Add icon and choose Fast TCP Profile.
  • Choose a Name for the profile and enter parameters (please refer to the example below)

Configure Persistence Profile

  • Go to Load Balancing → Profiles → Persistent Profiles
  • Click the Add icon and select Source IP Persistence
  • Choose a Name for the profile and enter parameters (please refer to the example below)

Add Active Health Monitor

Configuring active health monitoring is like creating health checks on other load-balancers. When you associate an active health monitor with a pool, the pool members are monitored according to the active health monitor parameters.

  • Go to Load Balancing → Monitors → Active Health Monitors
  • Click the Add icon
  • Choose a Name for the active health monitor and enter Monitor Properties (please refer to the example below)

Note: LbHttpsMonitor is a pre-configured monitor for the HTTPS protocol and can be used as a starting point for this Active Health Monitor.

  • Configure Health check parameters with the following values:
    • Health Check Protocol: HTTPS
    • Request Method: GET
    • Request URL: (see table below)
    • Request Version: HTTP_VERSION_1_1
    • Response Status Codes (see table below)
    • Response Body (see table below)
    • Ciphers: High Security
    • TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
    • Protocols: TLS_V1_2
    • Server Auth: IGNORE
    • Certificate Chain Depth: 3
NAME                 TYPE   INTERVAL  RETRIES  TIMEOUT  URL                        RESPONSE CODE  RESPONSE BODY
vra_https_va_web     HTTPS  3         3        10       /vcac/services/api/health  200,204
vra_https_iaas_web   HTTPS  3         3        10       /wapi/api/status/web                      REGISTERED
vra_https_iaas_mgr   HTTPS  3         3        10       /VMPSProvision                            ProvisionService
vro_https_8283       HTTPS  3         3        10       /vco-controlcenter/docs/   200

Here’s an example of vra_https_va_web Health Monitor configuration:
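Before relying on the monitor, you can sanity-check the URL it will probe directly with curl from any machine that can reach a vRA appliance (the hostname below is a placeholder for your first vRA VA):

curl -k -s -o /dev/null -w "%{http_code}\n" https://vra-va1.example.local/vcac/services/api/health

# Expect 200 or 204, matching the monitor's Response Status Codes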

Add Server Pools

NSX-T Server Pools contain the nodes that receive traffic. You will need to create one pool per vRA service, with the corresponding vRA nodes as members (see the table further below).

  • Go to Load Balancing → Server Pools
  • Click the Add icon
  • Choose a Name for the pool.
  • Set Load Balancing Algorithm as LEAST_CONNECTION
  • Configure SNAT Translation as Auto Map
  • Add the Pool Members (vRA nodes IP addresses and Port)
  • Name
  • IP Address
  • Weight: 1
  • Port: 443
  • State: ENABLED
  • Attach an Active Health Monitor to the pool (please refer to the example below)
POOL NAME                 ALGORITHM          MEMBER NAME  IP ADDRESS  PORT  MONITOR
pool_vra-va-web_80        Least connections  vra_va1      IP          80    nsx-default-http-monitor
                                             vra_va2      IP          80
pool_vra-va-web_443       Least connections  vra_va1      IP          443   vra_https_va_web
                                             vra_va2      IP          443
*pool_vra-rconsole_8444   Least connections  vra_va1      IP          8444  vra_https_va_web
                                             vra_va2      IP          8444
pool_vro-cc_8283          Least connections  vra_va1      IP          8283  vro_https_8283
                                             vra_va2      IP          8283
pool_iaas-web_443         Least connections  vra_web1     IP          443   vra_https_iaas_web
                                             vra_web2     IP          443
pool_iaas-manager_443**   Least connections  vra_ms1      IP          443   vra_https_iaas_mgr
                                             vra_ms2      IP          443

* Port 8444 is optional – it is required only if you want to use remote console from vRealize Automation.

** The Manager Service uses an active-passive configuration, hence the load balancer will always send traffic to the currently active node regardless of the load balancing method.

Here’s an example of pool_vra-va-web_443 Server Pool configuration:

Add Virtual Servers

  • Go to Load Balancing → Virtual Servers
  • Click the Add icon
  • Choose a Name for Virtual Server
  • Configure Application Type as Layer 7
  • Assign appropriate Application Profile (please refer to the example below)
  • Assign IP Address (Virtual IP) and Port
  • Add Default Pool Member Port
  • Choose the Server Pool configured
  • Assign appropriate Persistent Profile (please refer to the example below)

Note: The HTTP virtual server used only for the HTTP-to-HTTPS redirect does not strictly need a server pool configured.

NAME                      TYPE     PROFILE            IP ADDR  PORT  SERVER POOL             PERSISTENCE PROFILE
vs_vra-va-web_80          Layer 7  vRA_HTTP_to_HTTPS  IP       80    pool_vra-va-web_80      None
vs_vra-va-web_443         Layer 4  vRA_HTTPS          IP       443   pool_vra-va-web_443     source_addr_vra
vs_iaas-web_443           Layer 4  vRA_HTTPS          IP       443   pool_iaas-web_443       source_addr_vra
vs_iaas-manager_443       Layer 4  vRA_HTTPS          IP       443   pool_iaas-manager_443   None
*vs_vra-va-rconsole_8444  Layer 4  vRA_HTTPS          IP       8444  pool_vra-rconsole_8444  source_addr_vra
vs_vro-cc_8283            Layer 4  vRA_HTTPS          IP       8283  pool_vro-cc_8283        source_addr_vra

* Port 8444 is optional – it is required only if you want to use remote console from vRealize Automation.

Configure Load Balancer

Finally, you need to create the load balancer service itself and attach the configuration created above to it.

  • Go to Load Balancing → Load Balancers
  • Click the Add icon
  • Choose a Name, select appropriate Load Balancer Size (depends on vRA cluster size) and Error Log Level and press OK
  • Attach the previously created Tier 1 Logical Router to the newly created Load Balancer (Overview → Attachment → EDIT)
  • Attach the previously created Virtual Servers to the Load Balancer (Virtual Servers → ATTACH)

Hope this helps in configuring a load balancer for vRA components using NSX-T.

Setup an Ubuntu VM as SFTP Server for NSX-T backup

In this blog, I will be setting up an Ubuntu virtual machine as an SFTP server for NSX-T configuration backups.

  • Set up an Ubuntu Server; I used the release below

animeshd@sftp:~$ lsb_release -a

Distributor ID: Ubuntu
Description: Ubuntu 18.04 LTS
Release: 18.04
Codename: bionic

  • Installed VMware Tools on the Ubuntu machine.
  • Install the latest updates on the Ubuntu machine using: sudo apt-get update (assuming internet access is available)
  • Next, install an OpenSSH server using: sudo apt install openssh-server

Check the status of the ssh service (sudo systemctl status ssh); it should be running.

[screenshot: sftp1]

Next, SSH to the server using PuTTY and take a backup of the /etc/ssh/sshd_config file.

In the current example, I took a backup of the file in the /tmp directory as /tmp/sshd_backup.

[screenshot: sftp2]

As the original file is owned by root, open /etc/ssh/sshd_config for editing with sudo. Use an editor of your choice on the system; I used nano.

  • Edit the ListenAddress directive and add the IP of the local machine.

[screenshot: sftp3]

  • Then change X11Forwarding to no (from ‘yes’), and add the overriding settings as per the screenshot below.

[screenshot: sftp4]

Here’s what each of those directives does (a full sketch of the block follows this list):

  • Match User tells the SSH server to apply the following commands only to the user specified.
  • ForceCommand internal-sftp forces the SSH server to run the SFTP server upon login.
  • PasswordAuthentication yes allows password authentication for this user.
  • ChrootDirectory /var/nsxtsftp/ ensures that the user will not be allowed access to anything beyond the /var/nsxtsftp directory.
  • AllowAgentForwarding no, AllowTcpForwarding no, and X11Forwarding no disable port forwarding, tunneling, and X11 forwarding for this user.
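Putting these together, the override block at the end of /etc/ssh/sshd_config should look roughly like the sketch below (the user and directory names match the ones created in the following steps):

Match User nsxtbackupuser
    ForceCommand internal-sftp
    PasswordAuthentication yes
    ChrootDirectory /var/nsxtsftp
    AllowAgentForwarding no
    AllowTcpForwarding no
    X11Forwarding no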

Restart the ssh service on the machine: sudo systemctl restart ssh

  • Now, create the following directory and user on the SFTP Ubuntu machine.

Create a new user

  • sudo adduser --shell /bin/false nsxtbackupuser

Create a new directory

  • sudo mkdir -p /var/nsxtsftp/backups

Change the owner and permissions on the new directories. For the ChrootDirectory setting to work, sshd requires the chroot directory itself to be owned by root, while the backups subdirectory belongs to the backup user:

  • sudo chown root:root /var/nsxtsftp
  • sudo chown nsxtbackupuser:nsxtbackupuser /var/nsxtsftp/backups
  • sudo chmod 755 /var/nsxtsftp
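With the permissions in place and the ssh service restarted, a quick way to confirm the chroot behaves as expected is to log in as the backup user (replace <sftp-server-ip> with your server’s address):

sftp nsxtbackupuser@<sftp-server-ip>

# After login, 'pwd' should show / (the chroot root) and 'ls' should list only 'backups'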

Once this is done, go to the NSX-T UI; under System, edit and configure the backup to point to this backup server.

[screenshot: sftp5]

Then, perform a backup and view the result.

[screenshot: sftp6]

Backup files are being created.

[screenshot: sftp7]


NSX-T Part 10: Configure N-S Routing

In the previous part, we set up the T1 router and connected all the logical switches, with their gateways configured on it. In this part, now that the Edges are deployed, we will configure N-S routing so that VMs can reach the external network.

[screenshot: nsxt10-1]

We currently have just the T1 router; now we will start configuring the T0 router.

[screenshot: nsxt10-2]

[screenshot: nsxt10-3]

I have deployed it in Active-Standby mode, as I will be using this setup for a future deployment of PKS or Kubernetes.

[screenshot: nsxt10-4]

Next, I connected the T1 router to the T0 router.

[screenshot: nsxt10-5]

As seen below, the T1 router is now connected to the T0 router.

[screenshot: nsxt10-6]

Next is to connect the Edges upstream to the VLAN network. In the previous setup, we created the VLAN-TZ, and now we first add a VLAN-backed logical switch for the upstream connectivity. As the lab is in a nested environment, VLAN 0 does fine 🙂

[screenshot: nsxt10-7]

Quick summary of the T0 router below.

[screenshot: nsxt10-8]

Next is to connect the Edges upstream via the VLAN logical switch; for that we need to configure router ports on the T0 router, as in the screen below.

[screenshot: nsxt10-9]

Below is the configuration output from the VYOS router, which is used for both my NSX-V and NSX-T environments.

[screenshot: nsxt10-10]
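For reference, the VYOS side of such a BGP peering takes only a few set commands; a sketch (the ASNs and neighbor IPs below are placeholders, not the values from my lab) could look like this:

configure
set protocols bgp 65001 neighbor 10.10.10.2 remote-as 65002
set protocols bgp 65001 neighbor 10.10.10.3 remote-as 65002
commit
save
exit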

Created a new router port in the screen below, with an IP address on the same L2 network.

[screenshot: nsxt10-11]

Similarly, we configure two router ports, as we will be using BGP routing between the VYOS router and the Edges. We already know that on the standby Edge, NSX automatically prepends the AS path to make its route less preferred, so no changes are required on the upstream router.

[screenshot: nsxt10-12]

Below, we do the BGP configuration.

[screenshot: nsxt10-13]

[screenshots: nsxt10-14, nsxt10-15]

Similarly, we configure the routing for each Edge router port.

[screenshot: nsxt10-16]

Next is to advertise the T1 routes upstream, which are all the connected routes.

[screenshot: nsxt10-17]

[screenshot: nsxt10-18]

Quick recap of the logical networks connected to the T1.

[screenshot: nsxt10-19]

The next step is to validate the routes on the active Edge. First, we list the logical routers available.

[screenshot: nsxt10-20]

Log in to the specific T0 SR component (the SR is responsible for N-S routing).

[screenshot: nsxt10-21]

Check the routes; we see that the upstream and NSX-V environment routes are learned through the VYOS router.

[screenshot: nsxt10-22]
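For anyone reproducing this, the validation sequence on the Edge node CLI goes roughly as follows (the VRF ID comes from the get logical-routers output and will differ per setup):

get logical-routers

vrf 3

get route

get bgp neighbor summary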

Below is the neighbor summary of the VYOS router.

[screenshot: nsxt10-23]

This completes the NSX-T setup configuration. In the future, I am planning to upgrade this setup to the NSX-T 2.4.x release, as additional features are available in it.

Hope this 10-part series was helpful.

NSX-T Part9: Configure Edge Cluster

In this part, continuing with the Edge configuration, we will configure the Edge cluster. Before we create a new Edge cluster, an Edge cluster profile needs to be selected.

There is already a default profile available.

[screenshot: nsxt9-1]

However, I created a new Edge Cluster Profile as I do not want to use the default one.

[screenshot: nsxt9-2]

[screenshot: nsxt9-3]

Then I created a new Edge cluster and added both previously created Edges to it.

[screenshot: nsxt9-4]

[screenshot: nsxt9-5]

[screenshot: nsxt9-6]

[screenshot: nsxt9-7]

After that, we bind the Edge cluster profile to the Edge cluster.

[screenshot: nsxt9-8]

In the next part, I will configure the logical routing.