Update ESXi hostname in OCI DNS for Oracle Cloud VMware Solution

While working with Oracle Cloud VMware Solution (OCVS), I encountered a scenario where I needed to update the ESXi hostnames after the OCVS SDDC was deployed.

Updating an ESXi hostname is fairly easy, as we will see. However, because the ESXi servers are native bare metal instances within OCI Compute, it is also important that those hostnames are updated in OCI to maintain consistency.

By default, a private DNS zone is created within OCI for every subnet in the VCN. This is by design, based on the VCN resolver, and the format for the DNS domain names and zones is as follows:

  • VCN domain name: <VCN-DNS-label>.oraclevcn.com
  • Subnet domain name: <subnet-DNS-label>.<VCN-DNS-label>.oraclevcn.com
  • Instance FQDN: <hostname>.<subnet-DNS-label>.<VCN-DNS-label>.oraclevcn.com

You may assign a hostname to an instance. It is assigned to the VNIC that is automatically created during instance launch (that is, the primary VNIC). Together with the subnet domain name, the hostname forms the instance's fully qualified domain name (FQDN).

When we deploy the OCVS SDDC, it automatically creates a private DNS zone for the SDDC subnet it creates. All the ESXi instances are connected to this subnet, so DNS entries for the ESXi instances, based on the instance FQDN format above, are also created automatically.

Navigate to OCI > Networking > DNS Management > Zones.

Ensure you are in the right compartment.

Select Private Zones > the appropriate subnet domain name.

Example: sub01234567.demolab.oraclevcn.com

These entries are protected in nature and cannot be modified directly from the OCI DNS.

Also, once the ESXi hostname and FQDN change, the change needs to be reflected in vCenter, which requires removing the host from vCenter and re-adding it. I have therefore documented the steps required on OCVS and OCI to update the hostname.

  1. Put the ESXi host in maintenance mode in vCenter and NSX-T.
  2. Disconnect the ESXi host from vCenter.
  3. Remove the host from the vCenter inventory.
  4. Update the hostname on VMware ESXi (see the sketch below):
    1. esxcli system hostname set --host=<hostname>
    2. esxcli system hostname set --fqdn=<hostname>.sub01234567.demolab.oraclevcn.com
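As a minimal sketch of step 4, run from an SSH session on the host; the hostname esxi-02 is a placeholder used for illustration, and esxcli system hostname get simply prints the resulting values so you can confirm the change:

# Set the short hostname (placeholder value for illustration)
esxcli system hostname set --host=esxi-02

# Set the FQDN to match the OCI subnet domain
esxcli system hostname set --fqdn=esxi-02.sub01234567.demolab.oraclevcn.com

# Verify: prints the Host Name, Domain Name and Fully Qualified Domain Name
esxcli system hostname get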

5. Update the ESXi name in OCI SDDC

Navigate to OCI console > VMware Solution > Software Defined Data Center > Select your SDDC

Under Resources > Select ESXi hosts

Click the three-dot menu to the right of the ESXi host and select Edit ESXi host.

Enter the new hostname.

6. Update the ESXi BareMetal Instance name

Under OCI console > Compute > Instances, view the instance details for the ESXi compute instance and edit the instance name.

7. Update the vNIC hostname

Under the same instance, go to Attached VNICs and update the hostname.
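If you prefer the CLI over the console, steps 6 and 7 can also be scripted with the OCI CLI. This is a hedged sketch: the OCIDs and the name esxi-02 are placeholders you would substitute from your own tenancy.

# Step 6: update the display name of the ESXi bare metal instance
oci compute instance update \
  --instance-id ocid1.instance.oc1..example \
  --display-name esxi-02

# Step 7: update the hostname label on the instance's primary VNIC
oci network vnic update \
  --vnic-id ocid1.vnic.oc1..example \
  --hostname-label esxi-02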

The above steps complete the configuration changes required in OCI. If you go back and check the DNS records under DNS Management > Zones > the appropriate subnet domain, the records for the ESXi host should now show the updated FQDN.

8. Add the ESXi host back into the vCenter cluster

You will have to provide the credentials for the ESXi host; it's the same vCenter credential available under the SDDC details.

9. Add the ESXi host to the distributed switch (DSwitch)

10. Remove the host from maintenance mode. Check NSX-T; the host should have been automatically prepared within NSX-T.

11. Validate the configuration across vCenter and NSX-T.

Hope this blog is helpful!


Configure vSAN Encryption using vSphere Native Key Provider

Starting with vSphere 7.0 U2, vSphere customers can use the native key provider built into vCenter for VM and datastore encryption. Before this, customers depended on third-party key management solutions such as HyTrust.

In this blog we will talk about configuring datastore-level vSAN encryption on an existing vSAN cluster, which enables data-at-rest encryption. This is different from VM encryption and is slightly more complex.

Enabling data-at-rest encryption on a new vSAN cluster is easier than enabling it on an existing vSAN cluster, due to the virtual machines already on the cluster and automatic disk claiming (which must be set to manual).

Navigate to the vCenter/Configure tab. Select Key Providers under Security:

  • Click ADD, and add a Native Key Provider.
  • Give the Native Key Provider a name.
  • Once created, you must back up the Native Key Provider before it becomes active.
  • Back up the Native Key Provider with a password (recommended) and enable TPM if your hosts support it. Also ensure the backup is stored in a safe location.

Note: it is important to save the vSphere Native Key Provider backup in a safe location, because it will be required for a restore if you ever run into a disaster scenario.

  • For an existing vSAN cluster, migrate all the VMs off to other storage temporarily.
  • Disable vSphere HA.
  • Now, create a new vSAN cluster.
  • Configure vSAN services and enable data-at-rest encryption, using the key provider created earlier. Note that "Wipe residual data" will take some time; it can take up to 5-6 hours depending on the number of nodes in the cluster.
  • Claim the disks manually for the vSAN cluster (7 capacity / 1 cache per host) on the next screen.
  • Validate the configuration and click Finish.
  • Migrate the virtual machines back to the vSAN datastore and re-enable vSphere HA (a quick verification sketch follows below).
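Once encryption is enabled, a quick spot check can be done from any host's ESXi shell. This is only a sketch: in recent vSAN releases the esxcli vsan storage list output includes an Encryption field per claimed disk, though the exact fields can vary by version.

# List the disks claimed by vSAN on this host; each device should
# report "Encryption: true" once data-at-rest encryption is active
esxcli vsan storage list | grep -E "Display Name|Encryption"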

That completes the task of enabling vSAN encryption with the vSphere Native Key Provider on an existing vSAN cluster.


Add Custom HTTP Header to Oracle Cloud Load Balancer

An application load balancer on Oracle Cloud Infrastructure works at layer 7, so it supports both HTTP and HTTPS. It can distribute HTTP and HTTPS traffic based on host-based or path-based rules. It is context-aware and can forward and manipulate requests based on HTTP headers. It also has a configurable range of health check status codes, and custom request and response headers can be inserted based on the requirements of the backend servers.

Today, we will talk about adding a custom HTTP request or response header. Some customer requirements call for custom headers based on how the backend application is designed.

How Custom Headers work

Custom request and response headers allow you to specify additional headers that the load balancer adds to requests and responses. These rules let you pass metadata to your backend servers, so you can, for example, figure out which listener handled a request, record the geographic location of the client's IP address, or notify WebLogic that the load balancer terminated SSL.

Application Load balancer adds certain headers by default to all HTTP(S) requests and responses that it proxies between backends and clients. For more information, see https://docs.oracle.com/en-us/iaas/Content/Balance/Reference/httpheaders.htm

In this example, the customer wants to replicate their existing on-prem environment by adding a specific HTTP header, CLIENTIP, carrying the actual client IP taken from X-Forwarded-For or X-Real-IP.

Before we go through the steps, let's understand what a rule set is.

A rule set is a named collection of rules associated with a load balancer and applied to one or more of its listeners. You must first create the rule set containing the rules before you can apply it to a listener. Rules are objects that represent actions taken by a load balancer listener on traffic, and the rule set is part of the load balancer's configuration. When you create or edit a listener, you can specify the rule set to use. A rule set can contain several types of rules; here we use a request header rule.

Below are the steps to add a custom request header to an OCI load balancer:

  1. Log in to the OCI console – https://cloud.oracle.com/
  2. Navigate to –

Networking > Load Balancers > Select your load balancer and view details

3. Scroll down on the left-hand side:
  • Select Rule Sets > Create Rule Set > give the rule set a name > select Specify Request Header Rules and choose the action "Add Request Header".
  • Type in the header name you want and select the value as {X-Real-IP} or {X-Forwarded-For}.

See the screenshot below; note that I have chosen different header names to show different values.

  4. Save changes to save the rule set.
  5. Select Listeners on the same page –

Edit the listener > scroll down to Rule Sets and attach the rule set created in steps 3 and 4 to the listener. This applies the rule set to the load balancer listener. If you prefer the CLI, a sketch follows below.
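For automation, the same rule set can be created with the OCI CLI. The sketch below is hedged: the load balancer OCID, rule set name, and file name are placeholders, and the rule shape follows the load balancer API's ADD_HTTP_REQUEST_HEADER action.

# Define the rule that inserts the CLIENTIP request header (placeholder names)
cat > rules.json <<'EOF'
[
  {
    "action": "ADD_HTTP_REQUEST_HEADER",
    "header": "CLIENTIP",
    "value": "{X-Forwarded-For}"
  }
]
EOF

# Create the rule set on the load balancer; attach it to a listener afterwards
oci lb rule-set create \
  --load-balancer-id ocid1.loadbalancer.oc1..example \
  --name client-ip-headers \
  --items file://rules.json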

Now the configuration is complete. Let's check from the backend server instance, where we can see the inserted custom header arriving with the actual client IP used to test the load balancer. We used the following tcpdump command to check:

tcpdump -Xx -s 0 -i <INTERFACE> port <PORT_NUM> | grep <Filter> -A 2 -B 2
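As a concrete example, assuming the backend listens on port 80 and the instance's interface is ens3 (both assumptions; substitute your own values):

# Watch for the inserted header in plain-HTTP traffic on port 80
sudo tcpdump -Xx -s 0 -i ens3 port 80 | grep -i clientip -A 2 -B 2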

As we can see in the screenshots above, new custom headers CLIENTIP and CLIENTip, carrying the X-Forwarded-For and X-Real-IP values (the actual client IP address), are passed to the backend server.

Hope this information was helpful.

I would also like to thank my colleague Piyush Jalan (https://www.linkedin.com/in/piyush-jalan/) for his contribution to this blog.

Micro-segmentation and Beyond with NSX Firewall


VMware-based workload environments are the norm in private clouds for enterprise-class customers. 100%[1] of Fortune 500 companies deploy vSphere/ESXi. Further, ~99% of Fortune 1000 and ~98%[2] of Forbes Global 2000 companies deploy vSphere/ESXi. VMware’s deep presence in enterprise […]


VMware Social Media Advocacy

Announcing Availability of vSphere 7 Update 3c


In November, we took the unprecedented step to retract the ESXi 7 Update 3 release from the market. This was to protect our customers from some potential failures as they upgrade to ESXi 7 Update 3 and our desire to minimize customer exposure to them. We’re pleased to announce that we have […]


VMware Social Media Advocacy

Install Kubernetes on VMware Workstation

It's been quite some time since I published a blog; the pandemic and WFH have kept things busy. After a while, I am trying to get my learnings published through this blog again. This was a long-pending one: I captured the screenshots a few months back but never got time to publish.

In this blog, I am starting with Kubernetes. Over the past year I have worked on NSX-T (NCP) integrations with VMware Tanzu solutions, which requires Kubernetes knowledge for deployment, troubleshooting, and so on.

I am documenting the steps to set up Kubernetes with Calico on my laptop, on top of Workstation VMs running Ubuntu 18.04, to get started.

INTRODUCTION: WHAT IS KUBERNETES

Kubernetes is a free, open-source container orchestration system. It provides a platform for automating deployment, scaling, and operation of application containers across clusters of hosts. Kubernetes gives you the freedom to take advantage of on-premises, hybrid, or public cloud infrastructure, freeing organizations from tedious deployment tasks.

Kubernetes was originally designed by Google and is now maintained by the Cloud Native Computing Foundation (CNCF). It is quickly becoming the new standard for deploying and managing software in the cloud. Kubernetes follows a master-worker architecture, in which a master provides centralized control for all agents. A cluster involves several components, including etcd, kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy, a container runtime such as Docker, a network plugin such as flannel, and much more.

PREREQUISITES

  • Two new Workstation VMs with Ubuntu 18.04 installed.
  • A static IP address configured on each: 192.168.0.103 on the first instance (master) and 192.168.0.104 on the second (worker).
  • A minimum of 2 GB RAM per instance.
  • A root password set up on each instance.

UPDATE LINUX

It's always good to start with the latest updates, so update your Ubuntu packages:

sudo apt-get update -y

CONFIGURING YOUR NODES

Before starting, you will need to configure the hosts file and hostname on each server so the servers can reach each other by hostname.

First, open /etc/hosts file on the first server:

nano /etc/hosts

Add the following lines:

192.168.0.103 master-node

192.168.0.104 worker-node

Save and close the file when you are finished, then setup hostname by running the following command:

hostnamectl set-hostname master-node

Next, open /etc/hosts file on second server:

nano /etc/hosts

Add the following lines:

192.168.0.103 master-node

192.168.0.104 worker-node

Save and close the file when you are finished, then setup hostname by running the following command:

hostnamectl set-hostname worker-node

DISABLE SWAP

Next, you will need to disable swap memory on each server, because the kubelet does not support swap and will not work if swap is active or even present in your /etc/fstab file.

You can disable swap memory usage with the following command:

swapoff -a

You can disable this permanently by commenting out the swap entry in /etc/fstab:

nano /etc/fstab

Comment out the swap line as shown below:

# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
# / was on /dev/sda4 during installation
UUID=6f612675-026a-4d52-9d02-547030ff8a7e /               ext4    errors=remount-ro 0       1
# swap was on /dev/sda6 during installation
#UUID=46ee415b-4afa-4134-9821-c4e4c275e264 none            swap    sw              0       0
/dev/sda5 /Data               ext4   defaults  0 0

Save and close the file, when you are finished.

Alternatively, run the following command to comment out the swap entry, then reboot:

sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
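To confirm swap is fully off after either change (a quick check, not part of the original steps), the swap totals reported by free should read zero:

sudo swapoff -a
free -h | grep -i swap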

INSTALL DOCKER

First, install required packages to add Docker repository with the following command:

apt-get install apt-transport-https ca-certificates curl software-properties-common -y

Next, download and add Docker’s GPG key with the following command:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key --keyring /etc/apt/trusted.gpg.d/docker.gpg add -

or simply:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

Next, add Docker repository with the following command:

sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

Next, update the repository and install Docker with the following command:

apt-get update -y

apt-get install docker-ce -y

# Set up the Docker daemon
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

# Create /etc/systemd/system/docker.service.d

sudo mkdir -p /etc/systemd/system/docker.service.d

# Restart Docker

sudo systemctl daemon-reload

sudo systemctl restart docker

If you want the docker service to start on boot, run the following command:

sudo systemctl enable docker

INSTALL KUBERNETES

Next, you will need to install kubeadm, kubectl, and kubelet on both servers.

First, download and add the Kubernetes GPG key with the following command:

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

Next, add Kubernetes repository with the following command:

echo 'deb https://apt.kubernetes.io/ kubernetes-xenial main' | sudo tee /etc/apt/sources.list.d/kubernetes.list

or equivalently:

sudo bash -c 'cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF'

Finally, update the repository:

sudo apt-get update -y

CHECK PACKAGE LIST

apt-cache policy kubelet | head -n 20

apt-cache policy docker.io | head -n 20

sudo apt-get install -y docker.io kubelet kubeadm kubectl

sudo apt-mark hold docker.io kubelet kubeadm kubectl  ## prevent apt from upgrading these packages

If you encounter a Docker permission error while starting Docker, it can be resolved with the command below:

sudo chmod 666 /var/run/docker.sock
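Note that chmod 666 on the Docker socket is a quick but permissive fix. A more common alternative, assuming you run commands as a non-root user, is to add that user to the docker group:

# Allow the current user to talk to the Docker daemon without sudo
sudo usermod -aG docker $USER

# Apply the new group membership in the current shell (or log out and back in)
newgrp docker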

CONFIGURING MASTER NODE

All the required packages are installed on both servers. Now, it’s time to configure Kubernetes Master Node.

First, initialize your cluster using its private IP address with the following command:

kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address=192.168.0.103

You should see the following output:

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

kubeadm join --token 62b281.f819128770e900a3 192.168.0.103:6443 --discovery-token-ca-cert-hash sha256:68ce767b188860676e6952fdeddd4e9fd45ab141a3d6d50c02505fa0d4d44686

Note: note down the join command (including the token) from the output above. It will be used to join the worker node to the master node in a later step.
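If you lose the join command or the token expires (bootstrap tokens are valid for 24 hours by default), a fresh one can be generated on the master:

# Print a new ready-to-use join command, including a fresh token
kubeadm token create --print-join-command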

Next, you will need to run the following command to configure kubectl tool:

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

Next, check the status of the Master Node by running the following command:

kubectl get nodes

You should see the following output:

NAME          STATUS     ROLES     AGE       VERSION
master-node   NotReady   master    14m       v1.9.4

In the above output, the master node is listed as NotReady because the cluster does not yet have a Container Network Interface (CNI) plugin installed.

Let's deploy the Calico CNI for the cluster with the following commands:

curl https://docs.projectcalico.org/manifests/calico.yaml -O

kubectl apply -f calico.yaml

Make sure Calico was deployed correctly by running the following command:

kubectl get pods --all-namespaces

All the pods, including the Calico ones, should show a Running status.

Now, run the kubectl get nodes command again, and you should see that the master node is listed as Ready.

kubectl get nodes

ADD WORKER NODE TO THE CLUSTER

Next, log in to the worker node and add it to the cluster. Recall the join command from the output of the master node initialization and issue it on the worker node as shown below:

kubeadm join --token 62b281.f819128770e900a3 192.168.0.103:6443 --discovery-token-ca-cert-hash sha256:68ce767b188860676e6952fdeddd4e9fd45ab141a3d6d50c02505fa0d4d44686

Once the Node is joined successfully, you should see the following output:

[discovery] Trying to connect to API Server "192.168.0.103:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.0.104:6443"
[discovery] Requesting info from "https://192.168.0.104:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.0.104:6443"
[discovery] Successfully established connection with API Server "192.168.0.103:6443"

This node has joined the cluster:
* Certificate signing request was sent to master and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

Now, check the node status using kubectl get nodes; an illustrative result is shown below.
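Assuming both nodes joined successfully, the output should look roughly like this (illustrative only; ages and versions will differ):

NAME          STATUS    ROLES     AGE       VERSION
master-node   Ready     master    30m       v1.9.4
worker-node   Ready     <none>    2m        v1.9.4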

The above steps set up a small test Kubernetes cluster, with one master node, one worker node, and Calico as the CNI, for some testing.

Hope this helps you as well in starting with Kubernetes.

This is just a start on the kubernetes journey.

#Stay Safe everyone.