How to map vmdk files to Disk number or device name inside guest OS

Firstly, I would like to thank my colleague Akshay Kalia (https://in.linkedin.com/in/kaliaakshay) for sharing this information with us.

In most cases you can obtain this mapping by following a few simple steps.

Step 1:

Find out the PCI slot IDs of the SCSI controllers on the VM and make a note of them. You will need them in Step 2.

The PCI slot IDs of the SCSI controllers can be obtained by running a simple command against the .vmx file of the VM:

#cat /vmfs/volumes/<data store name>/vmname/vmname.vmx | grep scsi | grep pci

The above command will generate an output similar to:

scsi0.pciSlotNumber = "160"

scsi1.pciSlotNumber = "192"
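
If you were not sure which datastore path holds the VM's .vmx file in the first place, you can list the registered VMs and their configuration file paths from the ESXi shell (a quick helper; vmname here is a placeholder for your VM's display name):

#vim-cmd vmsvc/getallvms | grep -i vmname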

Step 2:

Find the disk information within the guest OS. The steps to obtain this information depend on the guest OS in use.

Linux:

On a Linux machine, run the following command for the device you want to map:

udevadm info --query=all -n /dev/<device name> | grep DEVPATH

Let's say we want to map /dev/sda; the final command would look like:

udevadm info --query=all -n /dev/sda | grep DEVPATH

The above command will generate an output similar to:

DEVPATH=/devices/pci0000:00/0000:00:15.0/0000:03:00.0/host2/target2:0:1/2:0:1:0/block/sda

In the above output, 0000:03:00.0 is the address of the controller to which /dev/sda is attached, and the trailing 1 in target2:0:1 is the target ID.

Hence, /dev/sda is target 1 on the SCSI controller present at address 0000:03:00.0.

Now, to find the relation between the controller address and the PCI slot number, run the following command on the Linux machine for each PCI slot ID obtained in Step 1.

cat /sys/bus/pci/slots/160/address

The output of the above command is the address of the controller and will look like:

0000:03:00
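
If the VM has more than one SCSI controller, a quick loop over the slot IDs noted in Step 1 (160 and 192 in this example) prints all the controller addresses at once:

for slot in 160 192; do echo -n "slot $slot -> "; cat /sys/bus/pci/slots/$slot/address; done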

What we know so far:

  • /dev/sda is target 1 on the SCSI controller present at address 0000:03:00.0
  • The SCSI controller present at address 0000:03:00.0 is scsi0
  • From the above information we can conclude that /dev/sda is target 1 on scsi0, which is nothing but scsi0:1
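
To tie the two lookups together, here is a minimal sketch, assuming the DEVPATH layout shown above (controller address as the fifth path component) and that each controller's slot is exposed under /sys/bus/pci/slots. It is illustrative only; adjust the parsing if your device sits behind a different PCI topology.

DEV=sda                                                                  # guest device to map; change as needed
DEVPATH=$(udevadm info --query=all -n /dev/$DEV | grep DEVPATH)
CTRL_ADDR=$(echo "$DEVPATH" | awk -F/ '{print $5}')                      # e.g. 0000:03:00.0
TARGET=$(echo "$DEVPATH" | awk -F/ '{print $7}' | awk -F: '{print $3}')  # e.g. 1 from target2:0:1
for slot in /sys/bus/pci/slots/*; do
  # each slot's address file holds the controller address without the function suffix
  if [ "$(cat $slot/address)" = "${CTRL_ADDR%.*}" ]; then
    echo "/dev/$DEV is target $TARGET on the controller in PCI slot $(basename $slot)"
  fi
done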

Windows:

On a Windows machine, open "Disk Management". This can be done as follows:

Start > run > diskmgmt.msc

Right-click on the disk number and select Properties. Let's say we do this for Disk 0; it will open the device properties page for that disk.

On that page, the "Location:" field provides the following information:

PCI Slot ID: 160 (Location 160)

Target ID: 0 (Target Id 0)

Partition: 0 (LUN 0)

What we know so far:

  • Disk 0 is target 0 on the SCSI controller present at PCI slot ID 160
  • The SCSI controller present at PCI slot ID 160 is scsi0
  • From the above information we can conclude that Disk 0 is target 0 on scsi0, which is nothing but scsi0:0

Note: For Windows systems, in some corner cases the location information can be off. Please verify the disk size as well.

Step 3:

Find the vmdk files and the NAA ID of the datastore. Once you have found the SCSI ID of the guest OS disk, follow the steps below to obtain the vmdk file and NAA ID information.

To find the vmdk files associated with the VM, run the following command:

#cat /vmfs/volumes/<data store name>/vmname/vmname.vmx | grep -i vmdk

The above command will generate an output similar to:

scsi0:0.fileName = "vmname.vmdk"

scsi0:1.fileName = "vmname_1.vmdk"

scsi0:2.fileName = "/vmfs/volumes/UUID/vmname_2.vmdk"

From the above output we see that the VM has disks on two datastores: scsi0:0 (vmname.vmdk) and scsi0:1 (vmname_1.vmdk) exist in the VM's home directory, while scsi0:2 (/vmfs/volumes/UUID/vmname_2.vmdk) exists on a separate datastore.

Use the information obtained in Steps 1 and 2 to map each vmdk to an in-guest disk number. In this case, Disk 0 on the Windows VM is vmname.vmdk.
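
As a convenience, you can also pull both the per-controller slot numbers and the per-disk file names from the vmx in one shot. This is just a combination of the greps from Steps 1 and 3 and assumes the same vmx layout shown above:

#cat /vmfs/volumes/<data store name>/vmname/vmname.vmx | grep -E 'scsi[0-9]+(\.pciSlotNumber|:[0-9]+\.fileName)'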

To find the NAA ID of the datastore associated with vmname.vmdk, run the following command:

esxcfg-scsidevs -m | grep <data store name>

To find the NAA ID of the datastore associated with vmname_2.vmdk, run the following command:

esxcfg-scsidevs -m | grep 4ce381e2-8a5b2a05-b0a7-18a90571b0ec
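
If you prefer not to copy the datastore UUID by hand, a small sketch like the one below extracts the volume UUID from the vmdk path and feeds it to esxcfg-scsidevs. The path is the illustrative one from the scsi0:2 entry above; substitute your own:

VMDK="/vmfs/volumes/4ce381e2-8a5b2a05-b0a7-18a90571b0ec/vmname_2.vmdk"   # full path as it appears in the vmx
DS_UUID=$(echo "$VMDK" | awk -F/ '{print $4}')                           # datastore UUID portion of the path
esxcfg-scsidevs -m | grep "$DS_UUID"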


VIO – Introduction

With VMware Integrated OpenStack (VIO), you can implement OpenStack services on your existing VMware vSphere implementation.

VIO is made up of two main building blocks: the VIO Manager and the OpenStack components.

The VIO Manager provides a workflow that guides you through and completes the VIO deployment process. With VIO Manager, you can specify your management and compute clusters, configure networking, and add resources. Post-deployment, you can use VIO Manager to add components or otherwise modify the configuration of your VMware Integrated OpenStack cloud infrastructure.

VMware Integrated OpenStack 2.0 is based on the Kilo release of OpenStack. (Version 1.0 was based on the Icehouse release.)

VMware Integrated OpenStack is implemented as compute and management clusters in your vSphere environment.

The compute cluster handles all tenant workloads and the management cluster contains the VMs that comprise your OpenStack cloud deployment. It also contains the memory cache (memcache), message queue (RabbitMQ), load balancing, DHCP, and database services.

Components forming OpenStack services in VIO

The OpenStack services in VIO are architected as a distributed, highly available solution made of the following components:
• VIO Controllers: An active-active cluster running the Nova (n-api, n-sch, n-vnc), Glance, Keystone, Horizon, Neutron and Cinder services.
• Memcache: Active-active memory caching cluster for Keystone performance.
• RabbitMQ: Active-active cluster for messaging used by the OpenStack services.
• Load Balancer: An HA-Proxy active-active cluster managing the management and public virtual IP addresses. Enables HA and provides a horizontal scale-out architecture.
• Compute Driver: Node running a subset of Nova processes that interact with the compute clusters to manage virtual machines.
• DB: A three-node MariaDB Galera cluster that stores OpenStack metadata.
• DHCP Servers: OVS-enabled nodes running the DHCP service. These two VMs are registered in NSX Manager as hypervisor transport nodes if NSX is used for Neutron.
• Object Storage: Node running the OpenStack Swift services.
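
As a quick illustration of how these pieces fit together: tenants consume the standard OpenStack APIs through the public virtual IP fronted by the HA-Proxy load balancer, while the services themselves run on the VIO controllers. The session below is only a sketch with placeholder endpoint, port and credentials (not values I can vouch for in a given VIO deployment), using the stock Kilo-era command line clients:

export OS_AUTH_URL=https://<public-VIP>:5000/v2.0   # Keystone endpoint behind the load balancer; port and path assumed
export OS_USERNAME=demo                             # placeholder credentials
export OS_PASSWORD=<password>
export OS_TENANT_NAME=demo
nova list            # tenant instances, served by the Nova services on the VIO controllers
neutron net-list     # tenant networks, served by Neutron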

Happy Sharing!!

 

 

Unable to Power ON a VM in vCenter 6

Hello All,

If you come across an issue where you are unable to power on a VM in vCenter 6, you may see the following error:

Error received: A general system error occurred. No connection could be made because the target machine actively refused it.

This can happen if you accidentally stopped the VMware vCenter Workflow Manager service.

Start the VMware vCenter Workflow Manager service and VM power-on operations will work fine again.
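
If your vCenter runs on the appliance, the same check can be done from the appliance shell; on Windows, use services.msc instead. The sketch below assumes the workflow manager appears under the service name vmware-vpx-workflow; confirm the exact name from the status listing before starting it:

service-control --status
service-control --start vmware-vpx-workflow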

I believe this is the orchestrator workflow service, but its integration with VM power-on operations is interesting.

I believe we need to wait and check on more information from VMware.

Happy Thinking 🙂

How to remove an orphaned Nexus 1000v DVS in vCenter

What if you accidentally deleted the VSM and are now left with the DVS still showing in vCenter?

Conditions / Environment

Nexus 1000v and any version of vCenter.

The DVS must be gracefully removed from the VSM before deleting it.

Solution:

1. Deploy a temporary VSM.
2. Restore the startup config (or at least restore the previous switchname).
3. Find the extension key that is tied to the existing DVS. You can find the key in one of two ways:

In vCenter, navigate to the Networking view.
Select the DVS in the left navigation pane.
Click on the Summary tab on the right.
The extension key is listed under Annotations.

or

Go to the VC's MOB by pointing your browser to it [https://<VC_IP_ADDR>/mob]
Go to the rootFolder 'group-d1'
Find your datacenter from 'childEntity', which contains the Datacenter IDs (when you click on a datacenter, you will find a name associated with it)
From your datacenter, go to the networkFolder (e.g. group-n6)
From the network folder, select the child entity (e.g. group-n373)
In 'childEntity', click on your DVS (e.g. dvs-7)
Under the DVS "config" attribute, you can find the extension key in use

4. Assign that extension key to the VSM using "vmware vc extension-key <extension-id>" so that the new CP can connect to the existing DVS.
5. Once you key in this extension key, verify the new extension key on the CP using "show vmware vc extension-key".
6. Save and reboot the VSM.
7. Delete the extension key present on the VC using the MOB (unregister extension API):

Go to the extension manager [https://<VC_IP_ADDR>/mob/?moid=ExtensionManager]

Click on Unregister extension [https://<VC_IP_ADDR>/mob/?moid=ExtensionManager&method=unregisterExtension]

Paste "Cisco_Nexus_1000V_<DVS TO RECOVER's KEY>" (your extension key attached to the DVS) and click on "Invoke Method"

8. Now you are ready to re-register the extension. If you are getting the .xml file through the browser, make sure you refresh the browser before downloading the XML file.
9. Re-register the extension plugin.
10. Set up the SVS connection properties (VC IP, port, datacenter name, etc.).
11. 'Connect' your SVS connection.
12. Last but not least, gracefully remove the DVS using "no vmware dvs" from the SVS context on the VSM.

Once you verify that the DVS has been removed from vCenter, you can safely delete the temporary VSM.

Happy Learning 😉