Error: [500] SSO error: Cannot connect to the VMware Component Manager

Hello All,

I came across an interesting issue at work and was able to reproduce it in the lab. Sharing the fix here while we dig further into the root cause.

Below are the steps performed to reproduce the problem:

1. Build PSC “psca” by ISO. Site name “Default-First-Site”, domain name “vsphere.local”.
2. Build PSC “pscb” by ISO and join it to psca.
3. Build the vCenter instance by ISO and connect it to psca.
4. Repoint the vCenter instance to “pscb”, following KB 2113917.
5. Test Web Client and vSphere Client.
6. Reboot the 3 VMs and test again.
7. Disconnect network of vCenter instance VM.
8. Update the DNS record of the vCenter instance to a different IP address on the DNS server.  >> Done successfully
9. Change the IP address of the vCenter instance to the new IP via the DCUI or command line. (I tested this with a VAMI change over SSH as well as with the console UI on vCenter 6.)
10. Connect the network of the vCenter instance VM.
11. Restart the vCenter services and test again. (The Web Client faults here.)
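For step 9, the command-line path looks roughly like this. This is a hedged sketch: the VAMI script path and argument order are as documented for VAMI-based appliances, and the addresses are placeholders for your lab.

```shell
# Step 9 sketch: change the appliance IP from the shell instead of the DCUI.
# vami_set_network takes: <interface> <mode> <ip> <netmask> <gateway>.
# All addresses below are placeholders.
VAMI=/opt/vmware/share/vami/vami_set_network
if [ -x "$VAMI" ]; then
    "$VAMI" eth0 STATICV4 10.0.0.50 255.255.255.0 10.0.0.1
    status="changed"
else
    status="vami_set_network not found (not running on a VCSA)"
fi
echo "$status"
```

The interactive equivalent on VAMI-based appliances is /opt/vmware/share/vami/vami_config_net, which walks through the same settings from a menu.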

A server error occurred.
[500] SSO error: Cannot connect to the VMware Component Manager https://vcsa6.xxxx.bng/cm/sdk?hostid=fbf7282a-f940-4f6d-8180-58b1476c3e72
Check the vSphere Web Client server logs for details.

The error is seen in vsphere-client-virgo.log for the Web Client, under /var/log/vmware/vsphere-client:

[2016-01-18T11:46:43.240Z] [INFO ] http-bio-9090-exec-9          com.vmware.vise.util.session.SessionUtil                          Generated hashed session id: 100001
[2016-01-18T11:46:43.243Z] [INFO ] http-bio-9090-exec-9         70000001 100001 ###### com.vmware.vise.util.i18n.I18nFilter                              The preferred locale for session 100001 is set to: en_US
[2016-01-18T11:46:44.909Z] [INFO ] http-bio-9090-exec-9         70000001 100001 ######           Retry wont be attempted for error: No route to host
[2016-01-18T11:46:44.917Z] [ERROR] http-bio-9090-exec-9         70000001 100001 ######           Error when creating component manager service com.vmware.vim.vmomi.client.exception.ConnectionException: No route to host
at com.vmware.vim.vmomi.client.common.impl.ResponseImpl.setError(
at com.vmware.vim.vmomi.client.http.impl.HttpProtocolBindingBase.executeRunnable(
at com.vmware.vim.vmomi.client.http.impl.HttpProtocolBindingImpl.send(
at com.vmware.vim.vmomi.client.common.impl.MethodInvocationHandlerImpl$CallExecutor.sendCall(
at com.vmware.vim.vmomi.client.common.impl.MethodInvocationHandlerImpl$CallExecutor.executeCall(
at com.vmware.vim.vmomi.client.common.impl.MethodInvocationHandlerImpl.completeCall(
at com.vmware.vim.vmomi.client.common.impl.MethodInvocationHandlerImpl.invokeOperation(
at com.vmware.vim.vmomi.client.common.impl.MethodInvocationHandlerImpl.invoke(
at com.sun.proxy.$Proxy474.retrieveServiceInstanceContent(Unknown Source)

It clearly mentions “No route to host”.

At this point I tested forward/reverse lookups, which were working fine; pings to both psca and pscb were also working fine.
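Those checks, sketched as commands. They are shown against localhost so the snippet runs anywhere; in the lab the targets were psca, pscb, and the vCSA FQDN:

```shell
# Forward and reverse lookups via the system resolver (localhost stands in
# for the lab hostnames psca / pscb / vcsa6.xxxx.bng).
host="localhost"
fwd=$(getent ahostsv4 "$host" | awk '{print $1; exit}')   # forward lookup
rev=$(getent hosts "$fwd" | awk '{print $2; exit}')       # reverse lookup
echo "forward: $host -> $fwd"
echo "reverse: $fwd -> $rev"
```

Note the difference that matters later in this story: getent resolves through NSS, i.e. /etc/hosts first and then DNS, while nslookup/dig query the DNS server directly. So DNS-only checks can look fine even while the local hosts file is wrong.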

So I repointed the vCSA back to psca, and to my surprise the Web Client started working fine.

I then repointed back to pscb, and it failed again with the same error.

After multiple log checks, I found that the /etc/hosts file on the vCSA still had the old IP address, even though the IP change was completed at step 9.
I configured the hosts file with the correct IP address and restarted all services using “service-control --stop --all” and “service-control --start --all”.
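The check that finally caught it can be sketched as follows. The demo hosts file and both IPs are made up for illustration; on the appliance you would look at /etc/hosts itself:

```shell
# Reproduce the root cause on a demo copy of /etc/hosts: the FQDN still
# resolves to the pre-change address. Both IPs are placeholders.
FQDN="vcsa6.xxxx.bng"             # appliance FQDN from the [500] error above
NEW_IP="10.0.0.50"                # address set in step 9 (placeholder)

cat > /tmp/hosts.demo <<EOF
127.0.0.1   localhost
10.0.0.40   vcsa6.xxxx.bng vcsa6
EOF

current=$(awk -v h="$FQDN" '$2 == h {print $1}' /tmp/hosts.demo)
if [ "$current" != "$NEW_IP" ]; then
    echo "stale entry: $FQDN -> $current (expected $NEW_IP)"
fi
```

Because NSS consults /etc/hosts before DNS, every local service resolving the FQDN kept landing on the dead address, hence the “No route to host” in the Web Client log.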

Now, once the Web Client came up it loaded fine, and the vCSA was still pointed to pscb.


VIO – Introduction

With VMware Integrated OpenStack (VIO), you can implement OpenStack services on your existing VMware vSphere deployment.

VIO is made of two main building blocks: the VIO Manager and the OpenStack components.

The VIO Manager provides a workflow that guides you through and completes the VIO deployment process. With VIO Manager, you can specify your management and compute clusters, configure networking, and add resources. Post-deployment, you can use VIO Manager to add components or otherwise modify the configuration of your VMware Integrated OpenStack cloud infrastructure.

VMware Integrated OpenStack 2.0 is based on the Kilo release of OpenStack. (Version 1.0 was based on the Icehouse release.)

VMware Integrated OpenStack is implemented as compute and management clusters in your vSphere environment.

The compute cluster handles all tenant workloads, while the management cluster contains the VMs that make up your OpenStack cloud deployment, along with the memory cache (memcache), message queue (RabbitMQ), load balancing, DHCP, and database services.

Components forming OpenStack services in VIO

The OpenStack services in VIO are architected as a distributed, highly available solution made of the following components:
• VIO Controllers: An active-active cluster running the Nova (n-api, n-sch, n-vnc), Glance, Keystone, Horizon, Neutron, and Cinder services.
• Memcache: An active-active memory-caching cluster for Keystone performance.
• RabbitMQ: An active-active cluster for messaging used by the OpenStack services.
• Load Balancer: An HAProxy active-active cluster managing the management and public virtual IP addresses. It enables HA and provides a horizontal scale-out architecture.
• Compute Driver: A node running a subset of Nova processes that interact with the compute clusters to manage virtual machines.
• DB: A three-node MariaDB Galera cluster that stores OpenStack metadata.
• DHCP Servers: OVS-enabled nodes running the DHCP service. These two VMs are registered in NSX Manager as hypervisor transport nodes if NSX is used for Neutron.
• Object Storage: A node running the OpenStack Swift services.
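As a quick way to confirm those controller services once a deployment is up, the stock OpenStack CLI can be pointed at the public VIP. Everything below, the VIP hostname and credentials alike, is a placeholder for illustration:

```shell
# Hypothetical endpoint/credentials for a VIO deployment's public VIP.
export OS_AUTH_URL="https://vio-public-vip:5000/v3"
export OS_USERNAME="admin" OS_PASSWORD="changeme"
export OS_PROJECT_NAME="admin"
export OS_USER_DOMAIN_NAME="Default" OS_PROJECT_DOMAIN_NAME="Default"

# With python-openstackclient installed, this lists the registered services;
# the components above (Nova, Glance, Keystone, Neutron, Cinder, Swift)
# should all appear.
if command -v openstack >/dev/null 2>&1; then
    openstack service list || echo "endpoint unreachable (placeholder VIP)"
else
    echo "openstack client not installed here"
fi
```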

Happy Sharing!!