vMotion may fail because of the TegileNasPlugin installed on the ESXi host.

You may see the following lines in the vmware.log of the VM that failed to migrate.

2019-09-12T18:05:51.309Z| Worker#1| I125: DISK: OPEN scsi1:0 '/vmfs/volumes/b57aef17-591cfc64/NAS1/NAS1_1.vmdk' persistent R[]
2019-09-12T18:05:51.315Z| Worker#2| I125: TegileNasPlugin: StartSession: Req for 10.1.192.42 /export/ESX_5/ZEBI_ESX4 /vmfs/volumes/b57aef17-591cfc64 NFS 1
2019-09-12T18:05:51Z[+0.000]| Worker#2| W115: Caught signal 11 -- tid 190820 (addr 6379C5FA4C)
2019-09-12T18:05:51Z[+0.000]| Worker#2| I125: SIGNAL: rip 0x6379c5fa4c rsp 0x637d6bec18 rbp 0x637d6bec50
2019-09-12T18:05:51Z[+0.000]| Worker#2| I125: SIGNAL: rax 0x2f rbx 0x6338557110 rcx 0x20 rdx 0x656d756c rsi 0x6338572fa0 rdi 0x6f762f73666d762f
2019-09-12T18:05:51Z[+0.000]| Worker#2| I125:         r8 0x0 r9 0x2a r10 0x1999999999999999 r11 0x656d756c r12 0x6338572fa0 r13 0x637d6becb4 r14 0x637d6becb0 r15 0x6338505c00
2019-09-12T18:05:51Z[+0.000]| Worker#2| I125: SIGNAL: stack 637D6BEC18 : 0x000000637c77c4c1 0x0000000000000000
2019-09-12T18:05:51Z[+0.000]| Worker#2| I125: SIGNAL: stack 637D6BEC28 : 0x1e840cb0799abf47 0x0000006338572fa0
2019-09-12T18:05:51Z[+0.001]| Worker#2| I125: Backtrace:
2019-09-12T18:05:51Z[+0.001]| Worker#2| I125: Backtrace[0] 000000637d6be690 rip=00000063370c39c7 rbx=00000063370c34c0 rbp=000000637d6be6b0 r12=0000000000000000 r13=0000006337d64d01 r14=000000000000000a r15=000000637d6bec98
2019-09-12T18:05:51Z[+0.001]| Worker#2| I125: Backtrace[1] 000000637d6be6c0 rip=00000063372f64c0 rbx=000000637d6bec98 rbp=000000637d6be8d0 r12=000000000000000b r13=0000006337d64d01 r14=000000000000000a r15=000000637d6bec98
2019-09-12T18:05:51Z[+0.001]| Worker#2| I125: Backtrace[2] 000000637d6be8e0 rip=00000063372f68c4 rbx=0000000000000008 rbp=000000637d6be930 r12=000000637d6c0538 r13=0000006337d64dc8 r14=0000006337d64dc0 r15=000000000000000b
2019-09-12T18:05:51.316Z| Worker#3| I125: TegileNasPlugin: StartSession: Req for 10.1.192.42 /export/ESX_5/ZEBI_ESX4 /vmfs/volumes/b57aef17-591cfc64 NFS 1
2019-09-12T18:05:51Z[+0.000]| Worker#2| I125: Backtrace[3] 000000637d6be940 rip=000000000038600f rbx=0000006338557110 rbp=000000637d6beb80 r12=000000637d6be9c0 r13=000000637d6becb4 r14=000000637d6becb0 r15=0000006338505c00

 

vmkernel.log (source ESXi host).

2019-09-12T18:05:56.302Z cpu29:81337)WARNING: Migrate: 273: 2017306294537764927 S: Failed: Failed to resume virtual machine (0xbad0044) @0x418018883122
2019-09-12T18:05:56.302Z cpu29:81337)VMotionRecv: 3733: 2017306294537764927 S: Error handling message: Connection reset by peer
2019-09-12T18:05:56.312Z cpu27:76130)WARNING: Migrate: 6279: 2017306294537764927 S: Migration considered a failure by the VMX.  It is most likely a timeout, but check the VMX log for the true error.
2019-09-12T18:05:56.343Z cpu44:76290)CBT: 1341: Created device 20d0803-cbt for cbt driver with filehandle 34408451

 

vmkwarning.log (source ESXi host).

2019-09-12T14:38:17.513Z cpu10:77420)WARNING: CBT: 1133: Unsupported ioctl 62
2019-09-12T18:05:56.302Z cpu29:81337)WARNING: Migrate: 273: 2017306294537764927 S: Failed: Failed to resume virtual machine (0xbad0044) @0x418018883122
2019-09-12T18:05:56.312Z cpu27:76130)WARNING: Migrate: 6279: 2017306294537764927 S: Migration considered a failure by the VMX.  It is most likely a timeout, but check the VMX log for the true error.
2019-09-12T18:05:56.343Z cpu44:76290)WARNING: CBT: 1133: Unsupported ioctl 63

 

vmkernel.log (destination ESXi host).

2019-09-12T18:05:56.337Z cpu22:190653)Hbr: 3489: Migration end received (worldID=190654) (migrateType=1) (event=1) (isSource=0) (sharedConfig=1)
2019-09-12T18:05:56.337Z cpu22:190653)WARNING: Migrate: 6749: 2017306294537764927 D: Migration cleanup initiated, the VMX has exited unexpectedly. Check the VMX log for more details.
2019-09-12T18:05:56.337Z cpu22:190653)WARNING: Migrate: 273: 2017306294537764927 D: Failed: Migration determined a failure by the VMX (0xbad0092) @0x4180296b3091
2019-09-12T18:05:56.337Z cpu22:190653)WARNING: VMotionUtil: 7649: 2017306294537764927 D: timed out waiting 0 ms to transmit data.
2019-09-12T18:05:56.337Z cpu22:190653)WARNING: World: vm 190653: 3566: VMMWorld group leader = 190654, members = 2
2019-09-12T18:05:56.337Z cpu32:190660)VMotionUtil: 7552: 2017306294537764927 D: Socket 0x430a8b349c00 sendSocket pending: 563164/563272 snd 0 rcv
2019-09-12T18:05:56.525Z cpu14:65689)CBT: 1376: Destroying device 5920950-cbt for cbt driver with filehandle 93456720

 

vobd.log (destination ESXi host).

2019-09-12T18:05:56.333Z: [UserWorldCorrelator] 244452187008us: [vob.uw.core.dumped] /bin/vmx(190653) /var/core/vmx-zdump.001
2019-09-12T18:05:56.333Z: [UserWorldCorrelator] 244476489345us: [esx.problem.application.core.dumped] An application (/bin/vmx) running on ESXi host has crashed (2 time(s) so far). A core file may have been created at /var/core/vmx-zdump.001.

 

vmkwarning.log (destination ESXi host).

2019-09-12T18:05:51.304Z cpu22:190653)WARNING: CBT: 1133: Unsupported ioctl 62
2019-09-12T18:05:56.337Z cpu22:190653)WARNING: Migrate: 6749: 2017306294537764927 D: Migration cleanup initiated, the VMX has exited unexpectedly. Check the VMX log for more details.
2019-09-12T18:05:56.337Z cpu22:190653)WARNING: Migrate: 273: 2017306294537764927 D: Failed: Migration determined a failure by the VMX (0xbad0092) @0x4180296b3091
2019-09-12T18:05:56.337Z cpu22:190653)WARNING: VMotionUtil: 7649: 2017306294537764927 D: timed out waiting 0 ms to transmit data.
2019-09-12T18:05:56.337Z cpu22:190653)WARNING: World: vm 190653: 3566: VMMWorld group leader = 190654, members = 2

 

This is caused by the TegileNasPlugin installed on the ESXi host. Engage your NAS vendor for a fixed or updated version of the plugin.
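To confirm whether the plugin is present on the affected host, you can list the installed VIBs over SSH (a minimal sketch; the exact VIB name may vary by Tegile release):

esxcli software vib list | grep -i tegile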

 


#tegilenasplugin, #vcenter, #vmotion, #vmware-log

Recent tasks aren't showing any updates in the vCenter web client

This issue affects both the HTML5 and Flex clients of vCenter. Any recent task, whether running or completed, is not visible in the Recent Tasks pane. At the same time, ongoing tasks can be seen from the Monitor tab (Monitor > Tasks & Events > Tasks).

RecentTask.jpg

This happens because of a browser cache issue. Clearing the cache and restoring the browser settings to their defaults should solve the problem.

For example, in Google Chrome the following steps can be carried out:

Settings > Advanced > Reset and clean up > Restore settings to their original defaults

#recent-task, #vcenter, #vsphere

ESXi disconnects and reconnects automatically.

You may see an ESXi host disconnect and then reconnect automatically in vCenter within a few seconds.

esx-disconnect-connect.jpg

The following lines can be seen in vpxd.log:


info vpxd[7F20FA9D3700] [Originator@6876 sub=HostCnx opID=CheckforMissingHeartbeats-713e154b] [VpxdHostCnx] No heartbeats received from host; cnx: 5254c90c-d813-51e1-1e6b-0cc0a8e9b10f, h: host-984, time since last heartbeat: 14356285014ms
info vpxd[7F20FA9D3700] [Originator@6876 sub=HostCnx opID=CheckforMissingHeartbeats-713e154b] [VpxdHostCnx] Marking the connection alive to false: 5254c90c-d813-51e1-1e6b-0cc0a8e9b10f
info vpxd[7F20FA9D3700] [Originator@6876 sub=InvtHostCnx opID=CheckforMissingHeartbeats-713e154b] [VpxdInvtHost] Got lost connection callback for host-984
warning vpxd[7F2118B73700] [Originator@6876 sub=InvtHostCnx opID=HostSync-host-984-f11f4fd] Connection not alive due to missing heartbeats; [vim.HostSystem:host-984,esxi.mydomain.local], cnx: 5254c90c-d813-51e1-1e6b-0cc0a8e9b10f
warning vpxd[7F2118B73700] [Originator@6876 sub=MoHost opID=HostSync-host-984-f11f4fd] [HostMo] host connection state changed to [NO_RESPONSE] for host-984
info vpxd[7F20FB468700] [Originator@6876 sub=vpxLro opID=lro-30458735-12d91ab0] [VpxLRO] -- FINISH lro-30458735
info vpxd[7F2118B73700] [Originator@6876 sub=InvtHostCnx opID=HostSync-host-984-f11f4fd] Succeeded restoring heartbeat; [vim.HostSystem:host-984,esxi.mydomain.local]
info vpxd[7F2118B73700] [Originator@6876 sub=MoHost opID=HostSync-host-984-f11f4fd] [HostMo] host connection state changed to [CONNECTED] for host-984

 

This happens because of a network issue between the ESXi host and the vCenter Server. A temporary workaround is to increase the heartbeat timeout as described in the VMware KB:

vSphere Client:

Open the vSphere Client and connect to the vCenter Server.
Select the vCenter Server object, then Configure > Settings > Advanced Settings. If the following key isn't present, create it manually.
Click Edit.
Name: config.vpxd.heartbeat.notRespondingTimeout
Value: 120
Click Add, then OK. Restart the vCenter Server service.
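On a vCenter Server Appliance, the service restart in the last step can also be done from an SSH session (a minimal sketch; the vmware-vpxd service name is assumed and may differ between vCenter versions):

service-control --stop vmware-vpxd
service-control --start vmware-vpxd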

#esxi-disconnect, #vcenter

All Update Manager options in the vCenter web client display the error “interface com.vmware.vim.binding.integrity.VcIntegrity is not visible from class loader”

This particular error is seen with vCenter 6.5; however, other versions may be affected.

um.jpg

You may see the following lines in the vSphere Client virgo.log:

[WARN ] http-bio-9090-exec-10 org.springframework.flex.core.DefaultExceptionLogger The following exception occurred during request processing by the BlazeDS MessageBroker and will be serialized back to the client: flex.messaging.MessageException: The supplied destination id is not registered with any service.
at flex.messaging.MessageBroker.routeMessageToService(MessageBroker.java:1477)
at flex.messaging.endpoints.AbstractEndpoint.serviceMessage(AbstractEndpoint.java:1046)
at flex.messaging.endpoints.AbstractEndpoint$$FastClassByCGLIB$$1a3ef066.invoke(<generated>)
at net.sf.cglib.proxy.MethodProxy.invoke(MethodProxy.java:149)
at org.springframework.aop.framework.Cglib2AopProxy$CglibMethodInvocation.invokeJoinpoint(Cglib2AopProxy.java:689)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150)
at org.springframework.flex.core.MessageInterceptionAdvice.invoke(MessageInterceptionAdvice.java:66)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172)
at org.springframework.aop.framework.adapter.ThrowsAdviceInterceptor.invoke(ThrowsAdviceInterceptor.java:124)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172)
at org.springframework.aop.framework.Cglib2AopProxy$FixedChainStaticTargetInterceptor.intercept(Cglib2AopProxy.java:573)
at flex.messaging.endpoints.AMFEndpoint$$EnhancerByCGLIB$$4442b278.serviceMessage(<generated>)
at flex.messaging.endpoints.amf.MessageBrokerFilter.invoke(MessageBrokerFilter.java:101)

 

To resolve this issue, take SSH access to the vCenter Server, then stop and start the vmware-updatemgr and vsphere-client services:

service-control --stop vmware-updatemgr
service-control --stop vsphere-client

service-control --start vmware-updatemgr
service-control --start vsphere-client
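
Once the services come back up, you can optionally verify their state (a sketch; the --status option is assumed to be available in your service-control version):

service-control --status vmware-updatemgr
service-control --status vsphere-client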

Refer to the VMware KB for more details on stopping and starting vCenter services.

#update-manager, #vcenter, #vmware