Fixed
Status Update
Comments
sj...@google.com <sj...@google.com> #2
Engineering is aware of the issue and is working on a fix.
hi...@tsuney.com <hi...@tsuney.com> #3
Something seems to have changed between March 2 and March 7:
my nested-virtualization VMs worked very well before March 2,
and around then I actually stopped and started them often.
I hope this issue is resolved quickly and smoothly.
gs...@gmail.com <gs...@gmail.com> #4
Agreed. On our side, we also believe we were able to stop/start VMs and still have nested VMs work. Hope this is resolved soon too. Thanks in advance, Google!
jd...@redhat.com <jd...@redhat.com> #5
Same here: something is broken with nested virtualisation.
It had been working for months and stopped working in the last few days, even though the license feature is still enabled on the instance:
$ gcloud compute instances describe env01 | grep vmx
No zone specified. Using zone [europe-west1-d] for instance: [env01].
- https://www.googleapis.com/compute/v1/projects/vm-options/global/licenses/enable-vmx
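For anyone who wants a self-contained check, here is a minimal sketch, assuming a Linux guest. The helper name `check_vmx` is my own, not part of any tool; point it at `/proc/cpuinfo` inside the guest to confirm nesting is on.

```shell
# check_vmx FILE: exit 0 if FILE advertises the 'vmx' CPU flag, nonzero otherwise.
check_vmx() {
  grep -qw vmx "$1"
}

# Example against a sample cpuinfo fragment:
printf 'flags\t\t: fpu vme vmx sse sse2\n' > /tmp/cpuinfo.sample
if check_vmx /tmp/cpuinfo.sample; then
  echo "vmx present"      # prints: vmx present
else
  echo "vmx missing"
fi
```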
hi...@tsuney.com <hi...@tsuney.com> #6
Same here, jd. Until around the end of February this year I often stopped and started these VMs without trouble, but this seems to have begun around March 2nd; since then, when you stop/start a nested-virt VM, it comes back with vmx=0.
I'm not sure of the exact timeframe, but the nested-virtualization VMs I was using had become unstable (hitting 99.7% CPU and becoming saturated and unresponsive) when stopping a qemu VM, which might have been a precursor to this issue...
Google, please let us know the status; it looks like issue 74331479 has blocked some work on this one. It would really help if you could share the ETA you are aiming at. Thanks, Google.
al...@gmail.com <al...@gmail.com> #7
Hello All,
I am suffering from the same issue.
zone: us-west1-a
km...@fortinet.com <km...@fortinet.com> #8
I have the same issue.
sc...@google.com <sc...@google.com> #9
Hi folks, PM for GCE here, responsible for nested virtualization. I apologize to all of our Beta users for breaking stop/start of nested VMs. It was a bug in one of the changes we made as we gear up for GA. I know it's causing pain and inconvenience, and for that I'm sorry.
We have a fix in hand and we are in the process of qual'ing it and getting it into our next control plane rollout. I don't have an ETA yet, but I'll post further updates here until the issue is resolved.
hi...@tsuney.com <hi...@tsuney.com> #10
Thanks, sc...@google. It would be great to get any updates on this.
ye...@optimalq.com <ye...@optimalq.com> #11
Does this mean that every time the instance migrates (if the migration policy is set to "Terminate") we will need to recreate it?
sc...@google.com <sc...@google.com> #12
Hi folks, another quick update. We finished qual'ing the fix. It started rolling out to production late last week and should finish in the next couple of days, barring any unexpected issues that force us to roll back (seldom, but it does happen). I'll provide another update later this week.
RE #11: this should not affect VMs that are terminated and restarted (onHostMaintenance=TERMINATE), those VMs should come back up with nesting still enabled. This bug only affects VMs that are 'stop'ped/'start'ed via the API/UI/CLI.
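To make the affected cycle concrete: it is a stop/start issued through the CLI (or API/UI) that dropped the vmx flag, not a host-maintenance terminate/restart. A sketch of that cycle, reusing the instance name and zone from comment #5 (your own names will differ):

```shell
# Stopping and starting via the CLI is the cycle that dropped vmx;
# after the fix, the flag should survive it.
gcloud compute instances stop env01 --zone europe-west1-d
gcloud compute instances start env01 --zone europe-west1-d

# Then, from inside the guest, the flag count should be nonzero:
grep -cw vmx /proc/cpuinfo
```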
sc...@google.com <sc...@google.com> #13
Hi folks, last update (hopefully!): the rollout finished this morning, so this issue should be fixed in all regions and zones. I'm resolving it fixed. Please reopen if you are still able to repro.
Thank you all for your participation in our Beta and for your patience while we fixed this.
hi...@tsuney.com <hi...@tsuney.com> #14
Hi Google, just an update and a thank-you note. I have resumed my project, and it is working amazingly well;
it also looks like the performance (in particular, the stability) has improved. Thanks for the fix.
Description
$ grep -cw vmx /proc/cpuinfo
returns 0 when it should return 1.
CPU platform
Intel Haswell