Comments
tn...@google.com #2
I have also tested config 4 using --disable-sandbox and it made no difference.
/usr/bin/crosvm run --cpus 56 --mem 127721 --rwdisk /mnt/stateful_partition/debian-10-2-0.qcow2 --disable-sandbox -p "root=/dev/vda1" /run/imageloader/cros-termina/13729.82.0/vm_kernel 2>/dev/null
ml...@google.com #3
For reference, concierge spawns crosvm with args:
/usr/bin/crosvm run --cpus 56 --mem 127721 --tap-fd 18 --cid 33 --socket /run/vm/vm.zRnunN/crosvm.sock --wayland-sock /run/chrome/wayland-0 --serial hardware=serial,num=1,earlycon=true,type=unix,path=/run/daemon-store/crosvm/ccc8a25c8f59055eaddfda422b9cccf68d873f7d/log/dGVybWluYQ==.lsock --serial hardware=virtio-console,num=1,console=true,type=unix,path=/run/daemon-store/crosvm/ccc8a25c8f59055eaddfda422b9cccf68d873f7d/log/dGVybWluYQ==.lsock --syslog-tag VM(33) --no-smt --pmem-device /run/imageloader/cros-termina/13729.82.0/vm_rootfs.img --params root=/dev/pmem0 ro --ac97 backend=cras --disk /run/imageloader/cros-termina/13729.82.0/vm_tools.img --rwdisk /run/daemon-store/crosvm/ccc8a25c8f59055eaddfda422b9cccf68d873f7d/dGVybWluYQ==.qcow2,sparse=true /run/imageloader/cros-termina/13729.82.0/vm_kernel
ml...@google.com #4
joelhockey@ confirmed that performance is better when not enabling core scheduling for vCPU threads in crosvm (matching non-Chrome OS host kernels that don't yet have the core scheduling ioctl).
joelaf@ - I wanted to check my understanding of our current core scheduling configuration and see if we could tweak it for improved performance without sacrificing the security/privacy/... guarantees we want from it.
As far as I can tell, we configure each vCPU thread with a separate core scheduling cookie here: https://source.chromium.org/chromiumos/chromiumos/codesearch/+/main:src/platform/crosvm/sys_util/src/sched.rs;l=89;drc=026f72f9fb6b47f4f45a42e791f6faf7785105db - based on the documentation, this makes it sound like no vCPU threads can run simultaneously on the same core/HT pair, even two vCPUs from the same VM.
I may be missing some details, but it seems like we should be able to create a single core scheduling cookie and set all vCPU threads for the same VM to use it. That would still prevent other threads (including vCPUs from other VMs) from running simultaneously on the same core, but it should allow multiple vCPUs from the same VM to share a core.
Would that be acceptable, or do we need the stronger guarantee that even two vCPUs from the same guest VM can't be scheduled simultaneously on the same core?
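To make the proposal concrete, here is a minimal hypothetical sketch, not the actual crosvm code (the linked sched.rs uses a downstream Chrome OS interface), of what a shared per-VM cookie could look like with the upstream PR_SCHED_CORE prctl that landed in Linux 5.14: the first vCPU thread creates a cookie, and every other vCPU thread pulls that thread's cookie with PR_SCHED_CORE_SHARE_FROM, so the VM's own vCPUs may share a core while unrelated tasks cannot. Function names here are illustrative only; it assumes the libc crate.
```
// Hypothetical sketch (not the crosvm implementation): one core scheduling
// cookie shared by every vCPU thread of a VM, via the upstream PR_SCHED_CORE
// prctl (Linux 5.14+). Constant values taken from linux/prctl.h.
const PR_SCHED_CORE: libc::c_int = 62;
const PR_SCHED_CORE_CREATE: libc::c_ulong = 1; // create a new cookie for a task
const PR_SCHED_CORE_SHARE_FROM: libc::c_ulong = 3; // copy another task's cookie
const PIDTYPE_PID: libc::c_ulong = 0; // the pid argument names a single thread

fn gettid() -> libc::pid_t {
    // SAFETY: gettid() has no side effects and is always safe to call.
    unsafe { libc::syscall(libc::SYS_gettid) as libc::pid_t }
}

/// Run on the first vCPU thread: give it a fresh cookie, distinct from every
/// other task on the host (including other VMs' vCPUs). Returns this thread's
/// tid so the remaining vCPU threads can copy the cookie from it.
fn create_vm_cookie() -> std::io::Result<libc::pid_t> {
    let tid = gettid();
    // SAFETY: prctl(PR_SCHED_CORE, ...) only modifies scheduler state.
    let ret = unsafe {
        libc::prctl(
            PR_SCHED_CORE,
            PR_SCHED_CORE_CREATE,
            tid as libc::c_ulong,
            PIDTYPE_PID,
            0 as libc::c_ulong,
        )
    };
    if ret < 0 {
        return Err(std::io::Error::last_os_error());
    }
    Ok(tid)
}

/// Run on each subsequent vCPU thread: pull the first thread's cookie, so all
/// of this VM's vCPUs share one cookie and may be co-scheduled on a core.
fn join_vm_cookie(first_vcpu_tid: libc::pid_t) -> std::io::Result<()> {
    // SAFETY: as above, only scheduler state is modified.
    let ret = unsafe {
        libc::prctl(
            PR_SCHED_CORE,
            PR_SCHED_CORE_SHARE_FROM,
            first_vcpu_tid as libc::c_ulong,
            PIDTYPE_PID,
            0 as libc::c_ulong,
        )
    };
    if ret < 0 {
        return Err(std::io::Error::last_os_error());
    }
    Ok(())
}
```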
Description
I opened the project, waited for sync, and opened MainActivity.kt. This opened the Compose Preview window, which told me I needed to Build & Refresh, so I clicked on that link. After the build, I got a yellow tip notification from the Build Analyzer window button saying it had found new issues for me to look at.
So I opened the Build Analyzer, but it shows a panel that says there is 1 warning while nothing is listed; see the attached screenshot.
I thought maybe there was some stale data, so I tried switching tabs to Overview and back, unchecking and rechecking the Group By Plugin option, and so on, but nothing shows up: the panel reads "Warnings - Total: 1, Filtered: 0", yet the list is empty.
```
Build: AI-223.8214.52.2231.9615888, 202302160854
JRE: 17.0.6+0-17.0.6b802.4-9586694 x64, JetBrains s.r.o.
OS: Mac OS X (aarch64) v13.2.1, screens 3840.0x2160.0; Retina
AS: Giraffe | 2022.3.1 Canary 6
Kotlin plugin: 223-1.7.21-release-272-AS8214.52.2231.9615888
Android Gradle Plugin: 7.4.1
Gradle: 7.6
Gradle JDK: JetBrains Runtime version 11.0.15
NDK: from local.properties: (not specified), latest from SDK: (not found)
CMake: from local.properties: (not specified), latest from SDK: 3.10.2, from PATH: (not found)
```