This is true: the lower the CPU core count of the VM, the more likely the VM can actually run on physical CPU cores of the host without contention. Similarly, vCPU counts should always be either 1 or an even number.
It is inefficient, but you can sometimes give a particular VM higher priority for resources, or even pin the resources so that they are not shared. The trade-off is that pinned CPUs are consumed even when they are not needed.
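As a sketch of what pinning looks like in practice, assuming a KVM/libvirt host and a guest named `bigfix-root` (a hypothetical domain name for illustration):

```shell
# Pin vCPU 0 of the guest to physical core 2, and vCPU 1 to core 3,
# so those vCPUs no longer compete with other VMs for host scheduling.
virsh vcpupin bigfix-root 0 2
virsh vcpupin bigfix-root 1 3

# Show the current vCPU-to-physical-core pinning for the guest
virsh vcpupin bigfix-root
```

Other hypervisors expose the same idea under different names (for example, CPU affinity settings in VMware vSphere); the point is the same: the vCPU always runs on the same physical core, at the cost of that core being reserved whether the VM needs it or not.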
Because vCPUs are scheduled rather than "real", you can see increased latency in network, storage, and compute. You can work around this in some cases, as mentioned above, but you are still limited by the maximum speed and minimum latency of the underlying resources. Even those values are the best case; in reality things will be worse, depending on vCPU scheduling in the hypervisor and how over-provisioned the host is.
Some workloads are just not as sensitive to these things, and the same can be true of BigFix if the raw performance of everything is high enough and your use of BigFix is small enough by comparison. If your physical server is low-spec enough, you could even see increased performance by going virtual, provided the virtual resources are good enough, but that is not always the case.
If you are considering moving BigFix to AWS, then this is relevant: Recommendations for deploying BigFix on AWS
How many simultaneous Windows Console operators do you have? They tend to have a significant impact on the system; the WebUI tends to be less impactful than the Console.