BigFix infra for 20,000 endpoints on Azure

Hi, I have to deploy a BigFix application server to support 20,000 VMs on Azure / AWS. I can't use a physical server because we are in a fully virtual environment, but per the IBM recommended docs it is best to have a physical box rather than a virtual one to support more than 10,000 endpoints.

How can I leverage or deploy infrastructure to support more than 10,000 servers with a virtual BigFix application server?

Please suggest.

It’s certainly possible to leverage virtual infrastructure for the BigFix Server, even at scale. I would recommend reviewing the Capacity Planning, Performance, and Management Guide and letting us know if you have any subsequent questions.

Thanks Aram for your reply. I have read this doc before; it covers capacity planning and that is good, but unfortunately it does not give a clear answer on whether we can build the core infrastructure on VMs to support more than 20k nodes and have it work.

I am wondering, if we deploy the base infrastructure on VMs, how many vCPUs, how much virtual RAM, and how much virtual disk we would need, and whether that really delivers the I/O required on AWS / Azure to support 20k nodes.

Sorry if I am asking the same question again.

Yes, it is possible to use virtual hardware at scale. What the capacity planning guide outlines is that, due to virtualization overhead, a vCPU only provides about 90% of its physical equivalent, so you would need to increase your CPU core count to compensate. You also need to ensure you have adequate disk within the virtual environment. This should be SSD, and ideally NVMe-based SSD, which you should be able to get these days, but it will obviously cost more than standard virtual disk/SAN.
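
To make the overhead adjustment concrete, here is a rough sizing sketch in Python. The ~90% vCPU efficiency figure is the one mentioned above; the 16-core baseline in the example is a hypothetical placeholder, and the actual physical-hardware recommendation for your endpoint count should come from the Capacity Planning guide.

```python
# Back-of-envelope vCPU sizing: scale up a physical-core recommendation
# to compensate for virtualization overhead (~90% efficiency per vCPU).
import math

VCPU_EFFICIENCY = 0.90  # a vCPU delivers roughly 90% of a physical core

def vcpus_needed(physical_cores_recommended: int) -> int:
    """Return the vCPU count needed to match a physical-core recommendation."""
    return math.ceil(physical_cores_recommended / VCPU_EFFICIENCY)

# Hypothetical example: if the guide recommended 16 physical cores for your
# endpoint count, a virtual deployment would want at least:
print(vcpus_needed(16))  # -> 18 vCPUs
```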


OK, thanks for the suggestion.

Hi @steve, I can see Azure has a Premium SSD disk option; is it equivalent to NVMe SSD?


Possibly, I’m not familiar with Azure’s disk options. Can you find any more information on the difference between Standard SSD and Premium SSD?