I think it depends a lot on how you plan to use BigFix.
I would always advocate for having some kind of lab/test system, but how far you scale it depends on your individual needs.
At the very least, you should be able to stand up some virtual servers to test your backup & recovery procedures. Like any product that uses databases and has many interconnected data feeds, BigFix backup & recovery can be complex. You should have the same kind of recovery testing for any of your other complex or critical infrastructure too, by the way, such as AD, DNS, DHCP, Certificate Services, Firewalls, Web Services, HR systems, and anything else the business relies upon.
These can be temporary installs on VMs if the deployment is small or underfunded. In larger companies, ideally you’d have dedicated test resources for all that critical infra, and having a test BigFix to go alongside it is a natural fit.
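To make that concrete, here’s the kind of smoke test I’d run against a freshly restored lab server: authenticate against the REST API and confirm the computer inventory survived the restore. This is just a minimal sketch; the hostname and credentials are made-up placeholders, and many labs run on a self-signed certificate, hence the TLS shortcut.

```python
# Minimal post-restore smoke test for a lab BigFix root server.
# The hostname and credentials below are placeholders, not real values.
import xml.etree.ElementTree as ET

import requests

BASE = "https://bigfix-lab.example.com:52311/api"  # hypothetical lab server
AUTH = ("api-operator", "example-password")        # hypothetical account

# /api/login succeeds (HTTP 200) only if authentication works at all.
requests.get(f"{BASE}/login", auth=AUTH, verify=False).raise_for_status()

# Ask the restored database how many computers it knows about.
resp = requests.get(
    f"{BASE}/query",
    params={"relevance": "number of bes computers"},
    auth=AUTH,
    verify=False,  # lab servers often use a self-signed certificate
)
resp.raise_for_status()
count = ET.fromstring(resp.content).findtext(".//Answer")
print(f"Restored server reports {count} computers.")
```

If that count comes back as zero, or wildly different from production, you’ve learned something about your backups while it’s still cheap to learn it.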
If you’re only using a small subset of BigFix - maybe only Patch, or only Inventory - and your deployment won’t be critical or frequently changed, then maybe all you need is the capacity to stand up some VMs so you can periodically test your disaster recovery backups and practice upgrades.
Honestly, though, I rarely see BigFix used in such small deployments, and once you get comfortable with the power of BigFix I think you’ll find yourself relying on it so much that even a day or two of outage will become unbearable. If you don’t build some kind of test lab, you may find yourself wishing you had one later.
If you do have a large deployment, or plan to use complex features like Server Automation plans, REST API automations, correlating data with ServiceNow, bulk OS upgrades or bare-metal deployments with OS Deployment… then you’ll really want an environment where you can test those things, confident that you aren’t going to accidentally upgrade the whole company to the latest beta of Windows 15 or something next weekend because you clicked on the wrong target group.
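That last scenario is exactly where a lab pays off: any REST API automation can be pointed at the test server first, and you can preview what a target group actually resolves to before an action ever exists. Another hedged sketch, using the same placeholder server and credentials as above (the group name here is invented too):

```python
# Preview which machines a computer group resolves to on the TEST server
# before any production action is issued. The group name is a made-up example.
import xml.etree.ElementTree as ET

import requests

BASE = "https://bigfix-lab.example.com:52311/api"  # hypothetical lab server
AUTH = ("api-operator", "example-password")        # hypothetical account

# Session relevance: the names of every member of one named computer group.
RELEVANCE = (
    '(names of members of it) of bes computer groups '
    'whose (name of it = "Patch Pilot - Ring 1")'   # invented group name
)

resp = requests.get(
    f"{BASE}/query",
    params={"relevance": RELEVANCE},
    auth=AUTH,
    verify=False,  # self-signed certificate, as is common in a lab
)
resp.raise_for_status()

# Each <Answer> element in the BESAPI XML response is one computer name.
answers = ET.fromstring(resp.content).findall(".//Answer")
print(f"{len(answers)} member(s) would be targeted:")
for a in answers:
    print(" -", a.text)
```

Run that in the lab, eyeball the list, and only then promote the same script (with the production URL) through whatever change process you use.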