Hi All
I have searched and browsed through this forum and online (including the infocenter documentation).
The most recent answer I could find is this:
Has there been any change with the latest releases of BigFix on this point?
Thanks!
Bret
I’m not an expert on clustering, but as far as I know, if you configure your BigFix server to point at a cluster, there is a cluster IP or name that the whole cluster sits behind, and connections to it get routed to whichever node is active at the time. BigFix wouldn’t even realize it’s talking to a cluster; once it connects to that cluster IP, it doesn’t care what’s on the other side as long as authentication works.
If set up correctly, your cluster should show up as one single server. Do you know the name/cluster IP address of your cluster so that you can enter it directly?
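For illustration only, here is a minimal connectivity sketch along those lines, assuming pyodbc, Windows authentication, and a placeholder listener name SQLCLUSTER01; the point is just that the connection string targets the cluster name rather than an individual node, and SQL Server itself tells you which node answered.

```python
# Minimal connectivity check against a SQL cluster's virtual name.
# SQLCLUSTER01 is a placeholder; BFEnterprise is the usual BigFix database
# name, but adjust for your environment.
import pyodbc

conn_str = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=SQLCLUSTER01;"          # cluster/listener name, not a node name
    "DATABASE=BFEnterprise;"
    "Trusted_Connection=yes;"
)

with pyodbc.connect(conn_str, timeout=10) as conn:
    row = conn.cursor().execute("SELECT @@SERVERNAME").fetchone()
    # @@SERVERNAME reveals which physical node actually answered.
    print("Connected; active node reports itself as:", row[0])
```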
I have heard recommendations not to use a remote database with BigFix, and I have also heard recommendations to use one, which I know is done in some environments.
Using a Remote Database on a SQL Cluster should work, but I’m not certain if it is supported or recommended.
If you have a small infrastructure, you shouldn’t have issues. If you have a very large BigFix infrastructure with 100,000+ endpoints or 100+ simultaneous console operators and you are going to use a SQL cluster, it should probably be dedicated to BigFix and nothing else.
The IOPS for the database and the FillDB location matter quite a bit for BigFix, especially for larger infrastructures.
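As a rough illustration (not a proper benchmark), a quick sketch like the one below can give you a feel for write latency on the disk hosting the FillDB buffer directory; the path is a placeholder, and a real IOPS tool would be far more rigorous.

```python
# Crude write-latency probe for the disk hosting the FillDB buffer directory.
# FILLDB_DIR is a placeholder; point it at your actual FillDB location.
import os, time, statistics

FILLDB_DIR = r"C:\Program Files (x86)\BigFix Enterprise\BES Server\FillDBData"
samples = []

for i in range(200):
    path = os.path.join(FILLDB_DIR, f"iops_probe_{i}.tmp")
    start = time.perf_counter()
    with open(path, "wb") as f:
        f.write(os.urandom(4096))   # one 4 KiB write, roughly one small IOP
        f.flush()
        os.fsync(f.fileno())        # force it to disk so we time real I/O
    samples.append((time.perf_counter() - start) * 1000)
    os.remove(path)

print(f"median write latency: {statistics.median(samples):.2f} ms")
print(f"p95    write latency: {sorted(samples)[int(len(samples) * 0.95)]:.2f} ms")
```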
It is my understanding that neither SQL clustering nor Always On is supported: IEM with Remote Data Base on MS SQL cluster
@strawgate is correct. We recently migrated over to a SQL 2014 cluster and found that it works great until you try to take anything out of the cluster and reboot. BigFix seems to hang onto its connections, which causes errors to be produced everywhere. It doesn’t seem to notice when a server is taken out of the cluster and that it should be making a new connection, so it holds on until the SQL server stops, causing errors in Web Reports and the console, which in turn causes our scheduled reports not to be sent. It’s quite a hassle actually, but I currently have no other alternative.
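If you want to watch that symptom yourself, a rough sketch like this (assuming pyodbc and the same placeholder cluster name as above) just loops a trivial query on one long-lived connection and reports when it goes stale during a failover, which is essentially the behavior described.

```python
# Loop a trivial query on one long-lived connection and log when it breaks,
# e.g. while a node is being taken out of the cluster. SQLCLUSTER01 is a
# placeholder for the cluster/listener name.
import time
import pyodbc

conn_str = ("DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=SQLCLUSTER01;DATABASE=BFEnterprise;Trusted_Connection=yes;")
conn = pyodbc.connect(conn_str, timeout=10)

while True:
    try:
        conn.cursor().execute("SELECT 1").fetchone()
        print(time.strftime("%H:%M:%S"), "ok")
    except pyodbc.Error as exc:
        # An existing connection does not follow the failover; it has to be rebuilt.
        print(time.strftime("%H:%M:%S"), "connection lost:", exc)
        conn = pyodbc.connect(conn_str, timeout=10)
    time.sleep(5)
```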
For the least amount of pain all around, you should put SQL on the BigFix host on fast disk and allocate enough additional CPU and memory for both SQL and BigFix.
Remote DB means manual upgrades for every release of BigFix. Remote DB introduces additional latency.
Remote DB allows easier administration by your DB folks, which you generally don’t want; nobody should be poking around in the BigFix DB.
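To put a number on that extra latency, a sketch along these lines (again assuming pyodbc and a placeholder server name) times a trivial round trip to the remote database; the same query against a local SQL install is typically well under a millisecond.

```python
# Time a trivial round trip to the database server to gauge the latency a
# remote DB adds to every query. REMOTE_SQL is a placeholder host name.
import time
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=REMOTE_SQL;DATABASE=BFEnterprise;Trusted_Connection=yes;",
    timeout=10,
)
cur = conn.cursor()

times = []
for _ in range(100):
    start = time.perf_counter()
    cur.execute("SELECT 1").fetchone()
    times.append((time.perf_counter() - start) * 1000)

print(f"average round trip: {sum(times) / len(times):.2f} ms")
```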
Sorry to dredge this topic back up but we are evaluating our options for DR of BigFix and the documentation and forums only seem to provide a lot of information on what doesn’t work and what’s not supported.
Our current root server is a physical box with local SQL DB and we have a physical mirror of it at another location. We had planned on using DSA but after meeting with IBM they told us that DSA requires sub 10ms network latency between the DSA nodes so they would not recommend it. Our latency between the two locations is 15-20ms and we learned that DSA is more of a local DR option and not for geographically dispersed root servers.
We’re looking at possibly building out two new root servers as VMs and using DSA locally, using VM HA, or a combination of the two.
Can anyone with a similar setup provide any insight or recommendations on DR options?
Hi.
I’d use DSA.
I’m not sure where the requirement for <10ms latency comes from – that doesn’t really make any sense.
DSA is not real-time replication, it’s scheduled replication and thus you just need sufficient bandwidth between the two sites to replicate the information.
A 10ms latency would allow you a MAX of roughly 1,860 miles (at the speed of light) between your HA zones, which doesn’t really make sense.
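For the curious, the back-of-the-envelope math behind that figure, treating the 10 ms as one-way propagation at the speed of light in a vacuum (real fiber and routing only make the bound tighter):

```python
# Rough distance bound implied by a 10 ms one-way latency at the speed of light.
SPEED_OF_LIGHT_MILES_PER_SEC = 186_282
one_way_latency_sec = 0.010

max_distance_miles = SPEED_OF_LIGHT_MILES_PER_SEC * one_way_latency_sec
print(f"~{max_distance_miles:,.0f} miles")   # ~1,863 miles, before any fiber or routing overhead
```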
Many organizations have multiple DSA members spread across the globe with different replication frequencies depending on the bandwidth available.
Agreed, one of my DSA servers is about 1500 miles and 100ms away. Haven’t had any issues with that, other than initial replication taking a weekend to complete.
Be sure to reduce the replication frequency using the besadmin tool.
Yeah, we were quite taken aback that IBM would not recommend BigFix’s own built-in solution for DR/failover. Instead they suggested a combination of either SQL mirroring or a full SQL DB restore to another server that was pre-configured and set up to act as the existing root server, and then a DNS change to point to the new server when a failover was needed.
I’ve tested DSA replication for over 2 weeks between my two servers and found no issues with synchronization all the way down to 5 minute intervals, which is acceptable for our needs. So DSA is back on the roadmap.
Thank you both for your input!
For additional context, DSA can introduce a fair bit of overhead. As scale increases, the requirements for high bandwidth and low latency also increase. It’s certainly possible to leverage DSA across higher latency links depending on scale, however, there will be circumstances where this may not be feasible.
An alternate approach to DSA for disaster recovery purposes is to have a ‘standby’ server. In this model, you would essentially build a replica of the primary instance manually and schedule periodic synchronization of the database and the necessary application elements from the file system. This has the benefit of introducing very little overhead on the primary BigFix server, as well as the ability to function in much more restrictive network conditions, but it accepts the possibility of a small amount of data loss (depending on your synchronization frequency, which might typically be once per day).
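As a very rough sketch of what that periodic synchronization might look like (the paths, share name, and schedule are placeholders, BFEnterprise is the usual database name, and a real setup would also need to handle licensing, site credentials, and server settings properly):

```python
# Nightly sketch: back up the BigFix database and copy key server folders to a
# standby host. Paths and the standby share are placeholders for illustration.
import subprocess
from datetime import date

BACKUP_DIR     = r"D:\BigFixDR"
STANDBY_SHARE  = r"\\standby-server\BigFixDR"
BES_SERVER_DIR = r"C:\Program Files (x86)\BigFix Enterprise\BES Server"

backup_file = rf"{BACKUP_DIR}\BFEnterprise_{date.today():%Y%m%d}.bak"

# 1. Full backup of the BigFix database (BFEnterprise is the standard name).
subprocess.run(
    ["sqlcmd", "-S", "localhost", "-Q",
     f"BACKUP DATABASE BFEnterprise TO DISK = N'{backup_file}' WITH INIT"],
    check=True,
)

# 2. Mirror the backup and the server's file-system elements to the standby box.
#    robocopy returns non-zero exit codes even on success, so no check=True here.
for src, dst in [(BACKUP_DIR, rf"{STANDBY_SHARE}\db"),
                 (BES_SERVER_DIR, rf"{STANDBY_SHARE}\BES Server")]:
    subprocess.run(["robocopy", src, dst, "/MIR", "/R:2", "/W:5"])

# On the standby side, restoring the .bak and starting the pre-configured BigFix
# services (after the DNS change) completes the failover.
```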