Finally, after several months of negotiation, we are migrating our BigFix deployment (Windows Server 2008 + SQL Server) to Red Hat Enterprise Linux 7.4 and DB2.
We have been discussing with the Linux administrator and the DBA what the best practices are for BigFix on Linux and DB2.
For example:
The BigFix database will be around 300 GB, and they recommend creating the BigFix databases with six tablespaces mapped to six 50 GB disks.
The BigFix directory /var/opt/BESServer will be around 200 GB, so they recommend four 50 GB disks.
Could you please share some best practices for configuring DB2 and the filesystems for BigFix?
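For context, a layout like the one the DBAs describe could be expressed with DB2 DDL along these lines. This is purely an illustration: the storage group name, tablespace name, and mount points are hypothetical, and the BigFix installer creates its own schema, so it is only meant to show how automatic storage stripes containers across the six disks.

```sql
-- Hypothetical: six 50 GB disks mounted under /db2/data1 ... /db2/data6.
-- A storage group lets DB2 spread tablespace containers across all paths.
CREATE STOGROUP sg_bfdata
    ON '/db2/data1', '/db2/data2', '/db2/data3',
       '/db2/data4', '/db2/data5', '/db2/data6';

-- Any tablespace created over the storage group is striped across the disks
CREATE TABLESPACE ts_bfdata
    MANAGED BY AUTOMATIC STORAGE
    USING STOGROUP sg_bfdata;
```

With automatic storage (DB2 10.1+), you generally do not need to hand-place one tablespace per disk; the storage group handles the striping.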
@pejosorio - Unless I’m mistaken, the IBM documentation specifying six tablespaces / four disks for data/app respectively refers to physical systems/disks. Assuming that you’re virtualizing and leveraging SAN/SSD, then what becomes more important is:
Implement dedicated storage partitions for OS, database, database logs, BES applications and BES logs.
Implement dedicated virtual SCSI controllers for each storage partition (no exceptions).
Implement redundant 10 Gb/s (or better) fiber connections to said storage.
Don’t use VMware’s storage replication for DR (have seen it cause latency issues with the primary storage in prior deployments).
In large implementations you want sub-0.5-to-1.0 millisecond read/write I/O latency, so if your budget permits I would strongly suggest upgrading to redundant flash storage.
To @masonje’s point, environment-specific tweaking will likely be required. So in addition to your total number of managed endpoints, the size of your BESRelay infrastructure and any supporting services (e.g. Compliance, Inventory, WebReports and WebUI) will absolutely increase your instance’s system/storage I/O requirements.
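To make the “dedicated storage partitions” point concrete, an /etc/fstab on the root server might look something like this. Device names and mount points are illustrative only; the key idea is one LUN/virtual disk per workload so that database, log, and application I/O never contend on the same device:

```
# Illustrative only -- one separate LUN/virtual disk per workload
/dev/sdb1   /db2/data            ext4   defaults,noatime   0 2
/dev/sdc1   /db2/logs            ext4   defaults,noatime   0 2
/dev/sdd1   /var/opt/BESServer   ext4   defaults,noatime   0 2
/dev/sde1   /var/log/BESServer   ext4   defaults,noatime   0 2
```

Each of those would then also get its own virtual SCSI controller, per the point above.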
Schedule backups/stats/reorgs. Reorgs in particular become critical over time.
Details are in the capacity planning guide, but your DBAs should manage this by default.
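As a sketch, that schedule might be wired up in the DB2 instance owner’s crontab like this. The script paths and database name are hypothetical; the underlying db2 backup/runstats/reorg commands are standard DB2 CLP, but check the capacity planning guide for the exact tables and frequency that suit your deployment:

```
# Illustrative crontab for the DB2 instance owner (paths are hypothetical)
# Nightly online backup at 01:00
0 1 * * *   /opt/scripts/db2_backup.sh     # wraps: db2 backup db BFENT online ...
# Weekly statistics refresh, Sunday 03:00
0 3 * * 0   /opt/scripts/db2_runstats.sh   # wraps: db2 runstats on table ...
# Monthly reorg on the 1st at 04:00
0 4 1 * *   /opt/scripts/db2_reorg.sh      # wraps: db2 reorg table ...
```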
Both a local and a remote database can work well. You will pay some overhead for remote, but this is typically a few percent for standard operations. We tend to see the largest impact during upgrade operations where we migrate a large amount of data (something we try to avoid, but it has been necessary for things like Unicode conversion).
Once you have it set up, feel free to contact me and we can do a health check for you.
By this do you mean “drives” as vSphere presents it? I’ve never seen this recommendation before, but it certainly makes sense for high I/O applications.
Within vCenter, when configuring a guest with multiple drives/partitions, the default is a single virtual SCSI controller. However, with high-I/O root BES server implementations it’s critical to configure each drive/partition with a dedicated virtual SCSI controller. This is of course in addition to standard virtualization best practices like redundant fiber storage connections, high-speed storage media (i.e. SSD or flash), etc.
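In .vmx terms (the same thing you see as “SCSI controller 0/1/2…” in the vSphere client), a one-controller-per-disk layout looks roughly like the fragment below. Controller numbering, the paravirtual (pvscsi) controller type, and the .vmdk file names are illustrative; you would normally set this up through the vSphere UI rather than by hand-editing the .vmx:

```
scsi0.present = "TRUE"
scsi0.virtualDev = "pvscsi"
scsi0:0.fileName = "os.vmdk"

scsi1.present = "TRUE"
scsi1.virtualDev = "pvscsi"
scsi1:0.fileName = "db2_data.vmdk"

scsi2.present = "TRUE"
scsi2.virtualDev = "pvscsi"
scsi2:0.fileName = "db2_logs.vmdk"
```

The point is simply that each high-I/O disk hangs off its own controller (scsi0, scsi1, scsi2, …) instead of all sharing scsi0.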