BFI Increase Data Import Speed

Hello

We are running BFI version 10.0.7.0 on a Windows server with SQL Server installed locally. We have created 6 scan groups using "mod 6" (for example, computer ID mod 6 = 1 defines one group). Each group contains approximately 7,000 computers. The import takes an average of 7.5 hours to run. SQL Server isn't working very hard, judging by the CPU and RAM used by that process (4.5% CPU and only 300 MB of RAM). Java Platform SE Binary is using almost 3 GB of RAM but very little CPU, less than 1%. I am trying to understand why the import is running so slowly and what can be done from a tuning perspective to increase performance. Is there anything under the Advanced Server Settings that I can adjust to improve it?
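For anyone unfamiliar with the mod-based partitioning described above, here is a minimal sketch of how it spreads endpoints evenly across groups (the function name and ID range are illustrative, not BFI's actual schema):

```python
from collections import Counter

def scan_group(computer_id: int, groups: int = 6) -> int:
    """Return the scan group (0..groups-1) for a computer ID.
    A computer with ID mod 6 == 1 falls into group 1, and so on."""
    return computer_id % groups

# ~42,000 computers spread across 6 groups of ~7,000 each
sizes = Counter(scan_group(i) for i in range(42_000))
print(sizes)  # every group ends up with exactly 7,000 computers
```

Because consecutive IDs cycle through the groups, each group receives an almost identical share of the endpoints, which keeps the per-group import workload balanced.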

In my experience the throughput of the import is largely controlled by the Java heap size setting. The default is very small, so I would definitely increase it, but by how much depends on how big the environment is and how much memory you have. I can't find the official documentation for it right now, but it is done via -Xmx#IntegerVALUE#m in #InstallPath#\wlp\usr\servers\server1\jvm.options. Beyond that, there are a number of things you can tweak on both the BFI configuration and SQL Server side to improve imports; a bunch of categories are available for it here.
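For example, raising the heap to 8 GB would look like the fragment below (the 8192 value is purely illustrative; size it according to your endpoint count and available memory):

```
# #InstallPath#\wlp\usr\servers\server1\jvm.options
-Xmx8192m
```

The BFI server (Liberty) service needs a restart for a jvm.options change to take effect.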

1 Like

The table at the following link shows the Xmx setting based on the number of endpoints:
https://help.hcltechsw.com/bigfix/10.0/inventory/Inventory/planinconf/r_hardware_requirements_server_windows.html

The import log shows the time taken for each ETL step; look for the start/success messages corresponding to each step. This will help identify which ETL steps are taking the longest.
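If you want to tally step durations rather than eyeball the log, a small script along these lines can help. Note the log line format below is hypothetical; adapt the regex and timestamp format to what your import log actually contains:

```python
import re
from datetime import datetime

# Hypothetical import-log excerpt; real BFI import logs will differ in format.
LOG = """\
2024-01-01 01:00:00 INFO: START: ETL from Datasource
2024-01-01 03:30:00 INFO: SUCCESS: ETL from Datasource
2024-01-01 03:30:00 INFO: START: Software Catalog Processing
2024-01-01 08:15:00 INFO: SUCCESS: Software Catalog Processing
"""

LINE = re.compile(r"^(\S+ \S+) INFO: (START|SUCCESS): (.+)$")

def step_durations(log: str) -> dict:
    """Return hours spent in each ETL step, keyed by step name."""
    starts, hours = {}, {}
    for line in log.splitlines():
        m = LINE.match(line)
        if not m:
            continue
        ts = datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S")
        if m.group(2) == "START":
            starts[m.group(3)] = ts
        else:
            delta = ts - starts.pop(m.group(3))
            hours[m.group(3)] = delta.total_seconds() / 3600
    return hours

print(step_durations(LOG))
# e.g. {'ETL from Datasource': 2.5, 'Software Catalog Processing': 4.75}
```

Sorting the resulting dictionary by value immediately shows which step dominates the 7.5-hour run.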

Opening a ticket with the HCL BFI support team will also allow them to review and analyze the ETL steps.

2 Likes