Current BigFix (Lifecycle and Compliance)
Server 11.0.3
Total clients 1,396
No Relay
Total remote sites > 200
Each remote site contains 6 - 20 clients.
Generally, remote subnets connect to Operations Centres, which in turn connect to the data center. Most endpoint-to-Operations-Centre links run at 2 Mbps; Operations-Centre-to-data-center links range from 20 Mbps to 1 Gbps. There is no Internet connection.
After upgrading the server from 9.5.21 to 11.0.3, we noticed a significant network issue affecting normal daily business operations.
Massive client-to-server traffic occurred immediately after each BigFix AirGap operation and lasted for about three days; each client can send 20 - 200 MB of data to the server.
The infrastructure design doesn't allow setting up BigFix relays, so that solution is out of the question. Upgrading the bandwidth is also not feasible in the near future.
I have created three HCL support tickets to troubleshoot this problem. So far, the proposed workarounds are:
Increase the polling interval to 24 hours
Stop the besclient service at 5am and start it up at 12am, effectively allowing it to run only during non-revenue hours.
Run BigFix AirGap only when patching is needed, which is typically once every 6 months
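For what it's worth, the first workaround is controlled by client settings rather than a server-side option. A hedged sketch (setting names as documented for the BigFix client; the values are illustrative only and are normally pushed via a Console action):

```
_BESClient_Comm_CommandPollEnable = 1
_BESClient_Comm_CommandPollIntervalSeconds = 86400   ; poll once per 24 hours
```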
Recently I met some BigFix experts who proposed using aggregation and converting some endpoints to relays.
Are these workarounds feasible?
Are there any documentation detailing how to set up aggregation and conversion of endpoints to relays?
Has Support explained what exactly is causing the traffic? Is it scheduled jobs/deployments; is it BigFix clients reporting property results up the chain; is it, for example, BigFix Inventory scan results, or something else?
I would start with that first and then consider the options, because it does make a difference. If, for example, you have a ton of custom properties that are collecting and reporting back a lot of data, then you could adjust frequencies and maybe apply filters to trim the data. If the traffic is caused by deployments, then, if I recall correctly, there are pre-caching options which you may be able to schedule in off-hours only. Just saying: before you go with something drastic, there may be something more specific that will help, based on the actual culprit…
As a general solution - has it been proposed, or have you considered, peer nesting? PeerNest is similar to a relay solution, but it stays dynamic rather than dedicated. It has its advantages and disadvantages, which may or may not suit your scenario, but it is definitely worth considering as it may solve the problem.
Unfortunately, HCL Tech Support was not able to identify what traffic is responsible for the large client-to-server transfers.
I can confirm that it is not caused by scheduled jobs, deployments, or inventory scans, because during the traffic surges there were no scheduled jobs, no deployments, no actions, and no inventory scans. What is very clear is that the surge comes right after BigFix AirGap is run on the server. I also disabled the custom properties and analyses; these didn't make much difference.
@orbiton,
How do I find out whether the traffic is caused by site gathering or something else?
Are you sure the traffic is from the client to the server (upload)? Because as you describe it, I would expect somewhere around that amount of download traffic to the client - where the client initiates the connection, but downloads the new site contents from the server.
From the traffic captures of the firewalls, we can confirm the large traffic is client-to-server and the surge comes right after an AirGap is run on the server.
After importing the air gap, the client will get a UDP message from the parent relay/server that there is new content; the client will then initiate a TCP connection to the parent relay/server and download the diff of those sites. If you have the timestamps and source devices from the network capture, you can check the BigFix Client logs to see what activity was happening on the client side.
Can you please verify if that is the activity you are seeing on the Client log files?
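To correlate the network capture with client-side activity, you can filter the BESClient log for gather-related lines around the surge window. A minimal sketch in Python, under stated assumptions: the `At HH:MM:SS ...` line shape and the sample lines below are illustrative only, not copied from a real log, so adjust the pattern to your actual log format.

```python
import re

def gather_events(log_lines):
    """Return (timestamp, message) pairs for log lines that mention gathering.

    Assumes lines shaped like 'At HH:MM:SS <tz> - <message>'; the pattern is
    a guess to be verified against your own BESClient logs.
    """
    pat = re.compile(r"At (\d{2}:\d{2}:\d{2}).*?(Gather\w*.*)")
    events = []
    for line in log_lines:
        m = pat.search(line)
        if m:
            events.append((m.group(1), m.group(2)))
    return events

# Illustrative sample lines only -- not real BESClient output.
sample = [
    "At 02:11:05 -0500 - GatherAction (action site)",
    "At 02:11:09 -0500 - Gathering site Enterprise Security",
    "At 02:45:33 -0500 - Report posted successfully",
]
for ts, msg in gather_events(sample):
    print(ts, msg)
```

If the gather timestamps line up with the start of the firewall-observed surge, that points at site sync rather than report uploads.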
I have seen similar issues with various clients over the years. The good news is that it can be fixed with a good relay architecture and careful tuning. I would start with a couple of things:
Run the air gap (gather, I assume) operation more often so that there is less of a delta on the clients. There is a big difference between 30 days' vs 120 days' worth of patch content relevance, for example.
Get some relays. Remember that it doesn't mean you need a dedicated machine. For an environment your size, see if you can piggyback a relay service on a computer at each operations center. If so, test with one ops center and assign the appropriate clients to that relay. Then watch the traffic for that ops center to see if there is a difference. If traffic is still an issue, try some throttling on the same clients and relay.
Be judicious about your client-to-site subscriptions. Every site/content update is going to cause the client to evaluate and report back. If I remember correctly, one of the relay functions is to "bundle" client report data and upload it to the server (or the next relay in a chain).
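Once a relay is running at an ops center, clients can be pointed at it with the standard relay-selection client settings. A hedged example (the hostnames are placeholders; the setting names and the `/bfmirror/downloads/` URL form are the documented convention, but verify against your version's docs):

```
__RelaySelect_Automatic = 0
__RelayServer1 = http://opscenter-relay.example.local:52311/bfmirror/downloads/
__RelayServer2 = http://bigfix-root.example.local:52311/bfmirror/downloads/
```

With manual selection, `__RelayServer2` acts as the fallback if the primary relay is unreachable.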
The AirGap is run once a month, after Microsoft Patch Tuesday. Running AirGap more often than once a month doesn't help us, because the bulk of the updates only arrive after Patch Tuesday; Microsoft, Oracle, Adobe, etc. release updates at almost the same time.
I find that relays help with the download. The problem we are facing is the upload.
How do I reduce the site subscriptions? Where can I see which sites the clients are currently subscribed to?
I suspect the subscribed BigFix sites are contributing to this large upload bandwidth to the server after every BigFix AirGap. Based on these subscribed sites, can anyone tell me which ones are not required?
BES Asset Discovery
BES Inventory and License
BES Support
BigFix Client Compliance Configuration
BigFix Labs
BigFix Manager for Windows Defender
BigFix Remote Desktop for Windows
CIS Checklist for AIX 7_1 RG03
CIS Checklist for MS SQL Server 2012 DB Engine
CIS Checklist for Oracle Linux 9
CIS Checklist for RHEL 8
CIS Checklist for RHEL9
CIS Checklist for Windows 10
CIS Checklist for Windows 2012 DC
CIS Checklist for Windows 2012 MS
CIS Checklist for Windows 2012 R2 DC
CIS Checklist for Windows 2012 R2 MS
CIS Checklist for Windows 2016 MS
CIS Checklist for Windows 7
DISA STIG Checklist for AIX 7.1
DISA STIG Checklist for Internet Explorer 10 RG03
DISA STIG Checklist for Internet Explorer 11 RG03
DISA STIG Checklist for Internet Explorer 8 RG03
DISA STIG Checklist for Internet Explorer 9 RG03
DISA STIG Checklist for RHEL 6 RG03
DISA STIG Checklist for RHEL 7
DISA STIG Checklist for Windows 2008 R2 DC
DISA STIG Checklist for Windows 7
DISA STIG Checklist for Windows 8
DISA STIG Checklist for Windows XP
DISA STIG on Windows 7 v1r2
IBM License Reporting (ILMT) v9
Master Action Site
OS Deployment
Patches for AIX
Patches for ESXi
Patches for RHEL 8
Patches for Windows
Patching Support
Power Management
QRadar Vulnerabilties
Remote Control
SCM Reporting
Server Automation
Software Distribution
Updates for Windows Applications
USGCB Checklist for Windows 7
Virtual Endpoint Manager
Vulnerabilities to Windows Systems
Vulnerability Reporting
Looking at the synced sites on the client, I notice that there is a site called "Enterprise Security" with a size of 818 MB; the site contains a file called SupersededControlled.fxt which is 455 MB.
Now I recall that at one point we suddenly could not find superseded fixlets in the BigFix Console, so we asked HCL to make them available to us. From that day onwards we started to receive complaints about the network bandwidth being overwhelmed, each time right after we ran AirGap.
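To spot heavy sites like this without inspecting folders by hand, you can total the per-site folder sizes under the client's __BESData directory. A minimal Python sketch; the demo below builds a throwaway directory because the real __BESData path varies by install (on Windows it is typically under the BES Client install directory, but verify for your environment):

```python
import os
import tempfile

def site_sizes(besdata_dir):
    """Map each site folder directly under __BESData to its total size in bytes."""
    sizes = {}
    for site in os.listdir(besdata_dir):
        site_path = os.path.join(besdata_dir, site)
        if not os.path.isdir(site_path):
            continue
        total = 0
        for root, _dirs, files in os.walk(site_path):
            for f in files:
                total += os.path.getsize(os.path.join(root, f))
        sizes[site] = total
    return sizes

# Self-contained demo on a throwaway directory; point besdata_dir at the
# real __BESData folder in practice.
demo = tempfile.mkdtemp()
os.makedirs(os.path.join(demo, "Enterprise Security"))
with open(os.path.join(demo, "Enterprise Security", "SupersededControlled.fxt"), "wb") as fh:
    fh.write(b"x" * 1_000_000)  # stand-in for the real 455 MB file
for site, size in sorted(site_sizes(demo).items(), key=lambda kv: -kv[1]):
    print(f"{site}: {size / 2**20:.1f} MB")
```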
Sure, that's the internal name of the "Patches for Windows" site. It's not surprising that syncing the site would generate some traffic, but that should be "download" traffic (client downloads from Relay, Relay downloads from Root). This is why I was trying to hone in on the direction of the traffic earlier.
If this is in fact site download traffic, then the earlier recommendations like those from Viking hold. Subscribe clients to only the sites you need, and distribute Relays to cache locally. These don't need to be dedicated relay hardware, it's a common scenario to just add a relay service to one or two clients at a remote site and let them cache sites for the others to gather.
The site downloads are compressed on the wire, which may account for you seeing only 20 - 200 MB when there's more than a GB of uncompressed content to download.
One thing that I'm not seeing much (if any) mention of in this topic is bandwidth throttling. Here's a link to the documentation in case it helps: Managing Bandwidth
There are settings available to manage bandwidth for both network traffic downstream (i.e. to the Clients) as well as upstream (i.e. from the Clients).
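As a hedged illustration of what those settings look like (the names below are the commonly cited client and relay throttles, but the values are arbitrary examples; confirm both names and units against the Managing Bandwidth documentation for your version):

```
_BESClient_Download_LimitBytesPerSecond = 65536   ; cap a client's download rate (~64 KB/s)
_BESRelay_HTTPServer_ThrottleKBPS = 512           ; cap what a relay/server serves out
```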
I noticed that the bulk of the BigFix traffic comes from site subscriptions. I have since removed all the compliance sites (CIS Checklists, DISA STIG Checklists) from the External Sites branch, and the custom compliance sites under the Custom Sites branch. For the record, during the May 2025 AirGap I detected uploads of 200 MB from client to server; in the last AirGap on 1 Nov 2025, the largest upload was 80 MB, while the download was 3 GB.
I found that if I change a site's "Computer Subscriptions" to "No computers", the corresponding sub-folder in the client's "__BESData" folder is immediately removed, and if I change it to "Win10", the sub-folder is re-created and its files downloaded on all Windows 10 clients. So this triggers an immediate sync without even running AirGap.
The largest folder is "Enterprise Security", which corresponds to the "Patches for Windows" site. The folder is now 588 MB on Windows 10.
I wonder why the clients need to sync all the sites they are subscribed to with the server. Can I change the "Patches for Windows" site's Computer Subscriptions to "No computers" without affecting patching of the clients?
Negative, the clients need to be subscribed to the patching content sites, else they will not report 'Relevant' or 'Not Relevant' for the fixlets.
They will also not respond to an Action for a fixlet that comes from a site to which they are not subscribed.
Create a custom site that contains only the copies of the Patches for Windows fixlets you require for your remote systems. Then you can unsubscribe the affected systems from the external Patches for Windows site.