Fine-tune BigFix configs for production patching

Hi Friends,
We are in the process of onboarding BigFix patching to production. We have a few queries below:
• If we execute a baseline at 9 PM from BigFix, the client machine sometimes starts it late, e.g. at 9:05, 9:10, or 9:15 PM. How do we avoid this? If we execute at 9 PM, the client should start at 9 PM itself. What do we need to configure to achieve this?

• If we use a scheduled patch baseline with the pre-cache option enabled on the baseline, when does the pre-cache start and complete?
For example: if at 3 PM we schedule the baseline patch for 9 PM, at what time will the pre-cache/download start and finish, and how can we make sure the packages were downloaded/pre-cached successfully? Is there any place we can check to confirm the download completed before the actual patching starts?
• Also, can we see from the web console that the downloads/pre-cache completed successfully? We cannot log in to each server to check whether the packages were downloaded.
• Again, if the patch is scheduled for 9 PM, how can we ensure it starts at exactly 9 PM without any delay? We have very critical servers and very minimal downtime. Can we enforce the baseline to start at exactly 9 PM?
• Lastly, I want to reboot the servers after the patch succeeds. How can I achieve this in BigFix, i.e. check that the patch was successful and only then reboot the servers?

Thanks in advance guys.

There could be several reasons for the delay in execution, including:

  1. Action propagation delay: the action takes time to propagate through the relay hierarchy after it is initiated from the console
  2. UDP communication blocked: if the client doesn’t receive the UDP notification, it only picks up the action at its next check-in
  3. Client in a different time zone: may cause scheduling offsets
  4. Client responsiveness: if the client is busy or under load

Recommendations:

  • Enable Command Polling on clients (e.g., every 15–20 minutes) for environments where UDP is blocked.
  • Consider enabling Persistent Connections to maintain continuous communication with relays.
  • Schedule the baseline in advance (a few hours ahead) rather than just-in-time.
  • Use the execution option “Start client downloads before constraints are satisfied”; this lets clients prepare ahead of time.
  • Use UTC zone to avoid client local time zone conflicts.

Note: Even with best practices, there can be minor delays due to network latency or client performance. Always define a reasonable deployment window to account for such variations.
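To illustrate the first recommendation, here is a minimal sketch that generates the BigFix action script for enabling command polling on clients. The two setting names are the standard client settings for command polling; the helper function itself and the 15-minute default are illustrative, so verify the values against your own environment before deploying.

```python
# Sketch: generate a BigFix action script that enables command polling.
# _BESClient_Command_Poll_Enable and _BESClient_Command_Poll_Interval are
# the standard client settings; the 900-second (15-minute) interval matches
# the recommendation above.

def command_polling_actionscript(interval_seconds: int = 900) -> str:
    # "{parameter "action issue date" of action}" timestamps the setting
    # with the action's issue date, a common convention in action scripts.
    effective = '{parameter "action issue date" of action}'
    lines = [
        f'setting "_BESClient_Command_Poll_Enable"="1" on "{effective}" for client',
        f'setting "_BESClient_Command_Poll_Interval"="{interval_seconds}" on "{effective}" for client',
    ]
    return "\n".join(lines)

print(command_polling_actionscript(900))
```

The resulting text can be pasted into a custom action (or posted via the REST API) and targeted at the clients behind a UDP-blocking firewall.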

When using the Pre-Cache Wizard, content is downloaded and cached on the relay servers, not directly on clients. This ensures quicker delivery when the action starts, though actual delivery speed also depends on how your network is set up (relay locations, network bandwidth, etc.).

If you enable “Start client downloads before constraints are satisfied” in the action settings while scheduling the baseline, the client will start downloading the required payloads in advance, based on start time constraints.

  • In the BigFix Console or WebUI, check the action status of each client:

    • You will see progress indicators such as:
      • Evaluating
      • Pending Downloads
      • Waiting
  • Clients in “Pending Downloads” or “Waiting” state before the start time confirm that content is being downloaded in advance.

Yes, this can be monitored from the BigFix Console or WebUI:

  • Use the Action Status tab in the BigFix Console or WebUI.
  • You can see the status per computer indicating whether it is waiting, downloading…

Follow these best practices:

  • Enable command polling
  • Use persistent connections
  • Pre-cache content using “start downloads before constraints”
  • Deploy the action ahead of time with a start constraint set to 9 PM
  • Use UTC time zone

While scheduling the deployment action, under the “Post-Action” tab:

  • Enable the option: “Restart the computer after the action completes”

I’ve tried to cover as much as I could, though it’s been a while since I last worked directly on patching. For more in-depth documentation and guidance, I recommend syncing with your Technical Advisor or raising a support case with HCL for official assistance.


Hi bro,
Thank you for your advice.
We will take note of the polling and persistent connections for instant execution.
Sorry, I described the pre-cache incorrectly; what I meant was starting the download of the required payloads in advance.
Regarding the above, we are getting inconsistent output. At 11 AM we scheduled the patch for 5 PM with the above features enabled, but we get inconsistent results like:

  1. Status stuck at Waiting the whole time
  2. Not Relevant
    Also, how can we make sure the downloads were cached on the client successfully? Is there any place in the console where we can confirm this before the actual patch?
    And on the client machine, where can we see that the packages were cached successfully?
    As per the documentation we checked this path, but couldn’t find anything under BES Client\__BESData\__Global\__Cache\Downloads .

For the reboot,
we want to check whether the patch status is successful or not. If successful, then we reboot; if it completely failed (or any particular package failed), then we may not reboot.

Yes, we already raised a case with HCL and are awaiting a response. In the meantime, we are checking here for any valuable opinions.

Thanks,
Riz.

You should wait for the Support Team’s analysis, as many moving parts require their deep dive. In the meantime, here are my recommendations:

  • Verify your time conversion if you’re not using UTC.
  • Review Relay and client logs for “Server returning old data,” which can prevent content gathering and cause an action to go from Waiting to Not Relevant.
  • Review Relay health check dashboard
  • Check UDP connectivity to ensure no firewalls or network devices are blocking UDP.
  • Identify the stuck component: in the Console, open Action Info for your deployment and examine Component Status to see which package or download remains Waiting.
  • Extend your caching window: a five‑hour window may be too short. Consider scheduling payload downloads 12–24 hours before the maintenance window.
  • (Optional) Confirm the client cache by running this relevance on a few test machines:
(names of it, modification times of it) of files of folder (pathname of data folder of client & "\__Global\__Cache\Downloads")
  • Verify that upstream relays have the content under:
%ProgramFiles(x86)%\BigFix Enterprise\BES Relay\wwwroot\bfmirror\downloads\sha1
  • Automate your reboot logic with REST API:
    If the overall action status is Success, trigger a reboot.
    If any package fails, skip the reboot and alert or retry as needed.

  • Alternatively, create a custom Web Report that displays the status of your selected actions, highlights failures, and then you can target those endpoints accordingly.
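The reboot-gating idea from the REST API bullet above can be sketched as follows. Fetching per-computer statuses (e.g. from GET /api/action/&lt;id&gt;/status) is stubbed out here, and the status strings are illustrative; check the exact values your deployment reports before relying on them.

```python
# Sketch of the reboot-gating logic: reboot only the machines whose
# action status indicates success, skip the rest. The status strings
# "Completed"/"Fixed"/"Failed" are illustrative assumptions -- verify
# against what your console actually reports.

SUCCESS_STATES = {"Completed", "Fixed"}

def computers_safe_to_reboot(statuses: dict) -> list:
    """statuses maps computer name -> reported action status.

    Returns the computers eligible for the follow-up reboot action;
    here we assume one aggregate status per computer.
    """
    return [name for name, state in statuses.items() if state in SUCCESS_STATES]

# Example: two servers patched successfully, one failed -- only the
# first two would be targeted by the reboot action.
statuses = {"srv01": "Completed", "srv02": "Fixed", "srv03": "Failed"}
print(computers_safe_to_reboot(statuses))  # ['srv01', 'srv02']
```

The returned list would then be used to target a separate restart action, so a failed server is never rebooted automatically.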


Hi khurava,
Thanks for your great help.

HCL already replied with the option of adjusting the polling interval, and for the reboot they suggest going with the ‘Restart Needed’ Fixlet.

We are checking internally also.


Lots of really good suggestions here, but I do want to warn about command polling. It’s a tricky setting and should be examined carefully. On internal networks, UDP being blocked (on port 52311) is in and of itself a problem and should be fixed in order to let BigFix function as designed. Command polling set too low can have an adverse impact on the client processing “loop” and on client behavior overall. This can lead to machines appearing greyed out in the console and other unintended side effects. IMHO command polling should not be set to anything below 1 hour. I’ve set it too low before and created more issues than I solved. As with anything… YMMV.


Hi dmccalla,
Support has provided the polling method as an alternative; however, our TAM has advised that polling will cause drastic delays when evaluating 100+ components, so we are back in the initial phase of doubts about what to do next to address my queries.
Hi Khurava,
Support says that pre-caching downloads before constraints are satisfied is not supported for a multi-package baseline on Linux machines, since Linux does dependency resolution only at runtime; they advise it does not work the same way as on Windows.

For Linux patching in BigFix, there are two main methods:

  1. Plugin-Based Patching: This is the default method, which you are currently using; it will always consume more time.
  2. Use Existing Local Repositories: You can bypass the plugin overhead by using BigFix to push simple commands like yum update or yum download, while relying on your internal repos or Satellite server for patch downloads.
  • It’s faster than plugin-based patching
  • Puts less load on BigFix infrastructure
  3. Optional, Pre-Cache with Plugin: If sticking with plugin-based patching, consider automating pre-caching of expected patches via the REST API, or someone can push these pre-caching actions to the BigFix infrastructure whenever needed.

Linux patching (unlike Windows) always involves more overhead. Testing all methods is crucial to decide what works best for your environment.
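To illustrate method 2, here is a minimal sketch of building an action script that drives the local repositories instead of the download plug-in. The yum path, flags, and the helper function itself are assumptions; adjust them for your distribution and test before production use.

```python
# Sketch: build a BigFix action script that patches via the existing
# local yum repos (method 2 above). The /usr/bin/yum path and -y flag
# are assumptions -- adjust for your distribution.

def yum_patch_actionscript(packages=None):
    # "wait" runs the command and waits for it to finish before the
    # action continues; a full update is issued when no packages given.
    if packages:
        cmd = "/usr/bin/yum -y update " + " ".join(packages)
    else:
        cmd = "/usr/bin/yum -y update"
    return f"wait {cmd}"

print(yum_patch_actionscript())                       # full update
print(yum_patch_actionscript(["openssl", "kernel"]))  # selected packages
```

Because the packages come from your internal repos or Satellite server, only the short command text travels through the BigFix infrastructure.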

Please also be aware that client-side pre-cache occurs before relevance checks, so it will cache the entire baseline, which may require more space than expected, not just what’s really needed for that client.

Validate your BigFix client settings for how much space is available for the cache.
(The below are some settings i have worked with, but support may provide better advice)

_BESClient_Download_PreCacheStageDiskLimitMB
_BESClient_Download_NormalStageDiskLimitMB
_BESClient_Download_MinimumDiskFreeMB
_BESClient_Download_RetryLimit
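Since the whole baseline may be cached, it is worth sanity-checking the total payload size against the stage limit before the window. This is a rough helper, not a BigFix feature: the component sizes would come from the console/fixlet metadata, and the limit value must be read from the client's actual _BESClient_Download_PreCacheStageDiskLimitMB setting.

```python
# Sketch: check whether a baseline's total payload fits within the
# client's pre-cache stage disk limit. Sizes and limit are inputs you
# gather yourself; nothing here queries BigFix directly.

def fits_precache_limit(component_bytes, limit_mb):
    total_mb = sum(component_bytes) / (1024 * 1024)
    return total_mb <= limit_mb

# Example: a baseline with 150 MB + 200 MB payloads (350 MB total)
# would overflow a 250 MB stage limit.
sizes = [150 * 1024 * 1024, 200 * 1024 * 1024]
print(fits_precache_limit(sizes, limit_mb=250))  # False
print(fits_precache_limit(sizes, limit_mb=400))  # True
```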

I’d have to test again, but I don’t believe this to be the case; only those components that are relevant should cache their downloads.