Patch reporting

For a specific deployment scenario, today we create a baseline each month that contains the current month’s Windows patching fixlets. With this we can report on the number of servers that are out of compliance (those still relevant to the baseline) via the BES root server REST API, and then deploy the baseline to force patching.
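
As a rough illustration of the kind of report we mean (a sketch only, not our exact script), something like this session relevance submitted to the REST API’s /api/query endpoint returns the count; the server URL, credentials, and baseline title are placeholders:

# Minimal sketch: count computers still relevant to (i.e. not yet compliant with)
# this month's Windows baseline. Server, account, and baseline name are placeholders.
import requests

relevance = (
    'number of applicable computers of bes fixlets '
    'whose (baseline flag of it and name of it = "2021-06 Windows Patching")'
)

resp = requests.get(
    "https://bigfix.example.com:52311/api/query",
    params={"relevance": relevance},
    auth=("api-account", "password"),
    verify=False,  # many root servers use self-signed certificates; adjust as needed
)
print(resp.text)  # BESAPI XML; the <Answer> element holds the count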

For Linux this won’t work due to the large number of fixlets (often more than 500 per month) and the limit on the number of components that can be added to a baseline. Any suggestions here? We can’t use the WebUI Patch Policy because 1) we can’t individually curate the approved content and 2) there is no API to report on compliance. We want to report on compliance through the same mechanism that deploys the patches.

It’s an interesting scenario, to report Compliance against a custom baseline rather than the actual source fixlets, or to manage large baselines for Linux patching.

These sound like several distinct issues.

“The Code is more what you’d call guidelines, than actual rules”. The idea that a baseline is limited to 100 components, or 250, or 500, is for the sake of evaluation time on the client. There’s a lot of variation there; one fixlet with huge / slow-evaluating relevance can have as much impact on the client as dozens of well-tuned fixlets. You didn’t say which Linux distributions you’re interested in, but for instance most of the Red Hat fixlets are based on RPM information, which is straightforward and cached on the client; the RPM queries perform really well, so it’s conceivable you could have much larger baselines without an issue. In contrast, many of the Windows fixlets are based on checking file metadata for dozens (or even hundreds) of files and don’t perform as well as queries against the RPM cache.

If it’s just for reporting purposes, you could add the “approved” fixlets to a very large baseline, but put the baseline in a site that doesn’t have any subscribed computers; then you could base your compliance on a query like
names of applicable computers of source fixlets of components of component groups of fixlets whose (baseline flag of it) of bes custom sites whose (name of it = "Site My Clients Do Not Evaluate")
(but you’d still have the problem of how to deliver those actions).
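
To turn that into a report, a minimal sketch of running that query through the REST API’s /api/query endpoint and reading back the answers (the server and credentials are placeholders):

# Sketch only: submit the session relevance above and print each computer name.
import requests
import xml.etree.ElementTree as ET

QUERY = (
    'names of applicable computers of source fixlets of components of '
    'component groups of fixlets whose (baseline flag of it) of bes custom '
    'sites whose (name of it = "Site My Clients Do Not Evaluate")'
)

resp = requests.get(
    "https://bigfix.example.com:52311/api/query",   # placeholder root server
    params={"relevance": QUERY},
    auth=("api-account", "password"),               # placeholder credentials
    verify=False,
)

# Each <Answer> element in the BESAPI reply is one computer that is still
# relevant to (not yet compliant with) something in the big baseline.
for answer in ET.fromstring(resp.content).iter("Answer"):
    print(answer.text)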

Aside from Baselines, you could use other methods to mark a Fixlet as “Approved”. Perhaps you’d create custom copies of them into a Custom Site, then you could take actions and report Compliance against all the fixlets in that site?
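
As a sketch (the custom site name here is made up), the reporting side of that approach could be session relevance along these lines, run through /api/query as above:

# Sketch: per-fixlet compliance for everything copied into the "approved" custom site.
# "Approved Linux Patches" is a hypothetical site name.
relevance = (
    '(name of it, number of applicable computers of it) of fixlets of '
    'bes custom sites whose (name of it = "Approved Linux Patches")'
)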

Or maybe Globally Hide the ones that are not approved? You can make newly gathered fixlets arrive “Hidden by Default” via BESAdmin, so nothing becomes visible until you un-hide it in the Console as part of approval. I have not checked this, but I’d expect that Patch Policy would ignore Globally Hidden fixlets. Test that; but even if Patch Policy doesn’t respect fixlet hiding, you could use the Console to Globally Hide / Un-Hide fixlets as you approve or reject them, have your operators select “all the fixlets they can see that are relevant” for their actions, multi-action groups, or baselines, and report compliance based on bes fixlets whose (globally visible flag of it).
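
For instance (again only a sketch; the site name is a placeholder for one of the Linux patch sites), the compliance query could look like:

# Sketch: report only against un-hidden ("approved") fixlets in a patch site.
relevance = (
    '(name of it, names of applicable computers of it) of fixlets '
    'whose (globally visible flag of it) of bes sites '
    'whose (name of it = "Patches for RHEL 8")'
)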

Do any of these seem worth further exploration? I can think of at least two other options, but they’d require some custom coding to track fixlet approvals as dashboard variables, or taking it out of the Console entirely and building an approval / orchestration using REST API (I know of at least a couple of customers who have done that, but our goal would really be to add the requisite features to WebUI so everyone benefits).

The Linux in question is RHEL 6, 7, and 8, and OEL 7, but only the current month’s patches. If we investigated creating a single large baseline (for reporting and possibly deployment), would you suggest we first create it, limit its relevance to a few computers, and then push the “TROUBLESHOOTING: Enable BES Client Usage Profiler” task to them to monitor the evaluation time of the large baseline?

And there is an API for the WebUI Patch Policy to see which computers are relevant for the policy, right?

WebUI itself does not, as far as I know, expose an API of its own; rather, it is mostly a client of the REST API.

It would be difficult to determine which systems are relevant before activating the policy; Autopatch does not create Baselines, rather it creates a Multiple Action Group directly from the source fixlets, so there is no “baseline” to check for applicable computers. You could of course configure the policy to generate its actions ahead of time (by default I believe it’s 12 hours ahead, but you could have the actions created earlier) and then report the applicability status on the actions themselves.
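
If you go that route, a rough sketch of pulling per-computer status off one of the generated actions over the REST API (the action ID, server, and credentials are placeholders):

# Sketch: report the status of an already-generated patch-policy action.
import requests

resp = requests.get(
    "https://bigfix.example.com:52311/api/action/12345/status",  # placeholder action ID
    auth=("api-account", "password"),                            # placeholder credentials
    verify=False,
)
# The BESAPI reply lists each targeted computer with its status for the action
# (for example "Fixed", "Pending Downloads", or "Not Relevant").
print(resp.text)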

For the Linux patch baselines, I’d probably create separate baselines for each flavor - one for RHEL 6, one for 7, one for 8, and another for OEL 7. The main reason is that you’ll probably find you need to use the Multi-Package Baseline feature to speed up patching time, and it requires different tasks at the start and end of your baseline for each OS version. That should also help with the baseline sizes.

If you haven’t checked it out yet, I’d also suggest switching to the Patch Management Domain at the bottom-left of the console to build your patch baselines. That helps a lot in filtering out “Superseded” fixlets. I used BigFix for about seven years without ever looking in the Patch Management domain, but it really does make baseline building simpler than the “All Content” domain where I usually stay.

I’ve never heard of the Multi-Package Baseline feature (https://help.hcltechsw.com/bigfix/9.5/patch/Patch/Patch_RH/c_multiple-packagebaselineinstallation.html). Is it only for baselines or any type of Multiple Action Group?

It’s specifically for RHEL patching. If you are sending Multiple Action Groups from the API, I don’t see a reason it couldn’t be included; but from the Console, when you select fixlets and create a Multiple Action Group, you can’t control the component order, so I don’t think it would be useful there.

The basic idea of the multi-package baseline is that a standalone RPM file doesn’t carry enough information to resolve dependencies for a package (or a set of packages); you need an RPM repository for that, whose metadata records which RPM packages and versions provide which features.

To get that to work on the client, we need to download a copy of the current RPM repository metadata, then run some RPM commands to find dependencies, and then download those dependencies. With a standalone fixlet, we download that metadata, create a cached RPM repository locally, resolve our dependencies, install them, and then delete the cached RPM repository.

If we had three fixlets in our baseline, we’d download the RPM repo metadata individually for each component, and then throw it away afterward.

What the Multi-Package baseline does is download the RPM repo data once, at the start of the baseline; reuse that cache to resolve dependencies for all the RPMs for all the fixlets in the baseline; download all of the RPMs needed for all of the fixlets in the baseline (and their dependencies) into a cache area; and then, at the end of the baseline, install all of those RPM packages at once.

Downloading the repo data once for the baseline, instead of once for each fixlet in the baseline, can save hours of time if you’re using large patch baselines.

Using the multiple-package baseline method requires putting one task at the start of the Baseline (to download the repo data and mark the start of the transaction) and another at the end of the baseline (to commit, running the yum commands to install all of the queued-up RPM packages). It matters that those are the first and last components in the baseline, so it’s difficult to use them from a Multi-Action Group instead of a Baseline.
