My patching process for older patches

I’m just wondering what people’s thoughts about this process are.

In many cases we need to deploy older patches (released before the current year) to clients on our network. What I’ve done is create a site called “Old Patches”, with baselines broken down by year for years that don’t have many patches, or by quarter for years that have a lot. This keeps the number of baseline components to a reasonable level.

In our regular patch site, there is a policy that looks for a custom setting called Last_Patch_Baseline. If it doesn’t exist, or if it exists and is set to 0, the computer is automatically subscribed to the Old Patches site.

Each of the baselines in the Old Patches site has the “baseline will be relevant…” setting disabled for every component except the last one, which is a task that sets Last_Patch_Baseline to the name of that baseline. For the first baseline (the one with the oldest patches), the relevance is set so the baseline becomes relevant if Last_Patch_Baseline doesn’t exist or is set to 0.

The next baseline (next oldest patches) checks the value of Last_Patch_Baseline and if it’s set to the name of the previous baseline, it becomes relevant. Again, the last task sets the Last_Patch_Baseline to the name of the baseline.
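The chaining described above can be sketched as a small state machine. This is only an illustration of the logic, not anything BigFix runs; the baseline names are invented, and on a real client the “setting” would be the Last_Patch_Baseline client setting written by each baseline’s final task.

```python
# Hypothetical sketch of the baseline chain. Baseline names are made up;
# on a real client, `setting` is the Last_Patch_Baseline client setting.
BASELINES = ["2019", "2020-Q1", "2020-Q2"]  # oldest patches first

def next_relevant_baseline(last_patch_baseline):
    """Return the next baseline relevant to a client, or None when done."""
    if last_patch_baseline in (None, "0"):
        return BASELINES[0]          # new client: start with the oldest
    if last_patch_baseline == BASELINES[-1]:
        return None                  # chain complete; unsubscribe the client
    idx = BASELINES.index(last_patch_baseline)
    return BASELINES[idx + 1]        # advance to the next-oldest baseline

# Walk a brand-new client through the whole chain:
setting = None
while (b := next_relevant_baseline(setting)) is not None:
    setting = b  # each baseline's final task records its own name
```

After the loop, the setting holds the name of the very last baseline, which is exactly the condition the unsubscribe policy in the regular site looks for.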

Back in the regular patching site, there is a policy that checks whether Last_Patch_Baseline is set to the name of the very last Old Patches baseline; when a computer becomes relevant, the policy unsubscribes it from the Old Patches site.

I also have a task in the regular patching site to reset Last_Patch_Baseline to 0; this forces any computer that has already gone through the old-patching process to become relevant again.

With this process, any new computer that comes onto the network will first install any old patches that may be applicable, so by the time we do our regular patching it will be close to up to date.

Thoughts/comments/suggestions?


I don’t know enough about Auto-Patch, but it may answer this question entirely. Otherwise, a solution using the REST API could be employed.

The reason we avoid having numerous large baselines is that the client has to evaluate every single patch in the baseline. Because a baseline is just Fixlet1 AND Fixlet2 AND Fixlet3, the agent evaluates the whole baseline in one pass without being able to process anything else, which causes a ton of issues if not kept in check. This is on top of the relevance processing the agent already does for the patch itself in the Patches for Windows site.

There is nothing you can do to avoid the processing of the original Fixlet in Patches for Windows, so it’s best to take advantage of that.

I think the optimal approach is a solution using the REST API:

  1. Query for missing legacy patches
  2. Compile a list of devices and their legacy missing patches
  3. Query to see if any of those legacy missing patches have open actions against the devices in the list
  4. Create a mailbox action (an action which targets specific agent IDs) against each device with its list of missing patches (or create a mailbox action for each patch against any missing devices).
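Steps 2 and 3 above boil down to simple bookkeeping on the query results. Here is a minimal sketch, assuming the device/Fixlet pairs have already been fetched (e.g. via a session-relevance query to the BigFix REST API’s `/api/query` endpoint); the device IDs and Fixlet names below are mocked for illustration, and actually creating the mailbox actions is left out.

```python
from collections import defaultdict

def group_missing_by_device(rows):
    """Step 2: compile each device's list of missing legacy patches.

    rows: iterable of (device_id, fixlet_id) pairs from the query results.
    """
    missing = defaultdict(list)
    for device_id, fixlet_id in rows:
        missing[device_id].append(fixlet_id)
    return dict(missing)

def filter_open_actions(missing, open_actions):
    """Step 3: drop (device, fixlet) pairs already covered by an open action."""
    return {
        dev: [f for f in fixlets if (dev, f) not in open_actions]
        for dev, fixlets in missing.items()
    }

# Mock query results: two devices, with one patch already actioned on 101.
rows = [(101, "MS15-001"), (101, "MS16-002"), (202, "MS15-001")]
todo = filter_open_actions(group_missing_by_device(rows), {(101, "MS15-001")})
# `todo` now maps each device to the patches still needing a mailbox action.
```

Step 4 would then iterate over `todo` and POST one action per device (or per patch) targeting the specific agent ID.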

With this approach we reduce client relevance evaluation time in exchange for a potentially large number of mailbox actions. Mailbox actions are specifically designed to scale much better from a client perspective than normal actions, because only the targeted devices even know that the action exists.

Based on your comments I’ve changed how I handle this. I’ve gotten rid of everything I listed in the first post in this thread, and now have a script that checks each computer in our “patching” site and creates baselines based on regional computer groups. The appropriate baseline is then deployed to computers based on their group membership. This covers not only legacy patches but any new patches that are required. The downside is that the first time we run this way some of the baselines will be pretty large, but going forward they should be much smaller.
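The per-region baseline build can be sketched as follows. This is only an illustration of the grouping step, assuming the script has already collected each computer’s region and missing patches; the computer names, regions, and Fixlet IDs are invented.

```python
# Hypothetical sketch: build one proposed baseline per regional group,
# covering every patch missing on any computer in that region.
# All names, regions, and Fixlet IDs below are invented for illustration.

def plan_regional_baselines(computers):
    """computers: iterable of (name, region, missing_fixlet_ids)."""
    plan = {}
    for _name, region, fixlet_ids in computers:
        plan.setdefault(region, set()).update(fixlet_ids)
    # Sort for stable, reproducible baseline component ordering.
    return {region: sorted(ids) for region, ids in plan.items()}

fleet = [
    ("pc-01", "EMEA", [4101, 4205]),
    ("pc-02", "EMEA", [4205, 4300]),
    ("pc-03", "APAC", [4300]),
]
plan = plan_regional_baselines(fleet)
# Each region's list would become the components of that region's baseline.
```

Because each baseline only contains patches some machine in the region actually needs, the baselines shrink over time as the fleet catches up.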