BES Client evaluation loop timing

I just watched the replay of the June BigFix webinar, and they showed the Evaluation Loop analysis…

I set up a property with this relevance:
average of evaluationcycle of client / 1000 * second / minute

I was surprised to see that of about 1500 clients:
~ 33 clients report an hour or more
~ 300 clients report 30 - 59 mins
~ 200 clients report 21 - 29 mins
~ 592 clients report 11 - 20 mins
~ 522 clients report 1 - 10 mins

Is this considered normal?
How can I improve the 300+ clients that are taking more than 30 mins to evaluate?

I have gone through and reduced a number of analysis frequencies from every report or every hour to every 6 hours or once a day, and I have also disabled analyses that are not useful for my environment today.
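(In case it helps anyone reading later, a session relevance query like the one below, run in the Presentation Debugger or the Web Reports QNA page, should surface the properties evaluated more often than hourly. This is only a sketch; it assumes the evaluation period inspector on bes property is available in your version.)

(name of it, evaluation period of it) of bes properties whose (evaluation period of it < 1 * hour)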

I have read that baselines are always being evaluated. I hold the last year of baselines in custom sites per OS release. Can I do anything short of deleting the old baselines to stop them from being evaluated (hide them or the like)?

OK, as I wrote this I realized it might be as simple as adding a ‘false’ relevance to the old baselines. Is there an easy way of doing this, and is this likely the fix for the long evaluation loops?
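By a ‘false’ relevance I mean editing each old baseline and adding a relevance clause of simply:

false

on the assumption that the client can then skip evaluating the component fixlets.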

@mesee2, there are numerous factors that contribute to the average of evaluationcycle of client relevance values returned from managed endpoints. For example:

  • Contents of Master Action Site (including content and groups).
  • Number of External Sites subscribed and applicable site subscriptions plus site contents.
  • Number of Custom Sites and applicable site subscriptions plus site contents.
  • Number of OpSites and Operators in your BES environment.
  • Number of activated analyses and global properties and their configured evaluation cycles (e.g. every report, hourly, daily, etc.).

While there are other factors (like endpoint resources), the above are generally the primary factors that can inflate the average of evaluationcycle of client.
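If you want to put rough numbers on several of these, individual session relevance queries (Presentation Debugger or the Web Reports QNA page) along the lines of the following can help. This is only a sketch, assuming the standard bes session inspectors are available in your version; run each line as its own query:

number of bes sites
number of bes custom sites
number of bes fixlets whose (baseline flag of it)
number of bes properties whose (not reserved flag of it)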

Another management opportunity for deprecated content is to create a Custom Site with no subscribed endpoints and move content that’s no longer needed into it for a period of time before deleting it from the environment.

To your last question, “is there an easy way of doing this”… there’s no silver bullet for managing average of evaluationcycle of client over time. Instead, it’s highly recommended and advantageous to implement Best Practices from Net New, or to fully review your environment and refactor as necessary to get to a Known Good State with respect to your documented best practices.

The alternate site would be good, but you cannot ‘move’ a baseline between sites; it only supports copying and then deleting the original, which would disassociate the history of that baseline from the nodes.

The only reason I am holding the old baselines is to maintain the history of what has been run on the nodes.

I am expecting that in the near future this data will be more easily reported on and available via Insights, but that is still a bit away for me due to the replicated database requirement.

@mesee2, you are correct that “moving” content isn’t possible and that “copy and delete” will disassociate the action history. However, it doesn’t remove the action history. Furthermore, I’d argue that applicability of content is infinitely more valuable than action history. Just because an endpoint has prior action history for content doesn’t mean that content isn’t still applicable to said endpoint. This is especially important when we look at the OS/app patch content delivered in the external sites.

You are correct that the new Insights data lake will provide functionality to retain cradle to grave action history on your managed endpoints.

There’s an Evaluation Loop analysis? Where is that?

Are we talking about this one of @jgstew’s?

https://bigfix.me/analysis/details/2994765

I had not seen that actual analysis, but I like it.

I just did a simple managed property with

average of evaluationcycle of client / 1000 * second / minute

I did go through and mark a number of my old Windows baselines with a relevance of false, and now I have 1440 systems reporting under 10 mins, most under 3 mins.

I do realize that those baselines will not report as relevant anymore.
Each month I create a new baseline incorporating missed and new updates; I don’t keep reusing or even ‘resync’ the old baselines, so they tend to hang on to bad relevance, actions, and other things… (I know)

  • If I were to sync the old baselines, the MS patches would automatically become not relevant due to normal updates and ‘supersedence’ pushed from HCL.

There’s nothing OOTB, but you can (and should, in my opinion) create your own using the following relevance or by referencing @jgstew’s fixlet.

average of evaluationcycle of client / 1000 * second
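If you’d rather see whole minutes (matching the buckets earlier in the thread), plain integer division should work too, assuming evaluationcycle reports milliseconds as the division by 1000 above implies:

average of evaluationcycle of client / 1000 / 60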