Activating Analyses - Globally vs Locally

(imported topic written by SystemAdmin)

Can someone please explain to me the difference and benefits between the two ways of activating analyses?

Is it better to activate globally or have each operator activate locally? By activating globally, the master actionsite grows rapidly and slows down the client evaluation loop because the master action site is reviewed every loop…is this correct? Just trying to gain perspective between the two activation types.

This is a great question. I am surprised there was not an answer. Did you find the answer? I am very interested.

That is correct. This topic deserves more in-depth coverage; it would help us understand the intricacies better. Gentlemen, please shed some light.

I’m going to take my best guess at explaining this based on what I believe the context of globally and locally relate to.

In the context of the console, I believe globally and locally refer to a user’s ability to see content, based on their permission level. A user who is not a “Master Operator” would not have permission to activate an analysis for all users who access the IEM console. It seems to me that someone who “locally” activates an analysis would only be able to see the results on the console the analysis was activated on, hence it being called “local”.

If that is the case, the benefit would be a more responsive console for all users, because less content would be requested from the database by every console.

That’s my best guess; I don’t know if it’s accurate, but that would be my understanding of this behavior.


Good point, I agree. So, to rephrase it: am I correct in saying that there are no performance considerations, because performance will be totally dependent on the relevance and properties of the analysis itself? Local and global are simply basic wrappers that hide an analysis from one’s view unless you have enabled “show hidden”.

The only performance consideration will be loading data from the console’s perspective. So if an analysis is hidden, I load fewer entries.

That would be my understanding of it.

What about the “Evaluate Every” setting in the analyses? I would think this affects performance. Any thoughts?

As per my understanding, “Evaluate Every” controls when analysis results are posted. If it is set to “every report”, the results for that analysis are posted along with every report the client sends; alternatively, it can be set to a time interval such as hours, days, or months.

It will surely impact performance if multiple analyses are activated at “every report” and all endpoints are posting their results along with every client report. So before activating such an analysis, first check how frequently you actually need the report.

It might impact endpoint performance in terms of evaluation loops, but it is my understanding that properties are only reported to the server when they change or when a Forced Refresh is issued to a computer.


You should avoid using “evaluate every” because it affects client performance, but @TimRice is correct that the client only sends differential reports when a value changes, so setting it to “evaluate every report” does not necessarily affect root server performance.

Almost every property should be set to evaluate every 1 hour or less often, unless it is very important and has a cascading effect on other things down the line, such as REST API automation, WebReports email alerts, or something else.

Most of the properties I make don’t need to be updated more often than once every 6 hours, so I set them to that.


Thank you for your answer, @jgstew, and everyone.
As a matter of fact, I have this situation: every time I import analyses, they automatically set themselves to “Every report”. I have two questions:

  1. How do you control that behaviour? Is there a setting in BES Server to set the report period when you import (and activate) analyses?
  2. Is there a way to change the “Evaluate Every” setting for more than one analysis at once, so you don’t have to change it one by one?

There is no way to control this. I have requested this feature: there should be a default report period of once an hour or something similar, plus a way to prevent non-master operators from ever choosing an option more frequent than once every 15 minutes or so.

I would suggest submitting an RFE for this and/or finding existing RFEs for it.

As far as I’m aware, the only way to do this is with the REST API, though I have never done this myself.
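As a hedged sketch of that REST API idea (untested; the endpoint URL, site name, analysis ID, and credentials below are hypothetical placeholders, so verify them against the REST API documentation for your server version): you would GET the analysis XML, rewrite the `EvaluationPeriod` attribute on each `Property` element, and PUT the document back. The XML-editing step might look like:

```python
# Sketch only: bulk-edit the "Evaluate Every" period in a BES analysis
# document fetched via the REST API. The server URL, site, analysis ID,
# and auth shown in the usage comment are hypothetical examples.
import xml.etree.ElementTree as ET

def set_evaluation_period(analysis_xml: str, period: str = "PT6H") -> str:
    """Return the analysis XML with EvaluationPeriod set on every Property."""
    root = ET.fromstring(analysis_xml)
    for prop in root.iter("Property"):
        prop.set("EvaluationPeriod", period)  # e.g. "PT6H" = every 6 hours
    return ET.tostring(root, encoding="unicode")

# Hypothetical usage against a server (requires the `requests` package):
#   url = "https://bes.example.com:52311/api/analysis/custom/MySite/42"
#   xml_in = requests.get(url, auth=("user", "pass")).text
#   requests.put(url, data=set_evaluation_period(xml_in),
#                auth=("user", "pass"))
```

Looping that over a list of analysis IDs would change the setting for many analyses at once, which the console does not let you do.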


There is actually a client setting that controls how “Every Report” is interpreted by the client by setting a minimum threshold, so that it is not actually “Every Report” if the eval loop is very fast. You could set it to 5 minutes or 15 minutes so that regardless of the analysis setting, the actual fastest report time is that time or longer.
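If that threshold setting works the way I describe, the interaction is simple to state: the real fastest report interval is whichever is larger, the analysis period or the client-side floor. A tiny illustrative sketch (the numbers are arbitrary examples, not product defaults):

```python
# Illustration only: a client-side minimum report interval capping an
# "Every Report" analysis. Values are arbitrary examples.
def effective_interval(analysis_seconds: int, client_floor_seconds: int) -> int:
    """Fastest possible report interval given the client-side minimum."""
    return max(analysis_seconds, client_floor_seconds)

# An "Every Report" analysis (treated as 0 seconds) on a client with a
# 15-minute floor reports at most every 900 seconds:
print(effective_interval(0, 900))     # -> 900
# A 1-hour analysis is unaffected by the floor:
print(effective_interval(3600, 900))  # -> 3600
```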

Changing this setting makes a lot of sense for any systems that are inventory-only or locked, or are otherwise not being managed directly by distributed operators and helpdesk staff. We have quite a few systems that get patching from BigFix centrally and not much else.


Thank you @jgstew
I’m looking for that threshold setting you mention but cannot find it. I’m looking here: https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/Tivoli%20Endpoint%20Manager/page/Configuration%20Settings

Can you please point me to that setting?

Also, what do you mean by “inventory state”?

A system that is locked cannot process any BigFix actions. I do not recommend using locking to achieve this and instead recommend excluding such systems from most sites and targeting.

When I say a system that is “inventory only” I really mean that by some convention or mechanism you have a BigFix agent installed for inventory purposes but do not ever run actions against that system, except perhaps to upgrade the installed agent itself.

If a system is reporting in to BigFix like this but is generally not altered by BigFix, then the impact of analysis evaluation should be reduced: near-real-time results are less critical, since it is the long-term trend of the information that matters.

See here: What is Report MinimumAnalysisInterval? (Client Setting)


The use of locked agents is clear to me now. I wasn’t sure what you meant by “inventory”, but I got it now.
Thank you again!


Oh, another thing. You may run periodic actions on a client that is “inventory only”, but only to collect data for reporting. The general idea is that an “inventory only” system is not altered by BigFix in any significant way, even though it may run a very small set of actions that download and run things without installing software persistently.

You could define many different layers of BigFix management.

  • agent installed, but always locked
  • inventory only
  • inventory only plus periodic actions to collect data for better reporting
  • inventory + patch
  • inventory + patch + self service

There are many more possibilities, but these are some major ones.

Does anyone know the pattern to detect the Evaluation Period of a property of an analysis?

I can only see this in the BES.xsd schema file:

 <xs:complexType name="Property">
    <xs:simpleContent>
      <xs:extension base="RelevanceString">
        <xs:attribute name="Name" type="ObjectName" use="required" />
        <xs:attribute name="EvaluationPeriod" type="NonNegativeTimeInterval" use="optional" />
        <xs:attribute name="KeepStatistics" type="xs:boolean" use="optional" />
      </xs:extension>
    </xs:simpleContent>
  </xs:complexType>

But it doesn’t tell me how to write a well-formatted EvaluationPeriod value.
In an old post I found the following pattern, but it’s not working to detect the attribute.

EvaluationPeriod="P([0-9]+D)?(T([0-9]+H)?([0-9]+M)?([0-9]+(\.[0-9]{1,6})?S)?)?"

It seems to follow the ISO 8601 duration format, in the following manner:

Durations are represented by the format P[n]Y[n]M[n]DT[n]H[n]M[n]S or P[n]W. In these representations, the [n] is replaced by the value for each of the date and time elements that follows it. Leading zeros are not required, but the maximum number of digits for each element should be agreed to by the communicating parties. The capital letters P, Y, M, W, D, T, H, M, and S are designators for each of the date and time elements and are not replaced.

P is the duration designator (for period) placed at the start of the duration representation.
Y is the year designator that follows the value for the number of years.
M is the month designator that follows the value for the number of months.
W is the week designator that follows the value for the number of weeks.
D is the day designator that follows the value for the number of days.
T is the time designator that precedes the time components of the representation.
H is the hour designator that follows the value for the number of hours.
M is the minute designator that follows the value for the number of minutes.
S is the second designator that follows the value for the number of seconds.

For example, “P3Y6M4DT12H30M5S” represents a duration of “three years, six months, four days, twelve hours, thirty minutes, and five seconds”.
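To sanity-check that format, here is a small sketch that validates candidate EvaluationPeriod values against the pattern from the old post, with the character-class brackets restored (note it only allows the day and time designators, not years, months, or weeks; this matches the pattern from the old post, not something I have verified against the schema itself):

```python
# Validate EvaluationPeriod values against the day/time-only duration
# pattern from the old post (character-class brackets restored).
import re

DURATION = re.compile(
    r"P([0-9]+D)?"                    # optional days
    r"(T([0-9]+H)?"                   # optional hours
    r"([0-9]+M)?"                     # optional minutes
    r"([0-9]+(\.[0-9]{1,6})?S)?)?"    # optional (fractional) seconds
)

for value in ["PT6H", "P1D", "PT15M", "PT0.5S", "6 hours"]:
    ok = DURATION.fullmatch(value) is not None
    print(f"{value}: {'valid' if ok else 'invalid'}")
# PT6H, P1D, PT15M, and PT0.5S print "valid"; "6 hours" prints "invalid"
```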

With that, I should be able to get what I was looking for; I have not tested it yet.
