Sigh.
I’m kicking myself right now because I actually used to know the answer to this question, but I either didn’t write it down anywhere OR I wrote it down then lost it. In either case, I know this is possible, I just can’t remember how.
I’m looking for the Session Relevance that will get me the date that a fixlet became relevant for a given computer. I remember it involved joins and was (I think) only available via Session Relevance & SOAP versus “Normal” Relevance & REST.
Does this ring any bells for anyone? Can anyone point me in the proper direction? Thanks!
(/me goes back to fruitlessly Googling…)
I am! (And I actually had just found it again via Google-fu. Apparently, all I need to do is ask a question aloud in order to quickly find the answer myself.)
So, I now have a test Session Relevance that looks like this:
first became relevant of result ( bes computer 127472120, (relevant fixlets of bes computer 127472120) whose (source severity of it = "Critical") )
which supplies a result of:
Fri, 04 Jan 2019 10:36:03 -0500
Any pointers on the joins necessary to get properties of both the computer and the fixlet? The end goal here is to get a list of computers that have been seen in the last two weeks along with a list of their relevant Critical fixlets and when they became relevant for those fixlets.
(Now that I’ve asked this aloud, we’ll see if I can figure it out, too.)
Okay … no joins necessary for that part…
(name of computer of it, name of fixlet of it, first became relevant of it) of result ( bes computer 127472120, (relevant fixlets of bes computer 127472120) whose (source severity of it = "Critical") )
Maybe I was thinking a join would be necessary when that result gets ugly (as in pulling relevant fixlets where Source Severity = Critical OR Category contains Security…)
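For what it’s worth, I imagine that uglier filter would look something like this (just a sketch, I haven’t tested it):
(relevant fixlets of bes computer 127472120) whose (source severity of it = "Critical" or category of it as lowercase contains "security")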
And this is gonna take forEVER when I change those "bes computer 127472120"s to “bes computers”.
You could make it a bit more flexible by creating a custom web report that uses the following source code:
<?relevance unique values of (name of computer of it & " ~ " & name of fixlet of it & " ~ " & id of fixlet of it as string & " ~ " & first became relevant of it as string) of results of bes fixlets ?>
and use the filter to limit the output according to your requirements, like Content Source Severity is Critical and Content Site is Patches for Windows, along with something to limit the computers being queried (like Computer Computer Groups is Test).
I concatenated the results, so the unique values would sort the output by computer name, and made the tilde the separator, since there are commas in the dates.
Now that, my friend, is a completely different kettle of fish, but one I hope to examine someday.
This particular need is for data to be provided to an external reporting engine that’s merging results from multiple sources, so custom BigFix reports are (unfortunately) out.
Here’s what I have so far (getting close):
(names of computers of it, names of fixlets of it, categories of fixlets of it, first became relevants of it) of results ( (bes computers) whose (last report time of it > (now - 1*minute)) , ((bes fixlets) whose ((source severity of it = "Critical") and (applicable computer count of it > 0) and (globally visible flag of it is true)) ) )
(I have the Last Report Time set crazy low for testing…in production it’ll be 14*day.)
Are you looking to include only computers that haven’t reported in 14 days, or would it be better to look for fixlets that became relevant more than 14 days ago (a common SLA use case)?
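(If it’s the latter, I’d guess a filter on the results along these lines would express it; untested sketch, reusing the Critical-severity filter from your query above:)
(name of computer of it, name of fixlet of it, first became relevant of it) of results ((bes computers), (bes fixlets) whose (source severity of it = "Critical")) whose (relevant flag of it and exists first became relevant of it whose (it < now - 14*day))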
At first look I feel like there are several areas that can be tuned, but I’m not entirely sure where you’ll get the most payback. So keep updating the post, I’ll grab some popcorn.
One thing you might consider is limiting the sites; as-is you’ll get sites you may not be interested in, including operator sites.
fixlets whose (source severity of it = "Critical") of bes sites whose (name of it = "Enterprise Security" or name of it = "Updates for Windows Applications")
Ha! Thanks, @JasonWalker.
We’re actually grabbing all computers that have reported in the last 14 days, not haven’t.
Comparing the fixlet relevance dates against SLA (or policy) is exactly why we’re doing this, but there are multiple thresholds (that occasionally change), so I’m gonna just give it the date and let the report figure out what to count.
Regarding sites, the list of those is actually pretty long, so we’ll probably look to exclude any outliers (or, again, let the report weed out some of the things we don’t want). Also, this will potentially allow operators or sites to decide if they feel something is Critical and should be included.
I don’t think the Session Relevance will change significantly from here other than to possibly tune it for performance. Lots of good info on that here.
@straffin - do you want to account for remediation in this report? Perhaps include columns for relevance and remediation:
(names of computers of it, names of fixlets of it, categories of fixlets of it, first became relevants of it, relevant flags of it, remediated flags of it)
Hrm…assumed that the (applicable computer count of it > 0) part was only returning relevant computers. Is this not the case? Will have to do some testing…
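A quick sanity check might be to compare counts for a single machine with and without a relevant-flag filter on the results; if they differ, the applicable computer count test alone isn’t doing it (untested sketch):
(
number of results (bes computer 127472120, (bes fixlets) whose (source severity of it = "Critical" and applicable computer count of it > 0))
, number of (results (bes computer 127472120, (bes fixlets) whose (source severity of it = "Critical" and applicable computer count of it > 0)) whose (relevant flag of it))
)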
Hey Jason, I’m trying to minimize the data by targeting sites whose name contains “patches” (basically all the patch sites). I’m trying to formulate that along with the query below, but for some reason I’m not getting the appropriate result.
(names of computers of it, names of fixlets of it, categories of fixlets of it, first became relevants of it) of results ( (bes computers), ((bes fixlets) whose ((source severity of it = "Critical") and (applicable computer count of it > 0) and (globally visible flag of it is true)) ) )
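For the site piece, I was thinking of something along these lines, borrowing the fixlets-of-sites pattern from earlier in the thread (untested, and note the straight quotes; curly quotes pasted from the forum won’t parse):
(names of computers of it, names of fixlets of it, categories of fixlets of it, first became relevants of it) of results ( (bes computers), (fixlets whose (source severity of it = "Critical" and applicable computer count of it > 0) of bes sites whose (name of it as lowercase contains "patches")) )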
4.5 years later, I’m finally getting back into this.
Does anyone (@JasonWalker?) have any insight into querying only those Fixlets that are currently relevant? The above relevance snippet doesn’t do it.
Apparently I just need to ask questions in order to stumble upon the answers myself.
Tacking whose (relevant flag of it = TRUE) onto the end of the results section does what I was looking for. So, I’m now looking at the following (or something similar):
(ids of computers of it, names of computers of it, names of fixlets of it, categories of fixlets of it, first became relevants of it) of results ( (bes computers) whose (last report time of it > (now - 30*day)) , ((bes fixlets) whose ((source severity of it as lowercase = "critical" OR source severity of it as lowercase = "high") and (applicable computer count of it > 0) and (globally visible flag of it is true)) ) ) whose (relevant flag of it = TRUE)
It feels like this is going to take forever to run…anybody have thoughts on making it more efficient? @jgstew?
As mentioned above (and a long time ago), I also need to exclude fixlets in particular sites (like “BES Support”… WHY ON EARTH IS THE “Install BigFix Relay” FIXLET “Critical”?!). What would be the best way to do that? I’m currently trying to figure out the syntax to do name of site of fixlet not in "list1","list 2,...
Would (not exists elements of intersection of (set of (names of sites of it); set of ("BES Support";"Platform Beta";"Some Dumb Site"))) be a good way to do it?
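Or maybe something simpler that just filters on the fixlet’s site name (untested sketch):
((bes fixlets) whose (not exists (name of site of it) whose (it = "BES Support" or it = "Platform Beta" or it = "Some Dumb Site")))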
The main speed problem there is the cross-product generated by ( BES computers whose(), BES Fixlets whose() ).
For each BES Computer, it repeats the query to list & filter all the fixlets. That can take a long time.
The trick is to build a set of bes computers, and a set of bes Fixlets. Since a “set” is just a single item, that makes the computer search and the Fixlet search run only once.
I’ll write some more on that when I get back to a computer, typing on the phone now.
On my (very small) lab deployment, this query took over seven seconds to show me 184 results:
number of results (
(bes computers) whose (last report time of it > (now - 30*day))
,((bes fixlets) whose ((source severity of it as lowercase = "critical" OR source severity of it as lowercase = "high") and (applicable computer count of it > 0) and (globally visible flag of it is true)) )
) whose (relevant flag of it = TRUE)
Refactoring it slightly, so that ‘bes computers’ and ‘bes fixlets’ are each put into a set and are therefore only queried and filtered once, then pulling the results from those already-retrieved set elements, gives the same count in 184 milliseconds. See if this is a good start:
number of (
results (elements of item 0 of it, elements of item 1 of it)
) whose (relevant flag of it) of (
set of (bes computers) whose (last report time of it > (now - 30*day))
,set of ((bes fixlets) whose ((source severity of it as lowercase = "critical" OR source severity of it as lowercase = "high") and (applicable computer count of it > 0) and (globally visible flag of it is true)) )
)
Once we have the results, you can pull the properties from the result in the same way (I added some error-checking for missing values as well)
(id of computer of it, name of computer of it | "No Name", name of fixlet of it, category of fixlet of it | "No Category", first became relevant of it as string | "None") of (
results (elements of item 0 of it, elements of item 1 of it)
) whose (relevant flag of it) of (
set of (bes computers) whose (last report time of it > (now - 30*day))
,set of ((bes fixlets) whose ((source severity of it as lowercase = "critical" OR source severity of it as lowercase = "high") and (applicable computer count of it > 0) and (globally visible flag of it is true)) )
)
This is great! Thanks @JasonWalker. Is there anywhere that explicitly describes what’s happening when you use the results (set 1, set 2) construct? I would have expected there to be some defining of keys or joins or something…
I feel this pain. So often.