Session Relevance that captures *when* a fixlet became relevant?

There’s not a whole lot written on this, but there’s a brief description of how tuples create a cross-product at https://developer.bigfix.com/relevance/guide/basics/tuples.html

What could probably use a lot more explanation is what happens when building a tuple of plurals, such as

( items 0, items 1 )

i.e.

( ("a"; "b"; "c"), (1; 2; 3) )

becomes

( "a", 1 )
( "a", 2 )
( "a", 3 )
( "b", 1 )
( "b", 2 )
( "b", 3 )
( "c", 1 )
( "c", 2 )
( "c", 3 )

The full list is a cross-product of every ‘item 0’ with every ‘item 1’. If either of those plurals takes a long time to retrieve (like a list of fixlets or a list of computers), that slow lookup is repeated for each result in the cross-product.

If instead you use a ‘set’ for each of those items, a set is treated as a single element, so that (relatively slow) query for fixlets or computers only runs once. After the sets are built, expanding them back into their individual elements still yields the same number of total results, but looping through the elements of a set is much faster than repeating the fixlet/computer search for every combination.
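
As a rough sketch of that pattern using the same toy lists as above (purely illustrative; the payoff only shows up when the plurals are expensive to retrieve, like bes computers or bes fixlets):

( elements of item 0 of it, elements of item 1 of it ) of ( set of ("a"; "b"; "c"), set of (1; 2; 3) )

This still returns the same nine tuples, but each plural is only evaluated once to build its set, and the cross-product is then produced by walking the set elements.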

Tuples I kinda get… I was more curious about the specific relationship between the BES Computers and BES Fixlets sets that allows them to have properties like ‘relevant flag’ and ‘first became relevant’ between them.

Ah, I see. Yes, those are properties of a ‘bes fixlet result’; they’re documented at https://developer.bigfix.com/relevance/reference/bes-fixlet-result.html

Of particular interest is that ‘first became relevant of it’ is only available in Web Reports, not in Console dashboards: https://developer.bigfix.com/relevance/reference/bes-fixlet-result.html#first-became-relevant-of-bes-fixlet-result-time
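
As a quick example (the fixlet ID 12345 here is just a placeholder), something along these lines run in Web Reports, or via the REST API, should list the machines a fixlet is relevant on and when it first became relevant:

(name of computer of it, first became relevant of it as string | "n/a") of results whose (relevant flag of it) of bes fixlets whose (id of it = 12345)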

Um… wut? I’m getting values in my REST API queries, so I think we’re good.

Yes, the REST API can get that as well (REST queries are serviced by Web Reports). It’s just a bit of an edge case that it does not work in a custom Console dashboard or in the Console Session Debugger.
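
For anyone following along, the REST pattern here (from memory, so treat it as a sketch; the host name is a placeholder and the relevance text has to be URL-encoded) is a GET against the query endpoint:

https://bigfix.example.com:52311/api/query?relevance=<url-encoded session relevance>

The XML response wraps each row in an <Answer> element and includes an <Evaluation> block with the <Time> value quoted later in this thread.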

In comparison, your code, run against my production system last night, returned nearly 42 million characters of XML in just under 12 minutes. That don’t seem too shabby.

Thanks again!

Great to hear!

At a scale like that it may still be worth trying to get more efficient. See how this one does; it reduces the time by 7x in my lab, just by changing the order of the filters on ‘bes fixlet’.

(id of computer of it, name of computer of it | "No Name", name of fixlet of it, category of fixlet of it | "No Category", first became relevant of it as string | "None") of (
    results (elements of item 0 of it, elements of item 1 of it)
) whose (relevant flag of it) of (
    set of (bes computers) whose (last report time of it > (now - 30 * day)),
    set of (bes fixlets whose (globally visible flag of it and (applicable computer count of it > 0) and ((it = "critical" or it = "high") of (source severity of it as lowercase))))
)

On the ‘bes fixlet’ filters, ‘globally visible flag of it’ and ‘applicable computer count of it’ can be evaluated faster (and discard more fixlets) before the ‘source severity’ string comparisons are processed. And ‘source severity of it’ only has to be lowercased once and then compared against the two values "high" and "critical", instead of being lowercased once to compare against "critical" and then lowercased a second time to compare against "high". The string lowercasing and comparison is one of the slower parts of this query, so running it against a smaller number of fixlets (the ones failing the cheaper filters having already been discarded) improves the speed.
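
To make the lowercasing point concrete, these two filters match the same fixlets; the second form is the one used in the query above:

source severity of it as lowercase = "critical" or source severity of it as lowercase = "high"

(it = "critical" or it = "high") of (source severity of it as lowercase)

The first evaluates ‘source severity of it as lowercase’ twice per fixlet; the second evaluates it once and compares that single result against both strings.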

This stuff right here is pure gold. Thank you!

Happy to help! And… I’d love to hear whether the new comparisons are faster at your scale.

Oddly enough, the second query was actually slower. :open_mouth:

First query: <Time>708563.921ms</Time>
Second query: <Time>729886.131ms</Time>

Granted, they were run on different days and at different times of day, and it’s only a 21-second difference, but there it is. I may try some different filters to thin things down, both to test with smaller subsets and to reduce the initial load… for instance, maybe skipping all the BigFix Compliance fixlets?
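
One (untested) idea for skipping those is a site-name filter on the fixlet side, assuming the Compliance content comes from external sites with “Compliance” in the name and that I’m remembering the ‘site of <bes fixlet>’ property correctly; I’d adjust the match to whatever those sites are actually called in my deployment:

set of (bes fixlets whose (globally visible flag of it and (applicable computer count of it > 0) and (name of site of it as lowercase does not contain "compliance") and ((it = "critical" or it = "high") of (source severity of it as lowercase))))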