Inactive relay report

I have a report that runs in WebReports as a scheduled report, pulling from 5 different datasources with 50k+ endpoints, but it takes nearly 10 minutes to run. Does anyone have any suggestions to make it faster? The purpose is to let us know when we have a Relay or Root Server that hasn’t reported in within the last X hours.

<table><table id="resultsTable" class="sortable">
<th>HostName</th>
<th>Last Report Time</th>
<th>Location</th>
<th>Datasource</th>

<?Relevance
concatenation of (trs of 
(
td of (name of it) & 
td of (value of results (bes properties whose (name of it = "Last Report Time"),it)) & 
td of (value of results (bes properties whose (name of it = "Location By IP Range"),it)) & 
td of (database name of it)))

 of bes computers whose ((value of results (bes properties whose (name of it = "Last Report Time"), it) as time < (now - 2*hour) AND (relay server flag of it = true)) or (value of results (bes properties whose (name of it = "Last Report Time"), it) as time < (now - 3*hour) AND (root server flag of it = true)))
?>
</table>

Is this part of a larger report?

Is there any reason not to just use standard filters instead of session relevance?

(“BES Relay Service Installed” is -blank- filters out the root servers and relays.)


Hi, I could do that, but since our Root Servers listed in the report can be DSA servers, their replication doesn’t occur every 2 hours, so they show as inactive on the report. If the filter could be more granular (Relays inactive > 2 hours OR Root Servers inactive > 3 hours), that would work. But the filter can’t be that granular, so I had to create a custom report.

Sorry, I didn’t understand the original question well enough – a lack of reading on my part.

Have you thought about just making two reports? I would assume a root server being inactive is much more critical than a relay, so I would want to separate the reports.

I think your current issue is just complexity:
You loop through 50,000 computers, and for each computer you look at all of the properties twice, then run different queries against those properties a couple of times each.

If you could reduce the number of computers you’re looping through by specifying a custom site or a computer group, that would probably give you the biggest performance increase. Otherwise, you can try specifying the exact property (so we aren’t looking through 1000 properties for each of the 50,000 computers) and simplifying how you query:

Is this query any faster to run?

of bes computers whose ((root server flag of it and (value of results from (BES Property "Last Report Time") of (it)) as time < (now - 3*hour )) or (relay server flag of it and (value of results from (BES Property "Last Report Time") of (it)) as time < (now - 2*hour )))

In a custom webreport with multiple data sources, the singular “BES Property” only looks for results in the first data source, so the results are not complete. You have to use the plural version “bes properties” like I did in my example, but then that really slows it down.
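For reference, here are the two lookup styles side by side (both taken from the queries in this thread):

value of results from (bes property "Last Report Time") of it
(singular form – fast, but it only resolves against the first data source)

value of results (bes properties whose (name of it = "Last Report Time"), it)
(plural form – checks the matching property in every data source, which is what makes it slow)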

Is this any faster?

of bes computers whose ((relay server flag of it = true) AND (value of results (bes properties whose (name of it = "Last Report Time"), it) as time < (now - 2*hour)) or ((root server flag of it = true) AND value of results (bes properties whose (name of it = "Last Report Time"), it) as time < (now - 3*hour)))

The old way took 10+ minutes to run, and your new one takes only seconds. The only change I see in the new version is that you put the relay/root service check first. Why would that make a difference? I thought the relevance would all be put together in a single query, not filtered first by (exists relay service) and then by (last report time).

We are using lazy evaluation: instead of looking at 1000 properties for every computer, we only look at the properties IF it is a relay or a root server.

When you have a statement like:

true or (very long query)

the or statement here allows us to do lazy evaluation: if the first part is true, we don’t care what the second part is, so it’s not evaluated.

Same thing for and:

false and (very long query)

If the first part is false, we don’t need to run the second part, because there is no way for the statement to be true if the left half of the and operator is false.

The issue is that if we put the very long query first, it evaluates the long query and only then looks at the true:

(very long query) or true

Because we’re likely to have <1% of our devices be root servers or relays, it’s much more efficient to check whether it’s a root server or a relay first, and only if it is, check the properties.
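Applied to one branch of your filter, the ordering difference looks like this (a sketch – the property lookup is the expensive part):

bes computers whose (value of results (bes properties whose (name of it = "Last Report Time"), it) as time < (now - 2*hour) and relay server flag of it)
(slow: the property lookup runs for all 50,000 computers before the flag is ever checked)

bes computers whose (relay server flag of it and value of results (bes properties whose (name of it = "Last Report Time"), it) as time < (now - 2*hour))
(fast: the cheap flag check short-circuits the property lookup for the vast majority of computers that aren’t relays)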


Gotcha, makes sense. That logic is only for Session Relevance and not client relevance, right?

What about a custom report that pulls back info on all bes computers with a last report time prior to 3 days ago? How would you do that? The report I use for this one also takes many, many minutes:

<table id="resultsTable" class="sortable" border="1"> <th>HostName</th> <th>OS</th> <th>DataSource</th><th>PlatformSupport</th> <th>Last Report Time</th> 
<?Relevance 
concatenation of trs of 
(td of 
(if (exists hostname whose (it contains ".")of it ) then (preceding text of first "." of hostname of it) else (hostname of it))  & 
td of (if (operating system of it as string as lowercase contains "win") then (operating system of it) else ("Linux" as string)) & 
td of (database name of it as string) &
td of concatenation "" of values of (results (bes property "Location By IP Range" ,it)) &
td of (value of results (bes properties whose (name of it = "Last Report Time"), it)))

of bes computers whose (value of results (bes properties whose (name of it = "Last Report Time"), it) as time < (now - 3*day) AND (exists hostname of it)) 

?> 
</table> <table class="sortable" id="table1" border=2>

The report I posted above actually should do exactly that – is there a reason you need to use session relevance for this one?

Unfortunately, yes. The output is emailed to a tool that scrapes the report output and takes actions based on it, so I need the output to be in a very specific format.

It looks like “Last Report Time” is actually a session inspector that is available (which would let you bypass the properties junk)

<table><table id="resultsTable" class="sortable" border="1">

<th>HostName</th>
<th>OS</th>
<th>DataSource</th>
<th>Last Report Time</th> 

<?Relevance 
concatenation of trs of 
(
 td of (if (exists hostname whose (it contains ".")of it ) then (preceding text of first "." of hostname of it) else (hostname of it))  & 
 td of (if (operating system of it as string as lowercase contains "win") then (operating system of it) else ("Linux" as string)) & 
 td of (database name of it as string) &
 td of (last report time of it as string)
)

of bes computers whose (last report time of it < (now - 3*day) AND (exists hostname of it)) 

?> 
</table>

Which would make your original one:

<table><table id="resultsTable" class="sortable">
<th>HostName</th>
<th>Last Report Time</th>
<th>Datasource</th>

<?Relevance
concatenation of
(
 trs of 
 (
  td of (name of it) & 
  td of (last report time of it as string) & 
  td of (database name of it)
 )
)

 of bes computers whose ((relay server flag of it = true and last report time of it < (now - 2*hour)) or (root server flag of it = true and last report time of it < (now - 3*hour))) 
?>
</table>

(I removed the location by IP range stuff so that anybody else using this can run it without modification – you can easily add these back in)
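(To add the Location column back, re-insert the <th>Location</th> header and the matching cell from your original query, e.g.:

td of (value of results (bes properties whose (name of it = "Location By IP Range"), it)) &

Since the td expressions only evaluate for computers that make it through the filter, that lookup should stay fast.)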


Ah, I didn’t realize that was a reportable native session object. Works great, thanks very much.


NO! Both session relevance and client relevance do lazy evaluation. The order in which you write both matters!!!

Read these:

Both of these examples have to do with session relevance, but the concepts of lazy evaluation apply to client relevance as well.

In general, you want the fastest-to-evaluate relevance to run first, particularly if it eliminates the majority of endpoints before moving on to relevance that takes longer to evaluate. You can substantially shorten your clients’ evaluation loops by doing this.
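A minimal client relevance sketch of the same principle (the ExampleApp paths are hypothetical – try both in the Fixlet Debugger):

q: (exists file "C:\Program Files\ExampleApp\example.exe") and (exists files whose (name of it as lowercase ends with ".log") of folder "C:\Program Files\ExampleApp\logs")

q: (exists files whose (name of it as lowercase ends with ".log") of folder "C:\Program Files\ExampleApp\logs") and (exists file "C:\Program Files\ExampleApp\example.exe")

In the first form, the cheap existence check sits on the left, so the vast majority of endpoints that don’t have the application installed never touch the log folder at all. In the second form, every endpoint evaluates the folder scan first, and on endpoints where that folder doesn’t exist the left operand is an error rather than a quick false.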