Analyses with Relevance Feedback

This is just feedback in case others have seen this; it concerns creating an Analysis that uses Relevance to target something specific. The example below is one instance, but we’ve seen the same behavior with other Analyses.

In this example, we have a single Property (evaluated every report) Analysis that returns the version of the Palo Alto GlobalProtect VPN service, PanGPS.

The relevance of the Analysis itself originally was simply: exists service "PanGPS"
The Property was version of service "PanGPS"

We found that a percentage of systems would return &lt;error&gt;, a sure sign that the Property was being evaluated before the Relevance of the Analysis itself. In addition, data would remain static, because the Property was no longer evaluated once the endpoint was no longer relevant to the Analysis.

This scenario should be easy to reproduce. My thought is that the priority or order of relevance evaluation is not being honored.

I’ve since changed the Relevance to a more general name of operating system contains "Win" and the Property to if exists service "PanGPS" then version of service "PanGPS" as string else "Not Installed"

After the change, the results started to correct themselves. All that being said, perhaps a “Best Practice” would be not to be too restrictive with targeting in the Relevance of an Analysis.
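For clarity, here is the revised configuration side by side, with the property text exactly as changed:

```
Analysis relevance (applicability):
  name of operating system contains "Win"

Property "PanGPS Version":
  if exists service "PanGPS" then version of service "PanGPS" as string else "Not Installed"
```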


I wonder if the &lt;error&gt; results were coming from non-Windows systems?
Could it have been a case of the relevance of the Analysis generating the &lt;error&gt; because the inspector doesn’t exist on the OS evaluating the Analysis? That would also explain why adding the Windows check resolved it.

These were all Windows systems. The &lt;error&gt; would be expected if you evaluated the Property on its own (an error due to not checking for the existence of the service first). But since the Analysis was supposed to check for the existence of the service, the service’s existence should already have been established before checking its version.

This is why I think this is easy to duplicate. Just create an analysis that targets a service that isn’t on all systems and give it a property that checks the version. The expectation is that you will ONLY see results for endpoints with the service installed…and I don’t believe that is the case.
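A minimal sketch of that reproduction, using a hypothetical service name (swap in any service that exists on only some of your endpoints):

```
Analysis relevance (applicability):
  exists service "SomeUncommonService"

Property:
  version of service "SomeUncommonService"
```

Per the report above, if the bug is present, some endpoints without the service will show &lt;error&gt; for the property instead of simply being excluded from the analysis.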

I’ve seen similar behavior. I’ve taken to using else (Nothing) to minimize how much I keep in my Database.
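Applied to the PanGPS example above, that pattern would look something like this (a sketch; else nothing causes the property to report no result at all rather than a placeholder string, so nothing is stored for non-relevant endpoints):

```
if exists service "PanGPS" then (version of service "PanGPS" as string) else nothing
```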

Neat. I never knew about (Nothing). Very cool and thanks!

What is the text of <error> when you mouse over it in the console? This should return the error string the client had when it tried to evaluate.

I typically use this exact pattern (filter on the analysis level) and haven’t seen issues

What I would expect: Singular expression refers to nonexistent object.

What version of the client are you using? This sounds similar to an earlier post about a suspected bug, in which systems that were not targeted by an action were still spending a large portion of their evaluation cycle evaluating the action’s relevance.

Potentially related to Does group targeting actually limit applicability evaluation?
@AlanM

Our environment is 9.5.12. I’ll check out your link.

Just an update on the initial bug: I have an open PMR which has been escalated to engineering; however, the contact point is currently on holiday until later this month. As soon as they return, I’ll be asking for a status update.


Kind of makes me wonder if there is some self-protection/tamper protection going on in the Global Protect agent.

I don’t believe service "PanGPS" actually looks at the file on disk; I believe it just probes the services. I believe once you do version of &lt;service&gt; you’re implicitly casting the service to a file and then interrogating the file for its version. Though that doesn’t really make sense, given that the if … approach works.

Definitely PMR worthy!

So, with the “errors”, is it possible that the error came from an earlier version of the relevance, before you restricted the analysis?

We will definitely keep the results reported before the restriction came into place, so results can be stale if you restrict the targeting.

The way the targeting relevance works is that the analysis activation carries that relevance, and if it doesn’t become relevant on the endpoint, then the endpoint doesn’t evaluate the analysis. In addition, the endpoint must be subscribed to the site the analysis lives in. If both of those are true, then we will evaluate the properties in the analysis. If the endpoint was relevant in the past, the old answers will persist.
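As pseudocode, my understanding of the evaluation order described above (a sketch of the explanation, not product documentation):

```
if endpoint is not subscribed to the analysis's site:
    do not evaluate anything (prior results, if any, persist)
else if the analysis activation relevance is false on the endpoint:
    do not evaluate the properties (prior results persist)
else:
    evaluate each property on its report interval
```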

Thank you for the explanation. This could be easily verified in an environment with a few hundred endpoints. For me, the takeaway is to use caution when targeting within the analysis.