Improve evaluation of relevance (User account info)

One of my team members included this in an analysis and I found out today that it has a long evaluation time.

Anyone able to give some tips to make this more efficient?

(name of it,
 (concatenation "|" of names of items 1 of (sid of it, local groups) whose (exists (item 0 of it as string as lowercase, members of item 1 of it as string as lowercase) whose (item 0 of it = item 1 of it))),
 last logon of it as string | "Never",
 logon count of it,
 account disabled flag of it,
 password expiration disabled flag of it,
 password change disabled flag of it
) of local users

q: (name of it,(concatenation "|" of names of items 1 of (sid of it, local groups) whose (exists (item 0 of it as string as lowercase, members of item 1 of it as string as lowercase) whose (item 0 of it = item 1 of it))), last logon of it as string |"Never", logon count of it, account disabled flag of it, password expiration disabled flag of it, password change disabled flag of it ) of local users
A: redacted, redacted, Never, 0, True, True, True
A: redacted, redacted, Never, 0, True, True, False
A: redacted, redacted, ( redacted ), 15, False, True, False
A: redacted, , Never, 0, True, False, False
T: 551.403 ms
I: plural ( string, string, string, integer, boolean, boolean, boolean )

Thanks for any help you provide.

I don’t think you will be able to solve this on the client side. The problem is the iterative nature of the query, which grows combinatorially: for every single local user you are pulling all local groups and comparing memberships one by one.

To provide the same data, I broke it down into two separate properties, “user account info” and “group members”, and passed the actual parsing/merging to something else. I’ve done a few of these in Web Reports with session relevance, but even there you start hitting bottlenecks; ideally I’d export the data to something with more processing power and more native data mapping (a scripting language, a BI tool, etc.). You can always write the data retrieval in a script and have it output to a file, but that’s entirely up to you.

Just as an example, I wrote similar relevance on Unix/Linux that checked data from the passwd file, the shadow file, and the sudoers file on top of regular account data. Yes, it worked, but it ran so slowly that it was completely prohibitive! I guess what I am saying is that just because you can write something in relevance doesn’t mean you always should; always balance the pros and cons.
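The split I described would look roughly like the two properties below. This is only a sketch that reuses the inspectors from the original query; the exact property names and output format are up to you, and the merge on user name then happens outside the client:

```
(name of it, last logon of it as string | "Never", logon count of it,
 account disabled flag of it, password expiration disabled flag of it,
 password change disabled flag of it) of local users

(name of it, concatenation "|" of (members of it as string)) of local groups
```

Evaluated this way, each property is a single linear pass, and the per-user × per-group × per-member comparison work moves to whatever consumes the report.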

For what it’s worth, I’ve always advocated that Web Reports should offer relational data mapping (a way to define how data from one property maps to data from another), regex-style filtering of data, and so on. I even had an RFE for it at one point, but it never went anywhere and I can’t even find it now. That was the whole point: you just can’t do everything in client relevance without impacting client evaluation cycles, and it would be great if you could map data sets for reporting purposes directly in the reporting interface…

I spoke with the author. This is for Windows, and we discussed using a PowerShell script to dump formatted data into a file, or perhaps the registry, and then pulling it from there into an analysis.
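If you go the file route, the retrieval side of the analysis stays trivially cheap. A sketch, where the path is just a placeholder for wherever the script actually writes its output:

```
lines of file "C:\Temp\UserAccountInfo.txt"
```

The property then only reads pre-formatted lines, so all the expensive user/group joining happens once in the script rather than on every property evaluation.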