Relational data in table format, one line per object

Hi all,

I just wanted to gauge something - how do people deal with relational data? For example, say you want to retrieve all local accounts with some number (5, 6, 7, etc.) of per-user attributes (name, status, last login, last password change, etc.). If I create one property just for the account names and a separate property for the last logins, then when I display the data from the two there is no way to tell which last login belongs to which account… The same applies to anything - scheduled tasks, services, installed apps, etc.

What I have been doing is putting all the attributes into a single delimited string, one line per object iteration, retrieving those, and then doing the parsing in Web Reports, where the relationship is already established for me (it’s all on the same line). The problem with this approach is that I end up with massive properties, size-wise, that cause downstream problems. For example, I recently discovered that BigFix Insights, unlike the Platform, has a maximum of 4000 characters on property result length, so Insights is completely unusable for those!
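To illustrate the problem being described (a sketch in JavaScript, since that is what Web Reports custom reports use; the attribute names, delimiter, and sample values here are purely illustrative): two independent properties lose the per-user relationship, while one delimited line per object preserves it at the cost of parsing later.

```javascript
// Two separate properties: each result list is independently ordered,
// so the link between name and last login is lost.
const names = ["alice", "bob"];
const lastLogins = ["2024-01-02", "2023-11-30"]; // which belongs to whom?

// One property, one delimited line per object: the relationship survives
// and can be reconstructed by splitting on the chosen delimiter.
const combined = ["alice|2024-01-02|Enabled", "bob|2023-11-30|Disabled"];
const records = combined.map(line => {
  const [name, lastLogin, status] = line.split("|");
  return { name, lastLogin, status };
});
```

The trade-off, as noted above, is that every record's attributes concatenate into one long string, which is what runs into downstream length limits.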

Does anyone have a better approach?

There are a few approaches you could take. The first strategy I like to use is to determine whether the information is actually needed and useful in some way - there’s often a temptation to retrieve more than we really need, especially when stretching BigFix to report on non-traditional areas like user management. It could be that Active Directory or LDAP tools would be a better fit, depending upon what you’re trying to do. So I’d start by choosing carefully which attributes are needed - do we really need to know profile paths of users in an Analysis, do we need last login times, do we need all users or just those in specific groups (i.e. Administrators), do we need all users or can we filter to the ones who haven’t logged on in a while…

If reducing the attributes and the size of result sets still leaves us with excessively long strings, we might try splitting the attributes into separate properties. One strategy could be to create key:value pairs for each attribute you want to retrieve and deal with them as separate properties. For instance, you might have one property retrieving only “UserName”, another for “UserName:ProfilePath”, and another for “UserName:LastLogonTime”, and then use a session relevance query that loops over each “UserName” result and filters each of the other properties to the results that match the given username.
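The re-join step described above could also be done on the reporting side. A minimal sketch in JavaScript (the property names, separator, and sample values are hypothetical; the key point is splitting on the *first* colon only, since values such as Windows profile paths can themselves contain colons):

```javascript
// Hypothetical results from two key:value properties, in arbitrary order.
const profilePaths = ["alice:C:\\Users\\alice", "bob:C:\\Users\\bob"];
const lastLogons   = ["bob:2023-11-30", "alice:2024-01-02"];

// Split "key:value" on the first colon only, so values may contain colons.
function toMap(results) {
  const m = new Map();
  for (const r of results) {
    const i = r.indexOf(":");
    m.set(r.slice(0, i), r.slice(i + 1));
  }
  return m;
}

const paths = toMap(profilePaths);
const logons = toMap(lastLogons);

// Re-join on the UserName key, regardless of result ordering.
const users = [...paths.keys()].map(name => ({
  name,
  profilePath: paths.get(name) ?? "",
  lastLogon: logons.get(name) ?? "",
}));
```

This keeps each individual property short, at the cost of repeating the username key in every property and maintaining the join logic yourself.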

Thanks, @JasonWalker. Yes, unfortunately removing attributes is not an option, and we face this problem in at least 20-30 different use cases, not just account-related ones - and all of them are things no other tool can cover.

I’ve thought about that as well, and I guess it could be a workaround, but it does seem a bit excessive. In some cases I have up to 10 different object attributes, and if I have to produce the identifier for every single one of them, even the session relevance would get complicated/cumbersome in a hurry.

The more I think about it, the more I am leaning towards an RFE for something like a “relational analysis”, where you define a single relevance statement for all properties in the analysis and simply map each numbered item to a property name, and the client knows how to link things together for you. The relevance could be structured something like: (name of it, last login of it, id of it, etc…) of users. In session relevance it should essentially return a tuple instead of an individual value when you do: values of results of properties of bes fixlet whose (name of it = "Sample Relational Analysis").

This is actually a very common scenario with larger deployments. For us, we have approached this under a divide-and-conquer paradigm.

  1. If the data must be “usable” from within the normal Web Reports interfaces and the data collectively represents a single record (for example, the currently logged in/active user on a desktop, where you want, say, their username, full name, and the size of their profile folder), we represent it as a single property with a multi-value result, prefixing each value as “Attribute: Value”. While all the data does end up together in the single property, in Web Reports you can break this into multiple rows. As long as you ensure even empty attributes are represented as blank values but still include the attribute name in the result, you can then use filtering to remove the attributes you don’t want to see. They will all appear in the same column, but at least they will be tied to the same record.
  2. If the data must be “accessible” from within, say, a custom web report and you don’t care too much about the ability to split it in the normal interface, or if the data represents multiple records, we use a similar approach to yours: we combine all the values together and use custom processing logic in JavaScript to break it apart into more usable pieces. Concatenation of strings, XML, JSON, etc. are all approaches we use here, depending upon how we are going to post-process it. For larger values, sometimes base64-encoding the results can yield smaller property sizes, at the expense of more post-processing and larger sizes for some percentage of values.
  3. If the data is relatively large and/or contains a large number of individual attributes, but you can accept this data not being directly accessible from within Web Reports, you can leverage the upload manager function of the client. Here you would set a re-applying action to run at the frequency the data is expected to change (AND at which you require the change to be available for retrieval) that captures all the required data and saves it into a file, then have the client configured to upload that file whenever it changes. You will need a process that runs on the server looking for those files and handling their processing. This is how BigFix Inventory gets its scan results.
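For approach 1 above, the custom-report side of reassembling a multi-value “Attribute: Value” result into one record might look something like this sketch (JavaScript, as used in custom web reports; the attribute names and values are invented for illustration):

```javascript
// One multi-value property result, each value prefixed "Attribute: Value".
// Empty attributes are still reported, so every record has the same shape.
const multiValueResult = [
  "UserName: jsmith",
  "FullName: John Smith",
  "ProfileSizeMB: ",        // empty, but the attribute name is kept
];

// Fold the prefixed values back into a single record object by splitting
// each value on the first ": " occurrence.
const record = {};
for (const line of multiValueResult) {
  const i = line.indexOf(": ");
  if (i === -1) continue;
  record[line.slice(0, i)] = line.slice(i + 2);
}
```

Keeping the empty attributes present, as the post recommends, is what lets filtering work predictably: every record exposes the full attribute set even when some values are blank.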

As to your idea for “relational analysis”, I have actually done something very similar to this in the past using normal properties. When making the analysis, I add as metadata the format pattern for each property:

<property>|||<format_key>|||<separator_pattern>|||field1<separator_pattern>field2<separator_pattern>…

As I always know the pattern and field definitions for the property, I can use this to reconstruct the records, and I have the flexibility of changing the encoding (none, base64, xml, json, etc.), the separator pattern used when combining the attributes, and the order of the attribute values. I then just use a common parsing routine in my custom reports that produces the data as an array of objects. My custom report would allow the user to select all the properties they wanted, and for those properties that followed this format, it would automatically deconstruct them and present everything just like the normal Web Reports interface. If some advanced parsing was required but not common enough to make generic, I would embed that JavaScript directly within the analysis metadata as well, so the generic parser knew how to handle it without my ever having to modify the report.
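A minimal sketch of such a generic parsing routine, following the metadata line format described above (the property name, separator, field names, and sample result are all illustrative; only the "none" and "base64" encodings are handled, and `Buffer` assumes a Node-style environment - the Web Reports engine would need its own base64 helper):

```javascript
// Parse "<property>|||<format_key>|||<separator_pattern>|||field1<sep>field2..."
function parseMeta(meta) {
  const [property, formatKey, sep, fieldSpec] = meta.split("|||");
  return { property, formatKey, sep, fields: fieldSpec.split(sep) };
}

// Turn one raw property result into a record object using the metadata.
function parseResult(result, { formatKey, sep, fields }) {
  if (formatKey === "base64") {
    result = Buffer.from(result, "base64").toString("utf8");
  }
  const values = result.split(sep);
  // Zip field names with values; missing trailing values become "".
  return Object.fromEntries(fields.map((f, i) => [f, values[i] ?? ""]));
}

// Example with an invented user-accounts property:
const meta = parseMeta("LocalAccounts|||none|||~~|||UserName~~Status~~LastLogon");
const rec = parseResult("jsmith~~Enabled~~2024-01-02", meta);
```

Because the encoding and separator travel with the analysis metadata rather than being hard-coded in the report, the report never needs changing when a property's packing scheme does - which is the main attraction of this design.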


Thanks, very good feedback and generally exactly what we are doing.

The problem seems to be that, for example, BigFix Insights has a 4000-character maximum on the ResultsText field in LONGQUESTIONRESULTS. So even though the full data is perfectly fine in BFEnterprise and usable in Web Reports, this completely rules out using it in Insights (if you check such a field, the data is simply cut to the first 4000 characters and anything after that is disregarded), along with all the benefits Insights carries (data aggregation & historical reporting; no impact on production services from working with data sets this big; etc.), which is such a shame… This leads me back to the need for a system-level, native solution that would be supported across all components and would potentially be a lot easier to work with (without all the parsing/look-ups/custom reports/etc. that we use right now).