Workflow considerations for using the REST API to set Client Settings on a deployment-wide scale

Hello!

I’m looking into pushing values from another system into BigFix by using the REST API to set Client Settings. While I understand there is an API endpoint to set computer settings directly, I’ll initially be doing this for tens of thousands of computers, and potentially hundreds of computers weekly after that. I also understand that setting a Client Setting is done via an Action, and creating one action per computer for thousands of computers seems like it would be extremely inefficient (or would it?).

So my thinking was that, instead of issuing individual Actions per computer with the POST “api/computer/{computer id}/setting/{setting name}” endpoint, I’d do some organizing/categorizing and use the POST “/api/actions” endpoint instead. This, however, will likely end up with either a crazy relevance expression or a very long list of target computers, which is also fairly inefficient.
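For what it’s worth, here is a rough Python sketch of that second approach: one action posted to /api/actions with an explicit ComputerID target list. The server URL, credentials, setting value, and the exact SingleAction element layout are illustrative assumptions; verify the generated XML against the BES schema for your version before relying on it.

import requests

BIGFIX = "https://bigfix.example.com:52311"   # placeholder server
AUTH = ("api_user", "api_password")           # placeholder credentials

def build_single_action(setting_name, setting_value, computer_ids):
    # Assumed BES SingleAction layout; confirm element order against BES.xsd.
    targets = "".join(f"<ComputerID>{cid}</ComputerID>" for cid in computer_ids)
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<BES>
  <SingleAction>
    <Title>Set {setting_name}</Title>
    <Relevance>true</Relevance>
    <ActionScript MIMEType="application/x-Fixlet-Windows-Shell">setting "{setting_name}"="{setting_value}" on "{{now}}" for client</ActionScript>
    <Target>{targets}</Target>
  </SingleAction>
</BES>"""

xml = build_single_action("MyClientSetting", "some-value", [12345, 67890])
resp = requests.post(f"{BIGFIX}/api/actions", data=xml, auth=AUTH, verify=False)
resp.raise_for_status()
print(resp.text)  # BESAPI response describing the newly created action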

Does anyone have some wisdom and/or experience to share with me before I tread this path?

The main wisdom I’d lend, from hard-won experience, is to start small :slight_smile:

Posting to ‘/api/computer/{computer id}/setting/{setting name}’ does have a side effect to consider - you can only use that endpoint as a Master Operator account, and when you do, a Global Property is also created to retrieve the value of that client setting (which is the main reason I try to avoid that endpoint).

To some extent it depends on how private you need these values to be. If you post to /api/actions and use a computer list, a separate copy of the action is sent to each computer’s mailboxsite; this is a bit more work for the server but lightens the load on the clients, since only the targeted clients even see that the action exists. But with tens of values on thousands of computers, it can add up.

One technique you might consider, instead of issuing actions, is posting to a Site’s ClientFiles resource and uploading a file of key/value pairs. If these are fairly public values, like “computername to sitename” mappings, you could have a single file listing thousands of computers, like

computername1:value
computername2:value

and then a single Action on the clients that updates their client setting with the rows from the file that match their computername, checking whenever the file is updated.

If you need the values to be private, though, you could post an individual file to each computer’s mailboxsite (which also requires Master Operator permissions). This is the technique used by the ServiceNow Data Flow integration to balance server workload and site updates, so I know it scales well at least into the tens of thousands.
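A very rough sketch of that per-computer loop is below. The mailbox-file URL here is only a placeholder assumption (as are the credentials and file contents); check the REST API reference for the actual mailbox-file resource in your server version.

import requests

BIGFIX = "https://bigfix.example.com:52311"
AUTH = ("master_op_user", "password")   # mailbox uploads need a Master Operator

def post_mailbox_file(computer_id, filename, content):
    # PLACEHOLDER path: confirm the real mailbox-file resource in the REST API docs.
    url = f"{BIGFIX}/api/mailbox/{computer_id}"
    resp = requests.post(url, files={"file": (filename, content)}, auth=AUTH, verify=False)
    resp.raise_for_status()

# Values would come from the external system, e.g. {computer id: value, ...}
computer_values = {12345: "siteA", 67890: "siteB"}
for cid, value in computer_values.items():
    post_mailbox_file(cid, "MySettingsFile.txt", f"{cid}:{value}\n")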

If you use a common file in a custom site, you can have relevance in a Task that checks for the date on it, something like

exists file "MySettingsFile.txt" of client folder of current site and not exists settings "MyClientSetting" whose (effective time of it > modification time of file "MySettingsFile.txt" of client folder of current site)

This makes the Task relevant if the client setting doesn’t exist yet, or if the configuration file is newer than the setting’s effective date.

Then in ActionScript you’d do something like

setting "MyClientSetting"="{following texts of firsts ":" of lines whose (it as lowercase starts with (computer name as lowercase & ":") of files "MySettingsFile.txt" of client folder of current site}" on "{now}" for client

You can do the same with files in mailboxsites, just adjusting the client path a bit, e.g. client folder of site "mailboxsite" instead of client folder of current site.

Or you can use a technique like the “Location Property Wizard”, which accepts a text blob of key/value pairs mapping computers to values and then generates a really long relevance statement to match each computer to its desired value.
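Just to make that wizard-style idea concrete (this is not the wizard’s actual output), a few lines of Python can turn a computer-to-value map into one long relevance if/else chain; the names and values below are placeholders.

def build_relevance(mapping, default="unknown"):
    # Build one long relevance "if ... then ... else ..." chain from a
    # computer-name-to-value map, wizard-style. Work backwards so the first
    # entry becomes the outermost test.
    expr = f'"{default}"'
    for name, value in reversed(list(mapping.items())):
        expr = (f'if (computer name as lowercase = "{name.lower()}") '
                f'then "{value}" else ({expr})')
    return expr

mapping = {"host-001": "Datacenter A", "host-002": "Datacenter B"}
print(build_relevance(mapping))
# -> if (computer name as lowercase = "host-001") then "Datacenter A" else (if (computer name as lowercase = "host-002") then "Datacenter B" else ("unknown"))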


Wow. This is… amazing.

All of this is very public information, so there are no worries about it being seen. I actually would like a Global Property to be set, but I can do that manually instead of using that API endpoint (or, if I manually set the value on one computer as an MO, the Global Property will be created, yes?).

I’ve read about ClientFiles before, and about it being used to ensure that certain utility files and the like are present, but I hadn’t heard of using it for hashtables like this. Brilliant! Now to play around with the POST “/api/site/{site}/files” endpoint.
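In case it helps anyone following along, here is a minimal Python sketch of that upload, assuming the Site Files resource accepts a multipart file POST and a “custom/<site name>” site path; both assumptions (and the credentials/filenames) should be confirmed against the REST API reference.

import requests

BIGFIX = "https://bigfix.example.com:52311"
AUTH = ("api_user", "api_password")   # placeholder credentials
SITE = "custom/CustomSite_XYZ"        # assumed "custom/<name>" site path

# Upload the key/value data file as a client file of the custom site.
with open("data.txt", "rb") as fh:
    resp = requests.post(
        f"{BIGFIX}/api/site/{SITE}/files",
        files={"file": ("data.txt", fh)},
        auth=AUTH,
        verify=False,   # typical for a self-signed BigFix certificate
    )
resp.raise_for_status()
print(resp.status_code, resp.text)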

I’ve got some work to do… Thank you!


Okay… another question.

I’ve generated a first pass at the text file (250+ KB) and uploaded it to a custom site for testing. Why would I not want to just have a property in an analysis in that site that returns something like
following texts of last "|" of (lines of file "data.txt" of client folder of site "CustomSite_XYZ") whose (preceding text of first "|" of it is (computer id as string))
versus an action that sets a client setting?

I have some reasons of my own why not, but I’d like to hear what you–or anyone else–think(s).

(Also, computer names aren’t guaranteed to be unique around here, so I went with computer id, which is already exported to the other system.)

I’m also wondering about ways to shrink the size of the data file. How onerous is the mailboxsite delivery method? How onerous would using it 12,047 times be (at least to start)? :slight_smile:

There’s nothing wrong with using a ‘lines of file’ expression directly to report a property from the client; a lot of it depends on how you’ll be using the property. For things like location tagging or setting management scope, that would be fine.

Where I’ve used this before, part of what I deployed were Maintenance Window definitions, and those definitions get re-evaluated by every action that uses the maintenance window constraint - so it’s easier to write the relevance, and more efficient for the client, to evaluate a client setting repeatedly than to parse the file directly.

A 250 KB text file seems manageable as far as client gathers go. I’d have to test it again to be sure, but my recollection is that the site gather uses zlib compression over the HTTPS session, so the bandwidth usage is more efficient than just copying the raw text bytes over the wire.
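If you want a rough feel for that, a couple of lines of Python will estimate the zlib-compressed size of the data file; this is only a ballpark for what a compressed transfer might look like, not a measurement of the actual gather protocol.

import zlib

# Estimate how well the key/value file compresses; a rough proxy for
# "over the wire" size if the transfer is zlib/deflate compressed.
with open("data.txt", "rb") as fh:
    raw = fh.read()

packed = zlib.compress(raw, level=6)
print(f"raw: {len(raw):,} bytes, compressed: {len(packed):,} bytes "
      f"({len(packed) / len(raw):.0%} of original)")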

For sending individual files to each client mailbox, there are some tradeoffs, and deciding which approach is best probably requires testing in your specific environment. Posting a single file to a custom site is much easier on the server and faster for your API client, but gathering individual smaller files on each client is easier on the client gathers, especially if you only post updates for the individual machines whose data changes. That said, the ServiceNow Data Flow uses this method and it’s quite scalable, though posting all the files for the first-time sync can take a while (an hour or more of looping through each client).

In the end, I think the biggest factor is whether you’re OK with every client seeing the data for every other client. My gut feeling is that if you are, I’d just stick with the single file: 250 KB isn’t really that large; looping through each client mailbox and sending individual files to each machine adds a lot of complexity to your API script; and perhaps most importantly, posting the file to a custom site does not require Master Operator permissions for your API client, whereas a master op would be needed to post to client mailbox sites.


Thank you again… this is great. Your points about relevance and evaluation are well taken. I was thinking that other systems and processes could take advantage of the client setting (versus an analysis property), since it’s recorded in a registry key.

The initial push would be best done with a single custom site file, but future changes to each client’s data may be best delivered by individual mailboxsite files. I may try to come up with some sort of hybrid process that pushes files to mailboxsites based on an activity queue, plus a nightly reconciliation that gathers all the data (so that, if a queue item is somehow missed, it’s taken care of overnight). We’ll see… thanks again!!
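In outline, that hybrid might look something like the Python sketch below; push_mailbox_file and post_full_site_file are hypothetical placeholders for whichever REST calls end up being used.

def push_mailbox_file(computer_id, content):
    ...  # placeholder: post a per-computer file to that computer's mailboxsite

def post_full_site_file(content):
    ...  # placeholder: regenerate and upload the single custom-site data file

def process_queue(queue):
    # Intraday: push updates only for the computers whose data actually changed.
    for computer_id, new_value in queue:
        push_mailbox_file(computer_id, f"{computer_id}|{new_value}\n")

def nightly_reconciliation(source_of_truth):
    # Overnight: rebuild the full file so any missed queue item self-heals.
    rows = [f"{cid}|{value}" for cid, value in sorted(source_of_truth.items())]
    post_full_site_file("\n".join(rows) + "\n")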

A year later, I’m finally getting back to this. Things have changed a little, and the file size has now increased by a factor of nine (to 2.25 MB). With some abbreviating I can get it down to around 1.5 MB, and with some more work (that would require hashtables to decipher) I can get it to 1 MB, maybe smaller.

How big of a deal is a 2.25 MB Site file?

How would I best consume a Site file-based hashtable and decipher the device data in BigFix ActionScript?
For example, the smallest file would contain lines like the following:
3356810|id=7023|sg=118|dg=27|ss=100|us=abc123|ut=4|vp=0|ex=20131027|
where the sg=118 would need a lookup of the value for “118” in another Site file, and the same for dg=27 and ut=4.
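Purely to make that compact format concrete (not the ActionScript answer, just an illustration of the parsing and lookups involved), here is a small Python sketch that parses such a line and resolves the coded fields through lookup tables; the table contents are invented placeholders.

# Parse one compact line such as
#   3356810|id=7023|sg=118|dg=27|ss=100|us=abc123|ut=4|vp=0|ex=20131027|
# and resolve the coded fields via lookup tables. Table contents are invented
# placeholders; the real tables would come from the other Site files.
SG_LOOKUP = {"118": "Server Group 118 (placeholder)"}
DG_LOOKUP = {"27": "Device Group 27 (placeholder)"}
UT_LOOKUP = {"4": "User Type 4 (placeholder)"}

def parse_line(line):
    computer_id, *pairs = [p for p in line.strip().split("|") if p]
    fields = dict(pair.split("=", 1) for pair in pairs)
    fields["sg"] = SG_LOOKUP.get(fields.get("sg"), fields.get("sg"))
    fields["dg"] = DG_LOOKUP.get(fields.get("dg"), fields.get("dg"))
    fields["ut"] = UT_LOOKUP.get(fields.get("ut"), fields.get("ut"))
    return computer_id, fields

print(parse_line("3356810|id=7023|sg=118|dg=27|ss=100|us=abc123|ut=4|vp=0|ex=20131027|"))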