Custom ActionResults Computer Status

Someone will likely correct me if I’m wrong, but I do not believe you can return a custom string from a Fixlet or Task. You can return an exit code with:

Exit <integer>

That being said – I see two options for what you’re trying to do…

  1. Use an analysis to collect the results, then pull the analysis results via REST
  2. Create a client property and store the results in it

Thanks for the quick reply. I forgot to include the caveat that I’m not the actual IEM engineer writing and maintaining the Fixlets – I merely call them and create automation workflows which utilize IEM.

When you say I can use an analysis to pull the results, would it be the results of the Fixlet itself, or is an analysis something completely separate that runs on the endpoint server?

Again, pardon the ignorance, but where would the client property be stored and maintained, and how would we access that?

-Matt

Matt,

So the typical workflow would be:

Fixlet has actionscript of

waithidden netstat -rn > "C:\Windows\Temp\Diags-Netstat.txt"

Analysis has a property called, “Netstat Diag Results” with relevance of:

lines of file "C:\Windows\Temp\Diags-Netstat.txt"

You can then pull the results of the analysis through the rest api.

Something like this:

(name of computer of it, values of it) of results of properties of BES Fixlets whose (analysis flag of it and name of it = "Test Analysis 1")

Should get you started with pulling analysis results


Interesting…

Is it possible to have the Fixlet employ logic to examine the results of a series of commands and then write those results into a file which could then be read by an Analysis?

Here’s another example… a Fixlet runs a script which does some basic diagnostics and writes them to a file. Then another Fixlet runs and does the same commands, writing the results to a different file, then performs a diff against the two files and writes whether or not there are differences to a new file which would then be read by an Analysis. Is that possible?

-Matt


What you’d probably do is have the analysis itself do the comparison.

Fixlet 1:

ipconfig /all > C:\test.txt

Fixlet 2:

ipconfig /release
ipconfig /renew
ipconfig /all > C:\testnew.txt

Analysis:

sha1 of file "C:\test.txt" = sha1 of file "C:\testnew.txt"
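The same comparison the relevance performs can be sketched in Python. This is a minimal, hypothetical example (the file paths are illustrative) mirroring the sha1-equality check above:

```python
import hashlib

def sha1_of_file(path: str) -> str:
    """Return the SHA-1 hex digest of a file, reading in chunks."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def files_match(before_path: str, after_path: str) -> bool:
    """True when the two snapshot files are byte-identical,
    just as the relevance's sha1 comparison reports."""
    return sha1_of_file(before_path) == sha1_of_file(after_path)
```

Hashing the files rather than comparing them line by line is cheap and answers the only question the analysis asks here: did anything change?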

I see what you’re saying there. Right now what we have is a shell script which contains multiple functions and based upon input argument executes either a before or after health-check function. “Before” writes the first results to a file and “after” writes results to a new file, then performs the comparison. Assuming the comparison is handled within the script, we could write the results to a third file, and if I understand correctly, have Analysis read the contents of the third file.
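The comparison step inside such a script could look like this sketch, which diffs the before/after snapshots and writes a verdict to a third file for an analysis to read (all paths and the verdict format are hypothetical):

```python
import difflib

def write_comparison(before_path: str, after_path: str, result_path: str) -> bool:
    """Diff two snapshot files and write the verdict (plus any diff lines)
    to a third file that an analysis property could then read."""
    with open(before_path) as f:
        before = f.readlines()
    with open(after_path) as f:
        after = f.readlines()
    diff = list(difflib.unified_diff(before, after,
                                     fromfile=before_path, tofile=after_path))
    with open(result_path, "w") as out:
        if diff:
            out.write("DIFFERENT\n")
            out.writelines(diff)
        else:
            out.write("IDENTICAL\n")
    return not diff
```

Putting the verdict on the first line keeps the analysis relevance trivial – something like `line 1 of file "..."` returns it directly.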

Yep – that would work just fine.

Excellent… I’ll take this back to our systems and IEM folks and see what they have to say.

Thanks so much for your help, @strawgate!

One more question… do you happen to know where the Analysis API is located and how I would go about creating the Analysis and gathering the results?

I believe you’re going to have to use the generic Query API

GET /api/query

With session relevance like:

(name of computer of it, values of it) of results of properties of BES Fixlets whose (analysis flag of it and name of it = "Test Analysis 1")
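For illustration, here is a minimal Python sketch of building that Query API request with the session relevance URL-encoded. The server name and credentials handling are assumptions – substitute your own root server, and in practice you would also add basic-auth headers:

```python
from urllib.parse import quote
from urllib.request import Request

# Hypothetical root server -- substitute your own.
SERVER = "https://bigfix.example.com:52311"

def build_query_request(relevance: str) -> Request:
    """Build a GET /api/query request; the session relevance
    must be URL-encoded into the 'relevance' parameter."""
    url = SERVER + "/api/query?relevance=" + quote(relevance, safe="")
    return Request(url, method="GET")

relevance = ('(name of computer of it, values of it) of results of properties '
             'of BES Fixlets whose (analysis flag of it and name of it = '
             '"Test Analysis 1")')
req = build_query_request(relevance)
```

The response comes back as XML, which you would then parse for the per-computer values.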

Another approach is to use the analysis to get the data directly from the computer. Using BigFix relevance you can inspect the computer without having to run an OS command in a script. If you can do this it’s more efficient, and you don’t have to set up something to run the command, write out files, and so on. This approach is contingent on the data you need being available through a relevance inspector. There are many inspectors available.


To add on to what @strawgate mentions, I’d recommend outputting the results from the commands you run into either the Windows Temp folder, or the BES Client Log folder.

To add to what @gearoid mentions, you may be able to tell what the current state is based upon inspectors, but if you are going to make changes, you might still need to output a before & after so that you can more easily compare the two.

There are many existing analyses in the console and on http://bigfix.me/ that may have info related to what you are looking for.

Thanks everyone for the useful info. The approach we are going to take is using an HBase REST API to post columns upon Fixlet completion, then read the columns at a later date. The IEM administrators are concerned about the amount of overhead required. Also, I was under the impression an Analysis could be kicked off manually, but now I see that analyses run on a recurring schedule.
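For reference, the HBase REST (Stargate) interface expects the row key, column, and value to be base64-encoded in the JSON body of a PUT to `/<table>/<row>`. A minimal sketch of building such a payload (the table, row key, and column names here are purely illustrative):

```python
import base64
import json

def b64(s: str) -> str:
    """HBase REST requires base64-encoded keys, columns, and values."""
    return base64.b64encode(s.encode()).decode()

def build_hbase_put(row_key: str, column: str, value: str) -> str:
    """Build the JSON body for a PUT to /<table>/<row> on the HBase
    REST (Stargate) interface. Column is 'family:qualifier'."""
    payload = {"Row": [{"key": b64(row_key),
                        "Cell": [{"column": b64(column), "$": b64(value)}]}]}
    return json.dumps(payload)

body = build_hbase_put("host01", "results:netstat", "no differences")
```

The request would be sent with `Content-Type: application/json`; the read path is a GET on the same URL with the response fields base64-decoded the same way.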

The overhead of a single action is almost non-existent. If you were creating separate actions for every client and never stopping them, that would have some overhead. It is better to create a single action and target an automatic group that defines which clients should run the action, or target all machines with the same action, or target by properties or other factors.

As far as the Analysis is concerned, the overhead there is typically minimal. As long as the relevance that the Analysis properties evaluate is quick, the impact on the client is minimal, and setting it to return results only once an hour or less often further reduces that impact.

As far as the impact of Analysis properties on the relays, root server, and other infrastructure is concerned, that is really only an issue if the data changes frequently and large amounts of data are returned by each property. The number of endpoints returning results to the Analysis is also a factor.

I’m curious why the IEM admins are concerned, but I don’t know the specifics of your environment and how constrained it might already be. It would also help to know how many clients you’ll be running these actions against typically and if it will be ongoing or only to diagnose specific issues.

In general I use Analyses very heavily and would recommend it, but there are definitely some considerations and edge cases to keep in mind.

Generally anything relevance can’t report on directly can be worked around by using an action+analysis combo as described above.


Is there a way to execute an analysis one time and one time only? The scheduling factor is our biggest concern at this point.

Why is scheduling a factor at all?

As part of the analysis you can write a relevance statement that checks when the file was created or modified giving you the point in time the diagnostics occurred.

Why can it only run one time?

The output must be read and processed by a higher-level workflow or process, so the analysis result needs to be available as soon as possible. It’s not something that should be constantly running and checking in the background. We want to run a Fixlet and get its output as soon as possible: execute the Fixlet, wait for it to complete, execute the analysis, and read the analysis result. Since the Fixlet is part of either a user-spawned request or a workflow performing other tasks, we don’t want that parent process to wait around too long while the analysis waits for its next occurrence, and we don’t want it running over and over on the endpoints.

We have over 20,000 potential endpoints.
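The "execute, wait, read" workflow described above amounts to polling the Query API until the analysis reports. A minimal sketch, where `fetch` is any callable wrapping your query call and the timeout/interval values are illustrative:

```python
import time

def wait_for_result(fetch, timeout_s: float = 300.0, interval_s: float = 15.0):
    """Poll a fetch() callable (e.g. a wrapper around GET /api/query)
    until it returns a non-empty result or the timeout elapses.
    Returns the result, or None on timeout."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        result = fetch()
        if result:
            return result
        time.sleep(interval_s)
    return None
```

The parent workflow blocks only for as long as the result takes to arrive, up to the timeout, rather than waiting for a fixed schedule.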

The analysis should be available at next client report anyway so that shouldn’t be an issue.

As for the analysis – a normal analysis takes microseconds (millionths of a second) to evaluate, and the BigFix client already evaluates the relevance of 10,000+ fixlets and analyses every ~15 minutes.

Every piece of content available to the client has its relevance checked every cycle – a simple analysis has such a small impact that it’s not even worth considering.

So you set the analysis to be relevant only on computers that have the file the analysis reads; if the task has never been run, zero computers will be relevant.

You create the property that returns the result you desire, but set the period to once every 30 days… it doesn’t matter what you set it to.

You run the task, output the file the analysis is going to read, the analysis becomes relevant and immediately processes the property right then and there on the client. This will give you the results fairly quickly. Putting “notify client ForceRefresh” in the actionscript might have it work a bit faster.

It would still probably be faster to use something like cURL on the endpoint to send the results back directly via a REST API, but I think the above option could deliver results within 5 minutes or less if UDP notifications are working properly. The advantage of this approach is that it does not require the client to have direct access to the thing it is sending the data back to.


The code I have here is an example of using a REST API to return results to a server: https://github.com/jgstew/remote-relevance/tree/master/python

It creates an action on the fly in BigFix that does relevance evaluation, gets the result, and sends it back using cURL.

I haven’t really put much polish on it, it isn’t very usable in its current state, but it is a fully working proof of concept for remote relevance.

Your need is not to send back relevance results but the result of a command; the idea, though, is very similar. (I just realized my remote relevance project could also be used to run arbitrary commands and send the results back.)

In my experience, this method took between 10 seconds and 2 minutes.

Great info, thanks a ton. I will take this back to the IEM admin(s). We do have our HBase solution configured and working partially, but not perfectly yet, which may end up pushing us to use the Analysis method.

Thanks again,
Matt
