On Linux we use a file creation task to write all of the relevant information for our security agents, and then an Analysis uses relevance to present the information we need. The issue we are having is that when a machine checks in, both the Analysis and the file creation task run at the same time, even though each is set to run every hour. This makes the information on the reports inaccurate, because the Analysis will sometimes evaluate while the file is not there.
Is there a way to schedule the Analysis to run after the file creation task? What is the best practice for this on Linux using BigFix? Is there possibly a solution I may be missing?
Hi, usually we would configure the Analysis property to repeat on an interval - hourly, daily, etc.
If the Analysis happens before the file is created, the result may be out of date until the next Analysis evaluation interval.
You could do a couple of things. You could change the Analysis to only be Relevant on computers where the file exists (in which case you may not observe clients that are missing the configuration entirely, unless you set up another Analysis/Property to watch for that).
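As a rough sketch (the path below is just a placeholder for wherever your task writes the file), the applicability relevance of the Analysis could be something like

exists file "/var/opt/security/agent_report.txt"

and a property in that Analysis could then read the contents with

lines of file "/var/opt/security/agent_report.txt"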
Or, you could force the client to re-evaluate all properties after you install your security package. You won’t want to do this frequently, because re-evaluating every property and sending a new Full Report puts extra load on your root server and Relays. But if you have a baseline that updates a lot of things and you need those changes reflected in Analyses more quickly, you could add a task with the actionscript
notify client forcerefresh
This has the same effect as right-clicking a computer and selecting the ‘Force Refresh’ option in the Console. The client re-evaluates all properties and sends a new Full Report containing all property values.
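If you take that route, a minimal sketch of such a task (the install step is just a placeholder comment for whatever your deployment already does) would be:

// ...actions that install or update the security package go here...
// then force the client to re-evaluate all properties and send a new Full Report
notify client forcerefresh

The refresh only needs to run once, right after the package changes, not on the hourly schedule.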
Right now both the Analysis and the file creation task run every hour. We only have issues when a machine checks back in after an extended period, because the task and the Analysis then run simultaneously.
We thought about changing the relevance. The issue is that the file is deleted and then re-created as part of the file creation task. This leaves the information blank/inaccurate when the Analysis runs at the same time.
We can change the relevance to check that the file exists before the Analysis evaluates, but this does not solve our problem, since on those specific machines the Analysis can still run after the file is deleted and before it is re-created, because the task and the Analysis happen simultaneously.
There should be a very low probability of any given client evaluating that specific analysis while the file is being rewritten. I’d have to ask for a bit more detail on how this is operating.
Once you write the file, does it stay, or is it removed again by something else that ingests it? Is the “normal” state for the file to exist, or to not exist?
If this file is being ingested and deleted shortly after your task creates it, you may need the task to also write a duplicate of the file somewhere else, and have the Analysis read that copy instead. That way you keep a copy of the last-known version of the file, which won’t be deleted and can continue to be evaluated by the Analysis.
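For example, a rough sketch of what the end of the file creation task could look like (paths are hypothetical; the copy just needs to live somewhere nothing else deletes it):

// refresh the stable copy that the Analysis reads
// delete first, since copy fails if the destination already exists
delete "/var/opt/security/agent_report_lastknown.txt"
copy "/var/opt/security/agent_report.txt" "/var/opt/security/agent_report_lastknown.txt"

The Analysis relevance would then point at agent_report_lastknown.txt instead of the original file, so it always sees the last successfully written version.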