Is there an easy way to get the custom .detect scripts onto endpoints for SCM analyses? I’ve tried adding them during the server build process by creating the folder (/var/opt/BESClient/__BESData/CustomSite_DISA_RHEL6/scm_preserve/Linux/6) and copying the .detect scripts there. This works fine during the build, but after rebooting the new system, the scm_preserve/Linux folder is gone. I can’t find any trace of it being removed in the client log. Any help or ideas would be great.
I’m not sure what the .detect scripts do (I haven’t looked at one yet), but it sounds like your new computer is not yet subscribed to your DISA STIG site for RHEL6. Once the client realizes it is subscribed (assuming the relevance you defined for your site evaluates to true on this machine), it will download the content from the site, evaluate it, and report the results.
You are not going to be able to easily shortcut the BES Agent’s subscription process. It’s a security measure of sorts. The client will not evaluate content from a site that it is not yet subscribed to.
One thing you CAN do is adjust the Work/Sleep time settings for the client so the Agent performs more work in a given period of time. By default the client works for 10ms then sleeps for 480ms. I usually like to turn the work time up to 30ms and the sleep time down to 240ms for the first 24 hours after the client is installed. It will chew through all the site relevance and operator relevance very quickly this way. Then after 24 hours, I delete the adjusted settings, returning the Agent to its default performance settings.
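Something along these lines in an action script would do it (a rough sketch only; the values just mirror what I described above, and you would pair it with a second action, or a start-delayed policy action, that deletes the settings again after 24 hours):

    // bump the client to 30ms of work per 240ms of sleep
    setting "_BESClient_Resource_WorkIdle"="30" on "{parameter "action issue date" of action}" for client
    setting "_BESClient_Resource_SleepIdle"="240" on "{parameter "action issue date" of action}" for client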
I believe @jgstew recently posted about a BigFix.me item that performs a similar function.
I hope @TimRice means that I have recently posted tasks to BigFix.me to tune client performance because I don’t know a thing about .detect scripts.
Also, from the description, the issue doesn’t seem to be with the CPU usage of the client; it seems to have more to do with manually putting files into a place that the client wipes out automatically. There is probably a better method or location to use for this purpose, but in either case @TimRice is correct that tuning certain client settings will make the initial “build” process faster.
Some examples of speeding up the BES Client:
http://bigfix.me/fixlet/details/5024
http://bigfix.me/fixlet/details/3942
For client CPU usage, I would always recommend setting SleepIdle to 500ms and moving WorkIdle up or down to change the effective CPU usage. The reason for this has to do with modern processors’ sleep states and how quickly they enter them. For maximum efficiency, you want each sleep interval to be as long as possible. A WorkIdle of 1 and a SleepIdle of 5 gives the same effective CPU usage as a WorkIdle of 100 and a SleepIdle of 500, but the latter is much more efficient.
The maximum percent CPU usage of the client is given by:
WorkIdle / (WorkIdle + SleepIdle)
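For example, the default WorkIdle of 10 and SleepIdle of 480 works out to roughly 10 / 490, or about 2% of a CPU, while the 30 / 240 values @TimRice mentioned come out to about 30 / 270, or roughly 11%.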
Also, it is a very good idea to use a clientsettings.cfg file where applicable so that you can tune client settings from the very start. The most important tweak is to set the following setting to 1:
_BESClient_Resource_StartupNormalSpeed
This will cause the BES Client to run at full tilt until its first relevance evaluation loop has finished, which should take only a few minutes at most. This will significantly speed up initial provisioning.
Example clientsettings.cfg file (a minimal sketch; the values below are only illustrative and should be tuned for your environment):
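    _BESClient_Resource_StartupNormalSpeed=1
    _BESClient_Resource_WorkIdle=20
    _BESClient_Resource_SleepIdle=500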
I wrote a lengthy post about how to create custom sites and attach files to them for distribution to clients, but on a second reading of your initial post I’m not certain my original understanding of your problem was correct.
Did you actually create a Custom Site in the BES Console named “DISA_RHEL6”, and did you subscribe your client to the site? Or did you physically create a folder under the client’s __BESData named “CustomSite_DISA_RHEL6”?
If you have properly created the Custom Site, subscribed a client to it, and want all subscribed clients to get a particular file, you should use the BigFix Console to “Attach” the file to the site. When attaching a file, there is a checkbox option for “send file to clients”.
This looks like what I want.
Sorry for the confusing post everyone, it was wordy when I wrote it and short on explanation.
The newly built servers are Red Hat 6 servers that get subscribed to a site named “DISA RHEL6”, which is a custom copy of “DISA STIG Checklist for RHEL 6 - RG03”.
As for the custom .detect scripts, these are scripts that override the default checks provided by IBM. Sometimes the stock checks return false positives or need to be modified for your environment, in which case these .detect scripts are used instead. They must be located in the …/scm_preserve/$OS/$OS_version directory to be called (see http://www-01.ibm.com/support/knowledgecenter/SS6MCG_8.2.0/com.ibm.tem.doc_8.2/Security_and_Compliance/SCM_Benchmark_Guide/c_understanding_the_output.html). What I am trying to accomplish is to have the files available on the client once it has subscribed to the “DISA RHEL6” site, so that when the SCM checks run, the server is 100% compliant out of the box.
All of that being said, I’ve never used the “Files” section of any sites but will give that a try. Can I specify where the file gets stored on the clients?
I think we went a little bit in the wrong direction. If you attach the file to a site, it ends up in that site’s folder on the client (stored at __BESData\SiteName\filename.ext).
Looking at the site you referenced, there’s a line:
After running the Deploy and Run Security Checklist task, the scripts are located in a directory under /var/opt/BESClient/SCM
I think you’d need to modify the “Deploy and Run Security Checklist task” to install your customized detection script.
Rest assured, @jgstew, that I was referring to the two URLs you re-posted above, NOT to “.detect” scripts.
Also, as a follow-up, the custom .detect scripts are apparently an undocumented feature. Back in February of 2014, I was having issues with some SCM checks for Red Hat, and Jeff Saxton from IBM was helping me get them resolved. He pointed me to this undocumented feature for custom .detect (and .remediate) scripts as a workaround for checks that were failing.
I’m not sure that modifying the “Deploy and Run Security Checklist” task is the best place to put the checks, since that is a never-ending task that runs daily. If I added any new custom .detect scripts, I would have to stop the task, modify it, and deploy it again. I guess I could use a baseline or a small task along those lines to ensure the files exist on the endpoints and download them if they don’t; something like the sketch below.
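(Rough sketch only; the file name, checksum, size, and download URL are placeholders, and the target path is just a guess, so it would need to match whichever …/scm_preserve/Linux/6 directory the checklist actually reads.)

Relevance:

    not exists file "/var/opt/BESClient/scm_preserve/Linux/6/custom_check.detect"

Action script:

    // fetch the custom .detect script from an internal web server (placeholder URL, hash, and size)
    prefetch custom_check.detect sha1:0000000000000000000000000000000000000000 size:1234 http://downloads.example.internal/custom_check.detect
    // make sure the scm_preserve directory exists, then put the script in place
    wait /bin/mkdir -p /var/opt/BESClient/scm_preserve/Linux/6
    copy "__Download/custom_check.detect" "/var/opt/BESClient/scm_preserve/Linux/6/custom_check.detect"

Run as a policy action (or from the baseline), it would simply put the file back whenever it goes missing.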