Hello,
Are there any settings that limit I/O reads on clients? I have noticed that the numbers on our servers are really high. We have Windows Server 2008 R2 and Windows Server 2012 R2; both are affected. The client version is 9.5.3.
I wonder if there is an analysis/fixlet that is causing a lot of I/O.
I would try using the fixlets “BES Client Setting: Enable Debug Logging” and “TROUBLESHOOTING: Enable BES Client Usage Profiler” and see if they point to an issue.
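If it helps to see what those fixlets are doing, enabling debug logging boils down to pushing a couple of client settings. A rough hand-rolled equivalent in ActionScript would look something like the following (the detail level and log path here are illustrative, not the fixlet's exact values):

    // Turn up client debug detail and point it at a log file (values illustrative).
    setting "_BESClient_EMsg_Detail"="1000" on "{parameter "action issue date" of action}" for client
    setting "_BESClient_EMsg_File"="C:\besclientdebug.log" on "{parameter "action issue date" of action}" for client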
Martin
How high is the I/O?
What are you seeing?
As @TheTick mentioned, this is almost always caused by operator-created analyses on the system reading large files or reading small files with a property reporting frequency of “Every Report”.
Another thing to note is that the BESClient process starts as soon as the system starts, so its I/O counters accumulate from system start. If you have a high-uptime system, the BESClient can appear to have high I/O compared to other processes that come and go during the system's uptime.
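To give a concrete (made-up) example of the kind of property that hurts, relevance like

    number of descendants of folder "C:\"

walks the entire drive on every evaluation, while something scoped such as

    names of files of folder "C:\Program Files\MyApp"

(where "MyApp" stands in for whatever application you actually care about) only touches one directory. Checking your analyses for whole-drive or recursive file searches is usually the first place to look.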
Hi Guys,
Thanks for your replies. I have enabled debugging, but haven't found anything in particular so far.
The total read I/O for one of the servers is now 300 GB (server uptime is 47 days). I have lowered the frequency of all the analyses we have launched and also deactivated most of them. I have no running actions, but the read I/O is still going up fast.
Did you just do the debugging or did you also use the “BES Client Usage Profiler”?
You may have to attach some logs for us to read (make sure you strip any identifying information): the client log file, the usage profiler log, and the debug log file would all be useful.
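For reference, on a default Windows install the client logs typically live under the BES Client data folder, something like:

    C:\Program Files (x86)\BigFix Enterprise\BES Client\__BESData\__Global\Logs

(adjust for your actual install path); the debug log and profiler output will be wherever the corresponding client settings pointed them.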
You mention that you modified the analysis and deactivated others, how about managed properties?
With something eating up the I/O, I wonder if there is something that is doing a search through files and folders.
300 GB over 47 days is about 6.4 GB/day, or roughly 270 MB/hour (4.5 MB/minute), which isn't really that much, but there are certainly things you can do to reduce it.
Deactivating unnecessary analyses and looking at the debug log or at slow fixlet evaluations are a good start.
One thing to keep in mind is that the BES Client uses very few resources to do what it does … But it uses them CONSTANTLY!
I had an email from one of our Linux admins a year or so ago. He was concerned that the BES Client process was using huge volumes of CPU resources. It turned out he was looking at a report of cumulative utilization over a 24-hour period, and BigFix was the top process.
I explained that the client runs 24/7 and by default works for 10 ms, then sleeps for 240 ms to keep its resource usage low and allow other processes to run normally. Eventually he understood that it was working normally and, contrary to his concerns, was not interfering with other processes.
And with that in mind, lowering your WorkIdle and increasing SleepIdle client settings will cause it to evaluate the same content less frequently. This should cause content that causes disk I/O to occur less frequently and hopefully to reduce the I/O you are seeing (as well as reduce CPU…).
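For example, pushed as client settings in an action (the values below are just an illustration, not recommended defaults):

    // Work 10 ms per slice, but sleep longer between slices so evaluation
    // loops (and the disk reads they trigger) happen less often.
    setting "_BESClient_Resource_WorkIdle"="10" on "{parameter "action issue date" of action}" for client
    setting "_BESClient_Resource_SleepIdle"="960" on "{parameter "action issue date" of action}" for client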
Please note that I/O reads/writes in Windows Task Manager also include network I/O (not just file/disk), which can greatly inflate the numbers seen.
The following technote is a good general reference to review in this regard: http://www-01.ibm.com/support/docview.wss?uid=swg21505815
This is very true. There are a couple of “sleep” modes where you can have the clients basically stop their background processing and either allow the CPU to really go to sleep or just have the client sit and wait for a change to occur or a timeout to occur. You might want to look into those if you are concerned about this.
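If memory serves, the main one is the _BESClient_Resource_PowerSaveEnable client setting (verify against the client settings documentation for your version); something like:

    // Let the client go fully idle once an evaluation pass finds no changes,
    // waking on a timeout or when new content arrives from the server.
    setting "_BESClient_Resource_PowerSaveEnable"="1" on "{parameter "action issue date" of action}" for client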
Your mileage may vary, but this is what we found. If what I say is incorrect, please let me know. I haven’t looked into this for a year or two.
While all of what is mentioned here is good info, there is one part of this that I feel is overlooked: the relevance is being read from disk every time it is evaluated, and this takes up more I/O than nearly anything else. Yes, a poorly written relevance query will scan your hard drive for a file and cause massive headaches, but once you have cleared it out you will still find that the I/O is large due to the relevance reloading.
Every time a fixlet is evaluated, the relevance is read from disk. Even if nothing in the fixlet changed, a fresh copy of the relevance is still read for every query. This adds up to a lot of I/O over time, especially if you have written very detailed relevance, since the longer the relevance, the more I/O is consumed, regardless of what the relevance actually does.
Does anyone know of an option for loading the relevance checks into memory and updating it as needed, instead of reading from disk for every query?
It's an interesting option, and while possible it would eat up a LOT of memory, which a great majority of customers do not like. It's a bit of a balancing act of which resources to use and which not to. That is why I suggested the sleep modes.
They scan the content until nothing changes and then go to sleep unless the sleep times out or something new comes from the server etc. It can be a good middle ground.
It may take a lot of memory, but with 8 GB of RAM becoming more and more the norm, it would be a nice way to NOT wear out the SSDs on higher-end machines.
Ideally you would be able to assign which machines used RAM for relevance and which used the hard drive, thus supporting older machines while taking advantage of the capabilities of newer ones.
If you look at the usage, you should see the client doing a lot of reads but relatively few writes, which should not affect SSDs much at all.
Again, stopping the client using a sleep mode will stop all the disk activity without much affecting the client's ability to respond.
Hi Aram, is there an updated URL for this?
http://www-01.ibm.com/support/docview.wss?uid=swg21505815
Thanks!