Console memory usage

I normally use the console on the root BES server but it has gotten slower and slower over time, to the point where it’s almost unusable.

Today I installed it on a different device and logged in with the same account I was using previously; the new console came up configured the same as the old one: fields to display in each node, field order, sorting, preferences, etc.

When the old console starts, it immediately uses 4.5 GB of RAM. The new one has been hovering around 2.3 GB. Both are the same version - 11.0.2.125.

What would cause this big of a difference?

Sounds normal to me.

Your cache level, fields in use, etc.; it fluctuates all the time.
I added a config to our masthead to prevent admins from running the console with less than 8 GB of memory.

The slowest console I use on a regular basis is on our master server, and it has 16 GB of memory (SQL off-box) and 8 CPUs, I believe.

If you are talking about the size of the console cache, look into the “Cache Options” and “Expiration Policy” settings at the following link: Preferences

I’m talking about the memory usage that I see in Task Manager.

The cache & expiration policy could probably do that; the cache is loaded into memory at startup, and your longer-running console probably has more expired entries in the cache.

The default cache policy is probably going to increase RAM but decrease network usage. If you want to compare console memory usage, probably do it with the cache disabled entirely.
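For an apples-to-apples comparison, it helps to read the console's memory the same way on both machines rather than eyeballing Task Manager. A minimal sketch, assuming the console process is named BESConsole.exe, that sums the “Mem Usage” column from captured `tasklist /FO CSV` output (the sample line below uses made-up values):

```python
import csv
import io

def console_memory_kb(tasklist_csv: str, image_name: str = "BESConsole.exe") -> int:
    """Sum the 'Mem Usage' column (in KB) for all processes matching
    image_name in the CSV output of:  tasklist /FO CSV
    """
    total_kb = 0
    for row in csv.reader(io.StringIO(tasklist_csv)):
        if row and row[0] == image_name:
            # Mem Usage is formatted like "4,718,592 K"
            mem = row[4].replace(",", "").replace("K", "").strip()
            total_kb += int(mem)
    return total_kb

# Hypothetical captured line from tasklist /FO CSV:
sample = '"BESConsole.exe","1234","Console","1","4,718,592 K"\n'
print(round(console_memory_kb(sample) / 1024 / 1024, 1), "GB")
```

Capturing the output on both machines at the same point after startup (say, the 10-minute mark) makes the numbers directly comparable.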

I disabled the cache and restarted both consoles, then gave them about 10 minutes to finish whatever startup processing they do.

The one running on the BES root server is using 4.5 GB.
The new one is using 4.3 GB.

I set the cache on both back to what it was originally (“Keep full cache on disk” with Moderate expiration), restarted both, and again waited 10 minutes.

Now the one on the BES root server is using 4.6 GB.
The new one is using 3.6 GB.

What does this prove? Probably nothing, it’s still slow…

Yeah, memory usage is going to depend a lot on how many computers there are, how many analyses are activated, how many results have been reported, and which columns/sites/analyses you’ve tried to view. As you click around in the console, it will request and pre-cache more results.
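To make the scaling concrete, here is a purely illustrative back-of-envelope estimator of cached result volume. The per-result byte cost and properties-per-analysis figures are assumptions for illustration, not measured BigFix numbers:

```python
def estimate_cache_mb(endpoints: int, active_analyses: int,
                      avg_properties_per_analysis: int = 3,
                      bytes_per_result: int = 200) -> float:
    """Rough upper bound on cached result size, assuming every endpoint
    reports every property of every activated analysis.
    bytes_per_result is a made-up illustrative constant."""
    results = endpoints * active_analyses * avg_properties_per_analysis
    return results * bytes_per_result / (1024 ** 2)

# 150k endpoints with 100 analyses lands in the multi-GB range,
# while 40 endpoints with the same analyses stays in the low MB.
print(round(estimate_cache_mb(150_000, 100)), "MB")
print(round(estimate_cache_mb(40, 100), 2), "MB")
```

The point is only that result count grows multiplicatively with endpoints and activated analyses, which is why two consoles on the same deployment converge to similar memory once they have viewed similar data.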

4.5 GB usage is not out of the ordinary at all; I’ve seen it as high as 16 GB when there were 150k+ endpoints. My lab console right now is at 6 GB with only forty endpoints, but I have a lot of analyses activated and many results.

I believe it is not recommended to run the BigFix Console on the root server due to performance reasons.

Is that documented anywhere?

I expect that’s probably covered in the Capacity Planning Guide. BigFix Capacity Planning guide - Customer Support

Whether documented or not, it’s definitely best-practice to run the Console on some machine other than the root server. The root server shouldn’t be expending resources supporting client applications or user logins.
