Large number of custom sites (2000+): Problems with that?

(imported topic written by jfschafer)

We are a very large organization with 2,000-3,000 locations and over 100,000 endpoints. I want to make a custom site for each location, mostly because I want custom groups, etc., to be shared by the IT people in the console role for that location but not with people in other roles. If you don’t make a custom site, then IT people in a single role can’t see groups/custom content that another person in their role makes. (Unfortunately, you can’t share local operator content with another operator.)

So if we have 3,000 custom sites, as long as each one contains only a few groups and a few custom tasks, will that cause performance issues? Loading those sites when logging into the console shouldn’t take long as long as there isn’t a ton of content in each custom site, correct?

(Another issue: why do console operators have to cache 100% of all sites in their user profile when they log into the console, even sites they aren’t subscribed to? If a console operator is subscribed to 10 sites and there are 100 across the organization, opening the console should cache only those 10 sites, not all 100. That cache process takes a long time, especially for the external sites published by IBM. This seems like a performance optimization opportunity for a future update, in terms of both disk storage on a terminal server and database performance, by not having operators cache sites they can’t see or access anyway.)
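To put rough numbers on this, you can ask the server how much content each custom site actually carries via the REST API. Here is a minimal sketch, assuming a default install on port 52311, an account that can see the sites, and the out-of-the-box self-signed certificate; the hostname and credentials are placeholders, and the session relevance is my own phrasing, so sanity-check it against your own server:

```python
# Rough sketch: list each custom site and how many fixlets/tasks it holds,
# to estimate how much content the console would have to cache per site.
# Assumptions: default port 52311, placeholder host/credentials, and the
# default self-signed server certificate (hence verify=False).
import xml.etree.ElementTree as ET

import requests
import urllib3

urllib3.disable_warnings()  # suppress the self-signed-certificate warning

SERVER = "https://bigfix.example.com:52311"
AUTH = ("SomeOperator", "password")

# Session relevance: (site name, content count) for every custom site.
RELEVANCE = ('(name of it, number of fixlets of it) '
             'of bes sites whose (custom site flag of it)')

resp = requests.get(f"{SERVER}/api/query",
                    params={"relevance": RELEVANCE},
                    auth=AUTH, verify=False)
resp.raise_for_status()

# /api/query returns BESAPI XML; tuple results come back as <Tuple>
# elements containing one <Answer> per tuple member.
root = ET.fromstring(resp.content)
for tup in root.iter("Tuple"):
    site_name, count = (a.text for a in tup.iter("Answer"))
    print(f"{site_name}: {count} fixlets/tasks")
```

If most sites come back with only a handful of items each, the custom content itself probably isn’t what makes the cache slow.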

Although we’ll have a ton of operators, we don’t expect more than 50 or so to be in the console at the same time. Plus, our database and core server will be running on storage doing 300,000-400,000 IOPS at less than 1 ms average latency, so that should alleviate any performance issues from having a lot of console users.

1 Like

(imported comment written by jgstew)

I agree that console performance and caching could use some optimizations.

We have a very similar structure, but on a smaller scale: ~350 custom sites, ~300 operators, ~30,000 endpoints. We have up to 100 operators in the console on a terminal server at once, with around 50 typical.

Our Root/DB server is not as fast as yours, but we definitely have performance issues (particularly with the console and Web Reports), and we are not certain of the cause. We are getting a health check from IBM and hoping for guidance on configuration, solutions, and hardware upgrade recommendations.

I’d love to know the specifics of Root Server hardware/configuration in large environments, especially in a case similar to ours like the one you are describing. If you do this, let us know the results.

@jgstew @jfschafer

I hate to bump an old thread, but I’m very curious how this turned out for you. What were IBM’s conclusions about your performance, and what steps did you take to address the issues?

2 Likes

Are you having any similar issues? What is your environment like?

One thing is to have the DBs and FillDB on fast storage, preferably high IOPS and low latency; not spinning disks. If the DB and Root Server are on separate systems, having a 10 Gb link between them is helpful, and the same goes for the terminal server where Windows console sessions take place. Having the console caches on fast storage is also a good idea, and redundancy doesn’t matter quite as much there.
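One quick (if crude) way to tell whether FillDB is keeping up with the storage you gave it is to watch its buffer directory, since client reports pile up there when the database can’t keep pace. A sketch, assuming the default Windows install path; the path and threshold are assumptions, so adjust both for your install:

```python
# Crude FillDB backlog check: reports queue in BufferDir when the database
# or its storage can't keep up. Path and threshold are assumptions for a
# default Windows install -- adjust both for your environment.
import os

BUFFER_DIR = (r"C:\Program Files (x86)\BigFix Enterprise"
              r"\BES Server\FillDBData\BufferDir")
THRESHOLD = 10_000  # illustrative; a healthy server stays near zero

count = 0
total_bytes = 0
for entry in os.scandir(BUFFER_DIR):
    if entry.is_file():
        count += 1
        total_bytes += entry.stat().st_size

print(f"{count} buffered reports, {total_bytes / (1024 * 1024):.1f} MiB")
if count > THRESHOLD:
    print("FillDB looks backed up; check DB/storage latency first.")
```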

IBM has made some tweaks to the platform to help, as well as adjusting some settings. Increasing the console refresh interval for all users using the BES Admin tool helps a little, but if you raise it too much you really start to lose the speed of results, which is annoying.

If I were doing it all over again, it might make sense to have custom sites at a little higher level instead of a custom site for every little subgroup that might not actually end up being used. If there is a real need for one, then sure, make it, but otherwise I think having fewer is better.

Another thing I’ve done lately is have some sites and custom sites be subscribed by the client only after a delay, to speed up initial provisioning and installation of software with BigFix on new machines. See here: Delay BigFix site subscription to speed up client provisioning
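If you try the delayed-subscription approach, it’s worth being able to verify what a freshly provisioned machine is actually subscribed to at any given moment. Here is another /api/query sketch, again with placeholder host and credentials; the relevance is my own phrasing, so test it before relying on it:

```python
# Sketch: list the sites a given computer is currently subscribed to,
# handy for confirming that delayed site subscription behaves as intended.
# Host, credentials, and computer name below are placeholders.
import xml.etree.ElementTree as ET

import requests
import urllib3

urllib3.disable_warnings()  # self-signed certificate assumption, as above

SERVER = "https://bigfix.example.com:52311"
AUTH = ("SomeOperator", "password")
COMPUTER = "NEW-MACHINE-01"

RELEVANCE = (f'names of bes sites whose (exists subscribed computers of it '
             f'whose (name of it as lowercase = "{COMPUTER.lower()}"))')

resp = requests.get(f"{SERVER}/api/query",
                    params={"relevance": RELEVANCE},
                    auth=AUTH, verify=False)
resp.raise_for_status()

# Plain (non-tuple) results come back as a flat list of <Answer> elements.
for answer in ET.fromstring(resp.content).iter("Answer"):
    print(answer.text)
```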

Which areas to look at really depends on what issues you are actually having.

1 Like

So I work for a prominent University in the South, and we have used BigFix for years within the central IT organization. Recently, however, we expanded our licensing to cover the entire University and have been making it available to the network administrators in the various colleges, departments, and satellite campuses.

This expansion is occurring very rapidly, and it has made me ponder the design philosophy of what’s best in practice for supporting decentralized IT communities across campus. Perhaps creating a site for each college, then groups to represent each department, and assigning rights accordingly. The downside is that everyone sees all the groups in the site, even ones that may not be pertinent to them, and since I have given most admins writer privileges in their own sites, they could inadvertently mess with someone else’s groups.

We haven’t run into the specific problems illustrated in this thread, nor are we anywhere close to that number of custom sites. I want to avoid these pitfalls and learn from those of you who have experience weighing more custom sites against automatic groups within sites, and who know the clutter and potential headaches that network admins sharing a site space can entail.

You kind of addressed this specifically:

“It might make sense doing it all over again to have custom sites that were a little higher level instead of having custom sites for every little sub group when they might not actually end up being used. If there is a real need for it, then sure, make it, but otherwise I think having less is better.”

So, to sum up, I was looking for insight on planning our future growth to avoid these potential pitfalls and to provide the best experience possible for our IT community.

Thanks for the feedback!

1 Like

Ideally they would not have write access to the groups that assign computers to them; they would only have write access to the groups created ad hoc by that college or department. All operators should only have access to manage the computers they should be managing, but the custom site / content could be shared at a level or two above that. It wouldn’t be as strict, and there could be issues with multiple sub-groups of a department having write access to the same content, but ideally that issue would be minimal and could be self-policing… though that doesn’t mean there won’t be potential issues with this approach. It is hard to tell.
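For what it’s worth, this kind of layered access is also auditable from a script, which helps once the number of sites grows. Below is a sketch that dumps a custom site’s permissions; I’m writing the endpoint path from memory of the REST API reference, so treat it as an assumption and confirm it against GET /api/help on your server:

```python
# Sketch: dump the raw permissions XML for one custom site so you can audit
# who has reader vs. writer access before handing sites to departments.
# The /permissions endpoint path is an assumption from memory -- confirm it
# via GET /api/help. Host, credentials, and site name are placeholders.
import requests
import urllib3

urllib3.disable_warnings()  # self-signed certificate assumption

SERVER = "https://bigfix.example.com:52311"
AUTH = ("MasterOperator", "password")
SITE = "College of Engineering"  # hypothetical custom site name

resp = requests.get(f"{SERVER}/api/site/custom/{SITE}/permissions",
                    auth=AUTH, verify=False)
resp.raise_for_status()
print(resp.text)  # BESAPI XML describing operator/role permissions
```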

1 Like