From our latest set of load testing results we found that the console database queries can take much less time if the server is configured to have a larger network packet size. The network packet size controls how much data the server sends before waiting for the client to acknowledge that the data has been accepted. A small network packet size translates into more round-trips between the client and the server, which means worse performance for BES Consoles connecting over a high-latency link. Changing this setting could also help in situations where BES Consoles running over high-latency links are holding database locks for long periods of time, preventing other BES Consoles from propagating.
We would appreciate some feedback on how changing this setting in a deployment with real high-latency links affects console performance. To set this setting to the maximum, open Query Analyzer, paste in the following, and press F5:
sp_configure 'network packet size (B)', 32767
reconfigure
If you want to set it back to the default (for whatever reason), run "sp_configure 'network packet size (B)', 4096" followed by "reconfigure". After changing this setting, you should see that consoles take less time to load directly from the database (without a cache). Feel free to post your experiences with this change. Thanks.
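If it helps, you can verify the current value before and after the change. This is a sketch: 'network packet size (B)' is an advanced option, so it only shows up in sp_configure output once 'show advanced options' is enabled.

```sql
-- Make advanced options visible in sp_configure output
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Reports config_value (pending) and run_value (currently in effect)
EXEC sp_configure 'network packet size (B)';
```

Note that the new packet size applies to connections opened after the change, so restart the BES Console to pick it up.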
You would have to run the Query Analyzer and connect to the machine that is hosting your BES database. In a local database deployment, this is the same as the BES server.
Ok, I have made the change on the server. Haven’t seen any real improvement in loading. My console took 2 minutes prior and now about 1:40. Deployment seemed to pick up some speed (should I see improvement there?).
That 20 second improvement is still good news. You would see an even greater improvement if there were higher network latency between the BES Console and the database. Could you try pinging your database machine from your console machine and posting the round-trip time? Thanks.
I am basically running a gigabit link from my system to the server, so latency is not really an issue. It is more on the server itself where the slowness seems to be. We are thinking about getting new hardware and running it on 64-bit.
Reply from 192.168.66.206: bytes=32 time<1ms TTL=127
Reply from 192.168.66.206: bytes=32 time<1ms TTL=127
Reply from 192.168.66.206: bytes=32 time<1ms TTL=127
Reply from 192.168.66.206: bytes=32 time<1ms TTL=127
FYI, changing 'network packet size (B)' to 32767 actually triggers a bug in SQL Server 2005. Using this setting causes an obscure error when creating maintenance plans/SSIS packages that was a pain to track down: "OLE DB error code 0x80004005 (Communication link failure)".