Here and there on the forum I see that many are using the Direct Download feature in the context of patching, where a vendor CDN is already in place (Red Hat, Microsoft, and so on).
Is anybody using this feature (_BESClient_Download_Direct = 1) for distribution of custom software?
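For reference, the feature in question is just a client setting, so it can be enabled from an action. A minimal sketch in ActionScript (the value 1 enables the behavior; the `on` clause uses the usual action-issue-date idiom):

```
// Enable Direct Download: the client fetches prefetch URLs straight from the
// origin server instead of pulling them through its relay chain.
setting "_BESClient_Download_Direct"="1" on "{parameter "action issue date" of action}" for client
```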
Do you see value in setting up a private CDN, or an HTTP distribution point backed by object storage, as the source for all custom software?
In contexts where many workstations connect to the BigFix infrastructure over the Internet, I thought it could make sense.
Thanks for any comments.
I'm aware of some customers setting up infrastructure like that, but generally only for "internal" or "on-VPN" clients. The main concern is that the client won't pass up any authentication credentials at all, so whatever you host on your web distribution point has to be available anonymously (hence they limit access to internal networks). You'd have to consider what you're hosting to decide how much risk/liability to accept on that.
Hi Jason and thanks for your reply.
Yes, I also think that is the critical part of the solution. I thought it could be addressed by leveraging the security features of the many cloud offerings. E.g.: AWS S3 as the origin of a CloudFront distribution can be protected with a WAF that inspects the source IP, user-agent string, and so on.
That said, adding layers of security to a purely public distribution point adds cost to the overall solution. In the end, the costs could cancel out the advantages.
Why not just host a bigfix relay in the cloud and have clients talk to it? As long as client authentication is enabled, then only already connected bigfix clients would be able to reach it.
Yes, that is the normal architecture for this use case. I am doing a theoretical exercise looking at it from a cost point of view.
For a low- or mid-scale environment (under 10k devices), a few BigFix relays in the cloud should do the job at a reasonable cost. For a high-scale environment (up to 100k devices or more), we need VM instances of a higher class and disk storage capable of the required throughput. E.g.: an Azure Premium disk for every BigFix relay would be a significant addition to the bill. Finally, there is the cost of egress traffic from the cloud to consider.
In this context, my exercise was to consolidate custom software downloads on object storage and adopt Direct Download for everything. This would avoid the need for premium storage on the relays. Moreover, egress traffic from object storage should be cheaper.
However, this scenario has the limitation pointed out by @JasonWalker: no authentication at all. Adding a layer of security through cloud features would raise costs, re-balancing the equation. In the end, this is probably not the way to go.
You might also consider going outside of the client's native download commands. Instead of using "prefetch" or "download" commands, one could use ActionScript to drive a curl, wget, or PowerShell command to download files during script execution. That allows options for authentication, provided you have some means of authenticating to your CDN or web provider. This might be Kerberos tied to your machine credentials, or distributed certificates, tokens, or username/password.
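As an illustration only (the URL, setting name, and paths are hypothetical, and this assumes curl is present on the endpoint), a token-authenticated download driven from ActionScript might look like:

```
// Hypothetical sketch: fetch an installer from a protected CDN with curl,
// passing a bearer token stored in a client setting as an Authorization header.
parameter "dest" = "{pathname of client folder of current site}\installer.msi"
wait curl.exe --fail --silent --show-error -H "Authorization: Bearer {value of setting "MyOrg_CDN_Token" of client}" -o "{parameter "dest"}" "https://cdn.example.com/custom/installer.msi"
```

The `wait` command blocks until curl exits, so the rest of the action can rely on the file being present (or handle the failure).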
Since you are downloading outside of the BES infrastructure, you'd need your own validation of the downloaded files, whether that's validating the hashes or digital certificates.
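For hash validation, a `continue if` guard can abort the action when the downloaded file doesn't match; a sketch (file name and expected hash are placeholders):

```
// Abort the action if the downloaded file's sha256 does not match the expected
// value; subsequent ActionScript lines only run when the check passes.
continue if {sha256 of file "installer.msi" of client folder of current site = "expected-sha256-value-here"}
```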
If you experiment with that and find it useful, you could also consider the "execute prefetch-plugin" command, which allows your custom script to behave like the "prefetch" statement: downloading files before the Action is scheduled to begin executing, so all the necessary installer files can be gathered before the action's scheduled time arrives.
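A sketch of that pattern (the plugin path, its arguments, and the URL are hypothetical; the plugin itself would be your own downloader that saves files where the action expects them):

```
// Run a custom downloader during the prefetch phase, before the action's
// scheduled execution time; arguments are passed through to the plugin.
execute prefetch-plugin "{pathname of client folder of current site}\mydownloader.exe" installer.msi "https://cdn.example.com/custom/installer.msi"
```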