If the IEM/BigFix root server has a large enough cache, each package will be downloaded from the vendor only once, and every subsequent request for that same package will be served from the root cache. I generally recommend making the root server cache at least 300 GB. Alternatively, greatly increasing the cache on your top-level relays will also help significantly.
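As a concrete illustration, the download cache size on a BigFix server or relay is controlled by the `_BESGather_Download_CacheLimitMB` client setting (value in MB). A minimal action to raise it might look like the following; verify the setting name and the exact substitution syntax against the documentation for your version before using it:

```
// Sketch only: raise the download cache limit on the targeted server/relay
// 307200 MB is roughly 300 GB -- adjust to the disk space you actually have
setting "_BESGather_Download_CacheLimitMB"="307200" on "{now}" for client
```

The relay service typically needs to pick up the new setting before the larger cache takes effect.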
The other issue arises if you don't have a relay caching the content close to the endpoints being updated. You want a cache at the top end to speed up the initial download, but you also want a relay in the same data center, or on the same side of the WAN link, as the endpoints so that repeated downloads stay local.
With a proper relay infrastructure, only the very first download of a package should be slow; each subsequent download should be served quickly from a nearby cache.
This should be possible, but it will be complicated and hard to do. What if multiple endpoints are being updated at the same time, and they are all trying to download the same new file to the same network share? That will cause conflicts.
Your best bet may be to use the REST API to determine which Fixlets are relevant on the endpoints you are trying to update, pull all of the prefetch statements from those Fixlets, and download everything to the network share from a single machine to "pre-cache" it. Then copy the relevant Fixlets into a baseline, rewriting the parts that download the items so they point at the network location instead, and update the installer command lines to point at the network location as well.
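The prefetch-extraction and rewriting steps above can be sketched as follows. This is a minimal illustration under stated assumptions, not the full REST workflow: it assumes you have already retrieved the Fixlet's action script text (for example via the REST API), the share path `\\fileserver\patchcache` is made up, and the `copy` rewrite is a simplified stand-in for whatever command your baseline actually needs.

```python
import re

# Matches standard BigFix prefetch statements, e.g.:
#   prefetch 7z.exe sha1:0123... size:123456 http://host/path/7z.exe
PREFETCH_RE = re.compile(
    r'^prefetch\s+(?P<name>\S+)\s+sha1:(?P<sha1>[0-9a-fA-F]{40})\s+'
    r'size:(?P<size>\d+)\s+(?P<url>\S+)',
    re.MULTILINE,
)

SHARE = r"\\fileserver\patchcache"  # hypothetical pre-cache network share


def extract_prefetches(action_script: str):
    """Return (name, sha1, size, url) for every prefetch line found.

    The URL list is what a single "pre-cache" machine would download
    to the network share ahead of time.
    """
    return [m.group("name", "sha1", "size", "url")
            for m in PREFETCH_RE.finditer(action_script)]


def rewrite_to_share(action_script: str) -> str:
    """Replace each prefetch with a copy from the pre-cached share.

    Simplified: a real rewrite would also verify the sha1 and adjust
    any later lines that reference the download.
    """
    def repl(m):
        name = m.group("name")
        return f'copy "{SHARE}\\{name}" "__Download\\{name}"'
    return PREFETCH_RE.sub(repl, action_script)
```

You would run `extract_prefetches` over every relevant Fixlet to build the download list, fetch those URLs once to the share, then run `rewrite_to_share` on each action script before assembling the baseline.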
While I think this is technically possible, this is a huge amount of work. I would expect it to take 6 months to figure out.