Copy of a Parameterized Task fails to show Parameter fields

It would be useful if you could share some of what you are doing, though I realize that might not be entirely possible. I’m sorry to hear it was so difficult to accomplish, but I’m glad you were able to figure it out. You should consider contributing to this wiki post: How to use Parameterized Fixlets

It is possible to use an automated cleanup script that looks at all of the SHA1s referenced in all fixlets, tasks, baselines, and actions, and then deletes or archives any uploaded files on the root server whose SHA1s are NOT referenced by any content. It would be great if IBM provided one, or if organizations that have built one of these would share it. Anyone with console access to the root server could trigger something like this from within the console, or have it run on a schedule.
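
Roughly, a script like that would pull every SHA1 referenced by content through the REST API and compare that set against what is sitting in the manual uploads folder. The sketch below is only meant to illustrate the idea: the session relevance query, the Uploads path, and the one-folder-per-SHA1 layout are assumptions you would need to verify against your own deployment, and it only reports orphans rather than deleting anything.

```python
# Rough sketch of an orphaned-upload report. Assumptions to verify first:
#   - the BigFix REST API is reachable at https://<root>:52311/api/query
#   - the session relevance below really captures every SHA1 your content references
#   - manual uploads live under wwwrootbes/Uploads in one folder per SHA1
import os
import re
import xml.etree.ElementTree as ET

import requests  # pip install requests

ROOT = "https://bigfix-root.example.com:52311"  # placeholder root server
AUTH = ("api-operator", "api-password")         # placeholder credentials
UPLOADS_DIR = r"C:\Program Files (x86)\BigFix Enterprise\BES Server\wwwrootbes\Uploads"

# Assumed session relevance: extract sha1 values from the action scripts of all
# fixlets/tasks/baselines. Test it against /api/query (or in the console) and
# extend it to cover open actions before relying on the results.
RELEVANCE = (
    'unique values of parenthesized parts 1 of matches '
    '(regex "sha1[:=]([0-9a-fA-F]{40})") of scripts of actions of bes fixlets'
)


def referenced_sha1s() -> set[str]:
    """SHA1s referenced by content, according to the session relevance above."""
    resp = requests.get(f"{ROOT}/api/query", params={"relevance": RELEVANCE},
                        auth=AUTH, verify=False)  # point verify at a CA bundle in practice
    resp.raise_for_status()
    answers = ET.fromstring(resp.text).iter("Answer")
    return {a.text.strip().lower() for a in answers if a.text}


def uploaded_sha1s() -> set[str]:
    """Folder names in the uploads directory that look like SHA1s."""
    return {name.lower() for name in os.listdir(UPLOADS_DIR)
            if re.fullmatch(r"[0-9a-fA-F]{40}", name)}


if __name__ == "__main__":
    for sha1 in sorted(uploaded_sha1s() - referenced_sha1s()):
        print(f"orphan (on root server, not referenced by any content): {sha1}")
```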

This is not a perfect solution, but I generally recommend that organizations making heavy use of BigFix for software installation set up a software repo with a webserver that the root server can reach (and potentially only the root server). You then reference those URLs in your prefetch statements instead of uploading the content to the root server first. This causes your custom software to populate the root server's web cache instead of the manual uploads cache, and the web cache is managed automatically by BigFix, which evicts old content to make room for new content. I also recommend greatly increasing the default size of the web cache in this use case.
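
To make the prefetch side concrete, here is a small helper (my own sketch, not a BigFix tool) that computes the size, SHA1, and SHA256 of a package you are hosting on such a repo and prints the matching prefetch line; the repo URL and filename are placeholders for whatever your webserver actually serves.

```python
# Helper sketch: build a prefetch line for a file hosted on your internal software repo.
# Usage: python make_prefetch.py installer.msi http://repo.example.com/packages/installer.msi
import hashlib
import os
import sys


def prefetch_line(local_path: str, url: str) -> str:
    size = os.path.getsize(local_path)
    sha1, sha256 = hashlib.sha1(), hashlib.sha256()
    with open(local_path, "rb") as f:
        # Hash in chunks so large installers don't have to fit in memory.
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            sha1.update(chunk)
            sha256.update(chunk)
    name = os.path.basename(local_path)
    # ActionScript prefetch syntax: prefetch <name> sha1:<hash> size:<bytes> <url> sha256:<hash>
    return (f"prefetch {name} sha1:{sha1.hexdigest()} size:{size} "
            f"{url} sha256:{sha256.hexdigest()}")


if __name__ == "__main__":
    print(prefetch_line(sys.argv[1], sys.argv[2]))
```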

I sometimes use Dropbox public URLs for this purpose, for content that I don't mind being completely public.

In my org, we sometimes use Box for this purpose; it creates a very long shared URL that is reasonably safe to use, since it can't be easily guessed.

Also, I generally recommend pure SSD storage for the root server, with one exception: you can symlink the web cache and manual uploads folders to a very large, slower RAID5 array of 7200rpm SATA disks. You can throw storage at the problem very inexpensively, around $750 for 12TB, give or take. The load on the web cache and manual uploads is primarily felt by the top-level relays, not the root server itself, especially if you have larger caches on your top-level relays.

The cache is primarily write once, read many, so it is pretty easy to have your top-level relays entirely on inexpensive SATA SSD storage, particularly since relays are quick and easy to swap out and have a built-in failover mechanism.