Copy of a Parameterized Task fails to show Parameter fields

Thank you for your response.

Yes, I see the script element icon and I have made sure not to touch it, but the copy doesn’t seem to recognize it. I spoke to the IBM PMR folks again; they were able to reproduce the problem on their end and have forwarded the ticket to the Dev team. I have not heard back from the Dev team yet.

I wish there were a parameterized content team that I could speak with, or that the guide for creating parameterized fixlets were more accurate. I have already requested a new feature for editing parameterized content within the BES Console, and I have not heard back on whether it was accepted.

In the meantime, are you able to copy the VMware Delete Virtual Machine task properly?

I have found that the file “vrt_utils.js” must be in the correct site within the Console directory on the system where the BES Console is installed in order for the parameterized content to render properly in the task’s Description window. So when I copied the parameterized task to the Master Action site, the task could no longer find the “vrt_utils.js” file. To fix this, I had to move “vrt_utils.js” to the correct site. This is not an ideal solution, but it’s all I have to go on right now. Ultimately, our customer won’t accept this and I unfortunately see them selecting another product. I wish the creation of parameterized content were available within the BES Console. [%APPDATA%…\Local\BigFix\Enterprise Console\<server-name>\<operator-name>\Sites\Server Automation]

You can create parameterized content within the console. You can also choose to reference the script with a <script src="vrt_utils.js"> tag, which keeps the script content in an external file (especially handy when multiple fixlets reuse the same script code). When you’re making a custom copy of someone else’s fixlet, you can either accept how they did it or customize it further yourself.

In your custom copy, you can replace the script source link with the actual content of vrt_utils.js, though you’re probably better off keeping it as it is; otherwise you’ll have to repeat the process for every custom copy you make of related fixlets. You can edit the script link in the console, but since some of it doesn’t display well there, you may be better off exporting the fixlet to XML, modifying it with a text editor, and then re-importing it into your custom site.
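
For example, in the fixlet’s Description HTML the two approaches would look roughly like this (purely illustrative; the actual Server Automation tasks may reference the script slightly differently):

    <!-- external reference: the description loads the script from the site folder -->
    <script type="text/javascript" src="vrt_utils.js"></script>

    <!-- inlined copy: the custom copy no longer depends on vrt_utils.js being in the site -->
    <script type="text/javascript">
    // ...paste the full contents of vrt_utils.js here...
    </script>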

edit: Don’t quote me - I’m not sure of the exact syntax of the script src tag, but it’s something along those lines.

My team of BigFix engineers are not full-fledged developers, and all we have is Notepad or WordPad to work with at the moment, so doing anything outside the BES Console is pretty ugly and error prone. The IBM PMR team doesn’t support parameterized fixlet problems, I suppose because they too realize this is an incomplete portion of the BigFix product. The product really needs the ability to fully create parameterized content (fixlets, tasks, baselines, and SAPs) within the BES Console, and it doesn’t have that ability currently.

I’m not talking about simple text boxes; I’m talking about drop-down menus populated from session relevance, primary parameters that drive the selection list for the next parameter, and so on. That gets into some very detailed .js code and is currently outside the scope of realistically using the BES Console exclusively. An incorporated Dashboard for this would be helpful. If anyone has a different solution aside from what’s been mentioned so far, we would welcome the idea; otherwise we’re probably going with Chef or something similar to replace BigFix for our needs.

You can source vrt_utils.js dynamically like in the document here: https://www.ibm.com/developerworks/community/wikis/form/anonymous/api/wiki/90553c0b-42eb-4df0-9556-d3c2e0ac4c52/page/2efca3e3-1a65-4845-b84c-5e27cb620375/attachment/7656c719-4b4e-4c52-9d0b-568e40f1601e/media/Creating%20Parameterized%20Fixlets%20V1.0.doc

I created some better relevance to target that file: https://bigfix.me/relevance/details/3003679

unique value of values of mime fields whose (name of it = "vrt_utils.js") of fixlets whose ( "Parameterized Fixlet Library" = name of it ) of bes sites whose( "BES Support" = name of it )

Here is what it looks like implemented: https://github.com/jgstew/bigfix-content/blob/master/ParameterizedFixletsDescription.html
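
Roughly, the idea is that the description embeds that relevance inside a script tag so the console substitutes the contents of vrt_utils.js at display time. A sketch from memory (see the linked GitHub file for the exact markup):

    <script type="text/javascript">
    <?relevance
      unique value of values of mime fields whose (name of it = "vrt_utils.js")
        of fixlets whose ("Parameterized Fixlet Library" = name of it)
        of bes sites whose ("BES Support" = name of it)
    ?>
    </script>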


You can build a dashboard, use parameterized fixlets, or do something with the REST API. Those are all options for automation with BigFix.

This is a work in progress, but I am having success using parameterized fixlets: How to use Parameterized Fixlets

Thanks for digging into this again. I wound up building a couple of parameterized fixlets that we now use as a template for our software deployment engineers. The template integrates with the Manage Software Distribution (MSD) dashboard. The engineers upload their file to the dashboard, hardcode the Product/Vendor/Version into the XML of the exported .bes file, and import it back into the console; the software package then shows up in the fixlet’s description drop-down menu as a selection choice. The chosen selection is then represented as a parameter in the action script. (I had found an example fixlet that integrated with the MSD dashboard; however, it allowed the operator to select any piece of software from the dashboard without restriction. For our DevOps process we needed to restrict the deployment operators to just the software package approved for implementation.)

The summary here might sound straightforward, but since the example MSD fixlet didn’t meet our needs and there’s no official support for it, I really had to start from scratch. Parsing the parameters needed for the prefetch block was a feat in itself. Overall, it took months of development, testing, and bug fixing before I finally wound up with two parameterized fixlets that meet our needs for DevOps. One fixlet incorporates an additional section with 3 text boxes for username and encrypted password verification, and the other one does not.

The integration with the MSD dashboard gives us version control and also lets us use the BES Console to delete content from our BigFix server’s sha1 directory when we no longer need it. As you know, if we had continued to use the Software Upload Wizard, the chore of deleting old content would be difficult at best, because we would have to drill down into each sha1 to identify the hashed content, and it would have to be done by connecting to the server remotely rather than through the BES Console application. These two fixlets salvaged the BigFix product for our software deployment purposes.


It would be useful if you could share some of what you are doing, though I realize that might not be entirely possible. I’m sorry to hear it was so difficult to accomplish, but I’m glad you were able to figure it out. You should consider contributing to this wiki post: How to use Parameterized Fixlets

It is possible to use an automated cleanup script that looks at all of the SHA1s in all fixlets, tasks, baselines, and actions and then deletes or archives any SHA1s that are on the root server but NOT referenced in any content. It would be great if IBM, or an organization that has one of these, would share it. Anyone with console access to the root server could trigger something like this from within the console, or have it run on a schedule.

This is not a perfect solution, but I generally recommend that organizations making heavy use of BigFix for software installation set up a software repo on a web server that the root server can reach (and potentially only the root server). You then use those URLs in your prefetch statements instead of uploading the content to the root server first. This causes your custom software items to populate the root server’s web cache instead of the manual uploads cache; the web cache is managed automatically by BigFix, which pushes out old content to make room for new content. I also recommend greatly increasing the default size of the web cache in this use case.
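
For example, a prefetch statement pointing at an internal repo might look roughly like this (the host name, file name, and hashes are made up for illustration; if I recall correctly, the sha256 part is only mandatory when enhanced security is enabled):

    prefetch WAS_update.zip sha1:0123456789abcdef0123456789abcdef01234567 size:12884901888 http://swrepo.example.internal/packages/WAS_update.zip sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef

The first time the action runs, the root (or top-level relay) pulls the file from the repo into its web cache, so nothing ever has to be manually uploaded.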

I sometimes use DropBox with public URLs for this purpose for content that I don’t mind being completely public.

In my Org, we sometimes use Box for this purpose, which creates a very long URL that is fairly secure to use, since it can’t be easily guessed.

Also, I generally recommend pure SSD storage for the root server, except that you can symlink the web cache and manual uploads folders to a very large, slower RAID5 array of 7200 rpm SATA disks. You can throw storage at the problem very inexpensively: around $750 for 12 TB, give or take. The load on the web cache and manual uploads is primarily felt by the top-level relays, not the root server itself, especially if you have larger caches on your top-level relays.

The cache is primarily write once, read many, so it is pretty easy to have your top level relays entirely on inexpensive SATA SSD storage, particularly since relays are pretty quick and easy to swap out, and have a built in failover mechanism.

I am working on getting approval to share it.

Getting an automated cleanup script would definitely still be helpful, because we had to use the Wizard for quite a while before the MSD, so I know there is still a lot of leftover data from old fixlets.

We had originally set up a repo, but the performance over our network was not so great. By using a SAN to add storage to our BF server we solved that problem. Also, my current customer is very tight on storage allocation; it’s a rare commodity here. But yes, generally just adding more storage is what I used to do. Where I used to live, we had several TB of storage for BigFix and it wasn’t a concern.

Without SSDs, BigFix has phenomenal scalability and performance. I can imagine how it would perform with them.

I’d really like to see further DevOps process integration with BigFix. Hopefully that is what the future holds for it.


I’m surprised to hear that, since BigFix should typically only download the item once. I figure if you have at least a gigabit link between the repo and the root, you should be good to go.

Whenever possible I source my downloads directly from the vendor website, in which case you don’t need a repo or a manual upload at all.

I was not here with this customer when the repo was set up, but apparently there were problems maintaining the connection to the repo. We are on Linux with AIX clients primarily, and on an air-gapped network.

I’ll take your word for it, but the clients shouldn’t be talking to the repo directly. Only the root or the top level relay should be doing that. If BigFix is air-gapped from the repo, then the repo isn’t very useful.

I’m glad I don’t have to deal with completely air-gapped environments. I find the use of a Proxy and other methods to isolate the root server to be sufficient. (including Fake Root and limiting access to port 52311 on the root)

I see what you’re saying. Having it local on our root server works pretty well now. And with the repo we would still have storage being used by old software we don’t need. For instance, a single engineer builds a WAS update that’s 12 GB compressed, uploads it through the wizard, and then finds out he needs to change it. So he deletes his fixlet, creates a new fixlet with the wizard, compresses another 12 GB, and uploads it. He proceeds to repeat this process a dozen times. Whether we have a repo or we use the BigFix root as the webserver, wouldn’t we still be using a lot of storage unnecessarily without a cleanup solution? The MSD dashboard is pretty useful because it uses the mfs service to delete any software we delete in the dashboard. It serves as a decent index.


Not if the operator can add or remove his own content from the repo directly.

The root would access the repo over HTTP(s) but the operators could access it over SMB or similar, potentially only having read/write access to their own folder in the repo.

Also, this is related: https://github.com/strawgate/C3-Platform-Kickstart/wiki/Cleanup-BigFix-Uploads-Directory

What @JGStew is getting at is that the build engineers can just pop their packages onto any HTTP server they have access to and generate the prefetch targeting that HTTP server.

Part of the issue with uploading the WAS package is you actually end up storing it twice.

  1. First in wwwrootbes\uploads
  2. Second in wwwrootbes\bfmirror\downloads\sha1

So every 12 GB upload actually consumes 24 GB of space on your root server.

Ideal Workflow

  1. Build engineer commits a change to source control and fires a build
  2. The build process (or engineer) copies the compiled package to a publishing web server and generates a prefetch
  3. The build process (or engineer) uses the prefetch string in a fixlet

When the fixlet gets actioned, the BigFix server will pull the file from the publishing server into its SHA1 cache (which does its own cleanup automatically).

If the engineer uses that build frequently, the BigFix server will retain it in its cache; if they don’t, it will be cleared automatically (but it can always be obtained again from the publishing server).

Uploads vs HTTP Repository

Whether we have a repo or we use the bigfix root as the webserver, wouldn’t we still be using a lot of storage unnecessarily without a cleanup solution?

The big difference is that the storage on the BigFix server is abstracted away from the engineers. If you give them a publishing share, you can slap a quota on it and make them clean up after themselves.


I can’t say I ever paid close enough attention to notice that, but I completely believe it. That makes me even more glad I don’t typically upload files to the root directly.

I would point out that the 12 GB in manual uploads is semi-permanent until a manual cleanup, while the other 12 GB is only a cache and gets cleaned up automatically at some point. But that means it is even more important than I thought to make sure that the root and/or top-level relay has a larger web cache.
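
If it helps, I believe the relevant client setting is _BESGather_Download_CacheLimitMB on the root server (and the same setting on relays); there are also “Download Cache Size” tasks in BES Support that set it for you, so double-check the exact name there. Setting it by hand in actionscript would look something like this (the 100 GB value is just an example):

    // raise the download cache limit to roughly 100 GB (value is in MB)
    setting "_BESGather_Download_CacheLimitMB"="102400" on "{parameter "action issue date" of action}" for client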

We have a very compartmentalized, security-centered organization, and there are roles that separate everyone’s powers in such a way that the operators given permission to deploy approved tasks/fixlets to our production environment do not have permission to access anything other than the BigFix console and their Windows workstation. So I suppose we had some constraints that not everyone has to deal with.

This is one option that could work. In our organization, I would foresee a problem with depending on the engineers to manage their quota on their own without a GUI translating the SHA1s into a software package’s Product/Vendor/Version. I would prefer to depend on the machine rather than the people. In the MSD dashboard, those values are displayed, and deleting a package from the dashboard also deletes it from the root server. Even management can make sense of the MSD and use it.

I like that idea of a cleanup script.

It is only the person creating the content that needs access to add/remove things to/from the software repo, not the person deploying the tasks/fixlets.

We require that all BigFix console sessions go through a terminal server with two-factor authentication. You can make the software repo accessible only from those terminal servers, and only to certain users that you allow to create content via AD security groups, so not all operators would have access.