Using AI to write BigFix Relevance

I have played around with using AI / ChatGPT to write relevance.

I will say it rarely works 100% correctly, but it often points me in the right direction. Sometimes the hardest part is figuring out where to look on a computer for the answer you need, and even when it gets the relevance partly wrong, it can still help with that part.

I wanted to take a string and capitalize the first letter and I wasn’t sure how to go about it.

I asked, and I got this answer:

(it as uppercase as string & following text of first 1 of it) of "title"

This wasn't quite right: it returns TITLEitle, because the "it as uppercase" part uppercases the entire string rather than just the first character, and then everything after the first character gets appended. But it pointed me in the right direction.

What I really wanted was this:

((it as uppercase & following text of it) of first 1 of it) of "title"

Then I wondered about doing Title Case in relevance.

I came up with this, which wasn't exactly right, but close:

concatenations " " of ((it as uppercase & following text of it) of first 1 of it) of substrings separated by " " of "hello to the world"

Which gives me this output: Hello To The World

I then gave this relevance to ChatGPT and asked it to make the relevance produce title case.

It gave me this:

(item 0 of it) & " " & (concatenations " " of ( (if it is contained by set of ("a"; "an"; "the"; "and"; "but"; "or"; "for"; "nor"; "on"; "at"; "to"; "from"; "by"; "over"; "in"; "of") then it else ( (it as uppercase & following text of it) of first 1 of it ) ) of items 1 of it)) of ( ( (it as uppercase & following text of it) of first 1 of it ) , (substrings separated by " " of "hello to the world") )

This didn’t work at all and gave errors. Not surprised.

But then I came up with this based upon this incorrect relevance:

( /* Always Uppercase First Letter */ (it as uppercase & following text of it) of first 1 of it ) of concatenations " " of ( ( /* Uppercase first letter of each word unless it is on a list */ if it is contained by set of ("a"; "an"; "the"; "and"; "but"; "or"; "for"; "nor"; "on"; "at"; "to"; "from"; "by"; "over"; "in"; "of") then it else ( (it as uppercase & following text of it) of first 1 of it ) ) of it) of (substrings separated by " " of (it as lowercase) of "hello To the wOrld")

This is quite complicated to fix and get right. Definitely not something for a novice, but I did find the direction it gave me helpful, even though I had to fix it a bunch.

The real reason I went down this path is because I wanted this:

(name of it & ( ( " " & concatenation " " of ((it as uppercase & following text of it) of first 1 of it) of substrings separated by " " of codename of it ) | "" ) ) of operating system

This should work on most operating systems, except for Windows. Example output:

Linux Red Hat Enterprise Linux 8.1 Ootpa

I would be curious if anyone has had better luck with a different AI / Model.

Those are the final relevance statements I ended up with.

Also, since none of this required a particular operating system to run on, I tested each one in the online evaluator on developer.bigfix.com.

4 Likes

What text prompts did you use to generate these outputs?

In the current state of LLMs, it seems that the art is in learning the tricks of phrasing that get them to produce the right thing.

1 Like

Maybe, but I have asked ChatGPT to help calculate staffing models based on information like complex vs. non-complex files, the time allotted for each variation of file, the hours staff work, and so on. The first answer didn't work and was way off on the calculation, so I entered the question again (Ask Again) with the exact same language, and I got an answer that actually worked.

So maybe you have to ask more than once. :smiley:

1 Like

Grok was very useful for me, while Copilot failed over and over. I haven't tried Copilot or ChatGPT in recent months because Grok has done a good job in my cases.

2 Likes

Agreed, things can be inconsistent.

I don't know that I captured that in these examples, but I was just playing around with Gemini 2.5 Pro and it seems to work the best for both relevance and session relevance. I feel like I need to come up with sample input text and desired outputs, and then test the relevance each model generates to see how good they all are.

The query: “write bigfix relevance to capitalize the first character of a string”

in Gemini 2.5 Pro gave me this relevance, which worked:

(first 1 of it as uppercase & following text of first 1 of it) of "your string here"

In the same chat context, I gave it: “now capitalize the first letter of every word in a string”

and it gave me the working relevance:

concatenation " " of ((if it = "" then "" else (first 1 of it as uppercase & following text of first 1 of it))) of substrings separated by " " of "your string here"

I tried asking a follow-up for sentence case and it did not work.

I started a new chat with a fresh context, and asked: “write bigfix relevance that takes a string that contains a sentence and outputs that string in title case.”

and Gemini 2.5 Pro gave me this relevance that worked:

(first 1 of it as uppercase & following text of first 1 of it) of concatenation " " of (if (it is not contained by set of ("a"; "an"; "and"; "as"; "at"; "but"; "by"; "for"; "in"; "of"; "on"; "or"; "the"; "to"; "with")) then (first 1 of it as uppercase & following text of first 1 of it) else it) of substrings separated by " " of (it as lowercase) of "the quick brown fox jumps over the lazy dog"

So far, I am very impressed with Gemini 2.5 Pro.

Have you ever compared the different versions of Gemini, including the free model? Also, which GPT model are you currently using?

I tend to prefer GPT overall, but it can be frustrating how poorly it handles certain complex prompts. I often notice that it follows the correct logic but applies the wrong properties. That said, its performance has noticeably improved over the past year.

1 Like

I've been using ChatGPT as a starting point for a while. I've noticed that sometimes, if you're trying to get it to correct its initial answer, it'll occasionally switch over to session relevance from client relevance, but that has been few and far between. I agree that it's rarely ever 100% on the first try, but it can definitely steer you if you're stuck on something!

1 Like

I tried this in GitHub Copilot recently. My issue was that it kept mixing client and session relevance, so nothing worked :smiley:

I was wondering if it would be possible to train a model on relevance by pulling all Fixlets/Tasks/Analyses into a data format for it to learn from, along with internet sources and training guides. I haven't done anything further on this, though, and I'm not sure I ever will.
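
For what it's worth, here is a rough, untested sketch of the data-pull step I had in mind. It assumes a folder of .bes files exported from the console (as far as I recall, each exported Fixlet/Task/Analysis keeps its Title and Relevance elements in the XML), and the folder and output file names are just placeholders:

# Untested sketch: turn exported BigFix content (.bes XML files) into a JSONL
# corpus of (title, relevance) pairs that a model could learn from.
# Assumes exports live in ./bes_exports and that Title and Relevance elements
# sit directly under each <Fixlet>/<Task>/<Analysis> node.
import json
import pathlib
import xml.etree.ElementTree as ET

EXPORT_DIR = pathlib.Path("bes_exports")         # placeholder folder of exported .bes files
OUTPUT = pathlib.Path("relevance_corpus.jsonl")  # placeholder output file

with OUTPUT.open("w", encoding="utf-8") as out:
    for bes_file in EXPORT_DIR.glob("*.bes"):
        root = ET.parse(bes_file).getroot()      # root element is <BES>
        for item in root:                        # <Fixlet>, <Task>, <Analysis>, ...
            title = item.findtext("Title", default=bes_file.stem)
            for rel in item.findall("Relevance"):
                if rel.text and rel.text.strip():
                    record = {"title": title, "relevance": rel.text.strip()}
                    out.write(json.dumps(record) + "\n")

The same pairs could just as easily be dumped to CSV or whatever format a particular training or retrieval pipeline wants.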

1 Like

Gemini 2.5 Pro is the one that works the best by far, in my experience.

I have also found that some models will mix up the two, and then nothing will work.

I have wondered about doing something similar. There are a few options here.

One is to take any model and give it a bunch of known-good relevance and relevance training materials as part of the context window, so the model has more recent and "good" relevance knowledge to go off of. This would require testing to see what context produces better results, but the nice thing is that such an approach can work with any model, now or in the future.

There is also the option of using something called MCP (Model Context Protocol) to give a model a way to look up information and seek out more context about BigFix relevance itself. You might even be able to let the model use it to test the relevance it generates and see whether it throws errors, as a form of validation. It might even be possible for the AI to run actual session relevance queries against a real BigFix server to test them.
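
As a rough, untested sketch of that validation idea, assuming the standard REST API /api/query endpoint: post each generated expression to the server and treat an Error element in the reply as a failure. The server URL and credentials below are placeholders, and the error check is just an assumption about what the response looks like.

# Untested sketch: validate generated session relevance by evaluating it
# through the BigFix REST API. Server URL and credentials are placeholders.
import requests

BIGFIX_SERVER = "https://bigfix.example.com:52311"  # placeholder
AUTH = ("api_operator", "password")                 # placeholder console operator account

def validate_session_relevance(expression):
    """Return (ok, raw_response) for a session relevance expression."""
    resp = requests.post(
        f"{BIGFIX_SERVER}/api/query",
        data={"relevance": expression},
        auth=AUTH,
        verify=False,   # or point verify at the server's certificate
    )
    resp.raise_for_status()
    # Assumption: a failed evaluation comes back with an <Error> element in the XML.
    return ("<Error>" not in resp.text), resp.text

ok, raw = validate_session_relevance("number of bes computers")
print("valid" if ok else "relevance error")

An MCP tool wrapping something like this would let the model check its own output before handing it back.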

Another option is to take a model that has open weights and basically resume training from where it left off, training it on BigFix / relevance material specifically to make it better at relevance. The resulting model could then be combined with the context approach above as well. This is very hard and complicated to do well, and you would need to redo it periodically as new models are released.
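
Purely as an illustration of that last option, a minimal, untested continued-training sketch with Hugging Face transformers, reusing the JSONL corpus idea from earlier in the thread, might look like the following. The model name and file name are placeholders, and the hard parts (data quality, evaluation, compute) are exactly the ones it glosses over.

# Untested sketch: continue training an open-weights causal LM on a JSONL file
# of {"title": ..., "relevance": ...} records. Model and data names are placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

MODEL = "your-open-weights-model"   # placeholder; any causal LM you can run
tokenizer = AutoTokenizer.from_pretrained(MODEL)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(MODEL)

dataset = load_dataset("json", data_files="relevance_corpus.jsonl", split="train")

def to_text(example):
    # Fold each record into a simple prompt/completion style training string.
    return {"text": "Task: " + example["title"] + "\nRelevance: " + example["relevance"]}

def tokenize(example):
    return tokenizer(example["text"], truncation=True, max_length=512)

tokenized = (dataset.map(to_text)
                    .map(tokenize, remove_columns=["title", "relevance", "text"]))

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="relevance-tuned",
                           num_train_epochs=1,
                           per_device_train_batch_size=1),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()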

Based upon my positive experience with Gemini 2.5 Pro, I’m less convinced that training a custom model is required.

2 Likes

Gemini 2.5 works well, but one of the subtle tricks is to use Deep Research. Here's why: it crawls almost all of the BigFix sites (forums, HCL documentation, etc.) and then returns a wordy report. I used your initial example, and it got it exactly right. One other tip is to ask your AI of choice to write the prompt for you. If you do that, I find the results are far more accurate.