Relay http API?

Does anyone know of a way to have a server-side script (CGI, ASP, PHP, etc.) running on the Relay behind the http service? For example, I'd like my clients to be able to hit their Relay, have the Relay run this script against an external API, and then feed the results back to the IEM Client. http://:52311/lookup/host=server123

thanks

Why not use a different server for this task, rather than the relay? Is the returned result going to involve a large amount of data, or just text? Why not have the client query the external API itself?

It doesn’t seem like a good idea to try to run a second web server or hook into the existing web server to accomplish this.

Can you provide some details of what you are trying to accomplish? It doesn’t have to be too specific, but a general idea would be helpful.

I'd like to use the BESRelay service because 52311 is open everywhere; if I ran a 2nd webserver on the Relays, I'd need to open that traffic up, and security policies in our org make that very difficult.

The back story:
We currently import client info from our CMDB tool by having a single file for each IEM client, stored on our master BES root server under /wwwrootbes/blah/. Each IEM client runs a task every 2 hours that does a "download now as" against that URL and downloads its own information, then sets client settings based off the info in the file it downloaded.

At one time I was trying to push this data cache down to the Relay level so the clients didn't have to connect all the way back to the BES root server. But due to the size of our environment and some of our lower-powered Relays, expanding this directory cache on each Relay every 2 hours caused disk I/O to spike on the Relay, so I had to resort to leaving the cache only on the BES root servers. Now I'm seeing some issues where the BES root server doesn't always provide the response file until it is rebooted (not sure of the root cause there, but probably something to do with TCP connection exhaustion). So I was hoping to get rid of the cache on the BES infrastructure altogether: the "download now" call would hit the Relay, the Relay would pass that request back to a CouchDB, and the response would be sent back down to the client.

I’d suggest leveraging the mailboxing feature associated with v9+ to streamline and optimize such integrations. In the past, I had considered a similar approach to leverage Relays to distribute external data, however Mailboxes make this much easier as well as more efficient in general (Clients will only have to gather changes in their external data as they happen, and will do so much more quickly). Additionally, it sounds as though relatively few changes to your existing integration would be required.

Hi Aaram, assuming all of our clients are running v9+ (which they're not), are you thinking there's a way to use the REST API to add each server's info file to its mailbox site, or some other way?

Reminder that the “download now as” actionscript requires that the client be able to see the server directly and cannot navigate through scenarios like a Relay in a DMZ etc. In addition, all caching and distribution of the downloads through relays will be skipped so this can mean that if you have 100K agents they could all hit the server at the same time.

A valid point about "download now as". Unfortunately, I had to use this command because when I used "download" (even though it traversed the IEM infrastructure, which was great), the Relays would cache the downloaded file and not re-download it from the source each time.

The better solution would be to use the dynamic download functionality. This is in the documentation, and it should address your changing-file situation: if you use a manifest file that is updated whenever a change occurs, it can contain the hash of the file, so the client will always download the correct file.

See https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/Tivoli%20Endpoint%20Manager/page/Dynamic%20Downloading for a start, but the actual documentation (or other sources) would be the better way to construct this correctly.

I see no reason why you couldn’t use a separate web server that isn’t a relay that uses port 52311.

Also, it would definitely be best if you used dynamic downloading or a similar solution like the others have pointed out.

You could use the REST API, running on a separate system, to generate actions for every client individually; those actions would not use any downloads at all and would contain the settings directly in the body of the action. The program that does this could query the client and the CMDB and send only the diffs, and only when needed. I like the idea of an approach that does not use downloads at all for this type of dynamic data. This would work best with version 9+ mailboxing, but it would technically work with earlier versions as well.
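The "only send the diffs" idea can be sketched in a few lines of Python. Everything below is hypothetical (the setting names, the values, the integration flow); the emitted lines follow the standard ActionScript client-setting syntax:

```python
def settings_to_push(desired, reported):
    """Return only the client settings whose desired (CMDB) value
    differs from what the client last reported, so each generated
    action carries the minimum payload."""
    return {k: v for k, v in desired.items() if reported.get(k) != v}

# Hypothetical CMDB-desired vs. client-reported settings:
desired = {"OWNER": "appteam", "LOCATION": "dc2", "TIER": "prod"}
reported = {"OWNER": "appteam", "LOCATION": "dc1"}

diff = settings_to_push(desired, reported)
# One ActionScript "setting" line per changed value, for the action body:
for key, value in sorted(diff.items()):
    print('setting "%s"="%s" on "{parameter "action issue date" of action}" for client'
          % (key, value))
```

An integration script would then POST one such action per client (or per mailbox site on v9+) through the REST API, skipping any client whose diff is empty.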

I’m not sure what the limits are of the “createfile” command, but I suppose you could also send the entire CMDB contents you wish to push to the clients in a “createfile” command containing XML or something similar that would be generated with the REST API and pushed to all clients, then the clients would pull out the data that they need from that large XML file. This would also avoid the download problem.


I don't currently have a CMDB that I need to put info on the clients from, but I do hear of this use case quite a lot. I'm wondering, generally, what the information from the CMDB is that is being placed on the system and what it is used for. Also, it seems to me that the CMDB tool itself (or another tool) could query both the data in the CMDB and the data in BigFix using session relevance, live, when it is needed. The only reason to push data from the CMDB to the client is so that it can be used within BigFix itself, which should be a small set of data that doesn't change often, in my mind, but again I don't really know.


Part of the power of BigFix is there are many different ways to accomplish the same task and each comes with a different set of advantages and drawbacks and complexities.

I guess I don't quite understand how dynamic downloading and manifests work. Would it allow for a "dynamic" URL that would let the IEM infrastructure pass the dynamic payload file (no sha1 check) back down to the client?

There is a way to do exactly that, but I haven’t really done it before.

I believe there is a whitelist file you have to create on the root server, listing the domains from which you will allow no-hash downloads.

Prefetch blocks have something called “add no hash prefetch item” or something like that, which may be all that is required. I think the manifest option might be if you don’t know the URL ahead of time?

I’ve used manifest files for this kind of thing as well. The manifest has the name, size, sha1, and url of a long list of files. The manifest file is attached to a custom site, so it’s available to the client as a site file. The client can parse the file, and use the fields in the manifest file to populate a prefetch command.

The example use case is to have a fixlet that downloads an antivirus definition daily. Each day the size and sha1 of the definition changes but we don’t want to recreate the fixlet. When we obtain a new definition, we also update the manifest file.

The line in the file can have the literal download command syntax like "name=file1 size=1024 sha1=1234567890abcdef… url=http://127.0.0.2:52311/bfmirror/downloads/antivirus/definition.dat"

In the fixlet prefetch block, we can have
prefetch {lines whose (it as string starts with "name=file1") of file "manifest.txt" of current site}
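Outside of relevance, a manifest line in that format is easy to pick apart. A minimal Python sketch, with the field layout assumed from the example line above (not from any BigFix specification):

```python
# Parse a manifest line of the form:
#   name=file1 size=1024 sha1=... url=http://host:52311/path/file1
# into a dict. The url field takes everything after "url=", so any
# "=" characters embedded in the URL survive.
def parse_manifest_line(line):
    fields = {}
    for key in ("name", "size", "sha1", "url"):
        marker = key + "="
        start = line.find(marker)
        if start == -1:
            continue
        start += len(marker)
        if key == "url":
            fields[key] = line[start:].strip()
        else:
            fields[key] = line[start:].split(" ", 1)[0]
    return fields

entry = parse_manifest_line(
    "name=file1 size=1024 sha1=1234567890abcdef "
    "url=http://127.0.0.2:52311/bfmirror/downloads/antivirus/definition.dat"
)
print(entry["name"], entry["size"], entry["url"])
```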

I'm still not quite following. In our case, each file is uniquely named (hostname) and the file could change hourly. Since I don't know each file's size or sha1, can they be ignored in the prefetch command? And will the client download the file directly from the URL with a direct connection on 52311, or route the request through the Relays (and do the Relays cache the downloads? hopefully not)?

url=http://127.0.0.1:52311/sync/{computer name}

Read these:


https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/Tivoli%20Endpoint%20Manager/page/Dynamic%20Downloading


http://www-01.ibm.com/support/knowledgecenter/SS6MER_8.2.0/com.ibm.tem.doc_8.2/Platform/Action/c_dynamic_downloading.html


add no hash prefetch item http://www-01.ibm.com/support/knowledgecenter/SS2TKN_9.2.0/com.ibm.tivoli.tem.doc_9.2/Platform/Action/c_add_nohash_prefetch_item.html

Size is optional, not required for this command.


execute prefetch plugin http://www-01.ibm.com/support/knowledgecenter/SS2TKN_9.2.0/com.ibm.tivoli.tem.doc_9.2/Platform/Action/c_execute_prefetch_plug_in.html


download now URL http://www-01.ibm.com/support/knowledgecenter/SS2TKN_9.2.0/com.ibm.tivoli.tem.doc_9.2/Platform/Action/c_download.html

This is deprecated, but still works


You could also use CURL like this: http://bigfix.me/fixlet/details/3968

Hi jgstew, thanks for the info, but I don't think it's possible to do the dynamic download method I'm looking for, because:

"add no hash prefetch item" does not allow relevance substitution in the arguments of the command

"execute prefetch plugin" - without testing this I don't know much about it, but if sha1 and size are required in a prefetch command (and relevance substitution is not allowed in a "no hash" prefetch command), then I don't think this is a solution either.

"download now" is what I'm using now, and it makes a direct connection to the target URL and doesn't use the Relay infrastructure like I'd like.

I think it would be perfect if "add no hash prefetch item" allowed relevance substitution, but of course, it doesn't.

I haven’t done this, but here’s a thought. On your BES Server, run an Action at whatever frequency you need to generate a manifest file on the fly:
Example from QnA, building strings that match the prefetch-statement parameters:
q: ("name=" & name of it & " size=" & size of it as string & " sha1=" & sha1 of it & " url=http://myserver:52311/myfolder/" & name of it) of files of folder "c:\temp"

A: name=host1.txt size=5 sha1=8cf2f469b3323c76735159ca3331a17f2cc59ba4 url=http://myserver:52311/myfolder/host1.txt

A: name=host2.txt size=8 sha1=c43a1e80a1568a7164d77a053b144d86e885c9a4 url=http://myserver:52311/myfolder/host2.txt

A: name=host3.txt size=24 sha1=9dc50aea826825400c88bd35fafca1440d8933d9 url=http://myserver:52311/myfolder/host3.txt

// ActionScript to build manifest:
delete __appendfile
appendfile {concatenation "%0d%0a" of ("name=" & name of it & " size=" & size of it as string & " sha1=" & sha1 of it & " url=http://myserver:52311/myfolder/" & name of it) of files of folder "c:\temp"}

// The path to wwwrootbes can be determined dynamically by looking at client settings on the bes server, leaving
// it hardcoded here for readability though

delete "c:\program files (x86)\BigFix Enterprise\BigFix Server\wwwrootbes\myfolder\manifest.txt"
move __appendfile "c:\program files (x86)\BigFix Enterprise\BigFix Server\wwwrootbes\myfolder\manifest.txt"
// End of Manifest Generating ActionScript
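For comparison, here is a rough Python equivalent of that manifest-generating step. The folder path and URL prefix are placeholders, and it joins lines with CRLF just like the "%0d%0a" concatenation above:

```python
import hashlib
import os

def manifest_lines(folder, url_prefix):
    """Build one "name= size= sha1= url=" line per file in folder,
    mirroring the relevance expression in the QnA example."""
    lines = []
    for name in sorted(os.listdir(folder)):
        path = os.path.join(folder, name)
        if not os.path.isfile(path):
            continue
        with open(path, "rb") as f:
            digest = hashlib.sha1(f.read()).hexdigest()
        size = os.path.getsize(path)
        lines.append("name=%s size=%d sha1=%s url=%s/%s"
                     % (name, size, digest, url_prefix, name))
    return "\r\n".join(lines)
```

A scheduled script could write this output straight into the wwwrootbes folder instead of running the appendfile action on the BES server.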

// Action on a client, downloads the manifest, then downloads additional based on what’s in the manifest

begin prefetch block
// Download the manifest file
add nohash prefetch item url=http://127.0.0.1:52311/myfolder/manifest.txt
collect prefetch items

// download the line, referenced in the manifest, that applies to this machine
add prefetch item {lines whose (it as string as lowercase starts with "name=" & computer name as lowercase & ".txt") of file "manifest.txt" of download folder}
collect prefetch items

end prefetch block

// do something useful with {file (computer name & “.txt”) of download folder}

One last postscript. If you’re planning to run a webserver that isn’t BigFix on port 52311 just because your existing rules allow port 52311, then that would be a violation of your company’s security policy. Port 52311 is just a technical implementation; your Policy is “Allow BigFix between sites.”

You could also get a little more complex with it if you like. For example only processing the manifest if it’s newer than the last time you ran. When creating the file, make the first line match the current time:
appendfile {now}

On the client, after downloading the manifest and before downloading the referenced items:

if { (line 1 of file "manifest.txt" of download folder as string as time) > last active time of action}
add prefetch item {lines whose (it as string as lowercase starts with "name=" & computer name as lowercase & ".txt") of file "manifest.txt" of download folder}
endif

end prefetch block

if { (line 1 of file "manifest.txt" of download folder as string as time) > last active time of action}
// process the file to do something with it
endif // end of action

This has a couple of advantages. The manifest itself is likely going to be downloaded every time (I don’t think the relays will cache it, as there’s no sha1 defined for it in the action). But once the manifest gets to the client, the client could ignore the remaining download and processing if the manifest hasn’t changed.

Depending on where your trade-off is, you could actually save the line from the manifest that references your machine's download in a client setting, then only download if the manifest line does not match your client setting (so instead of checking whether the manifest has changed, you check whether your line of the manifest has changed).
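That client-setting comparison amounts to a one-line check. Sketched in Python with the stored setting simulated as a dict (the setting name "LAST_MANIFEST_LINE" is made up for illustration):

```python
def needs_download(manifest_line, saved_settings):
    """Compare this machine's manifest line against the copy saved in
    a client setting; only a changed line triggers a new download."""
    return saved_settings.get("LAST_MANIFEST_LINE") != manifest_line

saved = {"LAST_MANIFEST_LINE":
         "name=host1.txt size=5 sha1=aaa url=http://x/host1.txt"}
new_line = "name=host1.txt size=6 sha1=bbb url=http://x/host1.txt"
print(needs_download(new_line, saved))  # sha1 changed, so download again
```

After a successful download, the action would update the client setting to the new line so the next run skips unchanged files.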

Hi Jason, looks promising so far. The only question is: when "add prefetch item" executes on the IEM Client and the download actually occurs, is the IEM Client making a direct connection to the URL listed in the manifest, or is that executed by the BES root server, with the downloaded file passed down the relay structure back to the client?

I haven’t actually tried it yet :wink:

It seems to be executing the download URL from the BES root server, so that's encouraging. I was able to have a task run on the endpoint and the download occurred as expected, which is great!

My next challenge is to take a 22 MB manifest file (150k lines) and compress it down as small as possible so it doesn't consume too much bandwidth when it is downloaded to ~50k endpoints every 2 hours…
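For a rough sense of what's achievable: manifest lines like these are highly repetitive (same keys, shared URL prefix), so a generic compressor does very well on them. A quick sketch with synthetic data, assuming gzip is acceptable on both ends (the endpoint would still need some way to decompress, since BigFix does not do that for plain downloads):

```python
import gzip

# Synthetic manifest: 150k lines in the repetitive
# "name=... size=... sha1=... url=..." form discussed above.
lines = [
    "name=host%06d.txt size=512 sha1=%040x url=http://myserver:52311/myfolder/host%06d.txt"
    % (i, i, i)
    for i in range(150000)
]
raw = "\r\n".join(lines).encode("ascii")
packed = gzip.compress(raw)
print("raw: %.1f MB, gzipped: %.1f MB" % (len(raw) / 1e6, len(packed) / 1e6))
```

Real sha1 values compress worse than this synthetic data (hash bytes are effectively random), but the keys and URL prefixes still make the overall file shrink substantially.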