Prefetch error: Download error: "URLInfo: Attempt to use missing Scheme." [SOLVED]

Hey Community,
I wanted to see if someone has run into this before. We have a prefetch statement that uses a parameter as part of the URL for the file:

parameter "HostURL" = "{("http://" & (host name of root server) & ":52311")}"

prefetch 7013596a9ecd07cbb451854b19094d14d667d569 sha1:7013596a9ecd07cbb451854b19094d14d667d569 size:354512979 {parameter "HostURL"}/WINDOWS/HPOM/HPE_OA_12.02_Windows.tmp sha256:fde0c9e586fe05ea8ce0579f69d8f5861d2ab24956c092022ff7dc16070b4fae

It seems to get hung up when fetching the file, though. We see the following download error:

Download error: "URLInfo: Attempt to use missing Scheme."
Download requested on server:
URL: {parameter
Hash: (sha1)7013596a9ecd07cbb451854b19094d14d667d569
Size:
Next retry: The download will be retried the next time it is requested. Retry now

It works in our test environment, which is on 9.1, but we get the error when running it in our prod environment, which is on 9.2. Any idea what may cause a message like this?

Thanks in advance!

I haven't seen that specifically before. You might have better luck using a Prefetch Block (to ensure the parameter is actually getting evaluated during prefetch processing). You might also get different results depending on whether your action is set to "begin downloads before constraints are met".
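As an untested sketch, reusing the parameter and file details from your post, the prefetch-block form would look something like:

parameter "HostURL" = "{("http://" & (host name of root server) & ":52311")}"
begin prefetch block
add prefetch item name=HPE_OA_12.02_Windows.tmp sha1=7013596a9ecd07cbb451854b19094d14d667d569 size=354512979 url={parameter "HostURL"}/WINDOWS/HPOM/HPE_OA_12.02_Windows.tmp sha256=fde0c9e586fe05ea8ce0579f69d8f5861d2ab24956c092022ff7dc16070b4fae
end prefetch block

Note that prefetch-block items use name=/sha1=/size=/url= pairs rather than the colon syntax of the plain prefetch command, and the whole block is processed before the action body runs, which may change when the parameter substitution happens.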

If it works in 9.1 but not 9.2, you should probably do a PMR.

As a quick workaround, you could always use the URL "http://127.0.0.1:52311/whatever".

The URL is evaluated at the root server, so the loopback address refers to the root (unless you have configured a relay with the DoInternetDownloads option).
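With that workaround, the prefetch from the original post would become something like (untested, same hashes and path as above):

prefetch 7013596a9ecd07cbb451854b19094d14d667d569 sha1:7013596a9ecd07cbb451854b19094d14d667d569 size:354512979 http://127.0.0.1:52311/WINDOWS/HPOM/HPE_OA_12.02_Windows.tmp sha256:fde0c9e586fe05ea8ce0579f69d8f5861d2ab24956c092022ff7dc16070b4fae

No parameter is involved at all, which sidesteps whatever is going wrong with the substitution in 9.2.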


It seems like you are doing this because you have multiple root servers that you are testing actions against.

Like @JasonWalker suggests, you can just use localhost if the file will always be hosted on the root server that you are going to run the action on.

But I think a better option is to host all of your files in a repository that you expose over HTTP/HTTPS to all of your root servers. Then the URL will always be https://mysoftwarerepo.organization.tld/whatever
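For example, the original prefetch would become something like this (mysoftwarerepo.organization.tld is a placeholder for your own repo host, and the path is assumed to mirror the original layout):

prefetch 7013596a9ecd07cbb451854b19094d14d667d569 sha1:7013596a9ecd07cbb451854b19094d14d667d569 size:354512979 https://mysoftwarerepo.organization.tld/WINDOWS/HPOM/HPE_OA_12.02_Windows.tmp sha256:fde0c9e586fe05ea8ce0579f69d8f5861d2ab24956c092022ff7dc16070b4fae

The same statement then works unchanged against every root server, since the URL no longer depends on which root server runs the action.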

This means the source of your internal files will not be on the root server at all, which significantly reduces the amount of storage you need on your root servers. I still recommend your root servers have a relatively large web cache, but it won't be your source of truth, so you won't have to worry about backing up the cache, or about syncing it and any conflicts that might cause.

@JasonWalker PERFECT! It worked like a champ, thanks so much for the quick response!

Hey @jgstew, I like the idea of hosting a repo on another server, but IBM said that was frowned upon… have you ever run into any issues doing it that way?

Hosting your internal files on your own repo actually behaves identically to how vendor downloads are handled. IBM doesn't keep a copy of every Microsoft patch; the fixlet instructs the server to download those from http://microsoft.com/whatever. As long as you make your internal repository available to the BES server via HTTP/HTTPS, it works quite well.

Hey @JasonWalker, ya, totally agree. Many moons ago we were told to host all downloads on the BigFix server, not on any other servers, but if that is no longer the case we will definitely keep our options open. Thank you again for all your help on this.

Thanks Jason, this helped me too. Thumbs up