Office 365 or Office 2019 install using ODT: set it and forget it

I am reposting this because I thought I had figured it out, but the fix did not solve it; the extraction still fails.
I am trying to build an Office 365 install Fixlet for the SSA that we can pretty much "set and forget".
I am using the code from HCL's fixlet "Install Office 365 Using ODT - Office 365",
but instead of carrying the ODT payload, I am trying to pull the ODT directly from Microsoft's site.
I don't understand why this is failing on the extraction, because I am just replacing the HCL filename with my own. I am doing pretty much the same thing as the "download as" example shown on this doc page: https://developer.bigfix.com/action-script/reference/download/download-as.html
I have tested this "download as" line in the QnA debugger to make sure it is actually downloading the file: download as ODTdownload.exe https://www.microsoft.com/en-us/download/confirmation.aspx?id=49117
I have also confirmed the file does not already exist when it downloads it.
Here are the action details of the section it failed on, so you can see I replaced HCL's prefetch section of code with an attempt to pull the ODT file directly from Microsoft's site.

Completed action parameter query "Channel" with description "Enter the Channel to be Installed.%0d%0aAccepted values: Current, SemiAnnual, SemiAnnualPreview, MonthlyEnterprise" with default ""
Completed parameter "SourceFolder"="{pathname of parent folder of regapp "besclient.exe" as string & "\Office365AdvPatch"}"
Completed if {exists folder (parameter "SourceFolder")}
Completed folder delete "{parameter "SourceFolder"}"
Completed endif
Completed folder create "{parameter "SourceFolder"}"
Completed //downloading ODT
Completed delete ODTdownload.exe
Completed download as ODTdownload.exe https://www.microsoft.com/en-us/download/confirmation.aspx?id=49117
Completed delete __createfile
Completed //Extracting ODT
Failed waithidden __Download\ODTdownload.exe /quiet /extract:"{pathname of client folder of current site & "__Download"}"

I found the problem: Microsoft uses a redirect, which is why the download does not work. I am working on a scraping script that will always grab the latest filename.
I will post that later when I have it working.
Should I delete this post, or does anyone find this helpful?

I for one would like to track your progress if you find a working method.

The current (as of today) download behind that redirect is at https://download.microsoft.com/download/2/7/A/27AF1BE6-DD20-4CB4-B154-EBAB8A7D4A7E/officedeploymenttool_15225-20204.exe

I'm not sure whether a client-side download using curl with -L (--location) to follow redirects would land on the latest version, or whether that is a JavaScript-generated URL that needs more than curl to get the latest version.

I have some automations we're using to obtain download links like that, but they operate on the admin system and generate new Fixlets; they're not something to run in real time on the client side.

Thank you, Jason! I will give this a try. curl was exactly what I was looking to use; I was going to curl -L the page, scrape the file name from the output file, and feed it back in as a parameter, but your idea sounds like a much better way to go. I will give it a go and see if I can get it to work.

OK, I am stuck. I can use curl to get the redirect page and output it to a file,
but now I am trying to use relevance to scrape the URL from the text file.

I have something that returns the last part of the file name, but I need to get the 83 characters before this text and I am not sure how to do that.
This relevance gives me the last part of the file, "15225-20204.exe":

if exists file ("__download\redirect.txt") then first 15 of following texts of firsts "officedeploymenttool" of lines whose (it contains "officedeploymenttool") of file ("__download\redirect.txt") else "n/a"

So I just need to grab the 83 characters before this and join the two together to come up with the URL,
then assign that to a parameter in my action code so I can run the BigFix "download as" command to output to a static name I can use the extract command on.
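
For what it's worth, the scrape can be prototyped outside of relevance first. Here is a minimal Python sketch (the regex and the sample snippet are my own illustration, not from BigFix or Microsoft) that pulls the whole URL in one match instead of counting a fixed 83 characters, since the path length can change between ODT releases:

```python
import re

def extract_odt_url(page_text):
    """Find the first download.microsoft.com link ending in .exe.

    A regex match is more robust than counting a fixed number of
    characters before the version suffix.
    """
    match = re.search(
        r'https://download\.microsoft\.com/[^"\'<>\s]+?\.exe', page_text)
    return match.group(0) if match else None

# Illustrative fragment of the confirmation page (hand-made stand-in):
sample = ('<a class="mscom-link failoverLink" href="https://download.'
          'microsoft.com/download/2/7/A/27AF1BE6-DD20-4CB4-B154-'
          'EBAB8A7D4A7E/officedeploymenttool_15225-20204.exe">')
url = extract_odt_url(sample)
filename = url.rsplit("/", 1)[-1]  # officedeploymenttool_15225-20204.exe
```

The same split on the last "/" is what the relevance above does with "following text of last".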

It doesn't sound like you're actually getting curl to follow a redirect. I wasn't able to do it either; it still just downloads the HTML page.

So now you’re scraping the resulting HTML and getting a second download link from that, right?

What’s kind of neat is that it looks like we may be able to use our XML inspectors to parse the HTML of the page. I’m going to take a shot at that and will let you know what I find.

Correct, it just pulled the entire page, so I went back to my original idea of pulling the URL from the text it downloads.

I think I've got something; how does this look to you? It retrieves both the download URL and the output filename to generate.

This scrapes the HTML page to find the element with the manual download link by locating the <a> tag with the class "mscom-link failoverLink" and reading the 'href' attribute from it:

q: (it, following text of last "/" of it | it) of node values of attributes "href" of xpaths ("xmlns:xhtml='http://www.w3.org/1999/xhtml'", "//xhtml:a[@class='mscom-link failoverLink']" ) of xml document of file "C:\Temp\test.html"

A: https://download.microsoft.com/download/2/7/A/27AF1BE6-DD20-4CB4-B154-EBAB8A7D4A7E/officedeploymenttool_15225-20204.exe, officedeploymenttool_15225-20204.exe
T: 4.900 ms
I: plural ( string, string )
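
Outside of BigFix, the same xpath idea can be sanity-checked with Python's standard html.parser. This sketch (the sample HTML is a hand-made stand-in for the real confirmation page) finds the <a> tag with that class and splits off the filename, mirroring the relevance above:

```python
from html.parser import HTMLParser

class FailoverLinkFinder(HTMLParser):
    """Collect href values of <a> tags with class 'mscom-link failoverLink'."""

    def __init__(self):
        super().__init__()
        self.hrefs = []

    def handle_starttag(self, tag, attrs):
        # attrs arrives as a list of (name, value) pairs
        a = dict(attrs)
        if tag == "a" and a.get("class") == "mscom-link failoverLink":
            self.hrefs.append(a.get("href"))

page = ('<html><body><a class="mscom-link failoverLink" '
        'href="https://download.microsoft.com/download/2/7/A/'
        '27AF1BE6-DD20-4CB4-B154-EBAB8A7D4A7E/'
        'officedeploymenttool_15225-20204.exe">64-bit</a></body></html>')
finder = FailoverLinkFinder()
finder.feed(page)
url = finder.hrefs[0]
name = url.rsplit("/", 1)[-1]  # equivalent of "following text of last /"
```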

Sweet! :slight_smile:
I will give it a run tomorrow morning.
Thank you very much for your help on this!!!

Ran into a snag. This is what I have so far, but the debugger is complaining about needing the {} guards, even though it looks like I have them in the right place.

If I take the guards off I don't get an error about the guards, but on the "download as" line it gives me the error "relevance substitution not allowed".

wait cmd /C curl -L "https://www.microsoft.com/en-us/download/confirmation.aspx?id=49117" >> "C:\temp\DownloadODT\redirect.txt"
parameter "SourceDownload"="{(it, following text of last "/" of it | it) of node values of attributes "href" of xpaths ("xmlns:xhtml='http://www.w3.org/1999/xhtml'", "//xhtml:a[@class='mscom-link failoverLink']") of xml document of file "C:\temp\DownloadODT\redirect.txt"}"
download as __Download\ODTdownload.exe "{parameter "SourceDownload"}"

Without relevance guards on the scraping section:

wait cmd /C curl -L "https://www.microsoft.com/en-us/download/confirmation.aspx?id=49117" >> "C:\temp\DownloadODT\redirect.txt"
parameter "SourceDownload"=(it, following text of last "/" of it | it) of node values of attributes "href" of xpaths ("xmlns:xhtml='http://www.w3.org/1999/xhtml'", "//xhtml:a[@class='mscom-link failoverLink']") of xml document of file "C:\temp\DownloadODT\redirect.txt"
download as __Download\ODTdownload.exe "{parameter "SourceDownload"}"

A parameter has to be a single string, or, if using relevance substitution, the returned value must be a single string (or something that will easily be cast to a single string). Sometimes you have to forgo the resilience of plural relevance to achieve the single value.

I think you need to make it this:

parameter "SourceDownload"="{node value of attribute "href" of xpath ("xmlns:xhtml='http://www.w3.org/1999/xhtml'", "//xhtml:a[@class='mscom-link failoverLink']") of xml document of file "C:\temp\DownloadODT\redirect.txt"}"

Thank you for the suggestion!
I tried your code and the debugger seems happy with it.
The last issue is the download line:
it gives the error "relevance substitution not allowed", which is odd because in the HCL "download as" example they use relevance after the download as command.
This is the line it is failing on:
download as __Download\ODTdownload.exe "{parameter "SourceDownload"}"

Relevance substitution is not supported on download or add nohash prefetch item; basically, if you can't provide the hash and size in advance, you can only use static URLs. I ended up doing what Jason suggested: I created static prefetch statements from a script and created Fixlets from XML via the API. The API works really well; it's worth looking at.
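
To make the "static prefetch from a script" approach concrete: a prefetch statement needs the file name, hash, size, and URL up front. This hypothetical Python helper computes those fields from downloaded bytes; the exact field order and which hashes your clients accept should be checked against your platform's prefetch documentation, and the example bytes are placeholders, not the real ODT:

```python
import hashlib

def prefetch_statement(name, data, url):
    """Build a static BigFix-style prefetch line from file bytes.

    The sha1/size/sha256 layout here follows the common prefetch
    pattern; verify it against your BigFix version's docs.
    """
    sha1 = hashlib.sha1(data).hexdigest()
    sha256 = hashlib.sha256(data).hexdigest()
    return (f"prefetch {name} sha1:{sha1} size:{len(data)} "
            f"{url} sha256:{sha256}")

# Placeholder bytes standing in for the real downloaded ODT file:
line = prefetch_statement(
    "ODTdownload.exe", b"example-bytes",
    "https://download.microsoft.com/download/2/7/A/27AF1BE6-DD20-4CB4-"
    "B154-EBAB8A7D4A7E/officedeploymenttool_15225-20204.exe")
```

A script like this can regenerate the Fixlet's prefetch line whenever Microsoft publishes a new ODT build.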

OK, thank you for answering this.

Ah, I see… I actually expected your use case wouldn't use the 'download' or 'download now' commands at all; I thought you'd just call curl.exe on the client directly.

After all, you're already using curl in the first step to get redirect.txt… just call curl again to get ODTdownload.exe.

That was my next idea: use curl or bitsadmin for the download.
wait cmd /C curl "{parameter "SourceDownload"}" --output "C:\temp\ODTdownload.exe"

But I need to back up one line: the line setting the parameter is not working right.
I should add that both sections below do create the output file correctly, so I don't know why it is unhappy.
If I use the suggestion above,
I get the error "relevance clauses must be surrounded by { and } guards."

parameter "SourceDownload"="{node value of attribute "href" of xpath ("xmlns:xhtml='http://www.w3.org/1999/xhtml'", "//xhtml:a[@class='mscom-link failoverLink']") of xml document of file "C:\temp\DownloadODT\redirect.txt"}"

If I use the original with guards, I get the same error, "relevance clauses must be surrounded by { and } guards."

parameter "SourceDownload"="{(it, following text of last "/" of it | it) of node values of attributes "href" of xpaths ("xmlns:xhtml='http://www.w3.org/1999/xhtml'", "//xhtml:a[@class='mscom-link failoverLink']") of xml document of file "C:\temp\DownloadODT\redirect.txt"}"

So I think I need to figure out what it does not like about the guards.
I think if I can just get the parameter set, then I should be good to go.

I have played around with putting the {} guards in different spots and I just can't figure out where it wants them.

You get that error message for any failed relevance substitution. The problem, I think, is that the first curl command is not outputting the file correctly:

wait cmd /C curl -L "https://www.microsoft.com/en-us/download/confirmation.aspx?id=49117" >> "C:\temp\DownloadODT\redirect.txt"

Should be using the --output parameter to curl instead of >>

It could also be a problem in how cmd.exe quotes parameters, or that the output folder does not yet exist. Try

folder create "c:\temp\DownloadODT"

wait cmd /C "curl -L "https://www.microsoft.com/en-us/download/confirmation.aspx?id=49117" --output "C:\temp\DownloadODT\redirect.txt""

( Adding the doublequotes around the whole cmd.exe command line, with embedded doublequotes inside, is intentional. It’s specific to how CMD handles quoting. )
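
As an aside, when nested quotes get confusing, Python's subprocess.list2cmdline can render an argument vector as a single Windows command line for eyeballing. Note it follows the MS C runtime argv quoting rules, not cmd.exe's extra outer-quote-stripping layer, which is what the doubled quotes above work around:

```python
import subprocess

# Build the curl invocation as an argument list, then flatten it to one
# Windows command line. None of these arguments contain spaces, so no
# quotes get added; an argument with a space would come back quoted.
args = ["curl", "-L",
        "https://www.microsoft.com/en-us/download/confirmation.aspx?id=49117",
        "--output", r"C:\temp\DownloadODT\redirect.txt"]
cmdline = subprocess.list2cmdline(args)
```

This is only a sanity check for the quoting, not something the BigFix client runs.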