List of vendor websites/IP addresses

Does anybody have a list of the IPs/DNS names used by the vendor-provided patches in Patch Management?

We have to implement egress rules in one of our environments, and I was hoping someone might have a list of all the rules required for the content download process.

Thanks!

You can get this information using session relevance by parsing out the URLs in all actionscript.

Also, I don’t know if this helps, but the Root Server does not need to reach out to these sites itself. It can use a Proxy Server, or a Top Level Relay could do the downloads instead.


This is not perfect, but it is a start: http://bigfix.me/relevance/details/3002398

unique values of preceding texts of firsts "/" of following texts of firsts "://" of parenthesized parts whose(it contains "/") of matches (case insensitive regex "^(download|prefetch|download now|download now as) http(\S+)") of scripts of actions of fixlets of all bes sites

As written, this will not work with Prefetch Blocks, but it will work with most other file downloads in actionscript.

This also does not work with baselines, but I think that is a non-issue in this case.

Here is one that should work with prefetch blocks: http://bigfix.me/relevance/details/3002399

unique values of preceding texts of firsts "/" of following texts of firsts "://" of parenthesized parts whose(it contains "/") of matches (case insensitive regex " url=(\S+) ") of matches (case insensitive regex "add prefetch item .*") of scripts of actions of fixlets of all bes sites

Here is another possible option (this one should include URLs in comments, not just downloads):

unique values of preceding texts of firsts "/" of following texts of firsts "://" of parenthesized parts whose(it contains "/") of matches (case insensitive regex " http(\S+)") of scripts of actions of fixlets of all bes sites
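
If you want to pull these results out programmatically, the server's REST API can run session relevance through its /api/query endpoint. Here is a rough sketch in Python using that approach; the server URL, port, and credentials are placeholders you would swap for your own environment, and it runs the last relevance option above:

import requests
import xml.etree.ElementTree as ET

# Placeholders: point these at your own root server and REST API credentials
BES_SERVER = "https://bigfix.example.com:52311"
BES_USER = "apiuser"
BES_PASSWORD = "apipassword"

# The broadest relevance option from above (any URL in actionscript)
RELEVANCE = (
    'unique values of preceding texts of firsts "/" of following texts of '
    'firsts "://" of parenthesized parts whose(it contains "/") of matches '
    '(case insensitive regex " http(\\S+)") of scripts of actions of fixlets '
    'of all bes sites'
)

# /api/query runs session relevance and returns BESAPI XML with <Answer> elements
resp = requests.get(
    BES_SERVER + "/api/query",
    params={"relevance": RELEVANCE},
    auth=(BES_USER, BES_PASSWORD),
    verify=False,  # root servers commonly use a self-signed certificate
)
resp.raise_for_status()

for answer in ET.fromstring(resp.content).iter("Answer"):
    print(answer.text)

The output is the unique list of hostnames, which should map fairly directly to egress rules.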

Thanks!

It looks like the Linux patching sites use a dynamic download process that doesn't include a download link. Any ideas where these might be getting retrieved from?

In this particular environment, even if we were using a proxy or a top-level relay, we would still need to know every URL the proxy or relay is hitting.

It’s unfortunate and a little silly, but we gotta do it…

Just be glad you are in an environment where you can do it at all.

I certainly understand the value of whitelisting instead of blacklisting; it is much more secure.

I will say that the risk of accidentally getting malicious code is fairly minimal, thanks to the SHA1 & SHA256 hash checks on the files, BUT where I do see this making sense is if you had a rogue operator trying to download & use arbitrary files from untrusted sources to do bad things. If only whitelisted sources can be used, a rogue operator's job becomes much harder.

I think it either uses a download plugin or the internal Linux update tools (yum, apt-get).

You can certainly whitelist the apt-get / yum repos by looking at a fresh Linux install and figuring out what those are set to.

I’d have to look at an example to tell.
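
In the meantime, here is a rough Python sketch for the fresh-install approach; it assumes the standard yum and apt config locations and just prints whatever repo URLs are configured on the machine so they can go into the whitelist:

import glob
import re

repo_urls = set()

# yum / dnf repos (RHEL, CentOS): baseurl=, mirrorlist=, metalink= lines
for path in glob.glob("/etc/yum.repos.d/*.repo"):
    with open(path) as f:
        for line in f:
            m = re.match(r"\s*(?:baseurl|mirrorlist|metalink)\s*=\s*(\S+)", line)
            if m:
                repo_urls.add(m.group(1))

# apt sources (Ubuntu, Debian): "deb [options] <url> <suite> <components>" lines
apt_files = ["/etc/apt/sources.list"] + glob.glob("/etc/apt/sources.list.d/*.list")
for path in apt_files:
    try:
        with open(path) as f:
            for line in f:
                m = re.match(r"\s*deb(?:-src)?\s+(?:\[[^\]]*\]\s+)?(\S+)", line)
                if m:
                    repo_urls.add(m.group(1))
    except FileNotFoundError:
        pass

for url in sorted(repo_urls):
    print(url)

Keep in mind that mirrorlist/metalink entries resolve to a list of mirror hosts, so a single URL there can fan out to many more hostnames than the one shown.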

From what I can see, if you're not using custom repos it still uses the prefetch caching architecture. I found the Ubuntu one; I think I might just have to patch some Linux machines and look at the logs to figure out the RHEL and CentOS ones…

This is related: Identify what web sites all the fixlets need access to?