Managing the UploadManagerData Folder

I am assuming you have no warnings in the import logs with regard to the scan results. Keep us posted :slight_smile:

Correct @rohit … ILMT Import appears to run fine and no related errors are reported.

–Mark

Hi all … a review of the data I sent as part of the PMR seems to indicate that ILMT’s detection of software scans is working as designed … the issues seem to point to the scans themselves:

  • software scan timeouts (scan return code 9 or 29)
  • memory issues on two endpoints (scan return code 125), although we checked these servers and there appears to be plenty of memory on them
  • scan return codes from the operating system, for instance:
    138 - SIGBUS
    139 - SIGSEGV
    143 - SIGTERM
    (we were told exit codes above 128 follow the shell convention of 128 plus the signal number, so 143 = 128 + 15 = SIGTERM; see the decoding sketch below)
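For what it's worth, exit codes above 128 can be decoded on the affected platform itself, since signal numbering varies by OS (a minimal Python sketch, assuming the scanner follows the usual 128-plus-signal-number shell convention):

```python
# Decode a scan exit code, assuming the shell convention that codes above
# 128 mean "terminated by signal (code - 128)". Run this on the scanned
# platform: signal numbering differs by OS (e.g. signal 10 is SIGBUS on
# AIX but SIGUSR1 on Linux).
import signal

def describe_exit_code(code: int) -> str:
    if code > 128:
        signum = code - 128
        try:
            name = signal.Signals(signum).name  # e.g. 143 - 128 = 15 = SIGTERM
        except ValueError:
            name = f"signal {signum}"
        return f"terminated by {name}"
    return f"exited with status {code}"

for rc in (9, 29, 125, 138, 139, 143):
    print(rc, "->", describe_exit_code(rc))
```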

The general recommendation for most of these is to increase the scan timeout (from the default of 12 hours) and to check for large folders that can be excluded from the scan.
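To hunt for exclusion candidates, something like the following can surface the largest folders on an endpoint (a rough sketch; the scan root "/" and the top-20 cutoff are arbitrary choices, and it sums only the files directly inside each folder):

```python
# List the 20 folders holding the most file data under a given root; large
# hits are candidates for scan exclusion. Sizes count only files directly
# in each folder, not their subfolders.
import os

def folder_sizes(root):
    for dirpath, _dirnames, filenames in os.walk(root):
        total = 0
        for f in filenames:
            try:
                total += os.path.getsize(os.path.join(dirpath, f))
            except OSError:
                pass  # unreadable or vanished file; skip it
        yield dirpath, total

top = sorted(folder_sizes("/"), key=lambda pair: pair[1], reverse=True)[:20]
for path, size in top:
    print(f"{size / 1024**3:7.2f} GiB  {path}")
```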

More to come, I’m sure.

–Mark

Do the UploadManagerData limits really mean anything? I am over the directory size limit by 1 GB and over the file count limit by about 150,000 files. It seems as though the warning merely reports that the limits have been exceeded, with no functional effect.
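For context, here is roughly how the folder can be measured against those limits (a sketch; _BESRelay_UploadManager_BufferDirectoryMaxSize and _BESRelay_UploadManager_BufferDirectoryMaxCount are, as far as I know, the settings the warning refers to, and the path and limit values in the code are placeholders rather than product defaults):

```python
# Compare the Upload Manager buffer directory against configured limits.
# The path and both limits below are placeholders; substitute the values
# from your own server's settings.
import os

BUFFER_DIR = r"e:\BigFix Enterprise\BES Server\UploadManagerData\BufferDir"
MAX_SIZE_BYTES = 1 * 1024**3  # placeholder for BufferDirectoryMaxSize
MAX_FILE_COUNT = 10_000       # placeholder for BufferDirectoryMaxCount

total_size = 0
file_count = 0
for dirpath, _dirs, files in os.walk(BUFFER_DIR):
    for f in files:
        file_count += 1
        try:
            total_size += os.path.getsize(os.path.join(dirpath, f))
        except OSError:
            pass

print(f"size:  {total_size / 1024**3:.2f} GiB (limit {MAX_SIZE_BYTES / 1024**3:.2f} GiB)")
print(f"files: {file_count} (limit {MAX_FILE_COUNT})")
```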

In terms of the IBM License Metric Tool and BigFix Inventory, yes, they do … exceeding those limits is sometimes the reason these tools report “missing scan data”.

–Mark

What criteria are you using to determine which files should be removed?

I’m not removing any files from UploadManagerData. --Mark

@mbartosh, as I mentioned earlier in the thread, we run a daily script against our Upload Manager folder structure to do several things. First, it moves files out into separate shares by data type, based on their naming convention. Using a regex for the uploaded-file naming convention is very helpful, as we have several different types of data uploaded daily that are co-mingled in the Upload Manager structure.

The second part of the script clears the structure so it is ready for tomorrow's data. We have it timed so that our uploads take place overnight, and the script runs first thing each morning. Once the data has been transferred to its respective share, it is consumed by other processes.
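Purely as illustration, a Python sketch of that two-step flow might look like this (the regexes, share paths, and BufferDir location are placeholders, not our actual values; our production version is a dynamically built script, as described next):

```python
# Step 1: route uploaded files into per-type shares by naming convention.
# Step 2: clear the buffer structure for the next night's uploads.
# All paths and patterns are illustrative placeholders.
import os
import re
import shutil

BUFFER_DIR = r"e:\BigFix Enterprise\BES Server\UploadManagerData\BufferDir\sha1"

ROUTES = [
    (re.compile(r".*applog_.*\.log$", re.IGNORECASE), r"\\fileserver\applogs"),
    (re.compile(r".*scanresult_.*\.xml$", re.IGNORECASE), r"\\fileserver\scans"),
]

# Copy each uploaded file whose name matches a route into its share.
for dirpath, _dirs, files in os.walk(BUFFER_DIR):
    for name in files:
        for pattern, share in ROUTES:
            if pattern.match(name):
                shutil.copy2(os.path.join(dirpath, name), os.path.join(share, name))
                break

# Then empty the buffer directory so it is ready for tomorrow's data.
for entry in os.listdir(BUFFER_DIR):
    shutil.rmtree(os.path.join(BUFFER_DIR, entry), ignore_errors=True)
```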

Helpful hints include having a very structured naming convention for all uploaded data. Then build a script on the fly using relevance substitution with the regex of your convention; this can be batch, PowerShell, etc. Have a daily policy action on your root server dynamically recreate that script and then execute it. Rebuilding it daily ensures that added or removed endpoints and changed upload requirements are all accounted for. Be mindful of embedded quotes, carriage returns, etc. when building your script.

We found it useful to include a file containing the endpoint's computer name in the upload archive. This makes it easy to parse out which archive belongs to which endpoint at the root. Our endpoints have a very structured naming convention as well, which we can also regex to lift out whichever endpoints we need by attribute.
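For example, the archive-to-endpoint mapping can be recovered with a few lines (a sketch; the path and marker file name mirror the example line below):

```python
# Map each upload folder back to its endpoint by reading the computer-name
# marker file included in the archive. The BufferDir path is a placeholder.
import os

SHA1_DIR = r"e:\BigFix Enterprise\BES Server\UploadManagerData\BufferDir\sha1"

def endpoint_folders(root):
    """Yield (computer name, folder path) for every upload folder that
    contains a Name_0_computername.txt marker file."""
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if name.lower() == "name_0_computername.txt":
                with open(os.path.join(dirpath, name)) as fh:
                    computer = fh.readline().strip()
                yield computer, dirpath

for computer, path in endpoint_folders(SHA1_DIR):
    print(computer, "->", path)
```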

Here is an example line (for each sha1 upload folder that contains a name_0_computername.txt marker file, it reads the computer name and emits an xcopy command copying that endpoint's applog files to a per-endpoint destination):
appendfile {("xcopy %22" & item 0 of it & "*applog_*.log%22 %22e:\applog" & item 1 of it & "%22 /E /C /I /Y%0d%0a" ) of ((pathnames of it, lines 1 of files "Name_0_computername.txt" of it as string) of folders of folders of it ) of folders "e:\BigFix Enterprise\BES Server\UploadManagerData\BufferDir\sha1" whose (exists file "name_0_computername.txt" of folders of folders of it)}

Repeating similar lines, one can change out the source and destination criteria. Similar lines can also be used to clean up afterwards.

Hopefully this gets you started. A bit tricky to get set up correctly, but once it is in place it works like a champ. We’ve been using this approach successfully for several years with up to several GB of data daily.

Just to point out, there are two APARs open in the context of the Upload Manager:

IV89719 - to fix problems with BufferDirectoryMaxSize and BufferDirectoryMaxCount settings
IV91535 - to add a cleanup mechanism based on BES Computer Remover approach

These problems were found in the latest patches of 9.2 and 9.5.

I’m just now updating our environment to 9.5.5 and have actually run into a prerequisite-check failure due to having too much in our Upload Manager directory. Looking at the APARs, I see they haven’t been touched since December 2016. Would any progress be posted there, or would it be communicated to the customer who had the APAR generated? I’d be curious to know whether anything is going to be done, or whether we’ll need to continue to manage that folder ourselves. Thanks.

An APAR is updated only once the patch/release that will include the fix has been identified.
To be notified about changes in an APAR’s status, you need to subscribe to the APAR.

The first APAR IV89719 has been fixed and delivered with 9.5 patch 5.

Work on the second APAR, IV91535, is ongoing; the current plan is to deliver it by the end of the year.

@mtrain, it looks like the RFE you opened was uncommitted. Does anyone know whether this issue is fixed in 9.5 and whether obsolete scan results are now cleaned up?

It looks like that idea is gone, since it lived on the IBM site, and it is unlikely that it was recreated.