Managing the UploadManagerData Folder

@BigFixNinja, thanks for checking into this for us.

What I’m concluding is that if BigFix is being used for ILMT/BFI, then absent any IBM recommendations, we really should consider increasing these settings to an arbitrarily large value to prevent the UploadManagerData folder warning from occurring:

_BESRelay_UploadManager_BufferDirectoryMaxSize 
_BESRelay_UploadManager_CompressedFileMaxSize 
_BESRelay_UploadManager_BufferDirectoryMaxCount 
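Before bumping the limits, it can help to see how close the folder actually is to them. Below is a minimal sketch (not a BigFix tool, just an illustration) that walks a BufferDir and tallies totals against the three limits; the threshold values are placeholders you would replace with whatever your deployment has set for the three `_BESRelay_UploadManager_*` settings:

```python
# Hypothetical audit script: walk a BufferDir and report totals against
# the three UploadManager limits. The thresholds passed in are placeholders;
# substitute your deployment's actual _BESRelay_UploadManager_* values.
import os

def audit_buffer_dir(root, max_total_bytes, max_file_bytes, max_count):
    """Return (total_bytes, file_count, oversized_files) for `root`."""
    total, count, oversized = 0, 0, []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            size = os.path.getsize(path)
            total += size
            count += 1
            if size > max_file_bytes:
                # would exceed CompressedFileMaxSize
                oversized.append(path)
    if total > max_total_bytes:
        print("over BufferDirectoryMaxSize: %d > %d" % (total, max_total_bytes))
    if count > max_count:
        print("over BufferDirectoryMaxCount: %d > %d" % (count, max_count))
    return total, count, oversized
```

Running this daily (or from a monitoring job) gives you a heads-up before the server-side warning trips.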

Your posting seems to answer the question about how the UploadManagerData directory is managed.

What about this question: When the “UploadManagerData directory full warning” occurs, its description says that the server will stall until this limit is increased (or data in UploadManagerData is deleted manually). Exactly what does it mean that “the server will stall”? Could someone describe what I would see when the server “stalls”?

–Mark

Hi all …

I submitted the RFE. See http://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=84100.

–Mark

I upvoted this. Also, I have noticed that if you delete all the contents of the UploadManagerData/sha1 folder and then run an import, SUA shows missing scan results for all the computers. So it expects the scan files to be sitting in that directory for existing computers. These files are referenced via the dbo.uploads table in BFEnterprise.
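For anyone cross-checking the table against the filesystem: as I understand the on-disk layout (worth verifying against your own server, since this is an assumption, not documented behavior), uploads land under `sha1/<last two digits of the client's computer ID>/<computer ID>/`. A tiny helper to build that path:

```python
# Sketch, based on the assumed BufferDir layout:
#   <BufferDir>/sha1/<computer_id mod 100, zero-padded>/<computer_id>/<file>
# Verify this against your own deployment before relying on it.
import posixpath

def upload_path(buffer_dir, computer_id, filename):
    """Build the expected location of an uploaded file under BufferDir."""
    bucket = "%02d" % (computer_id % 100)
    return posixpath.join(buffer_dir, "sha1", bucket, str(computer_id), filename)
```

This makes it easy to script a comparison between what dbo.uploads says should exist and what is actually on disk.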

Thanks @rohit, I’ll have to check this table for myself and see what’s there.

I have an open PMR regarding a larger than normal number of computers that ILMT reports “missing (software) scan results” and maybe this has something to do with it.

–Mark

Are you actually missing results from computers for SUA, or do you believe that the error message is there only because the scan files are no longer under the UploadManagerData folder? I don’t manage the SUA piece personally but I have been clearing out that directory for a few weeks now, and have not heard back from the SUA guys. Just curious if this is just cosmetic or clearing out the directory actually causes a data issue.

The issue with my PMR is this … For some endpoints, ILMT’s scan results widget reports missing software scan results even though when I log onto these endpoints, I can see that the software scan has run and I can see the software scan data files in the UploadManagerData folder. I don’t know how ILMT determines that a software scan has run but its results are missing. Maybe it’s a caching issue, where new results aren’t getting put in UploadManagerData, but I don’t know.

I’m not deleting anything from UploadManagerData unless I get guidance from IBM that it is OK to do so. Nothing definitive out of the PMR yet.

(ILMT is SUA “lite” in that it only sees IBM software; SUA can see other vendors’ software.)

–Mark

If the scans are periodic, they would replace the files you clean out. I have observed that after cleaning out the UploadManagerData folder and running a data import, I get a lot of missing scan results.

I am assuming you have no warnings in the import logs wrt the scan results. Keep us posted :slight_smile:

Correct @rohit … ILMT Import appears to run fine and no related errors are reported.

–Mark

Hi all … a review of the data I sent as part of the PMR seems to indicate that ILMT’s detection of software scans is working as designed … the issues seem to point to the scans themselves:

  • software scan timeouts (scan return code 9 or 29)
  • memory issues on two endpoints (scan return code of 125), although we checked these servers and there appears to be plenty of memory on them
  • scan return codes from the operating system, for instance:
    138 - SIGBUS
    139 - SIGSEGV
    143 - SIGTERM
    (not sure what this means right now, but that’s what we were told)
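Those last three codes follow the common Unix convention of “128 + signal number”: the scanner process was killed by a signal. A small sketch to decode them (note that signal numbers are platform-specific; the post’s 138 = SIGBUS matches AIX/Solaris numbering, where SIGBUS is 10, while on Linux SIGBUS is 7, i.e. exit code 135):

```python
# Decode a "128 + N" exit code back to the signal that killed the process.
# Signal numbering varies by platform, so run this on the same OS family
# as the endpoint that produced the code.
import signal

def signal_from_exit_code(code):
    """Return the signal name for a 128+N exit code, or None."""
    n = code - 128
    if n <= 0:
        return None
    try:
        return signal.Signals(n).name
    except ValueError:
        return None
```

So 139 decodes to SIGSEGV and 143 to SIGTERM on any POSIX system, consistent with what we were told.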

The general recommendation for most of these is to increase the scan timeout (from the default of 12 hours) and to check for large folders that can be excluded from the scan.

More to come, I’m sure.

–Mark

Does the UploadManagerData setting really mean anything? I am over the directory size limit by 1 GB and over the file-count limit by about 150,000 files. It seems like the warning is just reporting that the limits have been exceeded, but there is no functional effect.

In terms of the IBM License Metric Tool and BigFix Inventory, yes, it does … it is sometimes the reason those tools report “missing scan data”.

–Mark

What criteria are you using to determine which files should be removed?

I’m not removing any files from UploadManagerData. --Mark

@mbartosh, as I mentioned earlier in the thread, we run a daily script against our Upload Manager folder structure to do several things. First, it moves files out into respective shares by data type, based on naming convention. Using a regex for the uploaded-file naming convention is very helpful, as we have several different types of data uploaded daily that are co-mingled in the Upload Manager structure.

The second part of the script clears the structure so it is ready for tomorrow’s data. We have it timed so that our uploads take place overnight, and these scripts run first thing each morning. As the respective data is transferred to the alternative shares, it is then consumed by other sources.

Helpful hints include having a very structured naming convention for all uploaded data. Then build a script on the fly using relevance substitution with the regex of your convention; this can be batch, PowerShell, etc. Have a daily policy action on your root server dynamically recreate that script, then execute it each day. Rebuilding it daily ensures that endpoints that have been added or removed, or whose upload requirements have changed, are all accounted for. Be mindful of embedded quotes, carriage returns, etc. when building your script.

We found it useful to create a file containing the endpoint computer name in it that is included in the upload archive. This makes it easy to parse out which archive belongs to which endpoint at the root. Our endpoints have a very structured naming convention as well which we can also regex to lift out whichever endpoints we need by attributes.

Here is an example line:
appendfile {("xcopy %22" & item 0 of it & "*applog_*.log%22 %22e:\applog" & item 1 of it & "%22 /E /C /I /Y%0d%0a" ) of ((pathnames of it, lines 1 of files "Name_0_computername.txt" of it as string) of folders of folders of it ) of folders "e:\BigFix Enterprise\BES Server\UploadManagerData\BufferDir\sha1" whose (exists file "name_0_computername.txt" of folders of folders of it)}

Repeating similar lines, one can change out the source and destination criteria. Similar lines can also be used to clean up afterwards.

Hopefully this gets you started. A bit tricky to get set up correctly, but once it is in place it works like a champ. We’ve been using this approach successfully for several years with up to several GB of data daily.

Just to point out there are two APARs opened in the context of UploadManager

IV89719 - to fix problems with BufferDirectoryMaxSize and BufferDirectoryMaxCount settings
IV91535 - to add a cleanup mechanism based on BES Computer Remover approach

These problems were found in the last patches of 9.2 and 9.5 …

I’m just now updating our environment to 9.5.5 and have actually run into a pre-req failure due to having too much in our Upload Manager directory. Looking at the APARs, I see they haven’t been touched since December 2016. Would any progress be posted there, or would it be communicated to the customer who opened them? I’d be curious to know whether anything is going to be done, or whether we’ll need to continue to manage that folder ourselves. Thanks.

An APAR is updated only once the patch/release that will include the fix has been identified.
To be notified about APAR status changes, you need to subscribe to the APAR.

The first APAR IV89719 has been fixed and delivered with 9.5 patch 5.

Work on the second APAR, IV91535, is ongoing; the current plan is to deliver it by the end of the year.

@mtrain looks like the RFE you opened was uncommitted. Does anyone know if this issue is fixed in 9.5 and obsolete scan results are cleaned up?

Looks like that idea is gone, since it was on the IBM site, and it’s unlikely it was recreated.