My [simplistic] view is that the minor savings from the smaller Deltas may not be worth all the additional effort of dealing with them. In my BigFix infrastructure I can handle the larger files, and I would gladly drop the Deltas for simplicity.
But to answer your question, I don’t have enough data to say whether it is BF or M$. I can say that when we have to manually correct failed Deltas, we deploy the CU (via BF or while on the system). The CUs typically work, and when they do fail it’s obvious that it is an endpoint issue.
Now does that mean they run better on their own than in a Baseline? Could be, but we retry once if any component fails in the Baseline, and that helps. So failing a second time, after a reboot, kind of points to the Delta itself. Not to mention that the Deltas account for such a high share of the failure rate.
It would be laborious, but I gather I could manually deploy just the Deltas or just the CUs to some of my failed systems and see which one wins. That wouldn’t be fun… but I’ll try it on a small group.
Better would be running a patch cycle with only CUs (I keep historical data on the patching results each month) and seeing whether the numbers improve by a noticeable amount.
I’m sure it would be easier, and preferred, for IBM to just drop the Deltas, but there are customers that want them (probably).
UPDATE: I manually installed the Delta on a few Win10 1803 endpoints where the Delta deployment failed. The manual process failed as well with:
I then manually installed the CU, and that was successful. For the remaining endpoints, I deployed the CU via BigFix, changing the original relevance to “true”.
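To spell out that last step, here is a minimal sketch of the relevance in my custom copy of the CU Fixlet (the `/* */` comment assumes a client version that accepts relevance comments; the original applicability clauses it replaces will vary by Fixlet):

```
/* Custom copy of the CU Fixlet: the original applicability clauses are
   replaced with an always-true expression, so the Fixlet is relevant
   everywhere and I target only the endpoints where the Delta had failed. */
true
```

The action body stays untouched; targeting is entirely manual.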
My vote is to experience amnesia with regard to the Deltas.