OSD driver selection criteria?

Is there a document describing the criteria OSD uses to determine which drivers should apply to a platform?

I am deploying Windows 10 through OSD, both bare-metal and reimaging. I’ve encountered a problem with an inbox video driver, and I’ve uploaded a newer driver version through the Driver Library dashboard.

Although the new video driver is signed and has a higher version than the inbox driver, it is still not applied to clients when I deploy the image. I can install the driver manually after imaging without a problem, but the old driver causes an issue during OS installation when I have 8 monitors connected via DisplayPort and want the OS install to use the new driver.

I understand I could go through every OS image, select every model of machine with this video card, and force a driver binding, but I’d rather not do that, as I have a large number of images and a large number of models.

Is there any way to determine why OSD prefers the built-in driver over the updated version in the driver library? Back when I was using MDT natively, there were well-defined criteria for driver selection: signed drivers are preferred over unsigned drivers, then the most specific device match, then the highest driver version. What criteria does OSD use? Does OSD ignore out-of-box drivers whenever a matching built-in driver exists, or something like that?
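
To make concrete the MDT behavior I’m describing, here’s a rough sketch of that ranking as I understand it (the field names and scoring are my own illustration, not actual MDT or OSD code):

# Minimal sketch of MDT-style driver ranking (illustrative only; the
# field names and example versions are made up, not real MDT/OSD logic).
from dataclasses import dataclass

@dataclass
class Driver:
    signed: bool            # signed drivers win over unsigned
    match_specificity: int  # e.g. 4 = full VEN.DEV.SUBSYS match, 1 = VEN only
    version: tuple          # e.g. (23, 21, 13, 8813); higher wins

def rank_key(drv: Driver):
    # Sort descending: signed first, then most specific match, then version.
    return (drv.signed, drv.match_specificity, drv.version)

def select_driver(candidates):
    return max(candidates, key=rank_key)

# Example: a signed inbox driver vs. a signed, newer library driver with
# the same device match -- the newer version would win under these rules.
inbox = Driver(signed=True, match_specificity=4, version=(21, 21, 13, 4201))
library = Driver(signed=True, match_specificity=4, version=(23, 21, 13, 8813))
assert select_driver([inbox, library]) is library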

Hi Jason,
The intent of the OSD design was to favor stability (that is, keeping the previously working state of the system) over using the latest drivers available in the system.
If we used the latest available drivers (a newer driver found in the driver library) during deployments, the installed driver could change under the covers from one day to the next simply because a new set of drivers had been loaded.

What you describe, however, is an interesting use case. In recent years we have also seen cases where the built-in driver simply did not work on certain models and a manual bind operation was needed to overcome the problem. It is probably time to reconsider the precedence we give to built-in drivers during driver selection, and to provide a mechanism that makes addressing your problem much easier, such as some sort of explicit built-in override.

You should definitely open an enhancement request on this subject if you wish to pursue this matter.

Unfortunately, at this time your only option is to override the built-in status with a manual bind for each model/image combination.

Thank you very much for the explanation. I’ve submitted an RFE for this, at http://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=108792

I think it would not be terribly difficult to implement at least a subset of this functionality automatically, by allowing new selections in the Binding Grid for “Default across all hardware models” and “Default across all OS Images”, to set up a kind of “Global Manual Binding” or “Preferred Driver Binding”. This would allow us to set up a default binding (which should still be overridden by the existing Model/Image Binding), plus new Per-Image and Per-Model options to allow or block “Use Preferred Bindings”.

That would allow us to:

  • Specify one Image, along with “Default across all hardware models”, to bind a new driver to that image regardless of hardware model
  • Specify one Model, along with “Default across all images”, to bind the new driver to that model regardless of image (provided the driver supports that operating system)
  • Specify both “Default across all hardware models” and “Default across all Images” to mark the new driver as a preferred driver (whenever the device is present and the driver supports the OS)
  • Specify a Model, along with either one image or the “Default across all images” selection, and clear “Use Preferred Binding Rules” to block the preferred bindings for that particular model (and for either one or all Images)
  • Specify an Image, along with either a single Model or the “Default across all models” selection, and clear “Use Preferred Binding Rules” to block these rules on one particular Image.

The use of Preferred Binding Rules may lead to cases where more than one driver is applicable for a Model/Image/Device deployment. In that case, OSD should either use the Windows Setup rules (signed drivers preferred over unsigned, then the closest device match, then the highest driver version), or simply download all of the matching drivers and let Windows Setup sort it out. (If it downloads them all, each driver should go into a separate subdirectory; I’ve had an unrelated problem where two drivers, both selected, provided a binary of the same name, causing one of the drivers to fail to load.)
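
To make the precedence I have in mind concrete, here’s a rough sketch of the lookup order I’m imagining (every name and structure here is hypothetical; none of this is existing OSD behavior):

# Hypothetical resolution order for the proposed Preferred Driver Bindings.
# All names are illustrative; this is a sketch of the idea, not OSD code.
ANY_MODEL = "Default across all hardware models"
ANY_IMAGE = "Default across all OS Images"

def resolve_binding(grid, model, image, device_id):
    # 1. An explicit Model/Image binding always wins, as it does today.
    explicit = grid.get((model, image), {})
    if device_id in explicit:
        return explicit[device_id]
    # 2. A cleared "Use Preferred Binding Rules" flag blocks the fallthrough.
    if not explicit.get("use_preferred_bindings", True):
        return None
    # 3. Otherwise fall through the proposed defaults, most specific first.
    for scope in ((model, ANY_IMAGE), (ANY_MODEL, image), (ANY_MODEL, ANY_IMAGE)):
        if device_id in grid.get(scope, {}):
            return grid[scope][device_id]
    return None  # no preferred binding; use OSD's normal selection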

I’m now running into a problem where even Manual Binding doesn’t seem to work for me.

I have a few lines in the rbagent.trc:

[2017/08/07 12:23:31]  W <INF> Remote copy for [DevicesBindingGrid] started. Name [Win10x64_ENT_17_5_0.dbg], size [483], remote [tem://BESROOTSERVER:52311/Uploads/46529ddcf6ff57b36814c726a5c6abc2c10e561e/Win10x64_ENT_17_5_0.dbg.BFOSD/46529DDCF6FF57B36814C726A5C6ABC2C10E561E|483|5803CC5185C3FD28A7EA84E08E8387896085DE9797B903FC5D433FF4B0CEE0C7/Win10x64_ENT_17_5_0.dbg], local [local://root/D$/Deploy_work/Win10x64_ENT_17_5_0.dbg]
...
[2017/08/07 12:24:52]  W <NOT> Model [HP Z420 Workstation] not found in binding grid [local://root/D$/Deploy_work/Win10x64_ENT_17_5_0.dbg].

But when I grab the binding grid myself from that Uploads folder, its contents show:

[dbg]
[HP Z400 Workstation]
modelName="HP Z400 Workstation"
10DE.0FFD.103C.0967=1C5D388B75BA5A3DF788099E7119BFC69FE181FA
[HP Z420 WORKSTATION]
modelName="HP Z420 WORKSTATION"
10DE.0FFD.103C.0967=1C5D388B75BA5A3DF788099E7119BFC69FE181FA
[HP Z420 Workstation]
modelName="HP Z420 Workstation"
10DE.0FFD.10DE.0967=1C5D388B75BA5A3DF788099E7119BFC69FE181FA
10DE.0FFD.103C.0967=1C5D388B75BA5A3DF788099E7119BFC69FE181FA

[Root]
Arch="x86-64"
imageName="Win10x64_ENT_17_5_0.WIM"

I’m not sure why rbagent is not finding the HP Z420 Workstation model in that grid, unless it’s confused by the fact that there appear to be two different Z420 models in my infrastructure, one reporting the model name entirely in uppercase and the other in mixed case. We run both Windows and Linux on this model, so I don’t know whether that’s what causes the mixed-case entries, or whether the model name is reported with different casing depending on BIOS level or something like that.
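
As a sanity check on the case theory, I re-parsed the grid with a quick throwaway script (my own toy parser, not rbagent’s actual logic):

# Toy parser for the .dbg grid (my own illustration, not rbagent's code).
# Keys appear to be VENDOR.DEVICE.SUBVENDOR.SUBDEVICE PCI IDs (10DE = NVIDIA,
# 103C = HP) and the values appear to be SHA-1 driver identifiers.
sections = {}
current = None
with open("Win10x64_ENT_17_5_0.dbg") as grid:
    for line in grid:
        line = line.strip()
        if line.startswith("[") and line.endswith("]"):
            current = line[1:-1]
            sections.setdefault(current, {})
        elif "=" in line and current:
            key, _, value = line.partition("=")
            sections[current][key] = value.strip('"')

model = "HP Z420 Workstation"
print(model in sections)  # True: the exact-case section does exist
print([s for s in sections if s.lower() == model.lower()])
# -> ['HP Z420 WORKSTATION', 'HP Z420 Workstation']
# If rbagent matches section names case-insensitively but stops at the first
# (all-caps) hit, it might never consult the mixed-case entry that carries
# the 10DE.0FFD.10DE.0967 binding.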

I’ve opened PMR 21612,004,000 with more complete logs, if that helps.