Many board partners do it, hardly any end users know how it really works, yet everyone is talking about it: GPU binning, i.e. the pre-selection and categorisation of GPUs within a model series according to various criteria. Figuratively speaking, this creates different clusters: the very good chips, the less good ones, and those that merely meet the baseline specifications, e.g. for achievable clock rates.
So far, so familiar. Some board partners use special test fixtures that let them contact the chips in advance without soldering and measure the electrical resistance or voltage drop between certain pins. Combined with a steadily growing in-house database of these measurements and the real-world data from the cards assembled later, reasonably reliable statements about a chip's overclocking quality can be made quite quickly.
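The measure-and-compare procedure described above can be sketched roughly as follows. This is a hypothetical illustration only: the bin names follow the article, but the thresholds, measurement values, and the idea of using a mean voltage drop are all invented assumptions, not the partners' actual method.

```python
# Hypothetical sketch of the binning idea: compare a chip's measured
# pin-to-pin voltage drops against ranges derived from a reference
# database of chips whose overclocking quality is already known.
# All thresholds and values are invented for illustration.

from statistics import mean

# Invented ranges: mean voltage drop (mV) mapped to a quality bin.
REFERENCE_BINS = {
    "OC":       (0.0, 18.0),        # low leakage -> high clock potential
    "light-OC": (18.0, 24.0),
    "standard": (24.0, float("inf")),
}

def classify_chip(voltage_drops_mv):
    """Assign a chip to a bin based on its mean measured voltage drop."""
    avg = mean(voltage_drops_mv)
    for bin_name, (lo, hi) in REFERENCE_BINS.items():
        if lo <= avg < hi:
            return bin_name
    return "standard"

# Example: three measurements taken between different pin pairs.
print(classify_chip([16.2, 17.1, 15.8]))  # -> OC
print(classify_chip([25.0, 26.3, 24.9]))  # -> standard
```

In practice the database would of course hold far more dimensions than a single averaged value, but the principle of sorting new chips by comparison with known-good measurement profiles stays the same.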
One effect of such selection procedures is that manufacturers offer different products within the same chip class. Zotac is a good example: binning is done in two stages, separating the bread-and-butter cards from the higher-end Amp! series. That these top models are in turn split into Amp! and Amp! Extreme is the result of testing the fully assembled cards, where the best specimens are cherry-picked once more after the initial functional tests.
But Nvidia wouldn't be Nvidia if it didn't turn this into a business model of its own. After all, the chips are already sorted after production (fully enabled dies for Quadro, laser-cut dies for consumer products), so it seems obvious to sort them by overclocking potential as well. And as I have learned from several independent sources, board partners are now being offered three quality levels.
At the top end of the pixel food chain are the OC chips, followed by those suitable for light overclocking, and finally the standard chips, where overclocking is unlikely to succeed. Even if these classifications are probably based only on fairly accurate forecasts and manufacturing data, board partners have to pay for the respective quality of the chip-and-memory bundles at the time of purchase. The better the overclocking prospects, the higher the price. Logical.
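The tier-based pricing described above boils down to a simple surcharge per quality level. The tier names come from the article; the base price and surcharge percentages below are invented purely to illustrate the "better bin, higher price" logic.

```python
# Hypothetical illustration of the three quality tiers the board
# partners are reportedly offered. Tier names follow the article;
# the base price and surcharges are invented for illustration.

BASE_PRICE = 100.0  # invented price for a standard chip+memory bundle

TIER_SURCHARGE = {
    "standard": 0.00,   # OC unlikely to succeed
    "light-OC": 0.10,   # modest overclocking headroom
    "OC":       0.25,   # top of the pixel food chain
}

def bundle_price(tier):
    """Price a chip-and-memory bundle: better OC prospects cost more."""
    return BASE_PRICE * (1.0 + TIER_SURCHARGE[tier])

for tier in ("standard", "light-OC", "OC"):
    print(f"{tier}: {bundle_price(tier):.2f}")
```

The point is simply that the premium is paid up front by the board partner, before a single card is assembled, which is exactly why the lottery aspect disappears for the end customer.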
There is nothing dishonourable about this, but it limits the GPU lottery from the outset. Lucky finds will therefore become rare, and flashing a cheaper card up to a manufacturer's respective top model will likewise become obsolete. Together with 1-Click-OC, this may put a real damper on a popular sport. You get what you pay for. Sounds fair, but it's almost boring.