Yeah, we've had this discussion before here on B3D, this is not true. NVIDIA doesn't use any more software scheduling than AMD.
It's interesting to know about the scheduling (i.e. no real difference), but it's definitely true that Nvidia DX11 can gain more from more CPU cores, and that this comes with some degree of overhead (even if slight). So if you can gain performance from additional CPU cores, it follows that you can lose some of those gains if you lose CPU (or the ability to exploit that CPU).
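For anyone curious what the D3D11-level knob is here: whether the driver natively supports multithreaded command lists (rather than having the runtime emulate them) is exposed through the threading caps. A minimal sketch of my own, assuming an ID3D11Device has already been created:

```cpp
#include <d3d11.h>
#include <cstdio>

// Minimal sketch: assumes 'device' is an already-created ID3D11Device.
// D3D11_FEATURE_THREADING reports whether the driver itself supports
// concurrent resource creation and native command lists; if not, the
// D3D11 runtime falls back to emulating command lists in software.
void ReportThreadingCaps(ID3D11Device* device)
{
    D3D11_FEATURE_DATA_THREADING caps = {};
    if (SUCCEEDED(device->CheckFeatureSupport(
            D3D11_FEATURE_THREADING, &caps, sizeof(caps))))
    {
        std::printf("Concurrent creates: %d, driver command lists: %d\n",
                    caps.DriverConcurrentCreates, caps.DriverCommandLists);
    }
}
```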
I've done some lazy Googling and found a "new to me" Intel developer presentation on DX11, where they go into some detail investigating this.
https://www.intel.com/content/www/u...es-of-directx-11-multithreaded-rendering.html
Interestingly, there are indeed cases where you can lose some of the advantages of Nvidia's approach. Intel say that when work is divided by "chunks" (which scales best with core count) rather than by passes, order-dependent work (they use the example of semi-transparent objects) can cause the deferred approach to lose efficacy.
"The multithreaded rendering method that divides rendering tasks by Chunk can achieve a significant performance improvement, and the performance is not affected by the number of passes and increases with the increase of the number of CPU cores. The shortcoming is that for certain situations that require orderly rendering (such as rendering semi-transparent objects), the strategy of distributing Chunks is limited, and it is easy to lose the load balance among the threads, thereby affecting the performance scalability."
I have no idea what GoW is doing, but if transparency is involved it could conceivably be hurting NXGamer's frame rates, because he's not getting the benefits that Nvidia's driver normally delivers. And this wouldn't necessarily show the CPU as a bottleneck, depending on how it's being sampled, averaged etc., even though a faster CPU might still boost frame rates.
I mean, accurate rendering of dense vegetation through fog has to involve some degree of "orderly rendering", doesn't it? Could it be a factor?
Perhaps AMD wouldn't be faring any better in absolute terms (I admit I don't know), but I'm more convinced than ever that something about his CPU is holding the 2070 back, because his results don't do it justice IMO.
This video is nothing more than some naive conjectures.
Perhaps, but some of them turned out to be correct.