Nvidia's 3000 Series RTX GPU [3090s with different memory capacity]

On a powerhouse system with fast RAM, large caches, and lots of CPU cores (but still a hard ceiling on single-core performance), you want to let Nvidia's DX11 driver run wild and spread the load across as many cores as possible, even if doing so consumes an entire CPU core's worth of overhead.
It's my understanding that Nvidia's multicore optimizations for DX11 never expanded beyond 4 cores and were heavily tuned for 4-core Intel CPUs, since those were the gaming staple for eons before AMD finally forced Intel out of stagnation.
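Nvidia's driver threading isn't public, but the API-level analogue of what it's doing internally is D3D11's deferred-context/command-list path. A minimal sketch of that mechanism (my own illustrative function names, not the driver's actual internals):

#include <d3d11.h>

// Worker thread: record draw calls into a deferred context, then bake them
// into a command list. The immediate context stays free for other work.
void RecordOnWorkerThread(ID3D11Device* device, ID3D11CommandList** outList)
{
    ID3D11DeviceContext* deferred = nullptr;
    device->CreateDeferredContext(0, &deferred);

    // ... issue state changes and draw calls on 'deferred' here ...

    deferred->FinishCommandList(FALSE, outList);
    deferred->Release();
}

// Main thread: replay the pre-recorded work with a single cheap call.
void SubmitOnMainThread(ID3D11DeviceContext* immediate, ID3D11CommandList* list)
{
    immediate->ExecuteCommandList(list, FALSE);
    list->Release();
}

The driver presumably does something along these lines on the app's behalf even when the game only ever touches the immediate context, which would be where that extra core's worth of overhead comes from.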
 
Asus believes Nvidia's yields being too low is one of the big issues with availability.
https://www.tomshardware.com/news/asus-nvidia-geforce-rtx-shipments-drop

"As for when we will be able to resolve this gap [between supply and demand], it is really hard to tell," an executive from the company said. "Our guess is that the gap might have been caused by lower yields upstream. As for when [Nvidia] can increase that yield is something hard for us to predict."
 
What about stopping direct sales to miners, for a start? (By that I mean a lot of cards don't even hit retail.)
 
Yet they still haven't fulfilled even all the launch day orders (RTX 3080).
Who's "they"? Asus? Nvidia to Asus? All AIBs? One eshop which disclosed this information?
Note that we're discussing a literal guess on part of Asus ("Our guess is...") which is not backed up by any of the available data.
For example if the issue would in fact be yields we would probably saw a launch of 3070Ti already on a further cut down GA102 die.
 
Who's "they"? Asus? Nvidia to Asus? All AIBs? One eshop which disclosed this information?
Note that we're discussing a literal guess on part of Asus ("Our guess is...") which is not backed up by any of the available data.
For example if the issue would in fact be yields we would probably saw a launch of 3070Ti already on a further cut down GA102 die.
At least two major e-tailers, one serving several countries (Proshop) and the other being the biggest in Finland (Jimm's). At least Asus, Gigabyte, and MSI still have unfulfilled RTX 3080 orders from launch.
 
This is a much better benchmark on HUB's part and there are definitely several interesting results worth discussing, especially those where the tables turn between DX11 and DX12.
It also somewhat proves that the issue is only happening in D3D12.
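If it really is confined to D3D12, one plausible (and unconfirmed) structural reason: D3D12 moves command recording onto the application's own threads, so any per-draw CPU cost the driver adds lands directly on those threads and can't be redistributed by driver worker threads the way it can in DX11. A minimal sketch of that recording model (illustrative names):

#include <d3d12.h>

// App-owned worker thread: each thread records into its own allocator and
// command list. Whatever CPU the driver burns per draw is paid right here.
void RecordThread(ID3D12Device* device, ID3D12GraphicsCommandList** outList)
{
    ID3D12CommandAllocator* alloc = nullptr;
    device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                   IID_PPV_ARGS(&alloc));

    ID3D12GraphicsCommandList* list = nullptr;
    device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                              alloc, nullptr, IID_PPV_ARGS(&list));

    // ... record draws here; the driver can't move this work elsewhere ...
    // (the allocator must stay alive until the GPU finishes the list)

    list->Close();
    *outList = list;
}

// Main thread then submits everything at once:
//   queue->ExecuteCommandLists(listCount, lists);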
 
So maybe they should start fulfilling these orders instead of bulk selling cards to miners and blaming Nvidia for yields? As an idea.
Do you have any actual proof of any of the mentioned companies selling cards in bulk, or in any other direct way, to miners? I know some companies have done so, but I haven't heard of MSI, Gigabyte or Asus doing it.
 
Still, it is conjecture, and arguments built on it do not constitute proof or knowledge. It's basically a waste of internet bandwidth, and it's also why I find the know-it-all attitudes from some here far removed from reality.
 
Do you have any actual proof of any of the mentioned companies selling cards in bulk, or in any other direct way, to miners? I know some companies have done so, but I haven't heard of MSI, Gigabyte or Asus doing it.
1. What are the chances that the three biggest AIBs ignore mining and let the smaller ones eat up all the profits from it?
2. Do you have any proof of yield issues? Asus is "guessing", and every other piece of data we have doesn't show any sign of a supply problem - in the sense of there actually being a problem beyond production capacity in general.
 
Still, it is conjecture, and arguments built on it do not constitute proof or knowledge. It's basically a waste of internet bandwidth, and it's also why I find the know-it-all attitudes from some here far removed from reality.


The only proof would be Gigabyte, Asus, xxx, admitting to doing it. They won't right now; it would be a PR nightmare. Reading some big miners or mining groups saying they buy from them is enough for me.
It's a tech forum; most of the stuff we talk about is without proof, even very technical stuff (a lot of low-level architectural details are never divulged). Educated guesses are fine, and they're all we have.
 

Some very interesting results.


I'm not sure I like this one. I'll have to watch it when I can pay more attention, but they seem to be making a lot of assumptions about "software scheduling" in Nvidia's drivers, an idea that seems to have been debunked as a misunderstanding everywhere I've looked. I think there are a lot of ways they could expand their testing to isolate the issue, but they seem locked into this software-scheduling idea, which has steered the testing in a particular direction.

Edit: For example, in the following image the results I've highlighted are GPU-limited. There's a lot of data on the graph, but some of it seems irrelevant. I don't know why they didn't try 720p to see if it scales differently and whether they can push the lower-end GPUs into CPU bottlenecks (see the quick check sketched after the image). Maybe the 5600 XT would scale even higher.

[Attached image: HUB benchmark chart with the GPU-limited results highlighted]
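For what it's worth, the GPU-limited sanity check I mean is trivial to formalize: if average fps barely moves when you drop the resolution, the result was GPU-bound and tells you nothing about driver/CPU overhead. Something like this, with a made-up tolerance and made-up numbers:

#include <cmath>
#include <cstdio>

// CPU-limited results stay flat when the GPU load shrinks; GPU-limited ones
// scale up. The 5% tolerance is illustrative, not taken from HUB's data.
bool LooksCpuLimited(double fpsHighRes, double fpsLowRes, double tolerance = 0.05)
{
    return std::fabs(fpsLowRes - fpsHighRes) / fpsHighRes <= tolerance;
}

int main()
{
    // Hypothetical 1080p vs 720p averages, for illustration only.
    std::printf("%s\n", LooksCpuLimited(88.0, 90.0)  ? "CPU-limited" : "GPU-limited");
    std::printf("%s\n", LooksCpuLimited(88.0, 131.0) ? "CPU-limited" : "GPU-limited");
}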
 
I'm not sure I like this one. I'll have to watch it when I can pay more attention, but they seem to be making a lot of assumptions about "software scheduling" in Nvidia's drivers, an idea that seems to have been debunked as a misunderstanding everywhere I've looked. I think there are a lot of ways they could expand their testing to isolate the issue, but they seem locked into this software-scheduling idea, which has steered the testing in a particular direction.
The benchmark results are solid, their analysis is not.

It seems that they are also deleting any comments that cast doubt on their analysis and their idea that this is due to scheduling. But I dunno, I just read it elsewhere.
 
I'm not sure I like this one. I'll have to watch it when I can pay more attention, but they seem to be making a lot of assumptions about "software scheduling" in Nvidia's drivers, an idea that seems to have been debunked as a misunderstanding everywhere I've looked. I think there are a lot of ways they could expand their testing to isolate the issue, but they seem locked into this software-scheduling idea, which has steered the testing in a particular direction.

There was a previous video of theirs (I think it was just a general news roundup/discussion) where they talked about this and basically echoed the NerdTechGasm video explanation as the reason - not "this might be why this is happening" but "the hardware vs. software scheduler is why it's happening, and Nvidia can't fix this until a full hardware refresh", full stop. So they seem particularly wed to this explanation. You would think this would be a good opportunity to reach out to other game developers for their insight, even anonymously.

Again, a simple addition that I think could provide more insight would be to test all these games at a locked framerate of 60 fps and record per-core CPU usage and frametimes over a run, so you can see how hard the CPU has to work to deliver the same performance. You still want uncapped results, of course, to determine whether there's actually a bottleneck in the final fps, but this method would at least separate out the CPU workload more clearly, imo.
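To sketch the frametime-logging half of that idea (per-core usage would have to come from an external tool like PresentMon or Windows performance counters; names and numbers here are illustrative):

#include <windows.h>
#include <cstdio>

int main()
{
    LARGE_INTEGER freq, prev, now;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&prev);

    const double targetMs = 1000.0 / 60.0;      // 60 fps cap = 16.67 ms budget
    FILE* log = std::fopen("frametimes.csv", "w");
    std::fprintf(log, "frame,ms\n");

    for (int frame = 0; frame < 3600; ++frame)  // one minute at 60 fps
    {
        // ... render the frame here ...

        // Busy-wait out the rest of the frame budget; Sleep() granularity is
        // too coarse for frametime work, so a spin keeps the cap accurate.
        double elapsedMs;
        do {
            QueryPerformanceCounter(&now);
            elapsedMs = (now.QuadPart - prev.QuadPart) * 1000.0 / freq.QuadPart;
        } while (elapsedMs < targetMs);

        std::fprintf(log, "%d,%.3f\n", frame, elapsedMs);
        prev = now;
    }
    std::fclose(log);
}

With both vendors' drivers pinned to the same 60 fps output, the frametime trace plus per-core usage would show which one works the CPU harder for identical results.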
 