NVidia Ada Speculation, Rumours and Discussion

I will gladly use DLSS performance mode when it doesn't break up in motion. As it stands, I don't use anything else other than Quality and whatever is below Balanced might as well not exist.
Yes. Quality is acceptable in something like CP2077 because of the value RT brings to the game.

Balanced, Performance and Ultra Performance may as well not exist for me.
 
@Flappy Pannus I think he means native vs native against the 3090ti. It's actually 168%, not double, but looking at the numbers I can see how it looks roughly double.
It’s not 168%. That would be 2.68x as fast. Apples to apples a 4090 is 50-70% faster than a 3090ti judging by the limited examples we have.
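For what it's worth, most of the disagreement is just the two possible readings of "168%". A quick sanity check (pure arithmetic, no benchmark data assumed):

```python
# Two readings of "the 4090 is 168% vs the 3090 Ti"
baseline = 1.0                           # 3090 Ti as the reference

pct_of = 1.68 * baseline                 # "168% OF the 3090 Ti" -> 1.68x, i.e. 68% faster
pct_faster = baseline + 1.68 * baseline  # "168% FASTER than the 3090 Ti" -> 2.68x

print(f"168% of baseline : {pct_of:.2f}x ({pct_of - 1:.0%} faster)")
print(f"168% faster      : {pct_faster:.2f}x")
```

Read as "168% of", the number lines up with the 50-70% faster range quoted above; read as "168% faster", it would indeed mean 2.68x.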
 
I will gladly use DLSS performance mode when it doesn't break up in motion. As it stands, I don't use anything else other than Quality and whatever is below Balanced might as well not exist.
I find it in general pretty hard to notice the difference between DLSS modes. The only changes which are visible are DLSS on vs off and DLSS Ultra Performance.
 
I find it in general pretty hard to notice the difference between DLSS modes. The only changes which are visible are DLSS on vs off and DLSS Ultra Performance.

When I had a 1080p monitor, anything less than Quality was horrible. Now that I have a 1440p monitor, I can entertain Balanced sometimes. On a 4K monitor I think Balanced would be pretty good as a tradeoff for higher settings or overall performance.
 
When I had a 1080p monitor, anything less than Quality was horrible. Now that I have a 1440p monitor, I can entertain Balanced sometimes. On a 4K monitor I think Balanced would be pretty good as a tradeoff for higher settings or overall performance.
Yeah, I'm on 4K, and in the case of my TV I sometimes struggle to notice any difference from native 1080p at my viewing distance. So DLSS certainly works fine.
 
I find it in general pretty hard to notice the difference between DLSS modes. The only changes which are visible are DLSS on vs off and DLSS Ultra Performance.
At 4K it’s harder since as you know it upscales from a higher res anyway. I game at 3440x1440. You notice the difference going from Quality to Balanced. Performance and Ultra Performance are nonstarters for me.
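To put rough numbers on why the same mode reads so differently at different output resolutions: DLSS scales each axis by a fixed factor per mode (commonly cited as ~0.67 Quality, ~0.58 Balanced, ~0.50 Performance, ~0.33 Ultra Performance; exact values can vary by title and DLSS version). A small sketch of the internal render resolutions that implies:

```python
# Approximate internal render resolution per DLSS mode.
# Per-axis scale factors are the commonly cited ones, not guaranteed for every game.
MODES = {
    "Quality": 0.667,
    "Balanced": 0.58,
    "Performance": 0.50,
    "Ultra Performance": 0.333,
}

OUTPUTS = [(1920, 1080), (3440, 1440), (3840, 2160)]

for w, h in OUTPUTS:
    print(f"Output {w}x{h}:")
    for mode, s in MODES.items():
        print(f"  {mode:<17} -> {round(w * s)}x{round(h * s)}")
```

Which is the usual explanation for the pattern above: Performance at 4K still works from roughly 1080p, while anything below Quality at 1080p output is reconstructing from well under 720p.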
 
When I read the piece and saw those reported temps at 50-57C with full clocks, my first thought was 'well ok, guess that's over ambient', but with the chart showing the 3090 Ti at its expected 70-75C by comparison, apparently not. That is an astoundingly low temperature range for a modern GPU running flat-out, even before factoring in DLSS 3.

Of course it's omg xboxone huge, but for a card this expensive and power hungry, I'll give Nvidia kudos for at least seemingly over-engineering the cooling solution. It seems they could have gotten away with even a 2.5-slot cooler, but if those temps are accurate, it doesn't look like it will be difficult to keep these cards very quiet under full load. So points for that.

[Image: temperature comparison chart, 4090 vs 3090 Ti]
Remains to be seen what the temperatures actually are, I'd say.
These look like the card was running with its fans at 100% RPM, or under a water block.

Worth remembering that the clock is dependent on the temperature, and thus you get higher performance with better cooling - something which Nvidia-controlled demos would be inclined to take advantage of.
A 4090 at 2.85 GHz would be hitting 93 TFs instead of 82.6 - a boost of about 12.5%.
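As a rough check of that figure (assuming the 4090's 16384 FP32 ALUs and the ~2.52 GHz advertised boost; the 12.5% above comes from 93/82.6, computing straight from the clocks lands closer to 13%):

```python
# FP32 throughput = 2 ops per FMA * ALU count * clock
ALUS = 16384                      # AD102 as configured on the 4090

def tflops(clock_ghz: float) -> float:
    return 2 * ALUS * clock_ghz / 1000  # 2 * ALUs * GHz / 1000 -> TFLOPS

stock = tflops(2.52)              # ~82.6 TF, the advertised figure
demo = tflops(2.85)               # ~93.4 TF if those demo clocks hold

print(f"stock   : {stock:.1f} TF")
print(f"2.85 GHz: {demo:.1f} TF ({demo / stock - 1:.1%} uplift)")
```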
 
I don’t believe for a second that these cards will generally run at 50° on the standard cooler at normal fan settings in a typical user’s case.
 
Temps will come down to your case cooling, ambients and fan/voltage profile. And of course the workload itself.

If you look at the CP2077 comparison linked earlier, one of them ‘only’ draws 350W. Given the size and weight of the heat sink on the FE, and that GPUs are direct die with a large surface area, low temps on the core aren’t a shock.

What I would be concerned about are the AIO models with 240mm rads. Even if the heat transfer to the block is efficient, the heat still has to get out of the loop. Non-FE cards which will hit 550W+ will heat-soak such a setup during extended use.
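The heat-soak worry is easy to put a rough number on. A back-of-the-envelope sketch, with the coolant mass and radiator capability both being assumptions rather than measurements:

```python
# Rate of coolant temperature rise when the radiator can't shed the full load.
# Every number here is an illustrative assumption, not a measurement.
SPECIFIC_HEAT_WATER = 4186        # J/(kg*K)
coolant_mass_kg = 0.4             # assumed total coolant in a 240mm AIO loop

gpu_power_w = 550                 # hypothetical OC AIB card
radiator_dissipation_w = 450      # assumed rejection at a quiet fan curve

net_w = gpu_power_w - radiator_dissipation_w
deg_c_per_min = net_w / (coolant_mass_kg * SPECIFIC_HEAT_WATER) * 60
print(f"Coolant warms ~{deg_c_per_min:.1f} C/min until fans ramp or the card throttles")
```

Even a modest 100W shortfall warms the loop by a few degrees per minute, which is exactly the slow creep you would see over an extended session.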
 
If you look at the CP2077 comparison linked earlier, one of them ‘only’ draws 350W.
This result is also a bit suspect to me since it suggests that the GPU isn't actually loaded to 100% - which is what you would expect in any benchmark.
I can believe that there would be some wattage drop when using DLSS due to the lower rendering resolution and thus less data being moved to and from VRAM, but 110W is a bit too much for that.
As it is, it looks more like the game is CPU limited in this mode to a significant degree. Or maybe limited by something like NVOF?
Anyway, none of these benchmarks are "true benchmarks".
 
I don’t believe for a second that these cards will generally run at 50° on the standard cooler at normal fan settings in a typical user’s case.
Most of these 4090 cards have monstrous 3+ slot coolers with vapor chambers and 10+ heat pipes. They are designed for 600W OC; at stock speeds they are overkill. Really nothing surprising about these reported temps.
What is surprising is that every AIB is going all-out instead of also offering reasonable solutions like previous gen. 80-85 degrees Celsius is perfectly fine for a GPU, and it lowers the cooler cost significantly.
 
Just some thoughts :

RTX 3070: ~95% of GA104 -> 392mm2 on Samsung 8nm with 8GB of GDDR6 -> 500 USD
RTX 4080 16GB: ~90% of AD103 -> 378mm2 on TSMC 5nm with 16GB of GDDR6X -> 1200 USD

I am not saying the RTX 4080 16GB should be 500 USD, but a 800-900 USD price would have been much better.
TSMC 5nm must be triple the price of Samsung's 8nm. Nvidia's margins will be low if the 4000 series doesn't sell well.
 
It is at least 3x. I read on Twitter that Apple paid double going from 10nm to 7nm, and double again from 7nm to 5nm. TSMC 4nm is currently TSMC's cutting-edge process, so the price will be as high as possible... Samsung's 8nm was 18 months old when Ampere launched.
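Taking the die sizes and prices quoted a few posts up, the card price per mm² of GPU silicon works out to roughly 2.5x higher this generation, which is at least in the same ballpark as a ~3x wafer cost once you remember the die is only part of the bill of materials. A crude sketch using just those figures:

```python
# Card price per mm^2 of GPU die, using the figures quoted earlier in the thread.
# Deliberately crude: ignores memory, VRM, cooler, margins, and yield differences.
cards = {
    "RTX 3070 (GA104, Samsung 8nm)": (500, 392),      # (USD, die area in mm^2)
    "RTX 4080 16GB (AD103, TSMC 5nm)": (1200, 378),
}

price_per_mm2 = {}
for name, (price_usd, area_mm2) in cards.items():
    price_per_mm2[name] = price_usd / area_mm2
    print(f"{name}: ${price_per_mm2[name]:.2f} per mm^2")

r3070, r4080 = price_per_mm2.values()
print(f"Generational increase: {r4080 / r3070:.2f}x")
```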
 
Seen some guy on Twitter claim that Nvidia are moving the low/mid tier 3000 series GPUs to 5nm and will use the shrink to increase clock speeds by ~50-60% and increase performance that way.

That would keep DLSS 3 exclusive to the bigger 4000 series cards while offering a 40-50% performance increase through clock speed bumps for the low/mid 4000 cards.

If they did that, I'm not sure how I would feel about it: on the one hand they would be offering a good performance uplift, but at the same time still locking out the new tech innovations.
 
Seen some guy on Twitter claim that Nvidia are moving the low/mid tier 3000 series GPUs to 5nm and will use the shrink to increase clock speeds by ~50-60% and increase performance that way.

That would keep DLSS 3 exclusive to the bigger 4000 series cards while offering a 40-50% performance increase through clock speed bumps for the low/mid 4000 cards.

If they did that, I'm not sure how I would feel about it: on the one hand they would be offering a good performance uplift, but at the same time still locking out the new tech innovations.
AD106 looks like it's only going to be about as fast as the 3070 in rasterisation. I doubt that will cut it against Navi 33, so it makes sense to use the 3000 series to fill the gap.
 