NVidia Ada Speculation, Rumours and Discussion

The NVIDIA Ada Lovelace Architecture has a new optical flow accelerator, NVOFA, that is 2.5x more performant than the NVIDIA Ampere Architecture NVOFA. It provides a 15% quality improvement on popular benchmarks including KITTI and MPI Sintel.
 
This new fake frame thing will also cause weird and unique situations.
Imagine Flight Simulator's super CPU-intensive regions, such as sitting in a Boeing cockpit at JFK Airport in NYC. That combination, from what I remember, becomes CPU-bound near 60-70 FPS on a 12900K. Now, in that region with ultra settings, a 3090 Ti can almost get 4K/60.
The 4080 will suddenly provide 120+ FPS in that place (fake, predicted or whatever it is).
That kind of performance will not be replicable with anything from AMD. Even if AMD pushed 5x the power, they still wouldn't be able to budge above 70 FPS in that location due to the huge CPU limitation.
This is also apparent in their Spider-Man video. Supposedly they're getting around 200 FPS with DLSS + ray tracing. Even the mightiest 12900K chokes around 90-100 in that game's super CPU-intensive areas. This creates the problem that even if AMD delivers a super-duper ray tracing/raster GPU, once they get into that very high refresh rate/frame rate territory they will appear to lag behind due to huge CPU limitations.
I gather either AMD will have to invent this gimmick for themselves, or they will have to bank on it being a flop with end users. If it does not end up a flop and most people like, accept and embrace it, then it will be really troublesome for AMD, sadly.
If it ends up being really good and really usable, it will also create a weird situation with reviewers, I'd assume.
Well, until FSR 2.0, AMD was also at a 2x performance disadvantage in situations where DLSS could be leveraged. The key thing helping them was that reviewers were still using non-upscaled results for their core benchmarks, which I think will continue. With DLSS 3.0, it looks like the extra boost is only around 50%, and on the CPU side some of that will be recovered by CPU performance improvements, assuming consoles are the limiting factor.
 
They do nothing of the sort. The decoder sends uncompressed video to the screen at its native fps. Otherwise, how would frame interpolation even work with games and a PC signal?
Modern motion-smoothing tech does exactly that. You do know that even terrestrial TV is an MPEG-4 stream these days (and before that MPEG-2)? The whole thing relies on the temporal information contained in a compressed video stream.

It doesn't work for an uncompressed stream of images, but TVs do other motion compensating nonsense such as cadence detection/adjustment, (selective) frame doubling, black/red frame insertion or other tricks to exploit the visual retention of discrete images in humans.

DLSS and other frame amplification/temporal aggregation techniques are, in essence, making game rendering more like a compressed video stream, whereas before it was more like an uncompressed one. Work smarter, not (un)necessarily harder.

Edit: Unless you mean post-process motion estimation techniques, which would compare rendered images (and, yes, calculate motion vectors between blocks in them) to aid interpolation, but the whole point of game modes and suchlike is to mostly bypass those things because they introduce lag and mostly suck (despite all the buzzwords used to sell them).
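To make that "motion vectors between blocks" idea concrete, here's a deliberately naive sketch (my own illustration, not how any particular TV or DLSS 3 actually does it): brute-force block matching between two greyscale frames, then a crude half-way interpolation using those vectors.

```python
import numpy as np

def block_motion_vectors(prev, curr, block=16, search=8):
    """For each block in `curr`, find the offset into `prev` that minimises
    the sum of absolute differences (SAD) within a +/- `search` pixel window."""
    h, w = curr.shape
    vectors = {}
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            target = curr[by:by + block, bx:bx + block].astype(np.int32)
            best_sad, best_vec = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > h or x + block > w:
                        continue
                    cand = prev[y:y + block, x:x + block].astype(np.int32)
                    sad = np.abs(cand - target).sum()
                    if best_sad is None or sad < best_sad:
                        best_sad, best_vec = sad, (dy, dx)
            vectors[(by, bx)] = best_vec
    return vectors

def interpolate_midframe(prev, curr, vectors, block=16):
    """Very crude motion-compensated 'half-way' frame: each block is pulled
    from `prev` at half its estimated displacement."""
    mid = prev.copy()
    for (by, bx), (dy, dx) in vectors.items():
        sy, sx = by + dy // 2, bx + dx // 2
        mid[by:by + block, bx:bx + block] = prev[sy:sy + block, sx:sx + block]
    return mid

# Toy usage: a bright square moving 8 pixels to the right between two frames.
prev = np.zeros((64, 64), dtype=np.uint8)
curr = np.zeros((64, 64), dtype=np.uint8)
prev[16:32, 8:24] = 255
curr[16:32, 16:32] = 255
vecs = block_motion_vectors(prev, curr)
mid = interpolate_midframe(prev, curr, vecs)  # square pulled part-way between the two frames
```

Real implementations add occlusion handling, sub-pixel search and fallback heuristics, which is exactly the part that introduces the lag and artefacts mentioned above.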
 
Just going to post this to give a perspective on where the pricing disconnect is -

Cost per SM [cost per TFLOP] -

RTX 4090 - 12.5 [19.38]
RTX 4080 16G - 15.8 [22.45]
RTX 4080 12G - 15 [24.62]

Hypothetically if pricing all followed the 4090's ratio -

RTX 4080 16G - $950 [$944.58]
RTX 4080 12G - $750 [$776.94]

The added issue, and a departure from the past, is that typically the highest SKU carries the premium (halo pricing), and you almost certainly don't get worse value to this extent going down the stack.

RTX 3090ti - 23.8 [50.00]
RTX 3080ti - 15 [35.19]
RTX 3070ti - 14.6 [32.26]

RTX 3090 - 18.3 [42.16]
RTX 3080 - 10.3 [23.51]
RTX 3070 - 10.9 [24.62]
RTX 3060ti - 10.5 [24.69]
RTX 3060 - 11.8 [25.90]

RTX 2080ti - 14.7 [74.35] or 17.6 [89.22] (using FE price)
RTX 2080 - 15.2 [69.51] or 17.4 [79.44] (using FE price)
RTX 2070 - 13.9 [66.93] or 16.7 [80.32] (using FE price)
RTX 2060 - 11.7 [54.26]

RTX 2080 Super - 14.6 [62.78]
RTX 2070 Super - 12.5 [55.19]
RTX 2060 Super - 11.8 [55.71]


Edit: Interestingly, because of this issue, for the halo segment this is actually the biggest jump in value since Maxwell -> Pascal. However, with how the rest of the stack currently sits, it's arguably worse than Pascal -> Turing in terms of value jump for every other segment.
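For reference, here's a small Python sketch of how the cost-per-SM numbers and the hypothetical prices above fall out, assuming the announced MSRPs ($1,599 / $1,199 / $899) and SM counts (128 / 76 / 60):

```python
# Announced MSRPs (USD) and SM counts for the three Ada launch SKUs.
skus = {
    "RTX 4090":     (1599, 128),
    "RTX 4080 16G": (1199, 76),
    "RTX 4080 12G": (899, 60),
}

# Cost per SM: 4090 ~12.5, 4080 16G ~15.8, 4080 12G ~15.0.
cost_per_sm = {name: price / sms for name, (price, sms) in skus.items()}

# Hypothetical prices if the whole stack followed the 4090's cost-per-SM ratio.
ratio = cost_per_sm["RTX 4090"]
for name, (price, sms) in skus.items():
    print(f"{name}: actual ${price}, at 4090 ratio ${sms * ratio:.0f}")
# -> 4080 16G ~$950, 4080 12G ~$750
```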
 
Prices are skewed to make the 4090 seem reasonable. Nvidia doesn't care to sell 4080s when they have stockpiled 3xxx series GPUs; mining affects prices even post-mortem. My hope is that prices will adjust once 3xxx stock dries up, but I'm not holding my breath.
 
DLSS and other frame amplification/temporal aggregation techniques are, in essence, making game rendering more like a compressed video stream whereas it before was more like an uncompressed one. Work smarter not (un)necessarily harder.
Rasterizing is the compressed format. Ray tracing is the uncompressed format. DLSS helps make the uncompressed format usable, providing more quality while delivering the same advantages as the compressed format.
 
I game at native 1080p with a 3060 Ti to avoid the need for DLSS, as it doesn't look that good, but some games have such poor IQ at native 1080p that there's no difference between using DLSS and native.

Dying Light 2 is a great example: the difference between native 1080p and DLSS Quality is so small that I have to go into the video settings to check whether DLSS is turned on or not.

What I would like to see is the ray tracing side upscaled using DLSS and the raster side left at native.
 
Prices are skewed to make the 4090 seem reasonable. Nvidia doesn't care to sell 4080s when they have stockpiled 3xxx series GPUs; mining affects prices even post-mortem. My hope is that prices will adjust once 3xxx stock dries up, but I'm not holding my breath.

If we look at the 2xxx series, which also came off a mining glut (we can debate the extent, but that's another topic), the releases the following year that ended up being considered good value had a pretty substantial ~20% drop in cost per SM. If we applied a 0.8x per-SM price drop to the AD104, a 58 SM unit would end up at $696 (akin to the 2070 -> 2060 Super). Applied to the 4090's per-SM value, we'd end up at an even $600.

Interestingly, that rumoured 56 SM 10GB AD104 configuration, using the 12.5 per-SM price from the 4090, would end up at an even $700 and slot in as a direct price replacement for the 3080.
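Roughly, the arithmetic behind those two projections, using the rounded per-SM figures quoted earlier ($15/SM for the 4080 12G, $12.5/SM for the 4090):

```python
# 58 SM AD104 with a ~20% cut in cost per SM, akin to 2070 -> 2060 Super.
print(58 * 15 * 0.8)   # = 696

# Rumoured 56 SM / 10GB AD104 priced at the 4090's cost per SM.
print(56 * 12.5)       # = 700, a direct 3080 price replacement
```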
 
While I understand the desire to have "native resolution" rendering, 3D graphics is, like many things, a balance of compromises, and you want the best possible outcome at reasonable performance.
For example, which is better: native 4K but with lower texture resolution and lighting effects, or native 2K with high-resolution textures and full lighting effects? It depends, of course. But the beauty of PC gaming is that you generally have options to choose from, based on your personal preference.
DLSS 2, FSR, and DLSS 3 can be seen as providing another option: you may get something close to 4K resolution but with texture resolution and lighting effects previously only possible at 2K. It's just an additional option to take if you want.

However, since these techniques won't be "perfect", for benchmarks I think the best approach right now (which many reviewers are already doing) is to simply compare cards at native resolution, but also point out the available options (e.g. one chart with the cards running at 4K native, and another chart with the cards running FSR/DLSS 2/DLSS 3 to show the better performance possible if you take those options). The same goes for ray tracing. This way, people can see how these cards compare at baseline settings and also their full potential with specific options enabled.
 
Wtf is this graphic? I can't tell which card is which. I can assume the 4090 is top and the 3080 Ti is bottom, but I can't tell the difference between the middle two, most likely because of my colour blindness. Honestly, I have no idea how websites etc. are still absolutely brain damaged about designing graphics for the 5% of the world with some form of colour vision anomaly.
Having worked many years as a gfx artist, I can assure you we just don't know about the potentially many forms and effects of color blindness. So the problem is missing education. It was never a topic at the art school I attended, for example.

I had assumed some people have issues detecting differences in hue, for example confusing red and green.
But those charts all use the same hue of green. So you can't see the difference in brightness? :O That can't be?

Let's compare: I agree the difference between the 2nd and 3rd charts is much too subtle. I have trouble detecting the difference at all and agree that's a design flaw.
But I can see the difference between 1 and 2, and also between 3 and 4. How is it for you?

It's also very hard to match the charts with the little squares on top. Another design flaw.
But because those are gradual changes relative to the background, that's hard for anybody, independent of color blindness.
 
However, since these techniques won't be "perfect", for benchmarks I think the best approach right now (which many reviewers are already doing) is to simply compare cards at native resolution, but also point out the available options (e.g. one chart with the cards running at 4K native, and another chart with the cards running FSR/DLSS 2/DLSS 3 to show the better performance possible if you take those options). The same goes for ray tracing. This way, people can see how these cards compare at baseline settings and also their full potential with specific options enabled.

Non-corporate-friendly reviews are done in this segmented manner. It's just that Nvidia and others exert a lot of influence and strong 'encouragement' to follow their one-sided review guides.
 
I've spent the morning looking at GPUs and I'm now pretty pissed that I don't have a reasonable upgrade path from Nvidia. My 3060 Ti can at times beat a 6900 XT when using ray tracing, and as ray tracing is my number one priority, going to RDNA2 isn't an option.

I want at least a 50% increase over my 3060 Ti as a bare minimum to justify upgrading, so I'm looking at a 3090/3090 Ti, which means I'd need to spend 2.5x what I paid for my 3060 Ti just to get a 50% increase.

If AMD don't knock it out of the park with RDNA3's performance and pricing, I'm effectively stuck with no realistic upgrade option, so in that case what's the point? I may as well just sell the PC and get a PS5.

I feel PC is slowly being priced out of relevance as a gaming platform.
 
Just wait? In 6 months prices could be lower, RDNA3 will be out (?), maybe Intel will have a roadmap for high-end GPUs... the 3060 Ti is still a nice card for now imo :)
 
If AMD don't knock it out of the park with RDNA3's performance and pricing, I'm effectively stuck with no realistic upgrade option, so in that case what's the point? I may as well just sell the PC and get a PS5.

And get even lower performance.

I feel PC is slowly being priced out of relevance as a gaming platform.

That's what I feel about the PlayStation consoles' future.
 
We've been hearing that for the last 2-3 years now...

The lockdowns and the mining stuff are mostly over; that will help, I guess? But, yeah, I agree that maybe prices will just stay the same. Honestly, the biggest factor will be competition. Intel is not there yet, and neither is AMD right now with their RT performance. If RDNA3 is still lagging a lot, Nvidia won't feel the need to lower prices? Time will tell.
 