NVidia Ada Speculation, Rumours and Discussion

I think it's the good old "faster in rasterization" argument. But it's more or less irrelevant since no one will be comparing these upcoming GPUs in games from 2015, and the rest are all about RT and compute.

It's not like Ampere was behind in rasterization anyway, and I don't see it happening with Lovelace either.
 
I’ve lost count of how many AMD architectures were supposed to be Nvidia killers. Not sure why anyone would expect this to be any different.

Faster in RT and compute doesn't tell us much though. Which compute? Clearly RDNA 2 is plenty fast at compute in games. The more important thing is whether RDNA 3's RT performance will be sufficient not to hinder further adoption of RT. Hopefully it's an FSR 2 vs DLSS 2 sort of situation where both IHV solutions are good enough.

126 of 144 SMs is a pretty deep cut. Leaves room for the inevitable Ti refresh. Or maybe Titan is back…
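Quick back-of-envelope on that cut, treating the 126 and 144 figures as the rumours above (not confirmed specs):

```cpp
// Rough arithmetic only: rumoured 126 enabled SMs on a rumoured 144-SM full die.
#include <cstdio>

int main()
{
    const int full_sms = 144;  // rumoured full AD102
    const int cut_sms  = 126;  // rumoured top consumer SKU
    std::printf("enabled: %.1f%%, disabled: %.1f%%\n",
                100.0 * cut_sms / full_sms,
                100.0 * (full_sms - cut_sms) / full_sms);
    // Prints "enabled: 87.5%, disabled: 12.5%" -- plenty of headroom for a
    // later Ti/Titan part with more SMs enabled.
    return 0;
}
```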
 
I think it's the good old "faster in rasterization" argument. But it's more or less irrelevant since no one will be comparing these upcoming GPUs in games from 2015, and the rest are all about RT and compute.

Well no one except HWUB :) Benchmark reviews in 2023 will still be dominated by games that have no RT. Best we can hope for is that reviewers include both RT on and off results in titles that support it.
 
Rasterization performance.
As I've said, it's very much irrelevant. Even in this generation it's close to being irrelevant - who cares if some $1000 card can run some game from 2018 faster at 1080p in 2022?
This whole argument stemmed from the fact that Ampere didn't scale as well in non-RT or compute-heavy games because of how it was scaled up from Turing.
RDNA2 did better since it was more or less a straight scaling of WGP counts compared to RDNA1. My guess is that Infinity Cache (IC) has helped considerably in some older games too. (Nvidia being CPU-limited in DX12 more often didn't help either.)
With that, RDNA2 reached about the same performance in "rasterization" as Ampere, which was an outlier of sorts considering that it was thoroughly beaten in RT and compute otherwise.
But with RDNA3 everything points to a different type of scaling up from RDNA2, with more SIMDs per WGP, which is closer to how Ampere was scaled up from Turing than RDNA1->2.
So the idea that RDNA3 will win in "rasterization" this time seems dubious. I'd expect them both to scale similarly there, mostly because both IHVs understand that it's basically irrelevant for modern software.
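To put rough numbers on that, here's a purely illustrative back-of-envelope comparison of paper FP32 throughput for the flagship parts (rounded public boost clocks): Ampere's paper gain over Turing is far larger than its gain in older raster titles, while RDNA2's gain over RDNA1 tracks its in-game scaling much more closely.

```cpp
// Illustrative only: paper FP32 throughput (lanes * 2 FLOP/FMA * boost clock)
// for the flagship of each generation, using rounded public specs.
#include <cstdio>

struct Gpu { const char* name; int fp32_lanes; double boost_ghz; };

int main()
{
    const Gpu gpus[] = {
        {"RTX 2080 Ti (Turing, 68 SM x 64)",   68 * 64,  1.545},
        {"RTX 3090    (Ampere, 82 SM x 128)",  82 * 128, 1.695},
        {"RX 5700 XT  (RDNA1,  40 CU x 64)",   40 * 64,  1.905},
        {"RX 6900 XT  (RDNA2,  80 CU x 64)",   80 * 64,  2.250},
    };
    for (const Gpu& g : gpus)
        std::printf("%-36s ~%5.1f TFLOPS FP32\n",
                    g.name, g.fp32_lanes * 2.0 * g.boost_ghz / 1000.0);
    // Turing -> Ampere: ~2.6x on paper, nowhere near that in older raster
    // games. RDNA1 -> RDNA2: ~2.4x on paper, and game scaling is much closer.
    return 0;
}
```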

RDNA3's biggest issue will likely be its multichip design. I expect a lot of problems coming out of this - and it's not a given that all of them will be solved by s/w (drivers) or that they will be solved fast enough for the solution to be used in benchmarks against Lovelace.
 
Just because you think so doesn't mean they have to think so.

AMD will have to start somewhere, right? They can't cling to hand-tuned temporal reconstruction forever either. There are probably many more uses for performant AI/ML in gaming going forward. Also remember that AMD is a likely candidate for the 10th generation of consoles (if they happen); we don't want them to fall even further behind than they did at the launch of the current generation.
 
As I've said, it's very much irrelevant. Even in this generation it's close to being irrelevant - who cares if some $1000 card can run some game from 2018 faster at 1080p in 2022?
This whole argument stemmed from the fact that Ampere didn't scale as well in non-RT or compute-heavy games because of how it was scaled up from Turing.
RDNA2 did better since it was more or less a straight scaling of WGP counts compared to RDNA1. My guess is that Infinity Cache (IC) has helped considerably in some older games too. (Nvidia being CPU-limited in DX12 more often didn't help either.)
With that, RDNA2 reached about the same performance in "rasterization" as Ampere, which was an outlier of sorts considering that it was thoroughly beaten in RT and compute otherwise.
But with RDNA3 everything points to a different type of scaling up from RDNA2, with more SIMDs per WGP, which is closer to how Ampere was scaled up from Turing than RDNA1->2.
So the idea that RDNA3 will win in "rasterization" this time seems dubious. I'd expect them both to scale similarly there, mostly because both IHVs understand that it's basically irrelevant for modern software.

RDNA3's biggest issue will likely be its multichip design. I expect a lot of problems coming out of this - and it's not a given that all of them will be solved by s/w (drivers) or that they will be solved fast enough for the solution to be used in benchmarks against Lovelace.
I don't agree that rasterization performance no longer matters. I also didn't say RDNA 3 will win in rasterization; I just don't expect a complete lack of competition like I do in RT and peak FP32 throughput. I consider RDNA 2 better at rasterization than Ampere. It's a more efficient and elegant design, only hampered at 4K by the lack of faster VRAM.
 
If reconstruction continues its path of normalization on PC (which I think is basically guaranteed to happen), then GPU performance in the 1080-1440p range is actually going to remain quite relevant.

I also still think ray tracing needs to become more impactful before most PC gamers choose to actually use it. Obviously it's more likely with people using these very high end parts, but ray tracing will be far from the only graphics feature that sees much higher demands in upcoming next-gen titles. So there will likely still be some tough choices to make in terms of graphics vs performance, particularly for those trying to play at higher framerates, and ray tracing will likely still be a common item on people's chopping block to some degree (some or all).
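As a rough illustration of why that resolution range stays relevant, here are the internal render resolutions behind a "4K with reconstruction" output (the per-axis scale factors are the published FSR 2 ones; DLSS 2's are essentially the same):

```cpp
// Back-of-envelope only: internal render resolution for common temporal
// upscaler quality modes at a 3840x2160 output.
#include <cstdio>

int main()
{
    const int out_w = 3840, out_h = 2160;
    const struct { const char* mode; double scale; } modes[] = {
        {"Quality     (1.5x per axis)", 1.0 / 1.5},
        {"Balanced    (1.7x per axis)", 1.0 / 1.7},
        {"Performance (2.0x per axis)", 1.0 / 2.0},
    };
    for (const auto& m : modes)
        std::printf("%s -> %4d x %4d internal\n", m.mode,
                    (int)(out_w * m.scale), (int)(out_h * m.scale));
    // Quality: 2560x1440, Balanced: ~2258x1270, Performance: 1920x1080 --
    // i.e. "4K with reconstruction" is really 1080p-1440p worth of shading.
    return 0;
}
```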
 
If reconstruction continues its path of normalization on PC (which I think is basically guaranteed to happen), then GPU performance in the 1080-1440p range is actually going to remain quite relevant.
Not in "rasterization" though.
I also don't consider RDNA2 to be any better at this; it has just scaled better than Ampere at low resolutions when compared to their predecessors. But it often falls apart at 4K, which you would actually consider a prime point of comparison for rasterization.

And ray tracing will likely still be a common item on people's chopping block to some degree (some or all)
That's assuming that there will be such an option to begin with. Which is quite an assumption to make considering that we already have MEEE now.
 
That's assuming that there will be such an option to begin with. Which is quite an assumption to make considering that we already have MEEE now.
No, it's not. You know perfectly well the only reason MEEE doesn't have the option is that there's already ME filling that job.
 
No, it's not. You know perfectly well the only reason MEEE doesn't have the option is that there's already ME filling that job.
No, I don't know that. What I know is that there is the original version because there were consoles which didn't have RT h/w. Once support for such consoles is dropped, there will no longer be a need for such versions of games.
 
If reconstruction continues its path of normalization on PC (which I think is basically guaranteed to happen), then GPU performance in the 1080-1440p range is actually going to remain quite relevant.

I also still think ray tracing needs to become more impactful before most PC gamers choose to actually use it. Obviously it's more likely with people using these very high end parts, but ray tracing will be far from the only graphics feature that sees much higher demands in upcoming next-gen titles. So there will likely still be some tough choices to make in terms of graphics vs performance, particularly for those trying to play at higher framerates, and ray tracing will likely still be a common item on people's chopping block to some degree (some or all).

I think the benefits of RT on the dev side are under-appreciated at this point, and that we aren't too far away (~12-18 months) from RT becoming the primary development target, with non-RT lighting in particular still being usable, but a second-class citizen of sorts.

The ME:EE developers' presentation on just how much easier and simpler RT is from an art perspective was really eye-opening for me. I think we're going to come to a point soon where developers rely on things like RTGI to 'just work' and save hundreds if not thousands of hours of artists' time hand-placing grids of GI probes, manually twiddling/fixing light leakage through geometry, etc.

That's not to say that non-RT lighting techniques won't still be available, or that the games won't load at all without an RT enabled card, just that the herculean time and effort spent manually adjusting things in the game world to hide the deficiencies of the older non-RT techniques will get pushed lower and lower down the list of priorities. To the point where games will be built from an art and engine standpoint with RT as goal #1, and the 'fallback' path to conventional rasterized fake lighting will be the last-minute add-on with much less effort put into making it as perfect as possible.

The PowerPoint escapes me at the moment, but they talk about that a little bit here, 1/3 of the way down: https://www.eurogamer.net/digitalfoundry-2021-metro-exodus-tech-interview
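For anyone unfamiliar with the workflow being replaced here, a purely illustrative sketch of what a baked probe-grid GI lookup looks like (hypothetical names, not 4A's actual engine); the artist time the interview talks about goes into choosing grid density, placing extra probes, and patching leaks where the interpolation below reaches through walls:

```cpp
// Illustrative only: a uniform grid of baked irradiance probes sampled with
// trilinear interpolation. RTGI replaces both the offline bake that fills
// `baked` and the manual tuning of `spacing` / probe placement.
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 lerp(const Vec3& a, const Vec3& b, float t)
{
    return { a.x + (b.x - a.x) * t,
             a.y + (b.y - a.y) * t,
             a.z + (b.z - a.z) * t };
}

struct ProbeGrid
{
    Vec3 origin{};            // world-space corner of the grid
    float spacing = 2.0f;     // metres between probes, hand-tuned per level
    int nx = 1, ny = 1, nz = 1;
    std::vector<Vec3> baked;  // nx*ny*nz irradiance values from an offline bake

    const Vec3& at(int x, int y, int z) const
    {
        x = std::clamp(x, 0, nx - 1);
        y = std::clamp(y, 0, ny - 1);
        z = std::clamp(z, 0, nz - 1);
        return baked[(z * ny + y) * nx + x];
    }

    // Blend the 8 probes surrounding the shading point. Light leaks happen
    // when this blend reaches probes on the far side of a wall -- the thing
    // artists spend hours patching by hand.
    Vec3 sample(const Vec3& p) const
    {
        const float fx = (p.x - origin.x) / spacing;
        const float fy = (p.y - origin.y) / spacing;
        const float fz = (p.z - origin.z) / spacing;
        const int ix = (int)std::floor(fx);
        const int iy = (int)std::floor(fy);
        const int iz = (int)std::floor(fz);
        const float tx = fx - ix, ty = fy - iy, tz = fz - iz;

        const Vec3 c00 = lerp(at(ix, iy,     iz    ), at(ix + 1, iy,     iz    ), tx);
        const Vec3 c10 = lerp(at(ix, iy + 1, iz    ), at(ix + 1, iy + 1, iz    ), tx);
        const Vec3 c01 = lerp(at(ix, iy,     iz + 1), at(ix + 1, iy,     iz + 1), tx);
        const Vec3 c11 = lerp(at(ix, iy + 1, iz + 1), at(ix + 1, iy + 1, iz + 1), tx);
        return lerp(lerp(c00, c10, ty), lerp(c01, c11, ty), tz);
    }
};
```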
 
That's not to say that non-RT lighting techniques won't still be available, or that the games won't load at all without an RT enabled card
Why not? You can only save time by avoiding the old workflow. And for that you'll need to drop the old code completely. And you have every incentive to do this if all your target platforms support h/w accelerated RT.

This outgoing gen was transitional because of the long console cross-gen period. The next GPU generation will sell in an environment where the old consoles will no longer be a development target for the majority of new AAA releases.
 
In the near future, we should see many RT-only games as soon as the old generation gets dropped.

I assume software ray tracing is used if you do not have an RT-enabled card. The next-gen Avatar game is confirmed to do this. That way nobody gets left out.

But as we progress into the generation, I could see some games requiring the DX12_2 feature set, so by then a software RT path won't be needed anyway.
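For what it's worth, the hardware check itself is trivial. A minimal sketch using the real D3D12 feature query (how an engine acts on the result, e.g. falling back to a compute-based software RT path, is up to the engine):

```cpp
// Minimal sketch: query DXR support on a D3D12 device. The API calls and
// enums are the real D3D12 ones; everything around them is illustrative.
#include <windows.h>
#include <d3d12.h>

bool HardwareRaytracingAvailable(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                           &opts5, sizeof(opts5))))
        return false;
    // TIER_1_0 is the baseline DXR requirement; TIER_1_1 (inline ray queries)
    // is the level bundled into the DX12_2 / DirectX 12 Ultimate feature set.
    return opts5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;
}
```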
 