AMD Radeon RDNA2 Navi (RX 6500, 6600, 6700, 6800, 6900 XT)

I don't think anyone here questions whether RTRT will become crucial to almost all 3D games within the next 5 or so years.
But for almost all PC gamers, i.e. the ones without a high-end RTX card (which no one can buy anyway) who aren't playing one of the couple dozen games with RT+DLSS, real-time raytracing is, at the moment, essentially meaningless, as swaaye mentioned.

So like 16GB VRAM?
 
Back when Vega launched there were rumours floating around that a lot of engineers were suddenly diverted from it to Navi. I didn't put much stock in it at the time, but it's possible nvidia demonstrated their prototype ray tracing to Sony/MS and they demanded AMD feature match.

Maybe, but I doubt NV was ever in consideration for Sony/MS, if that's what you mean? I think that AMD was always going to enter the ray tracing market, push from Sony/MS or not. Sony/MS probably followed AMD's tech closely though, while keeping to the seven year generation shifts.

The diversion from Vega to RDNA most likely didn't have that much to do with MS or Sony either; it was more that Vega wasn't all that great and they needed a better architecture. Looking at it like this, they have come a long way from the Vega days.
GCN was very competitive (or even better than NV), then things took a turn for the worse, and now they are at least keeping up in rasterization (Infinity Cache helping out the bandwidth, though 4K can take hits).
 
Because noticing these has gradually become a larger part of your job, since it's your personal and professional inclination to do so.
And it started with the fact that you studied and became equipped with the tools and knowledge to discern the methods just by looking at screenshots.

I hope you're able to acknowledge that you are at the far, far end of the gaming population that will notice "RT vs. rasterization tricks" the most.
At least as far as these early implementations of RTRT are concerned.
Even though we do not have Alex's training to meticulously tear each frame apart, our brains do subconsciously detect the inconsistencies. Decades of living in the real world have trained our brains to recognize what feels "right". We may feel the scene is "gamey" but without being able to precisely articulate what's wrong. Imperfections in geometry are easy to articulate even to the untrained eye, but lighting and shadows are much more subconscious. Subconscious does not mean it is insignificant. To me personally, switching from inaccurate to a more accurate lighting yields a "hmm, yeah I suppose this is better" response. But switching back from accurate to inaccurate lighting yields a "oh this is garbage!" response. The more time you spend with the accurate modeling, the stronger this response is. There's simply no going back.
 
My guess is that AMD already had the tech, or was working on it, for RDNA2; I doubt MS and Sony 'pushed' AMD for RT that much. A serious GPU manufacturer like AMD has most likely been working on RT for their hardware for a while.
They definitely did, and if you look into the timeline of AMD's RT patent filings you'll see that they happened after the launch of Turing. There's also the rumor that the PS5 was initially planned to launch at the end of 2019 on RDNA1 but was delayed to 2020 to re-base on RDNA2.
 
Back when Vega launched there were rumours floating around that a lot of engineers were suddenly diverted from it to Navi. I didn't put much stock in it at the time, but it's possible nvidia demonstrated their prototype ray tracing to Sony/MS and they demanded AMD feature match.

I think it's more that the console win became significantly more likely around that time, and so they realised their priority should be Navi rather than Vega.
 
To me personally, switching from inaccurate to a more accurate lighting yields a "hmm, yeah I suppose this is better" response. But switching back from accurate to inaccurate lighting yields a "oh this is garbage!" response.

Exactly! Advances in rendering are much more obvious looking backward. That's just how the human brain works; we don't appreciate anything until it's gone.
 
I think NVIDIA simply surprised AMD.
Not with raytracing features; AMD knew those were coming in NVIDIA's GPUs.
But with the fact that DLSS made realtime hybrid rendering possible.

The whole DXR thing (without DLSS) would have been 2-3 generations out from where we are now.

That is also why AMD scrambled to announce that they too would have "upscaling technology"...and have since shown nothing...because they got caught on the wrong foot.

NVIDIA dangled DXR (going all-in and pushing the RTX branding so hard that even less informed posters call DXR "RTX" to this day) right in front of AMD...and AMD never saw DLSS coming in from the side, IMHO.
 
Exactly! Advances in rendering are much more obvious looking backward. That's just how the human brain works; we don't appreciate anything until it's gone.

Like going from AA to no AA...terribad!
Or sitting at a faster PC, then going back to a slower one...unpossible!

DXR lighting is very hard to ignore once it has been seen...going back looks more "fake" and my brain picks that up right away.
 
Back when Vega launched there were rumours floating around that a lot of engineers were suddenly diverted from it to Navi. I didn't put much stock in it at the time, but it's possible nvidia demonstrated their prototype ray tracing to Sony/MS and they demanded AMD feature match.

No demanding necessary. All MS did was state that RT would be in an upcoming version of DX (2018).

AMD likely planned to have RT at launch in RDNA in the same timeframe as NV, 2018. That was the original target date for RDNA with PS5 rumored to be targeting a holiday 2019 launch.

Basically, both NV and AMD knew RT would be coming in DX in 2018 and it was up to them to get the hardware ready. BTW - for those that get their panties in a bunch this doesn't mean that NV and/or AMD weren't already looking into hardware accelerated RT prior to this, but it's when MS decided (likely with consultation between both NV and AMD) that silicon with hardware accelerated RT would be feasible to launch.

Obviously something went wrong and not only was RDNA delayed a year but RT got delayed 2 years. Meanwhile NV, as they had been doing for the past few years, executed well and had hardware ready for RT's introduction into DX.

Regards,
SB
 
BTW - for those that get their panties in a bunch this doesn't mean that NV and/or AMD weren't already looking into hardware accelerated RT prior to this, but it's when MS decided (likely with consultation between both NV and AMD) that silicon with hardware accelerated RT would be feasible to launch.
MS doesn't decide when some silicon is ready to launch in GPUs; GPU vendors do that, and MS then standardizes their proposals - if that's even possible. The inclusion of DXR in DX was the result of a GPU vendor coming to MS and saying that they would have RT h/w in their next-gen GPUs which they would like to be accessible via DX.
 
I think NVIDIA simply surprised AMD.
Not with raytracing features; AMD knew those were coming in NVIDIA's GPUs.
But with the fact that DLSS made realtime hybrid rendering possible.

The whole DXR thing (without DLSS) would have been 2-3 generations out from where we are now.

That is also why AMD scrambled to announce that they too would have "upscaling technology"...and have since shown nothing...because they got caught on the wrong foot.

NVIDIA dangled DXR (going all-in and pushing the RTX branding so hard that even less informed posters call DXR "RTX" to this day) right in front of AMD...and AMD never saw DLSS coming in from the side, IMHO.

It's not only DLSS I think; even without it the RT gap is rather large, to the point where playing games with RT and no DLSS is actually plausible if you can live with 4K/30, 1440p etc. (CP2077).
DLSS is just another enabler for those who crave the highest resolutions and framerates while ray tracing at a high level.
A balance between those is killer, of course.

Raster performance-wise they are doing well IMO, apart from the hit at 4K when the 128MB Infinity Cache can't cover the bandwidth demand.

Consoles are basically behind in every category, by quite a lot, and that's already the case now. RDNA3/RTX 4000 will improve things a lot I guess; NV feels even more need to improve, and AMD is on the right track.
 
The idea that RDNA 1 has utterly broken DXR support is fun. But I'm dubious. Though it might explain why RDNA 1 and RDNA 2 compute units appear to be pretty much exactly the same size.

Then again the dedicated ray acceleration hardware in RDNA seems to be so low in complexity that there's practically nothing to see anyway.

So I'm going with AMD being blindsided. NVidia has spent quite a long time and employed seemingly dozens if not hundreds of ray-tracing focused engineers for long enough that it was able to present Microsoft with a fait accompli.

I'm going to guess that inline tracing (DXR 1.1) is something that AMD asked for, because it suits them.

Once GDC arrives I suppose we'll get a clearer idea about techniques to run well on AMD and perhaps some historical insight.
 
The idea that RDNA 1 has utterly broken DXR support is fun. But I'm dubious. Though it might explain why RDNA 1 and RDNA 2 compute units appear to be pretty much exactly the same size.

Then again the dedicated ray acceleration hardware in RDNA seems to be so low in complexity that there's practically nothing to see anyway.

So I'm going with AMD being blindsided. NVidia has spent quite a long time and employed seemingly dozens if not hundreds of ray-tracing focused engineers for long enough that it was able to present Microsoft with a fait accompli.

I'm going to guess that inline tracing (DXR 1.1) is something that AMD asked for, because it suits them.

Once GDC arrives I suppose we'll get a clearer idea about techniques to run well on AMD and perhaps some historical insight.

Inline RT helps nvidia too, right ?
 
Dunno. Is there evidence that NVidia performance is improved by inline techniques?
I was under the assumption that inline was going to be the preferred way of casting rays going forward, since you wouldn't need a separate raytracing pipeline and DispatchRays call for the RT work. Nvidia shouldn't perform any worse going with inline; it just may not perform better than it currently does.
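For anyone who hasn't looked at DXR 1.1: "inline" means an ordinary compute or pixel shader can trace and resolve rays itself via HLSL RayQuery, instead of going through a dedicated raytracing pipeline, shader tables and DispatchRays as in DXR 1.0. Here's a minimal HLSL sketch of the shader side (SM 6.5+); the resource names and the toy ray are made up purely for illustration and aren't tied to any particular vendor or game:

Code:
// DXR 1.1 inline ray tracing from an ordinary compute shader (SM 6.5+).
// No raytracing PSO, no shader tables, no DispatchRays - just a normal Dispatch.
RaytracingAccelerationStructure Scene  : register(t0);
RWTexture2D<float4>             Output : register(u0);

[numthreads(8, 8, 1)]
void main(uint3 id : SV_DispatchThreadID)
{
    // Toy shadow-style ray; origin/direction would normally come from a G-buffer.
    RayDesc ray;
    ray.Origin    = float3(id.xy * 0.01f, 0.0f);
    ray.Direction = normalize(float3(0.3f, 1.0f, 0.2f));
    ray.TMin      = 0.001f;
    ray.TMax      = 1000.0f;

    // The shader traces and inspects the result itself, right here.
    RayQuery<RAY_FLAG_CULL_NON_OPAQUE |
             RAY_FLAG_SKIP_PROCEDURAL_PRIMITIVES |
             RAY_FLAG_ACCEPT_FIRST_HIT_AND_END_SEARCH> q;
    q.TraceRayInline(Scene, RAY_FLAG_NONE, 0xFF, ray);
    q.Proceed(); // with these flags there are no candidates to resolve, so one call finishes traversal

    bool occluded = (q.CommittedStatus() == COMMITTED_TRIANGLE_HIT);
    Output[id.xy] = occluded ? float4(0, 0, 0, 1) : float4(1, 1, 1, 1);
}

Whether that execution model actually benefits NVidia hardware as much as it seems to suit AMD's more shader-driven approach is exactly the open question above.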
 
So like 16GB VRAM?
Not sure if someone thought this would be some gotcha comment, but yes. Of course 16GB in a GPU is mostly useless at the moment.


The difference being that the current and future >180 million 9th-gen console userbase will have >12GB of available VRAM and RDNA2 levels of RT performance, not 8/10GB of available VRAM with RTX 30 levels of RT performance.

VRAM utilisation in PC games started to skyrocket after the 8th-gen consoles released, and 4GB high-end cards like the R9 290 and GTX 980 had pretty bad performance from 2015 onwards.
 
VRAM utilisation in PC games started to skyrocket after the 8th-gen consoles released, and 4GB high-end cards like the R9 290 and GTX 980 had pretty bad performance from 2015 onwards.
8th gen consoles bumped the RAM sizes by 16x, from 512MB to 8GB while staying with HDDs as the main storage.

9th gen consoles bumped the RAM sizes only by 2x, from 8 to 16 GBs while moving to ultra fast SSDs for storage.

If you're expecting this gen to have a similar effect on PC-side RAM and VRAM sizes, then you should really think more about this.

I imagine that no card with 8+ GB of VRAM will ever have any issues running multiplatform games with console-level IQ. Devs will have to be careful and "creative" to support the cards with less than 16GB (with 16GB they can just not care at all), but these cards will likely do fine, just as 4GB cards in fact did for the overwhelming majority of 8th-gen titles.
 