AMD RDNA3 Specifications Discussion Thread

Not that any of the cards announced so far are for me (too expensive), but if I were to get one, RTX 4090 raster performance for 1k USD or less would be the thing for me. The only reason I even want an HDMI 2.1 card is for 120 Hz locked gaming at minimum 3200x1800 without needing to rely on questionable upscaling (when it looks OK it's worth it; when it doesn't, it's worthless).

And while there's a card that costs well over 1.5k USD that can do that with RT in some games, it can't do it in all of them, so RT is still mostly a case of: turn it on, look at it, then turn it off so I can actually play the game.

Hell, even 1k USD is in the HAHAHAHAHAHAHAHAHHAHA, No Thanks price range for me. :) I have far more interesting things in life that I'd rather spend money on.

It'll be interesting to see what either NV or AMD can offer in the 300-500 USD price range ... if anything.

As it is, it's far more likely that I'll grab an RTX 3070 for 300 USD once it gets that low (used or new) or a 6800 XT for 400 USD if it gets that low (used or new).

Also, 300+ watts for a GPU? Nope, no thanks. I'm likely bowing out of AAA PC gaming here soon with PC graphics cards going the direction they are going. Not that this is a bad thing since for me most good games aren't made by AAA developers.

Regards,
SB
 
I find it incredibly depressing that immediately after RDNA3's launch we are still seeing the "RT performance doesn't matter" arguments hauled out to justify its underwhelming performance. I'd truly hoped this launch would see an end to such arguments.

Who on Earth buys a $900+ GPU to not play games at their maxed out settings? If you don't care about maxing games out (or at least turning on settings that provide a genuine, noticeable improvement to the overall presentation), then don't get a $900 GPU, get a console. As far as I'm concerned, pure raster performance is mostly irrelevant at this performance tier - it's already more than fast enough at 3080/6800XT levels. The only time those GPUs are really challenged is when RT is turned on, especially with modern upscaling capabilities.
 
AMD published numbers for three additional games with raytracing:
[image: AMD slide with raytracing performance numbers]

And Doom is not a "relatively lightweight ray traced game". And even here the 4090 will be ~80% faster while only being ~15% bigger. Perf/W will be 50%+ better. Navi31 is just bad. Worse than RDNA2, where you could at least give AMD credit for supporting raytracing at all.

Anandtech has the full slide deck, crucially with all of the 'endnotes' like RX-832 referenced above: https://www.anandtech.com/Gallery/Album/8202#75

I really wish they explicitly stated that identical systems were used to test the 6950 XT and 7900 XTX; in a lot of the endnotes they're cagey and only say the systems are 'comparable'.

In this slide FSR2 performance mode was enabled.
 

Attachments

  • rx832.PNG
This was always the only logical outcome. People somehow expect fantasy numbers every generation.
Well, I was wrong to give AMD credit for being able to improve ray tracing efficiency. It looks like they went backwards and RDNA 3 is even less efficient than RDNA 2 for RT.
 
How can they have a clock regression on 5nm? Or did they downclock so they could hit the 50% performance-per-watt claim? So they'll lose to a 4090 and even a 3090 Ti in some games. The teraflops number is too low if they want to compete with Nvidia in RT and raster 😐😼
 
I find it incredibly depressing that immediately after RDNA3's launch we are still seeing the "RT performance doesn't matter" arguments hauled out to justify its underwhelming performance. I'd truly hoped this launch would see an end to such arguments.

Who on Earth buys a $900+ GPU to not play games at their maxed out settings? If you don't care about maxing games out (or at least turning on settings that provide a genuine, noticeable improvement to the overall presentation), then don't get a $900 GPU, get a console. As far as I'm concerned, pure raster performance is mostly irrelevant at this performance tier - it's already more than fast enough at 3080/6800XT levels. The only time those GPUs are really challenged is when RT is turned on, especially with modern upscaling capabilities.

Considering even the RTX 4090 isn't fast enough in RT for me, I'd mostly be using it with RT off anyway. So at least for me it's not about AMD being worse in RT. I already expected it not to beat the 4090, and the 4090 already wasn't good enough.

Regards,
SB
 
This layout makes no sense.
It makes perfect sense. Memory controllers and SRAM don't scale well with new process nodes, so they did the obvious thing and moved them to cheaper nodes. Though packaging isn't free, so whether they actually managed to cut costs is beyond me.
Making many small GCDs is what wouldn't make sense in the gaming market; that would only make things worse.
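
Rough napkin math on that trade-off, for anyone curious. The die sizes below are the public Navi 31 figures (~304 mm² GCD, ~37 mm² per MCD); the wafer prices and defect densities are round numbers I made up purely for illustration:

```python
import math

WAFER_DIAMETER = 300.0  # mm, standard wafer

def dies_per_wafer(area_mm2):
    # Standard estimate: wafer area over die area, minus edge losses.
    r = WAFER_DIAMETER / 2
    return math.pi * r * r / area_mm2 - math.pi * WAFER_DIAMETER / math.sqrt(2 * area_mm2)

def cost_per_good_die(area_mm2, wafer_cost, d0_per_cm2):
    # Poisson yield model: yield loss grows superlinearly with die area.
    yield_frac = math.exp(-(area_mm2 / 100.0) * d0_per_cm2)
    return wafer_cost / (dies_per_wafer(area_mm2) * yield_frac)

# Public Navi 31 die sizes; wafer costs and D0 are made-up illustrative numbers.
gcd = cost_per_good_die(304, wafer_cost=17000, d0_per_cm2=0.10)  # N5
mcd = cost_per_good_die(37, wafer_cost=9000, d0_per_cm2=0.07)    # N6
# Naive monolithic N5 equivalent (assumes the MCD area wouldn't shrink,
# which is roughly the point for SRAM and PHYs).
mono = cost_per_good_die(304 + 6 * 37, wafer_cost=17000, d0_per_cm2=0.10)

print(f"chiplet silicon: ${gcd + 6 * mcd:.0f}  monolithic: ${mono:.0f}")
```

Even with made-up inputs the shape of the result holds: big dies lose disproportionately many candidates to defects, so splitting off the parts that don't shrink anyway is nearly free silicon-wise, and the open question is exactly the one above, whether advanced packaging eats the savings.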
 
What about the huge perf dips in Cyberpunk 2077 when you simply stand in front of a mirror with a single quarter-res planar reflection?

That's the cost of rendering the geometry twice, but in exchange it's nice not having to worry about building and maintaining any acceleration structure. That lets us elegantly handle any dynamic and deformable geometry (sans the reflective surface itself) happening in the scene ...
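
For readers who haven't implemented one: a minimal sketch of the standard planar-reflection transform (the function name and numpy usage are mine, not anyone's engine code). You mirror the view across the reflector's plane and rasterize the whole scene a second time, which is exactly where the doubled geometry cost comes from:

```python
import numpy as np

def reflection_matrix(plane):
    # 4x4 matrix reflecting points across the plane ax + by + cz + d = 0,
    # where (a, b, c) is the plane's unit normal. Prepend this to the view
    # transform and render the scene again: that's the planar-reflection pass.
    a, b, c, d = plane
    return np.array([
        [1 - 2*a*a,    -2*a*b,    -2*a*c, -2*a*d],
        [   -2*a*b, 1 - 2*b*b,    -2*b*c, -2*b*d],
        [   -2*a*c,    -2*b*c, 1 - 2*c*c, -2*c*d],
        [        0,         0,         0,      1],
    ])

# Example: a mirror on the floor plane y = 0 (normal (0, 1, 0), d = 0).
M = reflection_matrix((0.0, 1.0, 0.0, 0.0))
p = np.array([1.0, 2.0, 3.0, 1.0])  # homogeneous point above the floor
print(M @ p)                        # -> [ 1. -2.  3.  1.], its mirror image
```

The quarter-res detail from the question above is the usual mitigation: the second pass renders into a smaller target precisely because it doubles the geometry work.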

Having no acceleration structure is arguably more elegant in the case I described above, as opposed to dealing with the warts of acceleration structures, such as the need to reduce LoDs or limit the distance of geometry included ...

Gosh, the noise comes from stochastically sampling a physically correct BRDF. RT reflections obviously don't produce any noise for perfectly flat surfaces, but they do for rough ones. Planar reflections don't support stochastic sampling due to rasterization limitations and thus can't be physically correct at all.
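
A 1D toy of that point, with entirely made-up numbers, just to show where the variance comes from: a perfect mirror is a delta lobe, so one deterministic sample is exact, while a rough lobe sampled once per pixel gives a different answer every time, and that per-pixel disagreement is the noise a denoiser has to clean up:

```python
import numpy as np

rng = np.random.default_rng(0)

def sky(angle):
    # Toy environment: one bright strip of light around 0.3 rad.
    return 10.0 if abs(angle - 0.3) < 0.05 else 0.1

def glossy_reflection(view_angle, roughness, spp):
    # Stochastic estimate: jitter sample directions around the mirror
    # direction in proportion to surface roughness, then average.
    mirror = -view_angle
    samples = mirror + roughness * rng.normal(size=spp)
    return float(np.mean([sky(s) for s in samples]))

print([glossy_reflection(-0.3, 0.0, spp=1) for _ in range(4)])  # flat: all 10.0
print([glossy_reflection(-0.3, 0.3, spp=1) for _ in range(4)])  # rough, 1 spp: noise
print(glossy_reflection(-0.3, 0.3, spp=4096))                   # converged average
```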

At that point you could get away with using lower resolution representations like voxels or SDFs for rough reflections (indirect diffuse) ...

There are still some more hacks left to be had in rasterization to increase quality and they're becoming appealing alternatives in the face of RT ...
 
Who on Earth buys a $900+ GPU to not play games at their maxed out settings?

"Maxxed out settings" can mean high resolutions at high framerates too. The person purchasing a $900 GPU may not want to play these games at 1080p to get access to the 120fps mode.

If you don't care about maxing games out (or at least turning on settings that provide a genuine, noticeable improvement to the overall presentation), then don't get a $900 GPU, get a console.

Consoles still provide significantly different experiences in many aspects of gaming; the PC as a gaming platform didn't gain its worth simply because Nvidia has better-performing RT. If, like you're saying, there's no point to PC gaming unless you go full RT, good luck watching the quality of ports shit the bed even harder when your install base is that minuscule.

And look around at the ASP of cards these days! $900-$1k isn't some exotic price tier anymore. I sure as fuck wish it were the endpoint for the high end, but it's far from it.
 
That lets us elegantly handle any dynamic and deformable geometry (sans the reflective surface itself) happening in the scene
A BVH allows you to handle all that stuff

There are still some more hacks left to be had in rasterization to increase quality and they're becoming appealing alternatives in the face of RT ...
I just don't get why somebody would want to get rid of the BVH and bolt on tons of hacks instead.
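
For the lurkers, here's roughly what the BVH buys you; a bare-bones sketch with toy dict nodes of my own invention (real DXR acceleration structures are opaque driver-built blobs, not this). A ray skips every subtree whose bounds it misses, which is why it scales; the flip side the other post is gesturing at is that deforming geometry forces a refit or rebuild of this tree every frame:

```python
def hit_aabb(origin, inv_dir, lo, hi):
    # Slab test: does the ray origin + t * dir hit the box [lo, hi]?
    # Assumes inv_dir = 1 / dir with no exactly-zero direction components.
    tmin, tmax = 0.0, float("inf")
    for o, inv, l, h in zip(origin, inv_dir, lo, hi):
        t0, t1 = (l - o) * inv, (h - o) * inv
        if t0 > t1:
            t0, t1 = t1, t0
        tmin, tmax = max(tmin, t0), min(tmax, t1)
    return tmin <= tmax

def traverse(node, origin, inv_dir, hits):
    # Cull whole subtrees whose bounds the ray misses; only leaves that
    # survive get an actual primitive test.
    if not hit_aabb(origin, inv_dir, node["lo"], node["hi"]):
        return
    if "tri" in node:
        hits.append(node["tri"])
    else:
        for child in node["children"]:
            traverse(child, origin, inv_dir, hits)

# Two leaf boxes under one root; for a ray shot toward +x from the middle,
# the left leaf's bounds fail the slab test, so its triangle is never tested.
root = {"lo": (0, 0, 0), "hi": (10, 1, 1), "children": [
    {"lo": (0, 0, 0), "hi": (1, 1, 1), "tri": "left tri"},
    {"lo": (9, 0, 0), "hi": (10, 1, 1), "tri": "right tri"},
]}
hits = []
traverse(root, (5, 0.5, 0.5), (1.0, 1e9, 1e9), hits)  # dir ≈ (+1, ~0, ~0)
print(hits)  # ['right tri']
```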
 
Think positive. For instance, I haven't seen a single example of people complaining about "fake frames" or "fake pixels" now that AMD has announced FSR3. I wonder why..
Must not be looking hard enough tbh. I've seen multiple people complaining that AMD is following Nvidia down the fake-frame path with FSR3.

Also people complaining that AMD didn't show many performance numbers in their presentation, when Nvidia showed even fewer.
 
Huh, I assumed they were dedicated from what I was reading. After all, RDNA2 already supports INT8; just adding BF16 would be a bit lame.
I just re-watched that part, and it seems a bit shady. They say "dedicated, 2 per CU", yet these units share all the register files, caches, and instruction scheduling with the stream processors. Whatever they are, they're not matrix accelerators like Tensor, Matrix, and XMX cores.
 
Think positive. For instance, I haven't seen a single example of people complaining about "fake frames" or "fake pixels" now that AMD has announced FSR3. I wonder why..
They are just as bullcrap with AMD as they are with NVIDIA. This is literally a repeat of the FSR debut: some people trying to paint the picture that everyone would somehow be OK with it now that AMD did it, while those who were actually pissing on DLSS did exactly the same over FSR (me included). Steve at GN is also pissed at this framegen BS.
 
Probably because they haven't actually shown anything to talk about.
That is where I am at. I will bitch and moan when we get details about exactly what they are doing.
I have been upset about a bunch of GPU changes over the years.
It started with the vendor-based post-processing stuff, after which we couldn't have exact performance comparisons.
Then dynamic GPU clocks in ~2012, so you no longer knew exactly what performance to expect when buying the same product (obviously not as big a deal as I thought).
Then we got DLSS and other features that can dynamically change IQ to boost performance, so now the entire frame being rendered isn't a direct comparison.
Now we are getting fake frames injected for more FPS, yay interpolation....
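
Since the word keeps coming up: the bluntest possible "interpolation" is below, just to anchor the complaint. Actual DLSS 3 / FSR3 frame generation layers motion vectors and optical flow on top, so treat this as the underlying idea, not either vendor's algorithm:

```python
import numpy as np

def fake_frame(frame_a, frame_b):
    # Fabricate an in-between frame purely from the two real ones.
    # No new game state or input is sampled, which is the whole objection:
    # the FPS counter goes up, input latency does not go down.
    blend = (frame_a.astype(np.float32) + frame_b.astype(np.float32)) / 2
    return blend.astype(frame_a.dtype)

# Two 2x2 grayscale "frames"; the injected frame lands halfway between.
a = np.array([[0, 100], [50, 200]], dtype=np.uint8)
b = np.array([[20, 120], [70, 220]], dtype=np.uint8)
print(fake_frame(a, b))  # [[ 10 110] [ 60 210]]
```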
 