AMD: RDNA 3 Speculation, Rumours and Discussion

There will be key architectural improvements over RDNA 2

They include:
Zen 3-like cache
Scalability
Improved front end
Much better ray tracing performance
Greatly upgraded geometry capabilities, based on their work for PS5
Some actual answer to DLSS (hardware? software?)
TSMC 5nm, likely late Q2 2022 release

Isn't that pretty much what Infinity Cache is in RDNA2?
 
There will be key architectural improvements over RDNA 2

They include:
Zen 3-like cache
Scalability
Improved front end
Much better ray tracing performance
Greatly upgraded geometry capabilities, based on their work for PS5
Some actual answer to DLSS (hardware? software?)
TSMC 5nm, likely late Q2 2022 release


Oh well, I guess I'll just wait another 15 months then.
Perhaps I'll be able to buy a 6800XT at MSRP by then.
 
There will be key architectural improvements over RDNA 2

They include:
Greatly upgraded geometry capabilities, based on their work for PS5
The what now? Everything I've seen says the PS5 is inferior to PC RDNA2/Xbox Series when it comes to geometry, not better?

edit:
as for the "DLSS answer", personally I hope DLSS and any such image quality degrading nonsense will just be buried, but they already have "tensor cores" in CDNA which apparently fit DLSS-type loads well

edit2: The impression I've gotten of PS5 geometry is Vega/RDNA on steroids, while RDNA2 takes it further with mesh shaders etc.?
 
There will be key architectural improvements over RDNA 2

They include:
Zen 3-like cache
Scalability
Improved front end
Much better ray tracing performance
Greatly upgraded geometry capabilities, based on their work for PS5
Some actual answer to DLSS (hardware? software?)
TSMC 5nm, likely late Q2 2022 release

I wonder why people seem to couple a DLSS answer to a GPU generation again and again. RDNA2 has the same machine learning instructions as RDNA1 (including 8/4-bit variants), and unless RDNA3 goes for full tensor cores or dedicated upscaling HW, there shouldn't be a significant HW barrier to bringing this to the entire RDNA lineup.

Also, now that AMD is consistently enabling primitive shaders with culling, is geometry actually a problem for these GPUs? If there is indeed a >2x perf target then sure, there might be some need to make it scale with that, but there are a large number of scaling limitations they have to rethink with a larger chip and MCM (e.g. throughput of launching waves, keeping the CUs fed with data, etc.).
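
To make "primitive shaders with culling" concrete, here's a minimal CPU-side sketch of the kind of per-triangle rejection a primitive shader can do before the fixed-function rasterizer ever sees the work. The function names and tests are illustrative simplifications, not AMD's actual implementation:

```python
def signed_area_2d(v0, v1, v2):
    """Twice the signed screen-space area of a triangle (positive = CCW winding)."""
    return (v1[0] - v0[0]) * (v2[1] - v0[1]) - (v2[0] - v0[0]) * (v1[1] - v0[1])

def cull_triangles(screen_verts, indices):
    """Keep only front-facing, non-degenerate triangles; anything culled here
    never reaches the rasterizer, saving fixed-function throughput."""
    survivors = []
    for i0, i1, i2 in indices:
        area = signed_area_2d(screen_verts[i0], screen_verts[i1], screen_verts[i2])
        if area > 0.0:  # back-facing (area < 0) and zero-area triangles are dropped
            survivors.append((i0, i1, i2))
    return survivors

verts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
tris = [(0, 1, 2), (0, 2, 1)]       # second triangle is wound back-facing
print(cull_triangles(verts, tris))  # -> [(0, 1, 2)]
```

Real implementations also do frustum and small-primitive culling per wave, but the payoff is the same: fewer dead triangles competing for front-end throughput.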

Edit for the above with respect to mesh shaders: RDNA1 had most of the core capabilities, just missing some bits needed to be 100% feature-complete and actually expose it to applications through the Direct3D API. I would assume such a thing would be exposed for consoles in their native APIs, though.
 
I wonder why people seem to couple a DLSS answer to a GPU generation again and again. RDNA2 has the same machine learning instructions as RDNA1 (including 8/4-bit variants)
RDNA2 shares the machine learning capabilities of Navi12 (and Vega20). Navi 10 & 14 (which are what most people think of when you say RDNA) are inferior on that front and don't include the faster INT4/INT8 support.
 
Isn't that pretty much what Infinity Cache is in RDNA2?

Yeah, that's exactly what it is. These rumors don't make much sense: another vast increase in performance per watt in just two years, just to hit those targets at 400 watts. "An answer to DLSS" which they, uhhh, already announced. And such a ridiculously large "chiplet" makes no sense whatsoever. The point of MCM is cost reduction: push yields to 90%+, get engineering costs down, etc. Making two giant, low-yield dies at the same time doesn't fit that at all.
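
For a sense of the yield argument, here's a back-of-the-envelope sketch using the classic Poisson die-yield approximation. The defect density and die sizes are made-up illustrative numbers, not real TSMC figures:

```python
from math import exp

def poisson_yield(die_area_mm2, defects_per_mm2):
    """Classic Poisson die-yield approximation: Y = exp(-A * D0)."""
    return exp(-die_area_mm2 * defects_per_mm2)

D0 = 0.001          # defects per mm^2 (0.1 per cm^2) -- illustrative only

monolithic = 500.0  # one big GPU die in mm^2 (made-up size)
chiplet = 250.0     # two half-size chiplets instead (made-up size)

print(f"{monolithic:.0f} mm^2 monolithic yield: {poisson_yield(monolithic, D0):.1%}")
print(f"{chiplet:.0f} mm^2 per-chiplet yield:  {poisson_yield(chiplet, D0):.1%}")
# -> roughly 61% vs 78%: smaller dies yield disproportionately better,
#    which is the cost case for MCM; two giant low-yield dies forfeit it.
```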
 
RDNA2 shares the machine learning capabilities of Navi12 (and Vega20). Navi 10 & 14 (which are what most people think of when you say RDNA) are inferior on that front and don't include the faster INT4/INT8 support.
Maybe it's just not enabled in the gaming-focused products?

edit:
RDNA whitepaper, p.14: "Some variants of the dual compute unit expose additional mixed-precision dot-product modes in the ALUs, primarily for accelerating machine learning inference. A mixed-precision FMA dot2 will compute two half-precision multiplications and then add the results to a single-precision accumulator. For even greater throughput, some ALUs will support 8-bit integer dot4 operations and 4-bit dot8 operations, all of which use 32-bit accumulators to avoid any overflows."
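
As a rough illustration of the dot4 semantics that passage describes, here's a scalar Python model of four int8 products accumulating into a 32-bit register. It models the arithmetic only, not the actual ISA encoding:

```python
def dot4_i8(a, b, acc):
    """acc += a[0]*b[0] + ... + a[3]*b[3] for signed 8-bit lanes.
    In hardware the accumulator is 32-bit; the worst case per operation is
    4 * (-128 * -128) = 65536, far below the int32 limit, so the wide
    accumulator avoids any overflow, as the whitepaper notes."""
    assert len(a) == len(b) == 4
    for x, y in zip(a, b):
        assert -128 <= x <= 127 and -128 <= y <= 127  # int8 range check
        acc += x * y
    return acc

print(dot4_i8([127, -128, 5, 9], [1, 2, 3, 4], acc=0))  # -> -78
```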
 

Attachments

  • RDNA_Whitepaper_INT-capabilities.PNG
Maybe it's just not enabled in the gaming-focused products?

edit:
RDNA whitepaper, p.14: "Some variants of the dual compute unit expose additional mixed-precision dot-product modes in the ALUs, primarily for accelerating machine learning inference. A mixed-precision FMA dot2 will compute two half-precision multiplications and then add the results to a single-precision accumulator. For even greater throughput, some ALUs will support 8-bit integer dot4 operations and 4-bit dot8 operations, all of which use 32-bit accumulators to avoid any overflows."
The whitepaper is referring to the DCUs used by Navi12 there. They're different versions of the architecture; it was the same with the Vega generation (".1" versions of said architectures, if you like), but with RDNA2 they just included it in everything.
 
I have not heard anything from developers about the Geometry Engine being some sort of better-than-RDNA2 front-end change. Not a thing at all.
IIRC, the geometry engine rumors are coming from the same sources that leaked Radeon VII, Infinity Cache, clocks and a bunch of other AMD and Sony stuff well before anyone else and with very good accuracy.

If this were coming from rumor-mongers with a poorer track record, e.g. AdoredTV, I wouldn't be giving it much thought.
Incidentally, the same sources who described Sony's custom Geometry Engine also mentioned that the PS5's CPU has a unified L3 (which could be responsible for its better measured performance at very high framerates). I think he started mentioning these in his videos during the summer.

So when an X-ray picture of the PS5 SoC comes out: if we see a unified L3 rather than separate L3 slices per 4-core CCX, then I'll assume the Geometry Engine rumor is probably true. If not, then it's probably not.



Are we really going for secret sauce again?
I have flashbacks to the Xbox One launch.
You don't need to go back that far to find secret sauce theories.

The SeriesX launched 3 months ago, and when DigitalFoundry couldn't observe its predicted 15-20% performance advantage over the PS5 in multiplatform titles, we started hearing about how there's a future devkit that will unlock the full power of the console.
Aside from Hitman 3, which for all we know is the exception to the rule (and where we can't really compare the two consoles because they're running different resolutions at over 60 FPS), we've yet to see anything running better on the SeriesX in any measurably substantial way.


Regardless, I don't think anyone ever suggested the PS5 has unlocked geometry potential that will put it one step above the competition.
RGT's statements are simply that the PS5's geometry engine is more advanced/flexible and that the console's apparent lack of RDNA2's VRS stems from hardware design decisions taken around the geometry engine.
Note: I don't know how the console's geometry engine can influence the shading precision (and at first thought it doesn't even make a lot of sense), but then again I also haven't read Sony's multitude of patents around foveated rendering.
 