That RDNA 1.8 Consoles Rumor *spawn*

Also ... how are these Youtoobers getting insight into AMD's innermost RDNA3 architectural secrets given that RDNA2 cards haven't launched yet, and AMD haven't talked about RDNA2 at Hot Chips or in a whitepaper yet?
It's Cerny who said AMD would end up with some PS5 features, developed as part of their partnership, in a future GPU. That led to speculation that the PS5 has RDNA3 features, which would be technically true, but Cerny is trying to explain that it's the other way around. I can't blame people for misunderstanding this part. The rumor-mongering feedback loop twists these things beyond recognition.

It looks like the nature of their partnership could mean whatever Sony develops or asks to be customized is equally owned by AMD, so they can add it to their RDNA roadmap if they want to. After all, it's still AMD doing the work, and it's still their tech; Sony asks for features and changes. He pointed out the cache scrubbers as something AMD wasn't finding useful in their PC GPUs, so that part would be entirely exclusive to the PS5. It's not that AMD can't do it, it's just that they don't think they need it. And we have the reverse with some (one?) RDNA2 feature that will be on the upcoming PC GPUs but that Sony decided not to have. This is why Cerny is saying the RDNA2 feature set is "malleable".

They might have removed features to trim the fat and keep the die size manageable, or because some of their customizations supersede those features or make them redundant.
 
It's Cerny who said AMD would end up with some PS5 features, developed as part of their partnership, in a future GPU. That led to speculation that the PS5 has RDNA3 features, which would be technically true, but Cerny is trying to explain that it's the other way around. I can't blame people for misunderstanding this part. The rumor-mongering feedback loop twists these things beyond recognition.

Speculation is fine. But that's not, and never will be, any excuse for claiming you have insider information from AMD's architecture engineers. Getting an architectural heads-up, as a non-developer, on a geometry engine beyond the one that hasn't even been revealed yet and that you're hoping to challenge Nvidia with in the generation before ....

... I mean if you're risking your career two generations early, why would you do it for a random Youtube bullshitter?
 
RedGamingTech has a decent track record, so I guess he really does have sources and he's not a bullshitter.
But the rumors, even when true, are not that specific; most of them are just logical (Nvidia and AMD improve things, yeeaaaah). So I doubt anyone is really risking anything.
 
It's Cerny who said AMD would end up with some PS5 features, developed as part of their partnership, in a future GPU. That led to speculation that the PS5 has RDNA3 features, which would be technically true, but Cerny is trying to explain that it's the other way around. I can't blame people for misunderstanding this part. The rumor-mongering feedback loop twists these things beyond recognition.

It looks like the nature of their partnership could mean whatever Sony develops or asks to be customized is equally owned by AMD, so they can add it to their RDNA roadmap if they want to. After all, it's still AMD doing the work, and it's still their tech; Sony asks for features and changes. He pointed out the cache scrubbers as something AMD wasn't finding useful in their PC GPUs, so that part would be entirely exclusive to the PS5. It's not that AMD can't do it, it's just that they don't think they need it. And we have the reverse with some (one?) RDNA2 feature that will be on the upcoming PC GPUs but that Sony decided not to have. This is why Cerny is saying the RDNA2 feature set is "malleable".

They might have removed features to trim the fat and keep the die size manageable, or because some of their customizations supersede those features or make them redundant.
That wasn’t my take from Cerny. I’m under the impression that RDNA 2 is new, and any features found in the new (RDNA2) cards released around the same time just show a successful collaboration; however, some features are exclusive to the PS5. This was to clarify that you can’t directly compare the PS5 GPU to those released around the same time: it may have some of the same features, but some are "ours and not for AMD", specifically mentioning cache scrubbers.

I think it’s entirely plausible that when working in collaboration this arrangement is possible.
 
I thought it had to be symmetrical (N disabled per SA per SE) because of the way the GCP distributes workloads.
Why? Wouldn't that defeat the purpose of having a number of compute units that are supposed to work asynchronously?
The only problem I could see is unpredictable performance variations from the fact that the non-CU elements in each shader array would be serving uneven numbers of CUs, though I wonder if that difference would ever be substantial. Those elements are probably engineered to serve 5 WGPs concurrently, anyway.
At least in the Navi 10 chip, of course. It could be that for the PS5 SoC there's e.g. less L1 per shader array because they'll never want it to serve more than 4 WGPs.


Even if it were possible, is it worth even a small yield sacrifice to get 10.85TF instead of 10.28TF?
It would depend on how small the yield sacrifice is. As an extreme hypothetical, if they were losing only 1 die every 8 wafers from the 36→38 CU upgrade, wouldn't you say the sacrifice is worth it?
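For reference, both TF figures fall straight out of the CU counts at the PS5's 2.23 GHz boost clock. A quick back-of-the-envelope check (assuming the usual RDNA layout of 64 FP32 ALUs per CU and 2 ops per clock via fused multiply-add):

```python
def gpu_teraflops(cus, clock_ghz, alus_per_cu=64, ops_per_alu_per_clock=2):
    """Peak FP32 throughput in TFLOPS: CUs * ALUs * ops/clock (FMA = 2) * clock."""
    return cus * alus_per_cu * ops_per_alu_per_clock * clock_ghz / 1000.0

print(round(gpu_teraflops(36, 2.23), 2))  # 10.28 — PS5 as announced, 36 active CUs
print(round(gpu_teraflops(38, 2.23), 2))  # 10.85 — hypothetical 38-CU variant
```

So the whole debate is over roughly a 5.5% peak-throughput difference.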


RedGamingTech has a decent track record, so I guess he really does have sources and he's not a bullshitter.
They leaked the Radeon VII with pictures of the GPU 2 weeks before it was made public. They definitely did have a very solid source, at least back then.
 
Paul from RGT knows he gets bullshitted by new sources all the time, and he's very honest about it. He always indicates whether he considers the source of a rumor reliable or not, based on how often that source has given him verifiable or corroborated information. He'll either say "this is a new contact giving questionable info, so take this with a grain of salt" or "this is a long-time source who told me XYZ, which ended up being real, so I trust this source more".
 
Why? Wouldn't that defeat the purpose of having a number of compute units that are supposed to work asynchronously?
The only problem I could see is unpredictable performance variations from the fact that the non-CU elements in each shader array would be serving uneven numbers of CUs, though I wonder if that difference would ever be substantial. Those elements are probably engineered to serve 5 WGPs concurrently, anyway.
At least in the Navi 10 chip, of course. It could be that for the PS5 SoC there's e.g. less L1 per shader array because they'll never want it to serve more than 4 WGPs.
I was wondering if the screen was divided up equally amongst shader engines, so it'd be a dispatching issue, but I suppose since it doesn't hold between Shader Arrays, it might be moot.

I dunno! :p (just throwing ideas out there)
 
It's Cerny who said AMD would end up with some PS5 features, developed as part of their partnership, in a future GPU. That led to speculation that the PS5 has RDNA3 features, which would be technically true, but Cerny is trying to explain that it's the other way around. I can't blame people for misunderstanding this part. The rumor-mongering feedback loop twists these things beyond recognition.

Oh I'm sure AMD have ended up adopting some Sony-specced features, and that working with Sony gives additional input that benefits AMD. But neither that nor Cerny's comments can lead in any meaningful way to "Cerny's geometry engine is in RDNA3 and Sony came up with the idea of larger caches (wat), which AMD are using for RDNA3".

That's some real high level sources you'd need to know about RDNA3 caches. Also, good job that in 2017 Sony came up with the idea of bigger caches, but too bad AMD couldn't implement them till 2022.

It looks like the nature of their partnership could mean whatever Sony develops or asks to be customized is equally owned by AMD, so they can add it to their RDNA roadmap if they want to. After all, it's still AMD doing the work, and it's still their tech; Sony asks for features and changes. He pointed out the cache scrubbers as something AMD wasn't finding useful in their PC GPUs, so that part would be entirely exclusive to the PS5. It's not that AMD can't do it, it's just that they don't think they need it. And we have the reverse with some (one?) RDNA2 feature that will be on the upcoming PC GPUs but that Sony decided not to have. This is why Cerny is saying the RDNA2 feature set is "malleable".

They might have removed features to trim the fat and keep the die size manageable, or because some of their customizations supersede those features or make them redundant.

I'm totally down with this idea and always have been, and I understood what Cerny was talking about in The Fury Road to PS5. It's not the idea of Sony presented concepts becoming part of PC RDNA that I'm finding funny, it's these highly specific, far away, highly sensitive "leaks" and the narrative behind their presentation.

On the subject of Sony introduced concepts coming to RDNA, Cerny himself said this:

"... if we bring concepts to AMD that are felt to be widely useful, then they can be adopted into RDNA2, and used broadly, including in PC GPUs."

[broadly includes Xbox...?]

"If you see a similar discrete GPU available as a PC card at roughly the same time as we release our console, that means our collaboration with AMD succeeded."

So Cerny himself is definitely not talking about RDNA3 cards!

 
@ function

I highly doubt anyone sane seriously thinks the PS5 GPU is anything close to RDNA3. It will perhaps be close to full RDNA2 like the XSX, and have something custom in it that separates it from anything else. That doesn't make it RDNA3 :p
 
I feel like Sony could launch a PS5 Pro as early as 2022, using 2x PS5 SoCs (or 8c/72 CU) but they won't. PS5 of 2020 will last the whole generation, aside from a PS5 Slim on 5nm EUV.

PS6 will be out Holiday 2026 using:
TSMC 3nm/2nm GAAFET (gate-all-around, nanosheet/nanowire) EUV
2 CPU chiplets (8c/16t x2)
2 GPU chiplets (72 CU x2) + RT block
64 GB High Bandwidth Memory (HBM3+/3E) at 2.4 TB/s
CPU: Zen 5+ cores
GPU arch: RDNA5
SSD: 25-30 GB/s raw, 50-60 GB/s compressed

Massively improved ray tracing performance is the main "next-gen" draw, with a large increase in RAM and memory bandwidth over the PS5/XSX. Native 4K will still be the target for PS6 games at 60fps, even if reconstructed. RT will become more common and more extensive, and some games will be native 4K 30fps with a high percentage of the rendering pipe doing RT and only some rasterization. Overall, if PS5-generation graphics are 90-95% raster and 5-10% RT, PS6 games will be more like 20-40% RT and 60-80% raster.
 
@ function

I highly doubt anyone sane seriously thinks the PS5 GPU is anything close to RDNA3. It will perhaps be close to full RDNA2 like the XSX, and have something custom in it that separates it from anything else. That doesn't make it RDNA3 :p
It's a hope [emoji1]
 
I feel like Sony could launch a PS5 Pro as early as 2022, using 2x PS5 SoCs (or 8c/72 CU) but they won't. PS5 of 2020 will last the whole generation, aside from a PS5 Slim on 5nm EUV.

PS6 will be out Holiday 2026 using:
TSMC 3nm/2nm GAAFET (gate-all-around, nanosheet/nanowire) EUV
2 CPU chiplets (8c/16t x2)
2 GPU chiplets (72 CU x2) + RT block
64 GB High Bandwidth Memory (HBM3+/3E) at 2.4 TB/s
CPU: Zen 5+ cores
GPU arch: RDNA5
SSD: 25-30 GB/s raw, 50-60 GB/s compressed

Massively improved ray tracing performance is the main "next-gen" draw, with a large increase in RAM and memory bandwidth over the PS5/XSX. Native 4K will still be the target for PS6 games at 60fps, even if reconstructed. RT will become more common and more extensive, and some games will be native 4K 30fps with a high percentage of the rendering pipe doing RT and only some rasterization. Overall, if PS5-generation graphics are 90-95% raster and 5-10% RT, PS6 games will be more like 20-40% RT and 60-80% raster.
So better waiting for ps6 [emoji1]
 
I feel like Sony could launch a PS5 Pro as early as 2022, using 2x PS5 SoCs (or 8c/72 CU) but they won't. PS5 of 2020 will last the whole generation, aside from a PS5 Slim on 5nm EUV.

I wish, but I just don't think 5nm will be cheap enough for a console in 2022 and, more importantly, I don't think it will offer enough power savings to fit two PS5 SoCs in a console.
 
...

...



Page table mappings on the GPU are managed using the UpdateTileMappings API, and it's likely done on the CPU, as AMD and Nvidia suggest on page 44, so it's an expensive operation. What I want to know is whether RDNA2 has a different way of updating the page table mappings, but based on AMD's recommendation in their RDNA optimization guide, I'm not holding out hope that they've fixed this issue at all ...

Years later we still have performance problems binding a tiled resource to a memory heap ...

A little late contribution (was busy with FS 2020) .... AMD's/Nvidia's embrace of sampler feedback and of the Velocity Architecture tends to confirm that SFS is a new generation of PRT. As was pointed out months ago, it is the identification of the desired non-resident tiles (polling for the desired mip level and checking for residency) that was time-consuming/opaque, and that has now been solved with two mip-mapping structures implemented in the GPU cache itself. The mapping of the PRT from residency map to memory address via the hardware page table was solved in hardware five years ago (the so-called hardware acceleration provided by Southern Islands/Fermi GPUs for sparse resources).
Also, nowhere has it been stated that there are performance problems in binding a sparse resource to a heap specifically (use of CPU time is no proof of it).
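To make the residency-check idea concrete, here's a toy model (not any real API: the residency map, tile key, and mip numbering are all hypothetical). The per-sample work amounts to clamping the desired mip to the coarsest level actually resident for that tile and recording a miss so the streaming system knows to page in finer tiles:

```python
MAX_MIP = 11  # coarsest mip of a hypothetical 2048x2048 texture

def sample_with_residency(desired_mip, resident_mip_map, tile):
    """Clamp the sampled mip to what is resident for this tile, and flag
    whether finer data was wanted than is currently paged in."""
    # No entry in the map means only the always-resident tail mips exist.
    resident_mip = resident_mip_map.get(tile, MAX_MIP)
    used_mip = max(desired_mip, resident_mip)        # fall back to coarser data
    needs_feedback = desired_mip < resident_mip      # streaming system should act
    return used_mip, needs_feedback

residency = {(3, 5): 4}  # tile (3,5): mip 4 and coarser are resident
print(sample_with_residency(2, residency, (3, 5)))  # (4, True): clamp + request finer
print(sample_with_residency(6, residency, (3, 5)))  # (6, False): already resident
```

The expensive part historically was doing this polling per sample in shader code; sampler feedback moves the "needs_feedback" bookkeeping into the texture hardware itself.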
 