AMD: RDNA 3 Speculation, Rumours and Discussion

Status
Not open for further replies.
That's literally the figure for the 2017 EPYC... things that happened a long time ago.

I'm still not holding my breath for consumer chiplet GPUs. Datacenter, on the other hand, I think is inevitable. I'm happy to be wrong on this one, though, if consumer chiplet GPUs with no performance downside turn out sooner rather than later.
 
AI/HPC workloads in datacenter/professional use. There is a huge incentive there to crunch data that either doesn't fit in one GPU or needs multiple GPUs for reasonable compute time. Datacenters are full of these types of things: https://www.nvidia.com/en-us/data-center/hgx/ John Carmack, for example, got one of these for the AI work he is pursuing: https://www.nvidia.com/en-us/data-center/dgx-station-a100/

CPUs as chiplets make sense in the consumer world because the nature of CPU tasks is such that cores can often operate independently of each other. Though that's not always the case, and we did see performance issues initially (the OS scheduler update, the better implementation in Zen 3, the CP2077 issues requiring chiplet-specific tuning, etc.).

AMD has won multiple big supercomputer deals. My guess is chiplets go there first. Perhaps also to prosumer devices, as they could be tempting in various non-gaming use cases of the kind research universities and companies have. Perhaps a chiplet design goes to Frontier or El Capitan: https://www.amd.com/en/products/exascale-era


But they're not talking about that. RDNA 3 is their future gaming arch.

CDNA X is their compute / science arch.
 

It can be the same kind of red herring that GDDR6X turned out to be. Sometimes the internet thinks it knows, and it doesn't know.
 
But N31 went past the power-on, lol.

I wouldn't know. Perhaps you can link the material that says what N31 is, so the rest of us would also know? On the internet, posts like this are easily ignored without solid sources.
ah, I guess this is the source:

An 80 CU chiplet on 5 nm sounds odd. It should be possible to fit those 160 CUs on a single die. Maybe it's a test vehicle that isn't necessarily intended to be sold to consumers? An 80 CU chiplet on 7 nm could make more sense, considering the increasing prices of more advanced processes and that an 80 CU chip is already in production on 7 nm.
 
An 80 CU chiplet on 7 nm could make more sense, considering the increasing prices of more advanced processes and that an 80 CU chip is already in production on 7 nm.
They are also hitting 300 W, so putting two of them in one product sounds kind of unrealistic. And that's where 5 nm comes in.
 
Ok, here we go, again.

A reminder: bandwidth for desktop GPUs is insanely cheap compared to their compute. The entire GDDR6X bus on a 3090 uses maybe a handful of watts at most: about 7.25 picojoules per byte. A picojoule is 10^-12 joules, and 1 watt is 1 joule per second, so even a full terabyte per second is only about 7 watts. Bandwidth is only dear relative to mobile power budgets. Compared to desktop/HPC parts, where compute power climbs steeply with frequency while per-byte bandwidth costs stay roughly constant, bandwidth power usage is negligible. Remember, when citing math, to actually do the math.
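To actually do the math: a quick sketch of the back-of-the-envelope calculation above, using the 7.25 pJ/byte figure quoted in the post and an assumed 1 TB/s of traffic.

```python
# Energy-per-byte to power conversion for a GDDR6X-class bus.
# 7.25 pJ/byte is the figure quoted above; 1 TB/s is an assumed traffic rate.
PJ_PER_BYTE = 7.25e-12        # 7.25 picojoules = 7.25e-12 joules per byte
BYTES_PER_SECOND = 1e12       # 1 terabyte per second

# watts = joules per byte * bytes per second (1 W = 1 J/s)
power_watts = PJ_PER_BYTE * BYTES_PER_SECOND
print(f"{power_watts:.2f} W")  # 7.25 W
```

At the 3090's actual ~936 GB/s the same arithmetic lands just under 7 W, matching the "handful of watts" claim.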
 
It's important to remember the difference between the energy cost of die-to-die (or die-to-chiplet) traffic over a proprietary in-socket interconnect, versus an external interface like DDR or HBM.
Also, while GPU workloads probably have higher inter-thread communication, they are also better able to tolerate latency on those links, compared to CPUs anyway.
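To make that difference concrete, here's a rough comparison. All pJ/bit figures below are assumed ballpark values for illustration only (on-package links are commonly quoted around 0.5-1 pJ/bit, HBM a few pJ/bit, off-package GDDR/DDR an order of magnitude more), not measurements of any specific product.

```python
# Illustrative energy cost of moving data over different link types.
# All pJ/bit numbers are assumed ballpark figures, not vendor specs.
ENERGY_PJ_PER_BIT = {
    "on-package die-to-die": 0.5,   # assumed: proprietary in-socket interconnect
    "HBM (on-interposer)":   4.0,   # assumed ballpark
    "GDDR/DDR (off-package)": 15.0, # assumed ballpark
}

BANDWIDTH_GB_PER_S = 512  # assumed traffic level for comparison

for link, pj_per_bit in ENERGY_PJ_PER_BIT.items():
    bits_per_s = BANDWIDTH_GB_PER_S * 1e9 * 8
    watts = pj_per_bit * 1e-12 * bits_per_s  # W = J/bit * bits/s
    print(f"{link}: {watts:.1f} W at {BANDWIDTH_GB_PER_S} GB/s")
```

The point is the ratio, not the absolute numbers: at the same bandwidth, an in-socket chiplet link can plausibly cost an order of magnitude less power than going off-package.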

While I'm no expert in these things, I think taking the existing 80 CU RDNA2 core, with 256 MB of SRAM, a slight upgrade to the ray-tracing functionality, and a modified memory interface that talks to a host I/O die,
shrinking it to 5 nm, and pairing two of them with an I/O die is a realistic option for a potential 7900-type card.

The 256 MB of SRAM on each chiplet saves a lot of bandwidth, and some smart-ish cache management would allow some texture duplication across the two SRAM caches.
 