> Dawg this is from AMD slides. The wonky looking internal ones at that.
The slides designed to out leakers?
> The slides designed to out leakers?
Don't think so, those aren't partner-distributed at all.
> N5 SRAM scaling is miserable, like 1.15x or so?
More like 30% denser. Or this is specific to AMD's SRAM implementation.
> More like 30% denser. Or this is specific to AMD's SRAM implementation.
At IEDM 2019, the 5nm process was quoted to have a 1.84x logic density improvement compared to a 1.35x SRAM density improvement.
> Since we only really have two mobile SoCs
Yea and SRAM scaling there isn't 1.35x either.
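For what it's worth, the commonly cited high-density bit-cell sizes put the N7-to-N5 step closer to ~1.3x. A quick sketch (raw cell area only, no array overhead; treat both cell sizes as approximate public figures):

```python
# Rough SRAM scaling check from published high-density bit-cell sizes.
# Cell sizes are approximate public figures (assumption), raw cells only,
# ignoring array overhead (sense amps, decoders, redundancy, routing).
N7_HD_CELL_UM2 = 0.027   # ~27,000 nm^2
N5_HD_CELL_UM2 = 0.021   # ~21,000 nm^2

scaling = N7_HD_CELL_UM2 / N5_HD_CELL_UM2
print(f"N7 -> N5 HD SRAM density gain: {scaling:.2f}x (~{(scaling - 1) * 100:.0f}% denser)")
# ~1.29x, i.e. closer to the '30% denser' figure than the 1.35x IEDM number,
# and nowhere near the 1.84x quoted for logic.
```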
> GCDs not having IMCs is pretty obvious given that's the whole point of chiplets, to put the badly scaling analog blocks on the larger, cheaper process. Furthermore it would be a clusterfuck to have the L3 on the MCD and then have traffic from that interleave back through the GCDs to DRAM, it's utter nonsense.
Like the 3D V-cache you mean?
> Like the 3D V-cache you mean?
As mentioned, V-cache is an extension of the existing L3 - it's just additional banks and zero change to the data flow. That's not what's happening with the MCD. Incidentally, I believe AMD will at some point move to a stacked giant L4; it cannot be anything else but an L4, because it has to be centralised for coherency.
It should be on MCD.
I imagine AMD would take the best of both worlds: an N5P GCD for absolute logic density and performance, and an N6 MCD with HD/SRAM-optimized libraries for lower cost per MB of IC (Infinity Cache).
N5P's SRAM density gain over N7/N6 is very mediocre.
512MB of SRAM on N6 with optimized libraries would only be 280-300mm² (figures estimated from WikiChip data, behind paywall). On N5 it's hardly any better, around 250+mm², but much costlier.
But all those logic blocks scale much better, almost 1.48x with N5P (assuming AMD goes with N5P for GPUs, else 1.85x on plain N5).
I suppose 2x GCD + 1x MCD would be closing in on around 1000mm², or maybe even more. Will cost a pretty penny.
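As a sanity check on that 512MB figure, scaling from the effective density AMD achieved with the Zen 3 V-cache die (64 MB in roughly 36 mm², a figure that comes up again below) lands in the same ballpark; a rough sketch only, since MCD libraries and process would differ:

```python
# Back-of-the-envelope area for a 512 MB SRAM pool, using the effective
# density of AMD's Zen 3 V-cache die (64 MB in ~36 mm^2) as the yardstick.
# Rough sketch: array overhead is already baked into that density figure.
VCACHE_MB = 64
VCACHE_MM2 = 36.0
effective_mm2_per_mb = VCACHE_MM2 / VCACHE_MB      # ~0.56 mm^2 per MB

target_mb = 512
print(f"{target_mb} MB at V-cache density: ~{target_mb * effective_mm2_per_mb:.0f} mm^2")
# ~288 mm^2, in line with the 280-300 mm^2 estimate above.
```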
> More like 30% denser. Or this is specific to AMD's SRAM implementation.
Err, was basing the "high density sram" off: https://www.anandtech.com/show/15219/early-tsmc-5nm-test-chip-yields-80-hvm-coming-in-h1-2020
Was this wrong? I just assumed the calculations for SRAM density at 5nm were right, never bothered to check. But 128MB of SRAM from this is what, just over 20 mm^2 there, so... not huge?
N7 has an HD SRAM cell size of 27,000 nm^2, so you're looking at a ~28% density improvement on paper.
Critically, this density is never even close to achieved IRL. With their Zen 3 V-cache, AMD fit 64 MB of L3$ in a 36 mm^2 die. That's ~67,000 nm^2 per bit, less than half the theoretical density. And, also from AMD, that was about twice the density of the L3 on the Zen 3 CCD and RDNA2.
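Spelling that arithmetic out (36 mm² and 64 MB are AMD's published V-cache figures; the rest follows from them):

```python
# Effective vs. theoretical SRAM density for the Zen 3 V-cache die (N7).
# 36 mm^2 and 64 MB are AMD's published figures; 27,000 nm^2 is the N7 HD cell.
DIE_MM2 = 36.0
CAPACITY_BITS = 64 * 8 * 2**20          # 64 MB -> bits
N7_HD_CELL_NM2 = 27_000

nm2_per_bit = DIE_MM2 * 1e12 / CAPACITY_BITS
print(f"Effective area per bit: ~{nm2_per_bit:,.0f} nm^2")             # ~67,000 nm^2
print(f"Fraction of theoretical: {N7_HD_CELL_NM2 / nm2_per_bit:.0%}")  # ~40%
# The gap is array overhead: sense amps, decoders, redundancy, routing and other periphery.
```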
> Ehh, ballpark ~440mm^2 but it's also less mem than N22. Feasible for 450 bucks.
So from your post I assume Navi 33 has only a 128-bit GDDR6 bus, right?
Make it 16 gigs then? GDDR6 supports clamshelling.
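For reference, the clamshell capacity math, assuming 16Gb (2 GB) GDDR6 devices:

```python
# Capacity math for a 128-bit GDDR6 bus, normal vs. clamshell.
# Assumes 16Gb (2 GB) GDDR6 devices.
BUS_BITS = 128
DEVICE_WIDTH = 32          # one GDDR6 device per 32-bit channel
DEVICE_GB = 2              # 16Gb die

channels = BUS_BITS // DEVICE_WIDTH
normal = channels * DEVICE_GB             # 1 device per channel -> 8 GB
clamshell = channels * 2 * DEVICE_GB      # 2 devices share a channel in x16 mode -> 16 GB
print(f"normal: {normal} GB, clamshell: {clamshell} GB")
```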
Greetings to all members of the Beyond3D forum. So from your post I assume Navi 33 has only a 128-bit GDDR6 bus, right? The question is who would want to buy Navi 33 for at least $450 with not even 12GB of VRAM next year. Even if it performs like an RX 6900 XT, with only 8GB of VRAM it's a hard sell in my opinion.
> The question is who would want to buy Navi 33 for at least $450
Uh, I mean, do you really have a choice?
> Even if it performs like an RX 6900 XT, with only 8GB of VRAM it's a hard sell in my opinion.
It's a mobile-first part, much the same way N23 is.
> Err, was basing the "high density sram" off: https://www.anandtech.com/show/15219/early-tsmc-5nm-test-chip-yields-80-hvm-coming-in-h1-2020
> Was this wrong? I just assumed the calculations for SRAM density at 5nm were right, never bothered to check. But 128MB of SRAM from this is what, just over 20 mm^2 there, so... not huge?
I was going by this, which seems to be based on newer data:
An MCD at around 300mm² will not have enough perimeter for 256-bit GDDR6, all the other GPU IO, and 2x 2TB/s (guess) L3 interfaces to each GCD.
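A rough beachfront sketch of that argument; all the per-interface edge lengths below are made-up placeholders for illustration, not published PHY figures:

```python
# Rough "beachfront" check for a single ~300 mm^2 MCD.
# Edge-length requirements below are placeholder assumptions, not published figures.
import math

area_mm2 = 300.0
aspect = 1.0                      # assume a square die
w = math.sqrt(area_mm2 * aspect)
h = area_mm2 / w
perimeter = 2 * (w + h)           # ~69 mm for a square 300 mm^2 die

# Assumed edge budgets (illustrative only):
gddr6_256b = 8 * 3.0              # eight 32-bit GDDR6 PHYs at ~3 mm of edge each
misc_io = 10.0                    # display, PCIe, etc.
gcd_links = 2 * 15.0              # two very wide (~2 TB/s) die-to-die interfaces

needed = gddr6_256b + misc_io + gcd_links
print(f"Perimeter available: ~{perimeter:.0f} mm, edge wanted: ~{needed:.0f} mm")
# Whether it fits hinges largely on how much edge those 2 TB/s links really need.
```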
> Probably bigger
Nope.