AMD: RDNA 3 Speculation, Rumours and Discussion

This article is quite nice:

AMD hinted at possible further use of Infinity Cache | Diit.cz

and it seems to be written by someone we know quite well around here! (Translate to English works pretty well on this page.)

So the article predates Navi 22 and 23, but the table of hitrates for capacities at various resolutions is a useful reference I think (constructed by no-X rather than quoting AMD, I believe).

The 32MB that we actually got with Navi 23 is listed in the table at a 55% hitrate, with an implied equivalence of 497GB/s.

Assuming Navi 33 also targets 1080p, I have to wonder whether Infinity Cache would be smaller than expected, e.g. 64MB. From the table that gives a 72% hitrate with 993GB/s equivalence. 72% hitrate is very similar to Navi 21 at 1440p, 74%.
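As a sanity check on where equivalence figures like 497GB/s come from: a simple first-order model treats every cache hit as free, so DRAM only has to serve the misses and effective bandwidth scales by 1/(1 - hitrate). A sketch, assuming a ~224GB/s GDDR6 base (my assumption; the table's exact inputs aren't stated, and the 64MB entry doesn't obviously follow from the same base):

```python
# First-order Infinity Cache bandwidth amplification model.
# Assumption (mine, not from the article): hits are serviced at no DRAM cost,
# so DRAM only serves misses and effective bandwidth = base / (1 - hitrate).
def effective_bandwidth(dram_gbps: float, hitrate: float) -> float:
    """Effective bandwidth in GB/s for a given DRAM bandwidth and cache hitrate."""
    return dram_gbps / (1.0 - hitrate)

# Navi 23 class card: 128-bit GDDR6 at 14 Gbps = 224 GB/s (assumed base).
print(round(effective_bandwidth(224, 0.55)))  # ≈ 498, close to the table's 497 GB/s
```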

A 1080p render target is genuinely going to be around one-quarter of the size of a 4K render target in the worst case (high-complexity deferred rendering with many G-buffer attributes). Additionally, with only 64 ROPs, the demand for bandwidth is actually halved, before taking account of any clock increases seen with Navi 3x.
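To put rough numbers on the quarter-size claim, here's a quick check assuming a hypothetical heavy deferred G-buffer of ~20 bytes per pixel (an illustrative figure, not from any AMD material):

```python
def rt_bytes(width: int, height: int, bytes_per_pixel: int) -> int:
    """Total render target footprint in bytes."""
    return width * height * bytes_per_pixel

BPP = 20  # hypothetical heavy G-buffer: several attribute targets + depth
four_k = rt_bytes(3840, 2160, BPP)
fhd = rt_bytes(1920, 1080, BPP)
print(four_k // fhd)             # 4: a 1080p target set is a quarter of the 4K footprint
print(round(fhd / 2**20, 1))     # ≈ 39.6 MiB, comfortably under a 64MB cache
```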

So it seems like 64MB of Infinity Cache would be "just right" for 1080p at 2x 6600XT performance.

That would save another 64mm² in Navi 21, taking us to (520-128)/1.15 = 340mm².
 
Sigh, when I was writing that I felt something was wrong, but couldn't put my finger on it. Sleeping on it resolved the problem: I hadn't accounted for bytes! So 8x 2%. ARGH.

Also 6tr/bit is too low. That's how much the actual storage takes, but on top you also need the access tree, which is ~1tr/bit (1/2 + 1/4 + 1/8 + 1/16 + ... ≈ 1), and then the tags and stuff, which rounds up to another one. 8tr/bit is probably as low as a cache can go.
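The 1/2 + 1/4 + ... series converges to exactly one extra transistor per bit, so the arithmetic behind the ~8tr/bit floor checks out:

```python
# Access-tree overhead estimate: each level of a binary fan-in tree adds
# half as many transistors per bit as the level below, so the sum converges to 1.
tree = sum(0.5 ** k for k in range(1, 30))
print(round(tree, 6))  # 1.0 (to within floating point)

CELL = 6      # 6T SRAM storage cell
TAGS_ETC = 1  # tags and control logic, rounded up (the post's estimate)
print(round(CELL + tree + TAGS_ETC))  # ~8 transistors per bit as a practical floor
```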
 
Also 6tr/bit is too low. That's how much the actual storage takes, but on top you also need the access tree, which is ~1tr/bit (1/2 + 1/4 + 1/8 + 1/16 + ... ≈ 1), and then the tags and stuff, which rounds up to another one. 8tr/bit is probably as low as a cache can go.
no-X's article shows a slide from AMD that explicitly references the density-optimised cache from Zen L3, "Server bandwidth requirements match those of a GPU last level cache".

It could be argued that it's kinda pointless to talk about transistor count directly and it's simpler to refer to area. The article talks about 1mm² per MB in L3.

With Zen 3D V-Cache the density seems to be a lot higher: 64MB occupies 36mm². It seems that's because the V-Cache die is just an SRAM array and the supporting hardware lives "entirely" on the Zen chiplet.
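Working out the densities implied by those two figures (both taken from the posts above):

```python
# Area per MB of L3: on-die Zen 3 L3 vs the stacked V-Cache die.
zen_l3_mm2_per_mb = 1.0       # ~1 mm^2 per MB, storage plus support logic on-die
vcache_mm2_per_mb = 36 / 64   # 64MB in 36mm^2 on the stacked SRAM die

print(vcache_mm2_per_mb)                                  # 0.5625 mm^2/MB
print(round(zen_l3_mm2_per_mb / vcache_mm2_per_mb, 2))    # ~1.78x denser with support
                                                          # logic left on the base die
```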

The TechInsights investigation referenced here:

AMD's Been Planning Its V-Cache Ryzen Chips for Quite Some Time - ExtremeTech

shows how the Zen chiplet allocates a substantial portion of an L3 cache block to TSV connections. 23,000 TSVs are estimated, presumably across the entire V-Cache chiplet that is bonded to the Zen chiplet.

That quantity of TSVs provides a useful reference point when contemplating the connection of GCD to MCD in Navi 31/32, which is presumably in the 2 to 4 TB/s region per GCD.
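For a rough feel of what a link in that class requires, here's a back-of-envelope sketch; the data-pin count and signalling rate are entirely hypothetical, since the 23,000 figure covers power and ground TSVs as well:

```python
def per_pin_gbps(total_tbps: float, data_pins: int) -> float:
    """Per-pin signalling rate (Gb/s) needed to reach a total bandwidth (TB/s)."""
    return total_tbps * 8_000 / data_pins  # TB/s -> Tb/s (x8) -> Gb/s (x1000), per pin

# E.g. 3 TB/s spread over a hypothetical 8192 data TSVs:
print(round(per_pin_gbps(3.0, 8192), 2))  # ≈ 2.93 Gb/s per pin
```

Rates in the low single-digit Gb/s per pin are the kind of figure associated with dense die-to-die bonding, which is why a TSV count in the tens of thousands makes a multi-TB/s GCD-to-MCD link look plausible.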
 
Assuming Navi 33 also targets 1080p
If popular rumors are at all true, there is no chance N33 will target 1080p gaming, as N33 will still be a higher-end GPU by current standards. Basically, RDNA3 isn't gonna be a top-down lineup or a complete replacement for RDNA2. This is why there's talk of refreshing N22 and N23 to fit below the RDNA3 entries.

Still just rumors obviously, but they are looking fairly consistent and probably the only way it really makes sense if we trust that N31 really is gonna be this giant dual tile beast.

But yea, I wouldn't expect anything less than 128MB of L3 for N33 if it's being paired with a 128-bit bus.
 
The below is dying, or to be more precise, dead.
The whole PC gaming industry and market cannot be sustained with >$500 products alone. There have to be 'reasonably priced' options available or you're just gonna have more than half the market up and walk away. Obviously from a business perspective, this could still work fine for a GPU manufacturer assuming their margins are good enough, but with TSMC expansions and whatnot, it would be silly to leave money on the table by not catering to such a large crowd whatsoever.

As for 8GB, you can still do 1440p gaming on 8GB comfortably, and reconstruction techniques plus the potential for DirectStorage in next gen titles down the line could keep VRAM demands from scaling as we'd normally expect.
 
The whole PC gaming industry and market cannot be sustained with >$500 products alone
Of course it can; the upgrade cycles are just gonna extend a bit, just like they did in, say, phones.
1k dorra flagships were a travesty back when they started and now they're the norm!
but with TSMC expansions and whatnot
AMD has the infinity of the laptop/server market to paint bright red looooooong before settling for selling 250mm^2 pieces of bleeding edge Si for 300 bucks.
As for 8GB, you can still do 1440p gaming on 8GB comfortably, and reconstruction techniques plus the potential for DirectStorage in next gen titles down the line could keep VRAM demands from scaling as we'd normally expect.
Yes but actually no.
 
Of course it can; the upgrade cycles are just gonna extend a bit, just like they did in, say, phones.
1k dorra flagships were a travesty back when they started and now they're the norm!

AMD has the infinity of the laptop/server market to paint bright red looooooong before settling for selling 250mm^2 pieces of bleeding edge Si for 300 bucks.

Yes but actually no.
The phone market isn't sustained on high end phones alone, either. Very different industry anyways, where you can subsidize costs through contracts and whatnot. People also view them differently than they would a dedicated graphics processor. I usually like to say there's no reason you can't compare apples and oranges, but it really doesn't make sense as an analogy in this case.

And I didn't say AMD would be selling lower end 'bleeding edge' GPUs. I was specifically referring to refreshing N22 and 23, which shouldn't be an issue at all, be it on 7 or 6nm. They're already making them. All they have to do is keep doing so. And lower the prices a bit.
 
The phone market isn't sustained on high end phones alone, either.
For some vendors it surely is.
Very different industry anyways, where you can subsidize costs through contracts and whatnot.
Not how most of the world buys them.
People also view them differently than they would a dedicated graphics processor.
Toys are toys.
And I didn't say AMD would be selling lower end 'bleeding edge' GPUs
That's exactly what you're suggesting.
I was specifically referring to refreshing N22 and 23, which shouldn't be an issue at all, be it on 7 or 6nm.
Bigger than RMB sold for less money and less OEM leverage.
Why even?
They're already making them
Oh come on.
You know the answer already.
And lower the prices a bit.
Yes of course they're gonna sell more Si for even cheaper in a market nowhere close to being filled with enough CPU goodness.

Looks like some of you guys still don't get it.
The days of client dGPU being the silica charity with huge (relatively) dies for cheap are over.
o v e r.
Thrice over, now and forever.
 
The days of client dGPU being the silica charity with huge (relatively) dies for cheap are over.
o v e r.
Thrice over, now and forever.

You keep saying this, and I keep saying that this situation is entirely temporary. It is going on right now because there is no fab capacity to spare, and when they have to make choices about which customers to serve, dropping the low end hurts them the least.

This situation will not last forever. Most industry sources now agree that there will be excess capacity in 2023. At some point before that, the market will normalize. At that point, there will again be cards in every market segment. For AMD or nVidia, selling a 250mm^2 GPU at $250 is massively profitable. The only reason they are not doing it right now is that they cannot: TSMC and Samsung will only sell them so many wafer starts.

People always overextrapolate current trends. I remember someone arguing with me that substantial leaps in GPU perf and perf/w were over ... just before the first 7nm products were about to be released. Things plateaued, because the manufacturing plateaued. Then there was a new process, and things improved again. The low end is currently dead, because fabs are capacity constrained. The low end will come back when this stops being the case.
 
That's like your opinion dude.
That's not an opinion.
Selling 500-600mm^2 of the good stuff + some fairly pricey memory for $600-700 was an utter generosity bonanza versus what the CPU guys or pretty much anyone else had.
This situation will not last forever
Of course it will.
It's about trailing edge anyway; not anything GPU-related in particular.
xtor cost scaling died already.
For AMD or nVidia, selling a 250mm^2 GPU at $250 is massively profitable.
Not anymore lol.
N5 and N3 and N2 wafers are gonna be even higher cost per xtor you see.
 
Not anymore lol.
N5 and N3 and N2 wafers are gonna be even higher cost per xtor you see.

Going forwards, I think low-end will probably use an older node than the high-end. There will be a massive amount of capacity on ~6-8nm nodes in a little more than a year, while the high-end of everything will want to move to 2-3nm processes for performance. So the low-end will not match the high-end in perf/power, but it will be there, which it currently isn't.
 
I think low-end will probably use an older node than the high-end.
That's still very expensive, since you then need bigger dies to get the same low-end perf.
There's no winning; for Moore's law is truly and finally dead.
while the high-end of everything will want to move to 2-3nm processes for performance
Those nodes have shit perf/power gains over each other.
You use them to just stuff more into the same package; i.e. literally win more devices.
 
The phone market isn't sustained on high end phones alone, either. Very different industry anyways, where you can subsidize costs through contracts and whatnot. People also view them differently than they would a dedicated graphics processor. I usually like to say there's no reason you can't compare apples and oranges, but it really doesn't make sense as an analogy in this case.

And I didn't say AMD would be selling lower end 'bleeding edge' GPUs. I was specifically referring to refreshing N22 and 23, which shouldn't be an issue at all, be it on 7 or 6nm. They're already making them. All they have to do is keep doing so. And lower the prices a bit.

APUs are starting to gobble up that market. I see that continuing to happen. We might see anything under $300 disappear.
 
APUs are starting to gobble up that market. I see that continuing to happen. We might see anything under $300 disappear.
Bingo.
Your cutting edge APU still has to pay the perimeter tax for all the I/O it needs, while the shrinks bring a ton of logic to throw around.
Xtors gonna be spent on them delicious GPUs and SLCs and whatnot.
 
APUs are starting to gobble up that market. I see that continuing to happen. We might see anything under $300 disappear.
If we are looking at a normally priced GPU market, the best APUs are still only comparable to a $50 GPU. I don't see this changing significantly enough in the future. APUs are as expensive as GPUs/CPUs to manufacture and have to be sold at lower margins due to performance constraints.
 
Selling 500-600mm^2 of the good stuff + some fairly pricey memory for $600-700 was an utter generosity bonanza versus what CPU guys or pretty much anyone else had it.

xtor cost scaling died already.

N5 and N3 and N2 wafers are gonna be even higher cost per xtor you see.

There's no winning; for Moore's law is truly and finally dead.

Those nodes have shit perf/power gains over each other.

You use them to just stuff more into the same package; i.e. literally win more devices.
Hate to admit it but these comments are painfully accurate.

Yeah sure demand right now is exacerbating the problem but maybe it's just an early taste of what's to come. Manufacturing costs ain't going anywhere but up.
 
but maybe it's just an early taste of what's to come.
Yes!
It's gonna get way, way worse once we get to nice and funny scaling boosters à la buried power rails or backside power delivery at N2 or equivalent nodes.
You get the perf/power but the costs outright do a moonshot!
Winning more turned up to 11.
 